From dms at danplanet.com Thu Apr 1 00:00:12 2021
From: dms at danplanet.com (Dan Smith)
Date: Wed, 31 Mar 2021 17:00:12 -0700
Subject: [all] Gate resources and performance
In-Reply-To: (Wesley Hayutin's message of "Wed, 31 Mar 2021 11:02:37 -0600")
References: <53f77238-d77e-4b57-57bc-139065b23595@nemebean.com>
Message-ID:

Hi Wes,

> Just wanted to check back in on the resource consumption topic.
> Looking at my measurements the TripleO group has made quite a bit of
> progress keeping our enqueued zuul time lower than our historical
> average. Do you think we can measure where things stand now and have
> some new numbers available at the PTG?

Yeah, in the last few TC meetings I've been saying things like "let's
not sample right now because we're in such a weird high-load situation
with the release" and "...but we seem to be chewing through a lot of
patches, so things seem better." I definitely think the changes made by
tripleo and others are helping. Life definitely "feels" better lately.
I'll try to circle back and generate a new set of numbers with my
script, and also see if I can get updated numbers from Clark on the
overall percentages.

Thanks!

--Dan

From missile0407 at gmail.com Thu Apr 1 00:59:58 2021
From: missile0407 at gmail.com (Eddie Yen)
Date: Thu, 1 Apr 2021 08:59:58 +0800
Subject: launch VM on volume vs. image
In-Reply-To:
References:
Message-ID:

Hi Tony,

In the Ceph layer, in my experience, launching a VM on an image creates a
snapshot from the source image in the Nova ephemeral pool. If you check the
RBD images created in the Nova ephemeral pool, they all have their parents
in the Glance images.

For launching a VM on a volume, it will "copy" the image to the volume pool
first, resize it to the specified disk size, then connect and boot. Because
it does not create a snapshot from the image, it takes much longer.

Eddie.

Tony Liu wrote on Thu, Apr 1, 2021 at 8:09 AM:

> Hi,
>
> With Ceph as the backend storage, launching a VM on volume takes much
> longer than launching on image. Why is that?
> Could anyone elaborate the high level workflow for those two cases?
>
> Thanks!
> Tony

From whayutin at redhat.com Thu Apr 1 01:15:49 2021
From: whayutin at redhat.com (Wesley Hayutin)
Date: Wed, 31 Mar 2021 19:15:49 -0600
Subject: [all] Gate resources and performance
In-Reply-To:
References: <53f77238-d77e-4b57-57bc-139065b23595@nemebean.com>
Message-ID:

On Wed, Mar 31, 2021 at 6:00 PM Dan Smith wrote:

> Hi Wes,
>
> > Just wanted to check back in on the resource consumption topic.
> > Looking at my measurements the TripleO group has made quite a bit of
> > progress keeping our enqueued zuul time lower than our historical
> > average. Do you think we can measure where things stand now and have
> > some new numbers available at the PTG?
>
> Yeah, in the last few TC meetings I've been saying things like "let's
> not sample right now because we're in such a weird high-load situation
> with the release" and "...but we seem to be chewing through a lot of
> patches, so things seem better." I definitely think the changes made by
> tripleo and others are helping. Life definitely "feels" better
> lately. I'll try to circle back and generate a new set of numbers with
> my script, and also see if I can get updated numbers from Clark on the
> overall percentages.
>
> Thanks!
>
> --Dan

Sounds good..
I'm keeping an eye on it in the meantime w/

http://dashboard-ci.tripleo.org/d/Z4vLSmOGk/cockpit?viewPanel=71&orgId=1
SELECT max("enqueued_time") FROM "zuul-queue-status" WHERE ("pipeline" = 'gate' AND "queue" = 'tripleo')

and

http://dashboard-ci.tripleo.org/d/Z4vLSmOGk/cockpit?viewPanel=398&orgId=1&from=now-6M&to=now
SELECT max("enqueued_time") FROM "zuul-queue-status" WHERE ("pipeline" = 'gate') AND time >= 1601514835817ms GROUP BY time(10m) fill(0);
SELECT mean("enqueued_time") FROM "zuul-queue-status" WHERE ("pipeline" = 'check') AND time >= 1601514835817ms GROUP BY time(10m) fill(0)

0/

From whayutin at redhat.com Thu Apr 1 01:32:29 2021
From: whayutin at redhat.com (Wesley Hayutin)
Date: Wed, 31 Mar 2021 19:32:29 -0600
Subject: [tripleo][ci] jobs in retry_limit or skipped
Message-ID:

Greetings,

Just FYI.. I believe we hit a bump in the road in upstream infra (not sure
yet). It appears to be global and not isolated to tripleo or centos based
jobs.

I have a tripleo bug to track it.
https://bugs.launchpad.net/tripleo/+bug/1922148

See #opendev for details; it looks like infra is very busy working on and
fixing the issues atm.

http://eavesdrop.openstack.org/irclogs/%23opendev/%23opendev.2021-03-31.log.html#t2021-03-31T10:34:51
http://eavesdrop.openstack.org/irclogs/%23opendev/latest.log.html

From dvd at redhat.com Thu Apr 1 02:04:00 2021
From: dvd at redhat.com (David Vallee Delisle)
Date: Wed, 31 Mar 2021 22:04:00 -0400
Subject: [puppet] Proposing Alan Bishop (abishop) for puppet-cinder core and puppet-glance core
In-Reply-To:
References:
Message-ID:

+1

On Wed, Mar 31, 2021 at 9:51 AM Alex Schultz wrote:

> +1
>
> On Wed, Mar 31, 2021 at 3:30 AM Takashi Kajinami wrote:
>
>> Hello,
>>
>> I'd like to propose Alan Bishop (abishop) for the core team of
>> puppet-cinder and puppet-glance.
>> Alan has been actively involved in these 2 modules for a few years
>> and has implemented some nice features like multiple backend support
>> in glance, the cinder S3 backup driver, etc., which expanded adoption
>> of puppet-openstack.
>> He has also provided good reviews on patches for these 2 repos based
>> on his understanding of our code, puppet and serverspec.
>>
>> He is an active contributor to cinder and has deep knowledge about it.
>> In addition, he is also a core reviewer in TripleO, which consumes our
>> puppet modules, and he mainly covers storage components like cinder and
>> glance, so he is familiar with the way these two components are
>> deployed and configured.
>>
>> I believe adding him to our board helps us improve our review of these
>> two modules.
>>
>> I'll wait for one week to hear any feedback from other core reviewers.
>>
>> Thank you,
>> Takashi

From tonyliu0592 at hotmail.com Thu Apr 1 02:18:42 2021
From: tonyliu0592 at hotmail.com (Tony Liu)
Date: Thu, 1 Apr 2021 02:18:42 +0000
Subject: launch VM on volume vs. image
In-Reply-To:
References:
Message-ID:

Thank you Eddie! It makes sense. Creating a snapshot is much faster
than copying the image to a volume.

Tony
________________________________________
From: Eddie Yen
Sent: March 31, 2021 05:59 PM
To: Tony Liu
Cc: openstack-discuss at lists.openstack.org
Subject: Re: launch VM on volume vs. image
Hi Tony,

In the Ceph layer, in my experience, launching a VM on an image creates a
snapshot from the source image in the Nova ephemeral pool. If you check the
RBD images created in the Nova ephemeral pool, they all have their parents
in the Glance images.

For launching a VM on a volume, it will "copy" the image to the volume pool
first, resize it to the specified disk size, then connect and boot. Because
it does not create a snapshot from the image, it takes much longer.

Eddie.

Tony Liu wrote on Thu, Apr 1, 2021 at 8:09 AM:

Hi,

With Ceph as the backend storage, launching a VM on volume takes much
longer than launching on image. Why is that?
Could anyone elaborate the high level workflow for those two cases?

Thanks!
Tony

From iwienand at redhat.com Thu Apr 1 03:17:11 2021
From: iwienand at redhat.com (Ian Wienand)
Date: Thu, 1 Apr 2021 14:17:11 +1100
Subject: Retiring the planet.openstack.org service
Message-ID:

Hello,

We plan to retire the planet.openstack.org RSS aggregation service soon.
The host is running an unsupported distribution, and we have not found any
open source alternatives that seem to be currently maintained and
deployable.

On consideration of our limited infra resources, we feel it will be better
to sunset this service at this time. It is likely that effort will be
better spent getting information to more relevant channels in 2021, such
as social media sites, etc.

I have extracted an OPML file of the active blogs in [1]; most every feed
reader can import this file.

Thanks,

-i

[1] https://review.opendev.org/c/opendev/system-config/+/784191

From missile0407 at gmail.com Thu Apr 1 05:47:06 2021
From: missile0407 at gmail.com (Eddie Yen)
Date: Thu, 1 Apr 2021 13:47:06 +0800
Subject: launch VM on volume vs. image
In-Reply-To:
References:
Message-ID:

BTW, if the source image is a compressed or thin-provisioned type (like
VDI, QCOW2, VMDK, etc.), it will take a long time to create no matter
whether you boot on image or on volume, because Ceph RBD doesn't support
those formats: Nova will convert the image first during creation. Make
sure all the images you upload are in RAW format, unless the virtual size
of the image is small.

Tony Liu wrote on Thu, Apr 1, 2021 at 10:18 AM:

> Thank you Eddie! It makes sense. Creating a snapshot is much faster
> than copying the image to a volume.
>
> Tony
>
> [snip: Eddie's earlier reply, quoted in full above]
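A rough sketch of the two paths described in this thread, at the Ceph
level. This is a hedged illustration only: the pool names "images", "vms"
and "volumes" are common defaults, and the UUID placeholders are
hypothetical; adjust for your deployment.

    # Boot on image: Nova clones the protected "snap" snapshot that Glance
    # keeps on its RBD image -- a copy-on-write operation that is
    # near-instant regardless of image size.
    rbd snap ls images/IMAGE_UUID
    rbd clone images/IMAGE_UUID@snap vms/SERVER_UUID_disk

    # Boot on volume (the slow case described in this thread): the image
    # data is fully copied into the volume pool, then resized to the
    # requested disk size.
    rbd cp images/IMAGE_UUID@snap volumes/volume-VOLUME_UUID
    rbd resize --size 20G volumes/volume-VOLUME_UUID

    # The up-front conversion Eddie recommends: upload RAW images so Ceph
    # can clone them instead of having Nova convert the format at creation
    # time.
    qemu-img info server.qcow2
    qemu-img convert -f qcow2 -O raw server.qcow2 server.raw
    openstack image create --disk-format raw --container-format bare \
        --file server.raw my-raw-image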
From marios at redhat.com Thu Apr 1 05:58:45 2021
From: marios at redhat.com (Marios Andreou)
Date: Thu, 1 Apr 2021 08:58:45 +0300
Subject: [all] Gate resources and performance
In-Reply-To:
References: <53f77238-d77e-4b57-57bc-139065b23595@nemebean.com>
Message-ID:

On Wed, Mar 31, 2021 at 8:04 PM Wesley Hayutin wrote:

> On Wed, Feb 10, 2021 at 1:05 PM Dan Smith wrote:
>
>> > Here's the timing I see locally:
>> > Vanilla devstack: 775
>> > Client service alone: 529
>> > Parallel execution: 527
>> > Parallel client service: 465
>> >
>> > Most of the difference between the last two is shorter async_wait
>> > times because the deployment steps are taking less time. So not quite
>> > as much as before, but still a decent increase in speed.
>>
>> Yeah, cool, I think you're right that we'll just serialize the
>> calls. It may not be worth the complexity, but if we make the OaaS
>> server able to do a few things in parallel, then we'll re-gain a little
>> more perf because we'll go back to overlapping the *server* side of
>> things. Creating flavors, volume types, networks and uploading the image
>> to glance are all things that should be doable in parallel in the server
>> projects.
>>
>> 465s for a devstack is awesome. Think of all the developer time in
>> $local_fiat_currency we could have saved if we did this four years
>> ago... :)
>>
>> --Dan
>
> Hey folks,
> Just wanted to check back in on the resource consumption topic.
> Looking at my measurements the TripleO group has made quite a bit of
> progress keeping our enqueued zuul time lower than our historical
> average. Do you think we can measure where things stand now and have
> some new numbers available at the PTG?
>
> /me notes we had a blip on 3/25 but there was a one off issue w/ nodepool
> in our gate.
>
> Marios Andreou has put a lot of time into this, and others as well.
> Kudos, Marios!
> Thanks all!

o/ thanks for the shout out ;)

Big thanks to Sagi (sshnaidm), Chandan (chkumar), Wes (weshay), Alex
(mwhahaha) and everyone else who helped us merge those things
https://review.opendev.org/q/topic:tripleo-ci-reduce - things like
tightening files/irrelevant_files matches, removal of older/non-voting
jobs, removal of upgrade master jobs, and removal of layout overrides
across tripleo repos (using the centralised tripleo-ci repo templates
everywhere instead) to make maintenance easier, so it is more likely that
we will notice and fix new issues moving forward.

regards, marios

From skaplons at redhat.com Thu Apr 1 06:52:08 2021
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Thu, 1 Apr 2021 08:52:08 +0200
Subject: [neutron] Drivers team meeting 02.04.2021 cancelled
Message-ID: <20210401065208.6l7c3g4fnweqsy4m@p1.localdomain>

Hi,

As tomorrow, Good Friday, is a public holiday in many countries, at least
in Europe, and I will also be on PTO, let's cancel the drivers meeting.

Have a great holiday and see you at the meeting next week.

--
Slawek Kaplonski
Principal Software Engineer
Red Hat
From balazs.gibizer at est.tech Thu Apr 1 07:26:54 2021
From: balazs.gibizer at est.tech (Balazs Gibizer)
Date: Thu, 01 Apr 2021 09:26:54 +0200
Subject: [cinder][nova][requirements] RFE requested for os-brick
In-Reply-To:
References: <5cb4665e-2ef2-8a2d-5426-0a420125d821@gmail.com> <78MQQQ.FMXMGIRLYEMQ@est.tech>
Message-ID:

On Wed, Mar 31, 2021 at 16:54, Herve Beraud wrote:
> Hello Balazs,
>
> Now that the os-brick changes on nova are merged do you plan to
> propose a RC2?
>
> https://review.opendev.org/c/openstack/nova/+/783674

Hi Herve,

Yes we will propose an RC2 from nova to release the os-brick fix. I've
now created the release patch[1] but we agreed with Sylvain that we are
not rushing to actually make a release this week so that if anything
else pops up then we can include those as well into the RC2.

Cheers,
gibi

[1] https://review.opendev.org/c/openstack/releases/+/784201

> On Mon, Mar 29, 2021 at 17:43, Balazs Gibizer wrote:
>>
>> On Mon, Mar 29, 2021 at 16:05, Balazs Gibizer wrote:
>> >
>> > On Mon, Mar 29, 2021 at 08:50, Brian Rosmaita wrote:
>> >> Hello Requirements Team,
>> >>
>> >> The Cinder team recently became aware of a potential data-loss bug
>> >> [0] that has been fixed in os-brick master [1] and backported to
>> >> os-brick stable/wallaby [2]. We've proposed a release of os-brick
>> >> 4.4.0 from stable/wallaby [3] and are petitioning for an RFE to
>> >> include 4.4.0 in the wallaby release.
>> >>
>> >> We have three jobs running tempest with os-brick source in master
>> >> that have passed with [1]: os-brick-src-devstack-plugin-ceph [4],
>> >> os-brick-src-tempest-lvm-lio-barbican [5], and
>> >> os-brick-src-tempest-nfs [6]. The difference between os-brick
>> >> master (at the time the tests were run) and stable/wallaby since
>> >> the 4.3.0 tag is as follows:
>> >>
>> >> master:
>> >> d4205bd 3 days ago iSCSI: Fix flushing after multipath cfg change (Gorka Eguileor)
>> >> 0e63fe8 2 weeks ago Merge "RBD: catch read exceptions prior to modifying offset" (Zuul)
>> >> 28545c7 4 months ago RBD: catch read exceptions prior to modifying offset (Jon Bernard)
>> >> 99b2c60 2 weeks ago Merge "Dropping explicit unicode literal" (Zuul)
>> >> 7cfdb76 6 weeks ago Dropping explicit unicode literal (tushargite96)
>> >> 9afa1a0 3 weeks ago Add Python3 xena unit tests (OpenStack Release Bot)
>> >> ab57392 3 weeks ago Update master for stable/wallaby (OpenStack Release Bot)
>> >> 91a1cca 3 weeks ago (tag: 4.3.0) Merge "NVMeOF connector driver connection information compatibility fix" (Zuul)
>> >>
>> >> stable/wallaby:
>> >> f86944b 3 days ago Add release note prelude for os-brick 4.4.0 (Brian Rosmaita)
>> >> c70d70b 3 days ago iSCSI: Fix flushing after multipath cfg change (Gorka Eguileor)
>> >> 6649b8d 3 weeks ago Update TOX_CONSTRAINTS_FILE for stable/wallaby (OpenStack Release Bot)
>> >> f3f93dc 3 weeks ago Update .gitreview for stable/wallaby (OpenStack Release Bot)
>> >> 91a1cca 3 weeks ago (tag: 4.3.0) Merge "NVMeOF connector driver connection information compatibility fix" (Zuul)
>> >>
>> >> This gives us very high confidence that the results of the tests run
>> >> against master also apply to stable/wallaby at f86944b.
>> >> >> >> (I've included Nova here because the bug occurs when the >> >> configuration option that enables multipath connections on a >> >> compute is changed while volumes are attached, so if this RFE is >> >> approved, nova might want to raise the minimum version of >> os-brick >> >> in wallaby to 4.4.0.) >> >> >> > >> > Thanks for the heads up. After the new os-brick version is >> released I >> > will prepare a version bump patch in nova on master and >> > stable/wallaby. This also means that nova will release an RC2. >> >> I've proposed the nova patch on master to bump min os-brick to >> 4.3.1 in >> nova[1] >> >> [1] https://review.opendev.org/c/openstack/nova/+/783674 >> >> > >> > Cheers, >> > gibi >> > >> >> >> >> [0] https://launchpad.net/bugs/1921381 >> >> [1] https://review.opendev.org/c/openstack/os-brick/+/782992 >> >> [2] https://review.opendev.org/c/openstack/os-brick/+/783207 >> >> [3] https://review.opendev.org/c/openstack/releases/+/783641 >> >> [4] >> >> >> https://zuul.opendev.org/t/openstack/build/30a103668e4c4a8cb6f1ef907ef3edcb >> >> [5] >> >> >> https://zuul.opendev.org/t/openstack/build/bb11eef737d34c41bb4a52f8433850b0 >> >> [6] >> >> >> https://zuul.opendev.org/t/openstack/build/3ad3359ca712432d9ef4261d72c787fa >> > >> > >> > >> > >> >> >> > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > From hberaud at redhat.com Thu Apr 1 07:30:22 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 1 Apr 2021 09:30:22 +0200 Subject: [cinder][nova][requirements] RFE requested for os-brick In-Reply-To: References: <5cb4665e-2ef2-8a2d-5426-0a420125d821@gmail.com> <78MQQQ.FMXMGIRLYEMQ@est.tech> Message-ID: Make sense thank you Le jeu. 1 avr. 2021 à 09:27, Balazs Gibizer a écrit : > > > On Wed, Mar 31, 2021 at 16:54, Herve Beraud wrote: > > Hello Balazs, > > > > Now that the os-brick changes on nova are merged do you plan to > > propose a RC2? > > > > https://review.opendev.org/c/openstack/nova/+/783674 > > Hi Herve, > > Yes we will propose an RC2 from nova to release the os-brick fix. I've > now created the release patch[1] but we agreed with Sylvain that we are > not rushing to actually make a release this week so that if anything > else pops up then we can include those as well into the RC2. > > Cheers, > gibi > > [1] https://review.opendev.org/c/openstack/releases/+/784201 > > > > > Le lun. 
>
> [snip: the rest of the quoted thread, reproduced in full in the
> messages above]
--
Hervé Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/
https://twitter.com/4383hberaud

From chkumar at redhat.com Thu Apr 1 08:23:46 2021
From: chkumar at redhat.com (Chandan Kumar)
Date: Thu, 1 Apr 2021 13:53:46 +0530
Subject: [tripleo][ci] jobs in retry_limit or skipped
In-Reply-To:
References:
Message-ID:

On Thu, Apr 1, 2021 at 7:02 AM Wesley Hayutin wrote:

> Greetings,
>
> Just FYI.. I believe we hit a bump in the road in upstream infra (not
> sure yet). It appears to be global and not isolated to tripleo or centos
> based jobs.
>
> I have a tripleo bug to track it.
> https://bugs.launchpad.net/tripleo/+bug/1922148
>
> See #opendev for details; it looks like infra is very busy working on and
> fixing the issues atm.
> http://eavesdrop.openstack.org/irclogs/%23opendev/%23opendev.2021-03-31.log.html#t2021-03-31T10:34:51
> http://eavesdrop.openstack.org/irclogs/%23opendev/latest.log.html

Zuul got restarted, and jobs have started working fine now. If there is no
job running against your patches, please recheck them slowly so as not to
flood the gates.

Thanks,

Chandan Kumar

From oliver.wenz at dhbw-mannheim.de Thu Apr 1 09:38:48 2021
From: oliver.wenz at dhbw-mannheim.de (Oliver Wenz)
Date: Thu, 1 Apr 2021 11:38:48 +0200
Subject: [glance][openstack-ansible] Snapshots disappear during saving
In-Reply-To:
References:
Message-ID: <5b1439a3-06f5-633c-cce3-08ae62a8ddc3@dhbw-mannheim.de>

> So according to the issue, you get 503 while trying to reach
> 10.0.3.212:6002/os-objects, which is swift_account_port.
> Are there any logs specifically for swift-account?
>
> Also I guess some adjustments are required for swift as well for this
> mechanism to work.
>
> Eventually I believe the original issue you saw might be related to this
> doc:
> https://docs.openstack.org/keystone/latest/admin/manage-services.html#configuring-service-tokens

Hi Dmitriy,

I tried setting 'service_token_roles_required = false' in glance-api.conf
but the error is still there. I also checked the account server and I'm
seeing lots of 404s:

Apr 01 09:31:33 bc1bl12 account-server[694822]: 10.0.3.212 - - [01/Apr/2021:09:31:33 +0000] "GET /os-objects/22/.misplaced_objects" 404 - "GET http://localhost/v1/.misplaced_objects?format=json&marker=&end_marker=&prefix=" "tx3794c20a6ef5476f9fcf6-00606592f5" "proxy-server 13882" 0.0007 "-" 694822 -
Apr 01 09:32:03 bc1bl12 account-server[694816]: 10.0.3.212 - - [01/Apr/2021:09:32:03 +0000] "HEAD /os-objects/22/.misplaced_objects" 404 - "HEAD http://localhost/v1/.misplaced_objects?format=json" "tx5491e866bc6447d8917f9-0060659313" "proxy-server 13882" 0.0008 "-" 694816 -
Apr 01 09:32:03 bc1bl12 account-server[694814]: 10.0.3.212 - - [01/Apr/2021:09:32:03 +0000] "GET /os-objects/22/.misplaced_objects" 404 - "GET http://localhost/v1/.misplaced_objects?format=json&marker=&end_marker=&prefix=" "txc98a949e82c940d19656b-0060659313" "proxy-server 13882" 0.0007 "-" 694814 -
Apr 01 09:32:33 bc1bl12 account-server[694817]: 10.0.3.212 - - [01/Apr/2021:09:32:33 +0000] "HEAD /os-objects/22/.misplaced_objects" 404 - "HEAD http://localhost/v1/.misplaced_objects?format=json" "txd9ec46414b8e4a98a2b94-0060659331" "proxy-server 13882" 0.0008 "-" 694817 -
Apr 01 09:32:33 bc1bl12 account-server[694817]: 10.0.3.212 - - [01/Apr/2021:09:32:33 +0000] "GET /os-objects/22/.misplaced_objects" 404 - "GET http://localhost/v1/.misplaced_objects?format=json&marker=&end_marker=&prefix=" "tx972d968e71e94eb683558-0060659331" "proxy-server 13882" 0.0011 "-" 694817 -
Apr 01 09:32:43 bc1bl12 account-server[694823]: 10.0.3.212 - - [01/Apr/2021:09:32:43 +0000] "HEAD /os-objects/220/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx4d02362089c4497982940-006065933b" "proxy-server 14171" 0.0008 "-" 694823 -
Apr 01 09:33:03 bc1bl12 account-server[694822]: 10.0.3.212 - - [01/Apr/2021:09:33:03 +0000] "HEAD /os-objects/22/.misplaced_objects" 404 - "HEAD http://localhost/v1/.misplaced_objects?format=json" "tx440963ca3e7948ff872ab-006065934f" "proxy-server 13882" 0.0007 "-" 694822 -
Apr 01 09:33:03 bc1bl12 account-server[694823]: 10.0.3.212 - - [01/Apr/2021:09:33:03 +0000] "GET /os-objects/22/.misplaced_objects" 404 - "GET http://localhost/v1/.misplaced_objects?format=json&marker=&end_marker=&prefix=" "tx547157ac468e476395a0e-006065934f" "proxy-server 13882" 0.0006 "-" 694823 -
"tx547157ac468e476395a0e-006065934f" "proxy-server 13882" 0.0006 "-" 694823 - Apr 01 09:33:33 bc1bl12 account-server[694814]: 10.0.3.212 - - [01/Apr/2021:09:33:33 +0000] "HEAD /os-objects/22/.misplaced_objects" 404 - "HEAD http://localhost/v1/.misplaced_objects?format=json" "txac5c0207e90a4606a7758-006065936d" "proxy-server 13882" 0.0007 "-" 694814 - Apr 01 09:33:33 bc1bl12 account-server[694820]: 10.0.3.212 - - [01/Apr/2021:09:33:33 +0000] "GET /os-objects/22/.misplaced_objects" 404 - "GET http://localhost/v1/.misplaced_objects?format=json&marker=&end_marker=&prefix=" "tx0d3afe2f421647f2a117e-006065936d" "proxy-server 13882" 0.0006 "-" 694820 - Kind regards, Oliver From vuk.gojnic at gmail.com Thu Apr 1 10:11:05 2021 From: vuk.gojnic at gmail.com (Vuk Gojnic) Date: Thu, 1 Apr 2021 12:11:05 +0200 Subject: [ironic] IPA image does not want to boot with UEFI Message-ID: Hello everybody, I am using Ironic standalone to provision the HPE Gen10+ node via iLO driver. Ironic version is 16.0.1. Server is configured with UEFI boot mode. Everything on Ironic side works fine. It creates ISO image, powers the server on and configures it to boot from it. Here is the what /var/log/ironic/ironic-conductor.log says: 2021-03-31 17:46:25.541 2618460 INFO ironic.conductor.task_manager [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 moved to provision state "cleaning" from state "manageable"; target provision state is "available" 2021-03-31 17:46:32.066 2618460 INFO ironic.drivers.modules.ilo.power [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] The node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 operation of 'power off' is completed in 4 seconds. 2021-03-31 17:46:32.088 2618460 INFO ironic.conductor.utils [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Successfully set node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 power state to power off by power off. 2021-03-31 17:46:34.510 2618460 INFO ironic.drivers.modules.ilo.common [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 pending boot mode is uefi. 2021-03-31 17:46:37.248 2618460 INFO ironic.drivers.modules.ilo.common [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Set the node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 to boot from URL https://10.23.137.234/tmp-images/ilo/boot-ed25569f-c107-4fe0-95cd-74fcad9ab3f0.iso?filename=tmpqze8ogiw.iso successfully. 2021-03-31 17:46:48.367 2618460 INFO ironic.drivers.modules.ilo.power [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] The node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 operation of 'power on' is completed in 8 seconds. 2021-03-31 17:46:48.388 2618460 INFO ironic.conductor.utils [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Successfully set node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 power state to power on by rebooting. 2021-03-31 17:46:48.404 2618460 INFO ironic.conductor.task_manager [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 moved to provision state "clean wait" from state "cleaning"; target provision state is "available" The Grub2 starts and after I select the option “boot_partition", it starts booting and immediately freezes showing just black screen with static red underscore character. I have tried with pre-built IPA images (see below) as well as with custom IPA images made with Ubuntu 18.04 and 20.04 (built using ironic-python-agent-builder) but it is all the same. Does somebody have idea what is the problem with IPA and UEFI in this particular scenario? 
Output of the "openstack baremetal node show" command:

allocation_uuid: null
automated_clean: null
bios_interface: no-bios
boot_interface: ilo-uefi-https
chassis_uuid: null
clean_step: {}
conductor: 10.23.137.234
conductor_group: ''
console_enabled: false
console_interface: no-console
created_at: '2021-03-21T13:54:25+00:00'
deploy_interface: direct
deploy_step: {}
description: null
driver: ilo5
driver_info:
  ilo_address: 10.23.137.137
  ilo_bootloader: https://ironic-images/Images/esp.img
  ilo_deploy_kernel: https://ironic-images/Images/ipa-centos8-stable-victoria.kernel
  ilo_deploy_ramdisk: https://ironic-images/Images/ipa-centos8-stable-victoria.initramfs
  ilo_password: '******'
  ilo_username: Administrator
  snmp_auth_priv_password: '******'
  snmp_auth_prot_password: '******'
  snmp_auth_user: iloinspect
driver_internal_info:
  agent_continue_if_ata_erase_failed: false
  agent_enable_ata_secure_erase: true
  agent_erase_devices_iterations: 1
  agent_erase_devices_zeroize: true
  agent_erase_skip_read_only: false
  agent_secret_token: '******'
  agent_secret_token_pregenerated: true
  clean_steps: null
  disk_erasure_concurrency: 1
  last_power_state_change: '2021-03-31T17:46:37.894667'
extra: {}
fault: clean failure
inspect_interface: ilo
inspection_finished_at: '2021-03-21T13:57:33+00:00'
inspection_started_at: null
instance_info:
  deploy_boot_mode: uefi
instance_uuid: null
last_error: null
lessee: null
maintenance: true
maintenance_reason:
management_interface: ilo5
name: null
network_data: {}
network_interface: noop
owner: null
power_interface: ilo
power_state: power on
properties:
  cpu_arch: x86
  cpus: 64
  local_gb: 2979
  memory_mb: 262144
protected: false
protected_reason: null
provision_state: clean wait
provision_updated_at: '2021-03-31T17:46:48+00:00'
raid_config: {}
raid_interface: no-raid
rescue_interface: no-rescue
reservation: null
resource_class: null
retired: false
retired_reason: null
storage_interface: noop
target_power_state: null
target_provision_state: available
target_raid_config: {}
traits: []
updated_at: '2021-03-31T17:46:48+00:00'
uuid: ed25569f-c107-4fe0-95cd-74fcad9ab3f0
vendor_interface: no-vendor

Many thanks!

Vuk Gojnic

Deutsche Telekom Technik GmbH
Services & Plattforms (T-SP)
Tribe Data Center Infrastructure (T-DCI)
Super Squad Cloud Platforms Lifecycle (SSQ-CP)

Vuk Gojnic
Kubernetes Engine Squad Lead

From hberaud at redhat.com Thu Apr 1 11:24:10 2021
From: hberaud at redhat.com (Herve Beraud)
Date: Thu, 1 Apr 2021 13:24:10 +0200
Subject: [oslo][release] devstack-plugin-(kafka|amqp1) retirement
Message-ID:

Hello Osloers,

Our devstack plugins (kafka and amqp1) haven't shown much activity since
Ussuri; does it still make sense to maintain them?

The latest available SHAs for both projects come from Victoria (merged in
that period).

Can we retire them, or simply retire them from the coordinated releases?

We (the release team) would appreciate some feedback on this point.

Let's open the debate.
--
Hervé Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/
https://twitter.com/4383hberaud

From destienne.maxime at gmail.com Thu Apr 1 12:44:21 2021
From: destienne.maxime at gmail.com (Maxime d'Estienne)
Date: Thu, 1 Apr 2021 14:44:21 +0200
Subject: [neutron][nova] Port binding fails when creating an instance
Message-ID:

Hello,

I spent a lot of time troubleshooting my issue, which I described here:
https://serverfault.com/questions/1058969/cannot-create-an-instance-due-to-failed-binding

To summarize, when I want to create an instance, binding fails on the
compute node; the DHCP agent seems to give an IP to the VM, but I get an
error. I don't know where to dig, beyond what I have already done.

Thanks a lot for your help!

From C-Albert.Braden at charter.com Thu Apr 1 13:34:03 2021
From: C-Albert.Braden at charter.com (Braden, Albert)
Date: Thu, 1 Apr 2021 13:34:03 +0000
Subject: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller
Message-ID: <91f14b7ab30747fcb4e32c40c4559bd2@ncwmexgp009.CORP.CHARTERCOM.com>

I did some experimenting and it looks like stopping RMQ during the removal
of the first controller is what causes the problem. After deploying the
first controller, stopping the RMQ container on any controller, including
the new centos8 controller, will cause the entire cluster to stop.

This crash dump appears on the controllers that stopped in sympathy:
https://paste.ubuntu.com/p/ZDgFgKtQTB/

This appears in the RMQ log:
https://paste.ubuntu.com/p/5D2Qjv3H8c/

-----Original Message-----
From: Braden, Albert
Sent: Wednesday, March 31, 2021 8:31 AM
To: openstack-discuss at lists.openstack.org
Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller

Centos7:
{rabbit,"RabbitMQ","3.7.24"},
"Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"},

Centos8:
{rabbit,"RabbitMQ","3.7.28"},
"Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"},

When I deploy the first Centos8 controller, RMQ comes up with all 3 nodes
active and seems to be working fine until I shut down the 2nd controller.
The only hint of trouble when I replace the 1st node is this error message
the first time I run the deployment:
https://paste.ubuntu.com/p/h9HWdfwmrK/

and the crash dump that appears on control2:

crash dump log: https://paste.ubuntu.com/p/MpZ8SwTJ2T/
First 1500 lines of the dump: https://paste.ubuntu.com/p/xkCyp2B8j8/

If I wait for a few minutes then RMQ recovers on control2 and the 2nd run
of the deployment seems to work, and there is no trouble until I shut down
control1.

-----Original Message-----
From: Mark Goddard
Sent: Wednesday, March 31, 2021 4:14 AM
To: Braden, Albert
Cc: openstack-discuss at lists.openstack.org
Subject: [EXTERNAL] Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller

On Tue, 30 Mar 2021 at 13:41, Braden, Albert wrote:
>
> I've created a heat stack and installed Openstack Train to test the
> Centos7->8 upgrade following the document here:
>
> https://docs.openstack.org/kolla-ansible/train/user/centos8.html#migrating-from-centos-7-to-centos-8
>
> I used the instructions here to successfully remove and replace control0
> with a Centos8 box:
>
> https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#removing-existing-controllers
>
> After this my RMQ admin page shows all 3 nodes up, including the new
> control0. The name of the cluster is
> rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com
>
> (rabbitmq)[root at chrnc-void-testupgrade-control-2 /]# rabbitmqctl cluster_status
> Cluster status of node rabbit at chrnc-void-testupgrade-control-2 ...
> [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0',
>                 'rabbit at chrnc-void-testupgrade-control-0-replace',
>                 'rabbit at chrnc-void-testupgrade-control-1',
>                 'rabbit at chrnc-void-testupgrade-control-2']}]},
>  {running_nodes,['rabbit at chrnc-void-testupgrade-control-0-replace',
>                  'rabbit at chrnc-void-testupgrade-control-1',
>                  'rabbit at chrnc-void-testupgrade-control-2']},
>  {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>},
>  {partitions,[]},
>  {alarms,[{'rabbit at chrnc-void-testupgrade-control-0-replace',[]},
>           {'rabbit at chrnc-void-testupgrade-control-1',[]},
>           {'rabbit at chrnc-void-testupgrade-control-2',[]}]}]
>
> After that I create a new VM to verify that the cluster is still working,
> and then perform the same procedure on control1. When I shut down
> services on control1, the ansible playbook finishes successfully:
>
> kolla-ansible -i ../multinode stop --yes-i-really-really-mean-it --limit control1
> …
> control1 : ok=45 changed=22 unreachable=0 failed=0 skipped=105 rescued=0 ignored=0
>
> After this my RMQ admin page stops responding. When I check RMQ on the
> new control0 and the existing control2, the container is still up but RMQ
> is not running:
>
> (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status
> Error: this command requires the 'rabbit' app to be running on the target node. Start it with 'rabbitmqctl start_app'.
>
> If I start it on control0 and control2, then the cluster seems normal and
> the admin page starts working again, and cluster status looks normal:
>
> (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status
> Cluster status of node rabbit at chrnc-void-testupgrade-control-0-replace ...
> [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0',
>                 'rabbit at chrnc-void-testupgrade-control-0-replace',
>                 'rabbit at chrnc-void-testupgrade-control-1',
>                 'rabbit at chrnc-void-testupgrade-control-2']}]},
>  {running_nodes,['rabbit at chrnc-void-testupgrade-control-2',
>                  'rabbit at chrnc-void-testupgrade-control-0-replace']},
>  {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>},
>  {partitions,[]},
>  {alarms,[{'rabbit at chrnc-void-testupgrade-control-2',[]},
>           {'rabbit at chrnc-void-testupgrade-control-0-replace',[]}]}]
>
> But my hypervisors are down:
>
> (openstack) [root at chrnc-void-testupgrade-build kolla-ansible]# ohll
> +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+
> | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | vCPUs Used | vCPUs | Memory MB Used | Memory MB |
> +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+
> | 3 | chrnc-void-testupgrade-compute-2.dev.chtrse.com | QEMU | 172.16.2.106 | down | 5 | 8 | 2560 | 30719 |
> | 6 | chrnc-void-testupgrade-compute-0.dev.chtrse.com | QEMU | 172.16.2.31 | down | 5 | 8 | 2560 | 30719 |
> | 9 | chrnc-void-testupgrade-compute-1.dev.chtrse.com | QEMU | 172.16.0.30 | down | 5 | 8 | 2560 | 30719 |
> +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+
>
> When I look at the nova-compute.log on a compute node, I see RMQ failures
> every 10 seconds:
>
> 172.16.2.31 compute0
> 2021-03-30 03:07:54.893 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. Trying again in 1 seconds.: timeout: timed out
> 2021-03-30 03:07:55.905 7 INFO oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] Reconnected to AMQP server on 172.16.1.132:5672 via [amqp] client with port 56422.
> 2021-03-30 03:08:05.915 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out.
> Trying again in 1 seconds.: timeout: timed out
>
> In the RMQ logs I see this every 10 seconds:
>
> 172.16.1.132 control2
> [root at chrnc-void-testupgrade-control-2 ~]# tail -f /var/log/kolla/rabbitmq/rabbit\@chrnc-void-testupgrade-control-2.log |grep 172.16.2.31
> 2021-03-30 03:07:54.895 [warning] <0.13247.35> closing AMQP connection <0.13247.35> (172.16.2.31:56420 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'):
> client unexpectedly closed TCP connection
> 2021-03-30 03:07:55.901 [info] <0.15288.35> accepting AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672)
> 2021-03-30 03:07:55.903 [info] <0.15288.35> Connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) has a client-provided name: nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e
> 2021-03-30 03:07:55.904 [info] <0.15288.35> connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e): user 'openstack' authenticated and granted access to vhost '/'
> 2021-03-30 03:08:05.916 [warning] <0.15288.35> closing AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'):
>
> Why does RMQ fail when I shut down the 2nd controller after successfully
> replacing the first one?

Hi Albert,

Could you share the versions of RabbitMQ and erlang in both versions of
the container? When initially testing this setup, I think we had 3.7.24 on
both sides. Perhaps the CentOS 8 version has moved on sufficiently to
become incompatible?

Mark

From stephenfin at redhat.com Thu Apr 1 13:49:23 2021
From: stephenfin at redhat.com (Stephen Finucane)
Date: Thu, 01 Apr 2021 14:49:23 +0100
Subject: How to customize the xml used in libvirt from GUI/openstack command line?
In-Reply-To:
References:
Message-ID: <8318224c1ad723fc092e694a8959acb715ecb0cc.camel@redhat.com>

On Tue, 2021-03-30 at 18:30 +0800, Evan Zhao wrote:
> Hi there,
>
> I googled this question and found two major answers:
> 1. ssh to the compute node and use `virsh dumpxml` and `virsh
> undefine/define`, etc.
> 2. edit nova/virt/libvirt/config.py directly.
> However, it's tedious to ssh to each node and do the modification, and
> I prefer not to touch the nova source code. Is there any better way
> to achieve this?
>
> I expect to edit the namespace of a certain element and append an
> additional element to the XML file.
>
> Any information will be appreciated.

We purposefully don't allow this as it's not feasible to support. If
you're using a vendor to provide your OpenStack packages, there's a good
chance they won't allow you to do this either. Any modifications to the
libvirt XML won't be persisted when you use any operation that results in
the XML being rebuilt (cold migration, shelving, hard reboot, ...), while
any changes to 'config.py' mean you have a fork on your hands that you're
going to have to maintain for the life of the deployment. Neither are
great situations to be in.

You haven't described the _specific_ problem you're trying to resolve
here. There's a possibility that nova may already have a feature to solve
this problem. If not, there's a chance that your problem is a problem that
other users are facing and therefore could warrant a new feature. If you
raise your specific feature request here or on IRC (#openstack-nova),
we'll be more than happy to provide guidance.

Cheers,
Stephen

From kgiusti at gmail.com Thu Apr 1 13:55:42 2021
From: kgiusti at gmail.com (Ken Giusti)
Date: Thu, 1 Apr 2021 09:55:42 -0400
Subject: [oslo][release] devstack-plugin-(kafka|amqp1) retirement
In-Reply-To:
References:
Message-ID:

Hi Herve,

On Thu, Apr 1, 2021 at 7:25 AM Herve Beraud wrote:

> Hello Osloers,
>
> Our devstack plugins (kafka and amqp1) haven't shown much activity since
> Ussuri; does it still make sense to maintain them?
>
> The latest available SHAs for both projects come from Victoria (merged in
> that period).
>
> Can we retire them, or simply retire them from the coordinated releases?
>
> We (the release team) would appreciate some feedback on this point.
>
> Let's open the debate.

The only consumer of these plugins that I'm aware of is oslo.messaging
[0]. They are needed in order to run devstack-tempest testing against the
non-rabbitmq backends. Perhaps they should be integrated into the
oslo.messaging project itself, if possible?

[0] https://codesearch.opendev.org/?q=devstack-plugin-amqp1&i=nope&files=&excludeFiles=&repos=

--
Ken Giusti (kgiusti at gmail.com)
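For context on how these plugins are consumed: a devstack job opts into
one of them with a single local.conf line. A minimal sketch, assuming the
repositories remain at their current opendev.org locations:

    [[local|localrc]]
    # Swap the RPC backend from RabbitMQ to AMQP 1.0 via the plugin.
    enable_plugin devstack-plugin-amqp1 https://opendev.org/openstack/devstack-plugin-amqp1
    # Or enable the Kafka-based messaging backend instead:
    # enable_plugin devstack-plugin-kafka https://opendev.org/openstack/devstack-plugin-kafka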
From Arkady.Kanevsky at dell.com Thu Apr 1 14:15:21 2021
From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady)
Date: Thu, 1 Apr 2021 14:15:21 +0000
Subject: [Interop][Refstack] this Friday meeting
Message-ID:

Team,

This Friday is Good Friday and some people have the day off.
Should we cancel this week's meeting?
Please respond so we can see if we will have a quorum.

Thanks,
Arkady

Arkady Kanevsky, Ph.D.
SP Chief Technologist & DE
Dell Technologies office of CTO
Dell Inc. One Dell Way, MS PS2-91
Round Rock, TX 78682, USA
Phone: 512 7204955

From juliaashleykreger at gmail.com Thu Apr 1 14:19:48 2021
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Thu, 1 Apr 2021 07:19:48 -0700
Subject: [ironic] IPA image does not want to boot with UEFI
In-Reply-To:
References:
Message-ID:

Greetings,

Two questions:

1) Are the ESP image contents signed, or are they built using one of the
grub commands?
2) Is the machine set to enforce secure boot at this time?

On Thu, Apr 1, 2021 at 3:14 AM Vuk Gojnic wrote:
>
> Hello everybody,
>
> I am using Ironic standalone to provision an HPE Gen10+ node via the iLO
> driver. The Ironic version is 16.0.1. The server is configured with UEFI
> boot mode.
>
> Everything on the Ironic side works fine. It creates the ISO image,
> powers the server on and configures it to boot from it.
>
> Here is what /var/log/ironic/ironic-conductor.log says:
>
> [snip: conductor log, quoted in full in the original message above]
>
> Grub2 starts, and after I select the option "boot_partition" it starts
> booting and immediately freezes, showing just a black screen with a
> static red underscore character.
>
> I have tried with pre-built IPA images (see below) as well as with custom
> IPA images made with Ubuntu 18.04 and 20.04 (built using
> ironic-python-agent-builder), but it is all the same.
>
> Does somebody have an idea what the problem with IPA and UEFI is in this
> particular scenario?
>
> [snip: "openstack baremetal node show" output, quoted in full in the
> original message above]
>
> Many thanks!
> > > > Vuk Gojnic > > > > Deutsche Telekom Technik GmbH > > Services & Plattforms (T-SP) > > Tribe Data Center Infrastructure (T-DCI) > > Super Squad Cloud Platforms Lifecycle (SSQ-CP) > > > > Vuk Gojnic > > Kubernetes Engine Squad Lead From gmann at ghanshyammann.com Thu Apr 1 14:24:14 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 01 Apr 2021 09:24:14 -0500 Subject: [Interop][Refstack] this Friday meeting In-Reply-To: References: Message-ID: <1788dd204ea.d4e4ba611415310.2624563204106293527@ghanshyammann.com> ---- On Thu, 01 Apr 2021 09:15:21 -0500 Kanevsky, Arkady wrote ---- > > Team, > This Friday is Good Friday and some people have a day off. > Should we cancel this week meeting? > Please, respond so we can see if we will have quorum. Thanks Arkady, I will be off from work and would not be able to join. -gmann > Thanks, > Arkady > > Arkady Kanevsky, Ph.D. > SP Chief Technologist & DE > Dell Technologies office of CTO > Dell Inc. One Dell Way, MS PS2-91 > Round Rock, TX 78682, USA > Phone: 512 7204955 > > From openstack at nemebean.com Thu Apr 1 15:02:43 2021 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 1 Apr 2021 10:02:43 -0500 Subject: [oslo][release] devstack-plugin-(kafka|amqp1) retirement In-Reply-To: References: Message-ID: <479b09cf-08f9-144a-c062-eabed280836b@nemebean.com> On 4/1/21 6:24 AM, Herve Beraud wrote: > Hello Osloers, > > Our devstack plugins (kafka and amqp1) didn't show a great amount of > activity since ussuri, does it still make sense to maintain them? > > The latest available SHAs for the both projects comes from Victoria > (merged in this period). > > Can we retire them or simply retire them from the coordinated releases? These have never been released and are no longer branched. What is their involvement in the coordinated release at this point? > > We (the release team) would appreciate some feedback about this point. > > Let's open the debat. > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > From jimmy at openstack.org Thu Apr 1 15:04:03 2021 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 1 Apr 2021 10:04:03 -0500 Subject: [Interop][Refstack] this Friday meeting In-Reply-To: <1788dd204ea.d4e4ba611415310.2624563204106293527@ghanshyammann.com> References: <1788dd204ea.d4e4ba611415310.2624563204106293527@ghanshyammann.com> Message-ID: <0B782D91-D8D9-4DED-8606-635E18D6098F@openstack.org> I forgot this is a holiday. Same on my side. 
Thanks, Jimmy > On Apr 1, 2021, at 9:25 AM, Ghanshyam Mann wrote: > >  > ---- On Thu, 01 Apr 2021 09:15:21 -0500 Kanevsky, Arkady wrote ---- >> >> Team, >> This Friday is Good Friday and some people have a day off. >> Should we cancel this week meeting? >> Please, respond so we can see if we will have quorum. > > Thanks Arkady, > > I will be off from work and would not be able to join. > > -gmann > >> Thanks, >> Arkady >> >> Arkady Kanevsky, Ph.D. >> SP Chief Technologist & DE >> Dell Technologies office of CTO >> Dell Inc. One Dell Way, MS PS2-91 >> Round Rock, TX 78682, USA >> Phone: 512 7204955 >> >> > From fungi at yuggoth.org Thu Apr 1 15:04:53 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 1 Apr 2021 15:04:53 +0000 Subject: [tripleo][ci][infra] jobs in retry_limit or skipped In-Reply-To: References: Message-ID: <20210401150453.pkdhllydqltpgnhr@yuggoth.org> On 2021-04-01 13:53:46 +0530 (+0530), Chandan Kumar wrote: > On Thu, Apr 1, 2021 at 7:02 AM Wesley Hayutin wrote: > > > > Greetings, > > > > Just FYI.. I believe we hit a bump in the road in upstream infra ( not sure yet ). It appears to be global and not isolated to tripleo or centos based jobs. > > > > I have a tripleo bug to track it. > > https://bugs.launchpad.net/tripleo/+bug/1922148 > > > > See #opendev for details, it looks like infra is very busy working and fixing the issues atm. > > > > http://eavesdrop.openstack.org/irclogs/%23opendev/%23opendev.2021-03-31.log.html#t2021-03-31T10:34:51 > > http://eavesdrop.openstack.org/irclogs/%23opendev/latest.log.html > > > > Zuul got restarted, jobs have started working fine now. > if there is no job running against the patches, please recheck your > patches slowly as it might flood the gates. It's a complex situation with a few problems intermingled. First, the tripleo-ansible-centos-8-molecule-tripleo-modules job seemed to have some bug of its own causing frequent disconnects of the job node leading to retries. Also some recent change in Zuul seems to have introduced a semi-slow memory leak which, when we run into memory pressure on the scheduler, causes Zookeeper disconnects which trigger mass build retries. Further, because the source of the memory leak has been really tough to nail down, live debugging directly in the running process has been applied, and this slows the scheduler by orders of magnitude when engaged, triggering similar Zookeeper disconnects as well. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From hberaud at redhat.com Thu Apr 1 15:28:25 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 1 Apr 2021 17:28:25 +0200 Subject: [oslo][release][QA] devstack-plugin-(kafka|amqp1) retirement In-Reply-To: <479b09cf-08f9-144a-c062-eabed280836b@nemebean.com> References: <479b09cf-08f9-144a-c062-eabed280836b@nemebean.com> Message-ID: Le jeu. 1 avr. 2021 à 17:02, Ben Nemec a écrit : > > > On 4/1/21 6:24 AM, Herve Beraud wrote: > > Hello Osloers, > > > > Our devstack plugins (kafka and amqp1) didn't show a great amount of > > activity since ussuri, does it still make sense to maintain them? > > > > The latest available SHAs for the both projects comes from Victoria > > (merged in this period). > > > > Can we retire them or simply retire them from the coordinated releases? > > These have never been released and are no longer branched. What is their > involvement in the coordinated release at this point? 
> Yes, these deliverables are tagless, so not released at all; however, they are coordinated, so they get branched during each series. As Ben said, those deliverables were not branched during the previous series (Victoria); we suppose they were simply forgotten, so I proposed a patch to fix that point [1]. However, we don't have much to branch for the current series either [2]: no new commits have been merged during the last 7 months (the Victoria timeframe), so the question is whether we have reasons to keep them under the coordinated-releases umbrella if nothing new happens in this area. I proposed a patch for Wallaby too, but I'm not convinced that's the right solution. Maybe Ken is right and maybe it's time to merge these plugins into oslo.messaging; however, I don't know whether that's feasible from a devstack point of view. Adding the QA team to this thread to discuss that last point with them. Thanks for your replies.

[1] https://review.opendev.org/c/openstack/releases/+/784371
[2] https://review.opendev.org/c/openstack/releases/+/784376

> > > > We (the release team) would appreciate some feedback about this point. > > > > Let's open the debate. > > > > -- > > Hervé Beraud > > Senior Software Engineer at Red Hat > > irc: hberaud > > https://github.com/4383/ > > https://twitter.com/4383hberaud

-- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud

-------------- next part -------------- An HTML attachment was scrubbed... URL: From C-Albert.Braden at charter.com Thu Apr 1 15:34:19 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Thu, 1 Apr 2021 15:34:19 +0000 Subject: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller Message-ID: Sorry that was a typo.
Stopping RMQ during the removal of the *second* controller is what causes the problem. Is there a way to tell Centos 8 Train to use RMQ 3.7.24 instead of 3.7.28? -----Original Message----- From: Braden, Albert Sent: Thursday, April 1, 2021 9:34 AM To: 'openstack-discuss at lists.openstack.org' Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller I did some experimenting and it looks like stopping RMQ during the removal of the first controller is what causes the problem. After deploying the first controller, stopping the RMQ container on any controller including the new centos8 controller will cause the entire cluster to stop. This crash dump appears on the controllers that stopped in sympathy: https://paste.ubuntu.com/p/ZDgFgKtQTB/ This appears in the RMQ log: https://paste.ubuntu.com/p/5D2Qjv3H8c/ -----Original Message----- From: Braden, Albert Sent: Wednesday, March 31, 2021 8:31 AM To: openstack-discuss at lists.openstack.org Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller Centos7: {rabbit,"RabbitMQ","3.7.24"}, "Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"}, Centos8: {rabbit,"RabbitMQ","3.7.28"}, "Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"}, When I deploy the first Centos8 controller, RMQ comes up with all 3 nodes active and seems to be working fine until I shut down the 2nd controller. The only hint of trouble when I replace the 1st node is this error message the first time I run the deployment: https://paste.ubuntu.com/p/h9HWdfwmrK/ and the crash dump that appears on control2: crash dump log: https://paste.ubuntu.com/p/MpZ8SwTJ2T/ First 1500 lines of the dump: https://paste.ubuntu.com/p/xkCyp2B8j8/ If I wait for a few minutes then RMQ recovers on control2 and the 2nd run of the deployment seems to work, and there is no trouble until I shut down control1. -----Original Message----- From: Mark Goddard Sent: Wednesday, March 31, 2021 4:14 AM To: Braden, Albert Cc: openstack-discuss at lists.openstack.org Subject: [EXTERNAL] Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. On Tue, 30 Mar 2021 at 13:41, Braden, Albert wrote: > > I’ve created a heat stack and installed Openstack Train to test the Centos7->8 upgrade following the document here: > > > > https://docs.openstack.org/kolla-ansible/train/user/centos8.html#migrating-from-centos-7-to-centos-8 > > > > I used the instructions here to successfully remove and replace control0 with a Centos8 box > > > > https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#removing-existing-controllers > > > > After this my RMQ admin page shows all 3 nodes up, including the new control0. The name of the cluster is rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-2 /]# rabbitmqctl cluster_status > > Cluster status of node rabbit at chrnc-void-testupgrade-control-2 ... 
> > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}, > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > {partitions,[]}, > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-0-replace',[]}, > > {'rabbit at chrnc-void-testupgrade-control-1',[]}, > > {'rabbit at chrnc-void-testupgrade-control-2',[]}]}] > > > > After that I create a new VM to verify that the cluster is still working, and then perform the same procedure on control1. When I shut down services on control1, the ansible playbook finishes successfully: > > > > kolla-ansible -i ../multinode stop --yes-i-really-really-mean-it --limit control1 > > … > > control1 : ok=45 changed=22 unreachable=0 failed=0 skipped=105 rescued=0 ignored=0 > > > > After this my RMQ admin page stops responding. When I check RMQ on the new control0 and the existing control2, the container is still up but RMQ is not running: > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status > > Error: this command requires the 'rabbit' app to be running on the target node. Start it with 'rabbitmqctl start_app'. > > > > If I start it on control0 and control2, then the cluster seems normal and the admin page starts working again, and cluster status looks normal: > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status > > Cluster status of node rabbit at chrnc-void-testupgrade-control-0-replace ... 
> > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-2', > > 'rabbit at chrnc-void-testupgrade-control-0-replace']}, > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > {partitions,[]}, > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-2',[]}, > > {'rabbit at chrnc-void-testupgrade-control-0-replace',[]}]}] > > > > But my hypervisors are down: > > > > (openstack) [root at chrnc-void-testupgrade-build kolla-ansible]# ohll > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | vCPUs Used | vCPUs | Memory MB Used | Memory MB | > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > | 3 | chrnc-void-testupgrade-compute-2.dev.chtrse.com | QEMU | 172.16.2.106 | down | 5 | 8 | 2560 | 30719 | > > | 6 | chrnc-void-testupgrade-compute-0.dev.chtrse.com | QEMU | 172.16.2.31 | down | 5 | 8 | 2560 | 30719 | > > | 9 | chrnc-void-testupgrade-compute-1.dev.chtrse.com | QEMU | 172.16.0.30 | down | 5 | 8 | 2560 | 30719 | > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > > > When I look at the nova-compute.log on a compute node, I see RMQ failures every 10 seconds: > > > > 172.16.2.31 compute0 > > 2021-03-30 03:07:54.893 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. Trying again in 1 seconds.: timeout: timed out > > 2021-03-30 03:07:55.905 7 INFO oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] Reconnected to AMQP server on 172.16.1.132:5672 via [amqp] client with port 56422. > > 2021-03-30 03:08:05.915 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. 
Trying again in 1 seconds.: timeout: timed out > > > > In the RMQ logs I see this every 10 seconds: > > > > 172.16.1.132 control2 > > [root at chrnc-void-testupgrade-control-2 ~]# tail -f /var/log/kolla/rabbitmq/rabbit\@chrnc-void-testupgrade-control-2.log |grep 172.16.2.31 > > 2021-03-30 03:07:54.895 [warning] <0.13247.35> closing AMQP connection <0.13247.35> (172.16.2.31:56420 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): > > client unexpectedly closed TCP connection > > 2021-03-30 03:07:55.901 [info] <0.15288.35> accepting AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) > > 2021-03-30 03:07:55.903 [info] <0.15288.35> Connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) has a client-provided name: nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e > > 2021-03-30 03:07:55.904 [info] <0.15288.35> connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e): user 'openstack' authenticated and granted access to vhost '/' > > 2021-03-30 03:08:05.916 [warning] <0.15288.35> closing AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): > > > > Why does RMQ fail when I shut down the 2nd controller after successfully replacing the first one? Hi Albert, Could you share the versions of RabbitMQ and erlang in both versions of the container? When initially testing this setup, I think we had 3.7.24 on both sides. Perhaps the CentOS 8 version has moved on sufficiently to become incompatible? Mark > > > > I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. > > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. From ltoscano at redhat.com Thu Apr 1 15:42:26 2021 From: ltoscano at redhat.com (Luigi Toscano) Date: Thu, 01 Apr 2021 17:42:26 +0200 Subject: [oslo][release][QA] devstack-plugin-(kafka|amqp1) retirement In-Reply-To: References: <479b09cf-08f9-144a-c062-eabed280836b@nemebean.com> Message-ID: <3802622.1xdlsreqCQ@whitebase.usersys.redhat.com> On Thursday, 1 April 2021 17:28:25 CEST Herve Beraud wrote: > Le jeu. 1 avr. 
2021 à 17:02, Ben Nemec a écrit : > > On 4/1/21 6:24 AM, Herve Beraud wrote: > > > Hello Osloers, > > > > > > Our devstack plugins (kafka and amqp1) didn't show a great amount of > > > activity since ussuri, does it still make sense to maintain them? > > > > > > The latest available SHAs for the both projects comes from Victoria > > > (merged in this period). > > > > > > Can we retire them or simply retire them from the coordinated releases? > > > > These have never been released and are no longer branched. What is their > > involvement in the coordinated release at this point? > > Yes these deliverables are tagless so no released at all, however, they are > coordinated so they are branched during each series. > > Though, as said Ben, those deliverables haven't been branched during the > previous series (victoria), we suppose that they have been simply forgotten > inadvertently, > > I proposed a patch to fix that point [1]. Other devstack plugins are branchless (devstack-plugin-ceph, devstack-plugin- nfs), couldn't those be branchless too? -- Luigi From hberaud at redhat.com Thu Apr 1 15:44:21 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 1 Apr 2021 17:44:21 +0200 Subject: [oslo][release][QA] devstack-plugin-(kafka|amqp1) retirement In-Reply-To: <3802622.1xdlsreqCQ@whitebase.usersys.redhat.com> References: <479b09cf-08f9-144a-c062-eabed280836b@nemebean.com> <3802622.1xdlsreqCQ@whitebase.usersys.redhat.com> Message-ID: >From my point of view I would argue that yes, however, I don't have the big picture. Le jeu. 1 avr. 2021 à 17:42, Luigi Toscano a écrit : > On Thursday, 1 April 2021 17:28:25 CEST Herve Beraud wrote: > > Le jeu. 1 avr. 2021 à 17:02, Ben Nemec a écrit > : > > > On 4/1/21 6:24 AM, Herve Beraud wrote: > > > > Hello Osloers, > > > > > > > > Our devstack plugins (kafka and amqp1) didn't show a great amount of > > > > activity since ussuri, does it still make sense to maintain them? > > > > > > > > The latest available SHAs for the both projects comes from Victoria > > > > (merged in this period). > > > > > > > > Can we retire them or simply retire them from the coordinated > releases? > > > > > > These have never been released and are no longer branched. What is > their > > > involvement in the coordinated release at this point? > > > > Yes these deliverables are tagless so no released at all, however, they > are > > coordinated so they are branched during each series. > > > > Though, as said Ben, those deliverables haven't been branched during the > > previous series (victoria), we suppose that they have been simply > forgotten > > inadvertently, > > > > I proposed a patch to fix that point [1]. > > Other devstack plugins are branchless (devstack-plugin-ceph, > devstack-plugin- > nfs), couldn't those be branchless too? 
> > > -- > Luigi > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Thu Apr 1 16:05:29 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 1 Apr 2021 18:05:29 +0200 Subject: [oslo][release][QA] devstack-plugin-(kafka|amqp1) retirement In-Reply-To: References: <479b09cf-08f9-144a-c062-eabed280836b@nemebean.com> <3802622.1xdlsreqCQ@whitebase.usersys.redhat.com> Message-ID: Well, as suggested Luigi let's drop these deliverables (within Wallaby and for the next series). https://review.opendev.org/c/openstack/releases/+/784376 I kept the Victoria branching but that will be the last one. https://review.opendev.org/c/openstack/releases/+/784371 Let me know what you think Le jeu. 1 avr. 2021 à 17:44, Herve Beraud a écrit : > From my point of view I would argue that yes, however, I don't have the > big picture. > > Le jeu. 1 avr. 2021 à 17:42, Luigi Toscano a écrit : > >> On Thursday, 1 April 2021 17:28:25 CEST Herve Beraud wrote: >> > Le jeu. 1 avr. 2021 à 17:02, Ben Nemec a >> écrit : >> > > On 4/1/21 6:24 AM, Herve Beraud wrote: >> > > > Hello Osloers, >> > > > >> > > > Our devstack plugins (kafka and amqp1) didn't show a great amount of >> > > > activity since ussuri, does it still make sense to maintain them? >> > > > >> > > > The latest available SHAs for the both projects comes from Victoria >> > > > (merged in this period). >> > > > >> > > > Can we retire them or simply retire them from the coordinated >> releases? >> > > >> > > These have never been released and are no longer branched. What is >> their >> > > involvement in the coordinated release at this point? >> > >> > Yes these deliverables are tagless so no released at all, however, they >> are >> > coordinated so they are branched during each series. >> > >> > Though, as said Ben, those deliverables haven't been branched during the >> > previous series (victoria), we suppose that they have been simply >> forgotten >> > inadvertently, >> > >> > I proposed a patch to fix that point [1]. >> >> Other devstack plugins are branchless (devstack-plugin-ceph, >> devstack-plugin- >> nfs), couldn't those be branchless too? 
>> >> >> -- >> Luigi >> >> >> > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From Vuk.Gojnic at telekom.de Thu Apr 1 10:02:19 2021 From: Vuk.Gojnic at telekom.de (Vuk.Gojnic at telekom.de) Date: Thu, 1 Apr 2021 10:02:19 +0000 Subject: [ironic] IPA does not want boot with UEFI Message-ID: Hello everybody, I am using Ironic standalone to provision the HPE Gen10+ node via iLO driver. Ironic version is 16.0.1. Server is configured with UEFI boot mode. Everything on Ironic side works fine. It creates ISO image, powers the server on and configures it to boot from it. Here is the what /var/log/ironic/ironic-conductor.log says: 2021-03-31 17:46:25.541 2618460 INFO ironic.conductor.task_manager [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 moved to provision state "cleaning" from state "manageable"; target provision state is "available" 2021-03-31 17:46:32.066 2618460 INFO ironic.drivers.modules.ilo.power [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] The node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 operation of 'power off' is completed in 4 seconds. 2021-03-31 17:46:32.088 2618460 INFO ironic.conductor.utils [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Successfully set node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 power state to power off by power off. 2021-03-31 17:46:34.510 2618460 INFO ironic.drivers.modules.ilo.common [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 pending boot mode is uefi. 
2021-03-31 17:46:37.248 2618460 INFO ironic.drivers.modules.ilo.common [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Set the node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 to boot from URL https://10.23.137.234/tmp-images/ilo/boot-ed25569f-c107-4fe0-95cd-74fcad9ab3f0.iso?filename=tmpqze8ogiw.iso successfully. 2021-03-31 17:46:48.367 2618460 INFO ironic.drivers.modules.ilo.power [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] The node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 operation of 'power on' is completed in 8 seconds. 2021-03-31 17:46:48.388 2618460 INFO ironic.conductor.utils [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Successfully set node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 power state to power on by rebooting. 2021-03-31 17:46:48.404 2618460 INFO ironic.conductor.task_manager [req-b6a19234-5f4b-4852-ab41-4102b9016bb0 - - - - -] Node ed25569f-c107-4fe0-95cd-74fcad9ab3f0 moved to provision state "clean wait" from state "cleaning"; target provision state is "available" The Grub2 starts and after I select the option "boot_partition", it starts booting and immediately freezes with following screen (just red static underscore): [cid:image002.jpg at 01D726EE.A920A5C0] I have tried with pre-built IPA images (see below) as well as with custom IPA images made with Ubuntu 18.04 and 20.04 (built using ironic-python-agent-builder) but it is all the same. Does somebody have idea what is the problem with IPA and UEFI in this particular scenario? Output of "openstack baremetal node show" command: allocation_uuid: null automated_clean: null bios_interface: no-bios boot_interface: ilo-uefi-https chassis_uuid: null clean_step: {} conductor: 10.23.137.234 conductor_group: '' console_enabled: false console_interface: no-console created_at: '2021-03-21T13:54:25+00:00' deploy_interface: direct deploy_step: {} description: null driver: ilo5 driver_info: ilo_address: 10.23.137.137 ilo_bootloader: https://ironic-images/Images/esp.img ilo_deploy_kernel: https://ironic-images/Images/ipa-centos8-stable-victoria.kernel ilo_deploy_ramdisk: https://ironic-images/Images/ipa-centos8-stable-victoria.initramfs ilo_password: '******' ilo_username: Administrator snmp_auth_priv_password: '******' snmp_auth_prot_password: '******' snmp_auth_user: iloinspect driver_internal_info: agent_continue_if_ata_erase_failed: false agent_enable_ata_secure_erase: true agent_erase_devices_iterations: 1 agent_erase_devices_zeroize: true agent_erase_skip_read_only: false agent_secret_token: '******' agent_secret_token_pregenerated: true clean_steps: null disk_erasure_concurrency: 1 last_power_state_change: '2021-03-31T17:46:37.894667' extra: {} fault: clean failure inspect_interface: ilo inspection_finished_at: '2021-03-21T13:57:33+00:00' inspection_started_at: null instance_info: deploy_boot_mode: uefi instance_uuid: null last_error: null lessee: null maintenance: true maintenance_reason: management_interface: ilo5 name: null network_data: {} network_interface: noop owner: null power_interface: ilo power_state: power on properties: cpu_arch: x86 cpus: 64 local_gb: 2979 memory_mb: 262144 protected: false protected_reason: null provision_state: clean wait provision_updated_at: '2021-03-31T17:46:48+00:00' raid_config: {} raid_interface: no-raid rescue_interface: no-rescue reservation: null resource_class: null retired: false retired_reason: null storage_interface: noop target_power_state: null target_provision_state: available target_raid_config: {} traits: [] updated_at: '2021-03-31T17:46:48+00:00' uuid: 
ed25569f-c107-4fe0-95cd-74fcad9ab3f0 vendor_interface: no-vendor Many thanks! Vuk Gojnic Deutsche Telekom Technik GmbH Services & Plattforms (T-SP) Tribe Data Center Infrastructure (T-DCI) Super Squad Cloud Platforms Lifecycle (SSQ-CP) Vuk Gojnic Kubernetes Engine Squad Lead -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 21619 bytes Desc: image002.jpg URL: From gmann at ghanshyammann.com Thu Apr 1 16:23:58 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 01 Apr 2021 11:23:58 -0500 Subject: [oslo][release][QA] devstack-plugin-(kafka|amqp1) retirement In-Reply-To: References: <479b09cf-08f9-144a-c062-eabed280836b@nemebean.com> Message-ID: <1788e3fa253.bba36eee1422244.3611802948159890528@ghanshyammann.com> ---- On Thu, 01 Apr 2021 10:28:25 -0500 Herve Beraud wrote ---- > > > Le jeu. 1 avr. 2021 à 17:02, Ben Nemec a écrit : > > > On 4/1/21 6:24 AM, Herve Beraud wrote: > > Hello Osloers, > > > > Our devstack plugins (kafka and amqp1) didn't show a great amount of > > activity since ussuri, does it still make sense to maintain them? > > > > The latest available SHAs for the both projects comes from Victoria > > (merged in this period). > > > > Can we retire them or simply retire them from the coordinated releases? > > These have never been released and are no longer branched. What is their > involvement in the coordinated release at this point? > > Yes these deliverables are tagless so no released at all, however, they are coordinated so they are branched during each series. > Though, as said Ben, those deliverables haven't been branched during the previous series (victoria), we suppose that they have been simply forgotten inadvertently, > I proposed a patch to fix that point [1]. > However we don't have so many things to branch for the current series [2], no new commits have been merged during the last 7 months (Victoria at this period), so, the question is, do we have reasons to keep them under the coordinated releases umbrella if nothing new happens in this area. > I proposed a patch for Wallaby too, but I'm not convinced that's the right solution. > Maybe Ken is right and maybe that's time to merge these plugins with oslo.messaging, however, I don't know if it's feasible from a devstack point of view.

I do not think the devstack plugin's location makes any real difference; a plugin can live in the project repo or in a separate repo, and most devstack plugins are part of the project repo. A few plugins under the QA project are in separate repos because they are not related to a specific project. So I would say move them to the related project repo, like oslo.messaging, or retire them if no one needs them; that can be decided by the Oslo team, I think.

-gmann

> Adding the QA team to this thread topic to discuss that last point with them. > Thanks for your replies. > > [1] https://review.opendev.org/c/openstack/releases/+/784371 [2] https://review.opendev.org/c/openstack/releases/+/784376 > > > > > We (the release team) would appreciate some feedback about this point. > > > > Let's open the debate.
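On the location point: from a consumer's perspective a devstack plugin is enabled by name and URL, so where the code lives is largely transparent. An illustrative local.conf fragment (the oslo.messaging variant is hypothetical; it assumes a devstack/ plugin directory would be added there if a merge ever happened):

  [[local|localrc]]
  # today, from the standalone repo; the optional third argument pins a branch,
  # which is what the branched-vs-branchless question amounts to:
  enable_plugin devstack-plugin-kafka https://opendev.org/openstack/devstack-plugin-kafka
  # enable_plugin devstack-plugin-kafka https://opendev.org/openstack/devstack-plugin-kafka stable/victoria
  # after a hypothetical merge into oslo.messaging, only the name/URL would change:
  # enable_plugin oslo.messaging https://opendev.org/openstack/oslo.messaging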
> > > > -- > > Hervé Beraud > > Senior Software Engineer at Red Hat > > irc: hberaud > > https://github.com/4383/ > > https://twitter.com/4383hberaud > > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud

From gmann at ghanshyammann.com Thu Apr 1 16:27:09 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 01 Apr 2021 11:27:09 -0500 Subject: [oslo][release][QA] devstack-plugin-(kafka|amqp1) retirement In-Reply-To: References: <479b09cf-08f9-144a-c062-eabed280836b@nemebean.com> <3802622.1xdlsreqCQ@whitebase.usersys.redhat.com> Message-ID: <1788e428ec5.ce2ae7971422373.5079022999856477663@ghanshyammann.com> ---- On Thu, 01 Apr 2021 11:05:29 -0500 Herve Beraud wrote ---- > Well, as suggested Luigi let's drop these deliverables (within Wallaby and for the next series). > > https://review.opendev.org/c/openstack/releases/+/784376

Having them branchless or branched completely depends on the devstack plugin's maintainers and the nature of the settings they apply. If they install or update settings for a branched service, then yes, it makes sense for them to be branched, since devstack is branched. If they are very general settings, like ceph's, then branchless also works. From the devstack or QA point of view, both ways are fine.

-gmann

> I kept the Victoria branching but that will be the last one. > https://review.opendev.org/c/openstack/releases/+/784371 > Let me know what you think > > Le jeu. 1 avr. 2021 à 17:44, Herve Beraud a écrit : > From my point of view I would argue that yes, however, I don't have the big picture. > > Le jeu. 1 avr. 2021 à 17:42, Luigi Toscano a écrit : > On Thursday, 1 April 2021 17:28:25 CEST Herve Beraud wrote: > > Le jeu. 1 avr.
2021 à 17:02, Ben Nemec a écrit : > > > On 4/1/21 6:24 AM, Herve Beraud wrote: > > > > Hello Osloers, > > > > > > > > Our devstack plugins (kafka and amqp1) didn't show a great amount of > > > > activity since ussuri, does it still make sense to maintain them? > > > > > > > > The latest available SHAs for the both projects comes from Victoria > > > > (merged in this period). > > > > > > > > Can we retire them or simply retire them from the coordinated releases? > > > > > > These have never been released and are no longer branched. What is their > > > involvement in the coordinated release at this point? > > > > Yes these deliverables are tagless so no released at all, however, they are > > coordinated so they are branched during each series. > > > > Though, as said Ben, those deliverables haven't been branched during the > > previous series (victoria), we suppose that they have been simply forgotten > > inadvertently, > > > > I proposed a patch to fix that point [1]. > > Other devstack plugins are branchless (devstack-plugin-ceph, devstack-plugin- > nfs), couldn't those be branchless too? > > > -- > Luigi > > > > > -- > Hervé BeraudSenior Software Engineer at Red Hatirc: hberaudhttps://github.com/4383/https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > > > -- > Hervé BeraudSenior Software Engineer at Red Hatirc: hberaudhttps://github.com/4383/https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > From hberaud at redhat.com Thu Apr 1 16:36:38 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 1 Apr 2021 18:36:38 +0200 Subject: [oslo][release][QA] devstack-plugin-(kafka|amqp1) retirement In-Reply-To: <1788e428ec5.ce2ae7971422373.5079022999856477663@ghanshyammann.com> References: <479b09cf-08f9-144a-c062-eabed280836b@nemebean.com> <3802622.1xdlsreqCQ@whitebase.usersys.redhat.com> <1788e428ec5.ce2ae7971422373.5079022999856477663@ghanshyammann.com> Message-ID: Le jeu. 1 avr. 
2021 à 18:27, Ghanshyam Mann a écrit : > ---- On Thu, 01 Apr 2021 11:05:29 -0500 Herve Beraud > wrote ---- > > Well, as suggested Luigi let's drop these deliverables (within Wallaby > and for the next series). > > > > https://review.opendev.org/c/openstack/releases/+/784376 > > Having them branchless or branched is completely depends on devstack > plugins maintainer and the nature > of the setting they do. If they are installing/updating the setting for > branched service then yes it makes sense to > be branched as devstack is branched. If they are very general setting like > ceph or so then branchless also > work. > > From devstack or QA point of view, both ways are fine. > Thanks Ghanshyam. I've no idea if they need specific settings but given the activity of these projects since two series they don't seem to be in that plugin zone. I personally think that "branchless" could fit well to them. Let's wait for PTL/maintainers reviews. > -gmann > > > I kept the Victoria branching but that will be the last one. > > https://review.opendev.org/c/openstack/releases/+/784371 > > Let me know what you think > > > > Le jeu. 1 avr. 2021 à 17:44, Herve Beraud a écrit > : > > From my point of view I would argue that yes, however, I don't have the > big picture. > > > > Le jeu. 1 avr. 2021 à 17:42, Luigi Toscano a > écrit : > > On Thursday, 1 April 2021 17:28:25 CEST Herve Beraud wrote: > > > Le jeu. 1 avr. 2021 à 17:02, Ben Nemec a > écrit : > > > > On 4/1/21 6:24 AM, Herve Beraud wrote: > > > > > Hello Osloers, > > > > > > > > > > Our devstack plugins (kafka and amqp1) didn't show a great amount > of > > > > > activity since ussuri, does it still make sense to maintain them? > > > > > > > > > > The latest available SHAs for the both projects comes from > Victoria > > > > > (merged in this period). > > > > > > > > > > Can we retire them or simply retire them from the coordinated > releases? > > > > > > > > These have never been released and are no longer branched. What is > their > > > > involvement in the coordinated release at this point? > > > > > > Yes these deliverables are tagless so no released at all, however, > they are > > > coordinated so they are branched during each series. > > > > > > Though, as said Ben, those deliverables haven't been branched during > the > > > previous series (victoria), we suppose that they have been simply > forgotten > > > inadvertently, > > > > > > I proposed a patch to fix that point [1]. > > > > Other devstack plugins are branchless (devstack-plugin-ceph, > devstack-plugin- > > nfs), couldn't those be branchless too? 
> > > > > > -- > > Luigi > > > > > > > > > > -- > > Hervé BeraudSenior Software Engineer at Red Hatirc: hberaudhttps:// > github.com/4383/https://twitter.com/4383hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > > > > > > > -- > > Hervé BeraudSenior Software Engineer at Red Hatirc: hberaudhttps:// > github.com/4383/https://twitter.com/4383hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > > > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Thu Apr 1 17:15:30 2021 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 1 Apr 2021 12:15:30 -0500 Subject: [oslo][release][QA] devstack-plugin-(kafka|amqp1) retirement In-Reply-To: References: <479b09cf-08f9-144a-c062-eabed280836b@nemebean.com> <3802622.1xdlsreqCQ@whitebase.usersys.redhat.com> <1788e428ec5.ce2ae7971422373.5079022999856477663@ghanshyammann.com> Message-ID: On 4/1/21 11:36 AM, Herve Beraud wrote: > > > Le jeu. 1 avr. 
2021 à 18:27, Ghanshyam Mann > a écrit : > >  ---- On Thu, 01 Apr 2021 11:05:29 -0500 Herve Beraud > > wrote ---- >  > Well, as suggested Luigi let's drop these deliverables (within > Wallaby and for the next series). >  > >  > https://review.opendev.org/c/openstack/releases/+/784376 > > Having them branchless or branched is completely depends on devstack > plugins maintainer and the nature > of the setting they do. If they are installing/updating the setting > for branched service then yes it makes sense to > be branched as devstack is branched. If they are very general > setting like ceph or so then branchless also > work. > > From devstack or QA point of view, both ways are fine. > > > Thanks Ghanshyam. > > I've no idea if they need specific settings but given the activity of > these projects since two series they don't seem to be in that plugin zone. > > I personally think that "branchless" could fit well to them. Let's wait > for PTL/maintainers reviews. I replied on the reviews, but I think branchless is what we intended for the Oslo plugins. We just missed the step of removing the deliverable file. I can't comment on the containers one because that was never ours (AFAIK). > > > -gmann > >  > I kept the Victoria branching but that will be the last one. >  > https://review.opendev.org/c/openstack/releases/+/784371 >  > Let me know what you think >  > >  > Le jeu. 1 avr. 2021 à 17:44, Herve Beraud > a écrit : >  > From my point of view I would argue that yes, however, I don't > have the big picture. >  > >  > Le jeu. 1 avr. 2021 à 17:42, Luigi Toscano > a écrit : >  > On Thursday, 1 April 2021 17:28:25 CEST Herve Beraud wrote: >  > > Le jeu. 1 avr. 2021 à 17:02, Ben Nemec > a écrit : >  > > > On 4/1/21 6:24 AM, Herve Beraud wrote: >  > > > > Hello Osloers, >  > > > > >  > > > > Our devstack plugins (kafka and amqp1) didn't show a great > amount of >  > > > > activity since ussuri, does it still make sense to > maintain them? >  > > > > >  > > > > The latest available SHAs for the both projects comes from > Victoria >  > > > > (merged in this period). >  > > > > >  > > > > Can we retire them or simply retire them from the > coordinated releases? >  > > > >  > > > These have never been released and are no longer branched. > What is their >  > > > involvement in the coordinated release at this point? >  > > >  > > Yes these deliverables are tagless so no released at all, > however, they are >  > > coordinated so they are branched during each series. >  > > >  > > Though, as said Ben, those deliverables haven't been branched > during the >  > > previous series (victoria), we suppose that they have been > simply forgotten >  > > inadvertently, >  > > >  > > I proposed a patch to fix that point [1]. >  > >  > Other devstack plugins are branchless (devstack-plugin-ceph, > devstack-plugin- >  > nfs), couldn't those be branchless too? 
>  > >  > >  > -- >  > Luigi >  > >  > >  > >  > >  > -- >  > Hervé BeraudSenior Software Engineer at Red Hatirc: > hberaudhttps://github.com/4383/https://twitter.com/4383hberaud > >  > -----BEGIN PGP SIGNATURE----- >  > >  > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >  > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >  > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >  > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >  > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >  > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >  > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >  > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >  > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >  > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >  > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >  > v6rDpkeNksZ9fFSyoY2o >  > =ECSj >  > -----END PGP SIGNATURE----- >  > >  > >  > >  > -- >  > Hervé BeraudSenior Software Engineer at Red Hatirc: > hberaudhttps://github.com/4383/https://twitter.com/4383hberaud > >  > -----BEGIN PGP SIGNATURE----- >  > >  > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >  > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >  > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >  > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >  > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >  > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >  > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >  > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >  > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >  > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >  > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >  > v6rDpkeNksZ9fFSyoY2o >  > =ECSj >  > -----END PGP SIGNATURE----- >  > >  > > > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > From dms at danplanet.com Thu Apr 1 17:46:56 2021 From: dms at danplanet.com (Dan Smith) Date: Thu, 01 Apr 2021 10:46:56 -0700 Subject: [all] Gate resources and performance In-Reply-To: (Dan Smith's message of "Wed, 31 Mar 2021 17:00:12 -0700") References: <53f77238-d77e-4b57-57bc-139065b23595@nemebean.com> Message-ID: > I'll try to circle back and generate a new set of numbers with > my script, and also see if I can get updated numbers from Clark on the > overall percentages. 
Okay, I re-ran the numbers this morning and got updated 30-day stats from Clark. Here's what I've got (delta from the last report in parens):

Project      % of total   Node Hours   Nodes
----------------------------------------------
1. Neutron      23%        34h (-4)    30 (-2)
2. TripleO      18%        17h (-14)   14 (-6)
3. Nova          7%        22h (+1)    25 (-0)
4. Kolla         6%        10h (-2)    18 (-0)
5. OSA           6%        19h (-3)    16 (-1)

Definitely a lot of improvement from tripleo, so thanks for that! Neutron rose to the top and is still very hefty. I think Nova's 1-hr rise is probably just noise given the node count didn't change. I think we're still waiting on zuulv3 conversion of the grenade multinode job so we can drop the base grenade job, which will make things go down. I've also got a proposal to make devstack parallel mode be the default, but we're waiting until after devstack cuts wallaby to proceed with that. Hopefully that will result in some across-the-board reduction.

Anyway, definitely moving in the right direction on all fronts, so thanks a lot to everyone who has made efforts in this area. I think once things really kick back up around/after PTG we should measure again and see if the "quality of life" is reasonable, and if not, revisit the numbers in terms of who to lean on to reduce further.

--Dan

From DHilsbos at performair.com Thu Apr 1 17:47:19 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Thu, 1 Apr 2021 17:47:19 +0000 Subject: launch VM on volume vs. image In-Reply-To: References: Message-ID: <0670B960225633449A24709C291A52524FBBD136@COM01.performair.local> Tony / Eddie; I think this is partially dependent on the version of OpenStack running. In our Victoria cloud, a volume created from an image is also done as a snapshot by Ceph, and is completed in seconds. Thank you, Dominic L. Hilsbos, MBA Director – Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From: Eddie Yen [mailto:missile0407 at gmail.com] Sent: Wednesday, March 31, 2021 6:00 PM To: Tony Liu Cc: openstack-discuss at lists.openstack.org Subject: Re: launch VM on volume vs. image Hi Tony, In Ceph layer, IME, launching VM on image is creating a snapshot from source image in Nova ephemeral pool. If you check the RBD image created in Nova ephemeral pool, all images have their own parents from glance images. For launching VM on volume, it will "copy" the image to volume pool first, resize to specified disk size, then connect and boot. Because it's not create a snapshot from image, so it will take much longer. Eddie. Tony Liu 於 2021年4月1日 週四 上午8:09寫道: Hi, With Ceph as the backend storage, launching a VM on volume takes much longer than launching on image. Why is that? Could anyone elaborate the high level workflow for those two cases? Thanks! Tony From vhariria at redhat.com Thu Apr 1 18:05:42 2021 From: vhariria at redhat.com (Vida Haririan) Date: Thu, 1 Apr 2021 14:05:42 -0400 Subject: [Interop][Refstack] this Friday meeting In-Reply-To: <0B782D91-D8D9-4DED-8606-635E18D6098F@openstack.org> References: <1788dd204ea.d4e4ba611415310.2624563204106293527@ghanshyammann.com> <0B782D91-D8D9-4DED-8606-635E18D6098F@openstack.org> Message-ID: Hi Arkady, Friday is a company holiday and I will be ooo. Thanks, Vida On Thu, Apr 1, 2021 at 11:10 AM Jimmy McArthur wrote: > I forgot this is a holiday. Same on my side.
> > Thanks, > Jimmy > > > > On Apr 1, 2021, at 9:25 AM, Ghanshyam Mann > wrote: > > > >  > > ---- On Thu, 01 Apr 2021 09:15:21 -0500 Kanevsky, Arkady < > Arkady.Kanevsky at dell.com> wrote ---- > >> > >> Team, > >> This Friday is Good Friday and some people have a day off. > >> Should we cancel this week meeting? > >> Please, respond so we can see if we will have quorum. > > > > Thanks Arkady, > > > > I will be off from work and would not be able to join. > > > > -gmann > > > >> Thanks, > >> Arkady > >> > >> Arkady Kanevsky, Ph.D. > >> SP Chief Technologist & DE > >> Dell Technologies office of CTO > >> Dell Inc. One Dell Way, MS PS2-91 > >> Round Rock, TX 78682, USA > >> Phone: 512 7204955 > >> > >> > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pramchan at yahoo.com Thu Apr 1 18:11:56 2021 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Thu, 1 Apr 2021 18:11:56 +0000 (UTC) Subject: [Interop][Refstack] this Friday meeting In-Reply-To: References: <1788dd204ea.d4e4ba611415310.2624563204106293527@ghanshyammann.com> <0B782D91-D8D9-4DED-8606-635E18D6098F@openstack.org> Message-ID: <1443143388.3156237.1617300716714@mail.yahoo.com> Looks like we can skip this Friday call and sure Arkady - lets cancel it. If you have something urgent we can talk offline - Thanks Prakash On Thursday, April 1, 2021, 11:06:25 AM PDT, Vida Haririan wrote: Hi Arkady, Friday is a company holiday and I will be ooo. Thanks,Vida On Thu, Apr 1, 2021 at 11:10 AM Jimmy McArthur wrote: I forgot this is a holiday. Same on my side. Thanks, Jimmy > On Apr 1, 2021, at 9:25 AM, Ghanshyam Mann wrote: > >  > ---- On Thu, 01 Apr 2021 09:15:21 -0500 Kanevsky, Arkady wrote ---- >> >> Team, >> This Friday is Good Friday and some people have a day off. >> Should we cancel this week meeting? >> Please, respond so we can see if we will have quorum. > > Thanks Arkady, > > I will be off from work and would not be able to join. > > -gmann > >> Thanks, >> Arkady >> >> Arkady Kanevsky, Ph.D. >> SP Chief Technologist & DE >> Dell Technologies office of CTO >> Dell Inc. One Dell Way, MS PS2-91 >> Round Rock, TX 78682, USA >> Phone: 512 7204955 >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay.faulkner at verizonmedia.com Thu Apr 1 18:22:50 2021 From: jay.faulkner at verizonmedia.com (Jay Faulkner) Date: Thu, 1 Apr 2021 11:22:50 -0700 Subject: [E] [ironic] Review Jams In-Reply-To: References: Message-ID: > The tl;dr is we will use meetpad[1] and meet on Mondays at 2 PM UTC and Tuesdays at 6 PM UTC. Because downstream commitments are usually in US-local time, and DST exists, we've decided to move back the Tuesday review jam to 5 PM UTC to keep the time the same after DST adjustment. If you have any questions, please ask here or in #openstack-ironic. Thanks, Jay Faulkner On Tue, Feb 9, 2021 at 12:54 PM Zachary Buhman < zachary.buhman at verizonmedia.com> wrote: > I thought the 09 Feb 2021 review jam was highly valuable. > > Without the discussions we had, I think the "Secure RBAC" patch set would > be unapproachable for me. For example, having knowledge of the (new) > oslo-policy features that the patches make use of seems to be a requirement > for deeply understanding the changes. As a direct result of the review jam > [0], I feel that I have enough understanding and comfortability to make > valuable review feedback on these patches. 
> [0] and also having read/reviewed the secure-rbac spec previously, to
> be fair
>
> On Fri, Feb 5, 2021 at 7:10 AM Julia Kreger wrote:
>
>> In the Ironic team's recent mid-cycle call, we discussed the need to
>> return to occasionally having review jams in order to help streamline
>> the review process. In other words, get eyes on a change in parallel
>> and be able to discuss the change. The goal is to help get people on
>> the same page in terms of what and why. Be on hand to answer questions
>> or back-fill context. This is to hopefully avoid the more iterative
>> back and forth nature of code review, which can draw out a long chain
>> of patches. As always, the goal is not perfection, but forward
>> movement, especially for complex changes.
>>
>> We've established two time windows that will hopefully not be too
>> hard for some contributors to make it to. It doesn't need to be
>> everyone, but it would help for at least some people who actively
>> review, who want to actively participate in reviewing, or who are
>> even interested in a feature, to join us.
>>
>> I've added an entry onto our wiki page to cover this, with the
>> current agenda and anticipated review jam topic schedule. The tl;dr is
>> we will use meetpad[1] and meet on Mondays at 2 PM UTC and Tuesdays at
>> 6 PM UTC. The hope is to enable some overlap of reviewers. If
>> people are interested in other times, please bring this up in the
>> weekly meeting or on the mailing list.
>>
>> I'm not sending out calendar invites for this. Yet. :)
>>
>> See everyone next week!
>>
>> -Julia
>>
>> [0]: https://wiki.openstack.org/wiki/Meetings/Ironic#Review_Jams
>> [1]: https://meetpad.opendev.org/ironic

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From juliaashleykreger at gmail.com  Thu Apr  1 18:42:43 2021
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Thu, 1 Apr 2021 11:42:43 -0700
Subject: [ironic] IPA image does not want to boot with UEFI
In-Reply-To:
References:
Message-ID:

Adding the list back and trimming the message. Replies in-band.

Well, it is good to know that the images are not signed, and that the
other ESP images do not work either.

On Thu, Apr 1, 2021 at 11:20 AM Vuk Gojnic wrote:
>
> Hey Julia,
>
> Thanks for asking. I have tried with several ESP image options with
> same effect (one taken from Ubuntu Live ISO that boots on that node,
> another downloaded and third made with grub tools). None of them was
> signed.

Interesting. At least it is consistent!

Have you tried to pull down the iso image and take it apart to verify
it is UEFI bootable against a VM or another physical machine? I'm
wondering if you need both uefi parameters set. You definitely don't
have properties['capabilities']['boot_mode'] set, which is used - or
maybe a better word is drawn in - for asserting defaults, but you do
have the deploy_boot_mode setting set.

I guess a quick manual sanity check of the actual resulting iso image
is going to be critical.
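For example, something like this would confirm whether the ISO itself
boots under UEFI at all (just a sketch - the OVMF firmware path varies
by distro, and the ISO file name here is made up):

  qemu-system-x86_64 -m 2048 \
      -bios /usr/share/OVMF/OVMF.fd \
      -cdrom ipa-deploy.iso

If that drops you into a UEFI shell instead of a boot loader, the ESP
contents are the likely culprit.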
Debug logging may also be useful, and I'm only thinking that because
there is no logging from the generation of the image.

> The server is not in UEFI secure boot mode.

Interesting - it sure sounded like it was, based on your original
message. :(

> Btw. I will be on holidays for next week so I might not be able to
> follow up on this discussion before Apr 12th.

No worries, just ping us on irc.freenode.net in #openstack-ironic if a
reply on the mailing list doesn't grab our attention.

> Bests,
> Vuk
>
> On Thu, Apr 1, 2021 at 4:20 PM Julia Kreger wrote:
>>
>> Greetings,
>>
>> Two questions:
>> 1) Are the ESP image contents signed, or are they built using one of
>> the grub commands?
>> 2) Is the machine set to enforce secure boot at this time?
>>
>> [trim]

From rafal at pregusia.pl  Thu Apr  1 19:05:32 2021
From: rafal at pregusia.pl (pregusia)
Date: Thu, 1 Apr 2021 21:05:32 +0200
Subject: [keystone]improvments in mapping models/support for JWT tokens
Message-ID:

Hello members!

Please direct your attention to two keystone modifications:

 (1) an extension to the mapping engine to support multiple projects
     and assigning projects by ID
 (2) an extension to the authorization mechanisms - JWT token support

Ad (1):
    Currently the mapping engine can map multiple projects for a user
    (as stated in
    https://docs.openstack.org/keystone/pike/advanced-topics/federation/mapping_combinations.html),
    but this support lacks (a) dynamic mapping of projects from the
    assertion (for example, when project IDs are carried in the
    assertion) and (b) the ability to map a list from the assertion
    (if the assertion contains "some_field": [ id1, id2, ..., idN ],
    then referencing this field by substitution - e.g. "{1}" - won't
    expand it into the full projects structure needed by the mapping).
    In the patch I extend the mapping schema with a new field
    "projects_spec" of type string. This field contains a string
    formatted like project1ID:role1:role2:...:roleN,project2ID:role1:...:roleN,...
    and it is mapped into the proper structure
    { "projects": [ { "id": ..., "roles": [ ... ] }, ... ] }
    For example, a projects_spec value of "abc123:member,def456:member:reader"
    (IDs invented) maps the user into those two projects with those
    roles. This allows the identity provider to supply permission-like
    information about which projects the user should have access to.
    The implementation is not ideal, and some work should be done to
    make it more user-friendly (for example, configuration options for
    auto-creating projects by ID, and options for allowing the user to
    log in when a project with a given ID does not exist, etc.)
    This is only a proof of concept, and a question whether this
    direction is proper according to keystone development.

Ad (2):
    This patch adds a new authorization protocol - jwt_token.
    It allows accessing the endpoint
    '/v3/OS-FEDERATION/identity_providers/{IDP_NAME}/protocols/jwt_token/auth'
    (and obtaining a keystone auth token) using a JWT token in the
    Authorization header field.
    The header value is checked for proper formatting and the JWT token
    is extracted from it.
    The token is then validated against the public key (only the
    RS256/RS512 algorithms are supported), the issuer, the expiration
    date and others.
    If this succeeds, the payload from the token is supplied to the
    mapping engine, where a new user is created and (per (1)) given
    access to the proper projects/roles.

Happy reviewing!

pregusia
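P.S. A quick usage sketch of the new endpoint. Note that the exact
Authorization scheme string below is an assumption on my side - check
the patch for the format it actually expects:

  curl -i \
      -H "Authorization: jwt eyJhbGciOiJSUzI1NiIs..." \
      https://keystone.example.com/v3/OS-FEDERATION/identity_providers/myidp/protocols/jwt_token/auth

On success keystone should return the issued token in the
X-Subject-Token response header, like the other federation auth
endpoints do.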
-------------- next part --------------
A non-text attachment was scrubbed...
Name: patch-victoria.patch
Type: text/x-patch
Size: 13182 bytes
Desc: not available
URL:

From fungi at yuggoth.org  Thu Apr  1 19:22:04 2021
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 1 Apr 2021 19:22:04 +0000
Subject: [keystone]improvments in mapping models/support for JWT tokens
In-Reply-To:
References:
Message-ID: <20210401192203.jn2heaicdlwojc7i@yuggoth.org>

On 2021-04-01 21:05:32 +0200 (+0200), pregusia wrote:
> Please direct your attention to two keystone modifications:
>  (1) an extension to the mapping engine to support multiple projects
>      and assigning projects by ID
>  (2) an extension to the authorization mechanisms - JWT token support
[...]

This is pretty exciting stuff. But please be aware that for an
OpenStack project to merge patches they'll need to be proposed into
the code review system (Gerrit) by someone, preferably by the author
of the patches, which is the easiest place to discuss them as well.

Also we need some way to confirm that the author of the patches has
agreed to the Individual Contributor License Agreement (essentially
asserting that the patches they propose are their own work, that they
have permission from the author, or that they are proposing patches
consisting of existing code distributed under a license compatible
with the Apache License version 2.0). The usual way to agree to the
ICLA is when creating your account in Gerrit.

Please see the OpenStack Contributor Guide for a general introduction
to our code proposal and review workflow:

    https://docs.openstack.org/contributors/

And feel free to ask questions on this mailing list if you have any.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL:

From skaplons at redhat.com  Thu Apr  1 19:35:55 2021
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Thu, 01 Apr 2021 21:35:55 +0200
Subject: [neutron][nova] Port binding fails when creating an instance
In-Reply-To:
References:
Message-ID: <3930281.aCZO8KT43X@p1>

Hi,

On Thursday, 1 April 2021 at 14:44:21 CEST, Maxime d'Estienne wrote:
> Hello,
>
> I spent a lot of time troubleshooting my issue, which I described here:
> https://serverfault.com/questions/1058969/cannot-create-an-instance-due-to-failed-binding
>
> To summarize, when I want to create an instance, binding fails on the
> compute node; the dhcp agent seems to give an IP to the VM but I have
> an error.

What do you mean exactly? Failed binding of the port in Neutron? In
such a case nova will not boot the VM, so it can't get an IP from DHCP.

>
> I don't know where to dig, besides what I have done.

Please enable debug logs in neutron-server and look in its logs for the
reason why it failed to bind the port on the specific host. Usually the
reason is a dead L2 agent on the host or a mismatch in the agent's
bridge mappings configuration.

>
> Thanks a lot for your help!

--
Slawek Kaplonski
Principal Software Engineer
Red Hat
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: This is a digitally signed message part.
URL:

From janders at redhat.com  Thu Apr  1 20:53:33 2021
From: janders at redhat.com (Jacob Anders)
Date: Fri, 2 Apr 2021 06:53:33 +1000
Subject: [Ironic][RFE] Enable support for espresso machines in Ironic
Message-ID:

Hi There,

I was discussing this RFE with Julia and we decided it would be great
to get some feedback on it from the wider community, ideally by the end
of today. Apologies for the short notice. Here's the story:

https://storyboard.openstack.org/#!/story/2008791

What are your thoughts on this? Please comment in the Story or just
reply to the thread.

Thank you,
Jacob
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jpenick at gmail.com  Thu Apr  1 21:30:28 2021
From: jpenick at gmail.com (James Penick)
Date: Thu, 1 Apr 2021 14:30:28 -0700
Subject: [Ironic][RFE] Enable support for espresso machines in Ironic
In-Reply-To:
References:
Message-ID:

I completely support this. However, are you considering other options,
such as pour-over coffee machines? Not every deployer is able to
consume espresso!

On Thu, Apr 1, 2021 at 1:54 PM Jacob Anders wrote:

> Hi There,
> [...]
> Thank you,
> Jacob
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From iurygregory at gmail.com  Thu Apr  1 21:40:32 2021
From: iurygregory at gmail.com (Iury Gregory)
Date: Thu, 1 Apr 2021 23:40:32 +0200
Subject: [Ironic][RFE] Enable support for espresso machines in Ironic
In-Reply-To:
References:
Message-ID:

Thanks for raising this Jacob!
This will probably require a spec (since we will have multiple
scenarios). I think we need a generic coffee driver, with support for
different management-coffee-interfaces (espresso, latte, etc) and
deploy-coffee-interfaces (mug, bottle).

On Thu, Apr 1, 2021 at 11:32 PM James Penick wrote:

> I completely support this. However, are you considering other options,
> such as pour-over coffee machines? Not every deployer is able to
> consume espresso!
>
> On Thu, Apr 1, 2021 at 1:54 PM Jacob Anders wrote:
>
>> Hi There,
>> [...]
>> Thank you,
>> Jacob

--
Att[]'s
Iury Gregory Melo Ferreira
MSc in Computer Science at UFCG
Part of the puppet-manager-core team in OpenStack
Software Engineer at Red Hat Czech
Social: https://www.linkedin.com/in/iurygregory
E-mail: iurygregory at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From juliaashleykreger at gmail.com  Thu Apr  1 21:45:09 2021
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Thu, 1 Apr 2021 14:45:09 -0700
Subject: [Ironic][RFE] Enable support for espresso machines in Ironic
In-Reply-To:
References:
Message-ID:

We shouldn't forget the classic drip makers that Operators tend to
have in their facilities.

Granted, I think they will need to send them all through cleaning once
the pandemic is over.
On Thu, Apr 1, 2021 at 2:37 PM James Penick wrote: > > I completely support this. However are you considering other options, such as pour-over coffee machines? Not every deployer is able to consume espresso! > > On Thu, Apr 1, 2021 at 1:54 PM Jacob Anders wrote: >> >> Hi There, >> >> I was discussing this RFE with Julia and we decided it would be great to get some feedback on it from the wider community, ideally by the end of today. Apologies for the short notice. Here's the story: >> >> https://storyboard.openstack.org/#!/story/2008791 >> >> What are your thoughts on this? Please comment in the Story or just reply to thread. >> >> Thank you, >> Jacob From jay.faulkner at verizonmedia.com Thu Apr 1 21:58:54 2021 From: jay.faulkner at verizonmedia.com (Jay Faulkner) Date: Thu, 1 Apr 2021 14:58:54 -0700 Subject: [E] Re: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: Message-ID: OpenStack is about producing large amounts of homogenous instances. This sort of espresso-machine pandering will just lead us down a path of supporting all kinds of lattes, mochas, and macchiatos -- not even to get started on the flavor syrups and steamed milk. We have to focus on managing large pots of coffee, so that people can drink and know the next cup they get will be the exact same. We're building a homogenous coffee environment, we can't be supporting every milk, style, and flavor syrup in the world. - Jay Faulkner P.S. Don't even try to sneak any of those dark roast Starbucks beans past the refcoffee tests. That kind of bitterness exceeds our API spec. On Thu, Apr 1, 2021 at 2:50 PM Julia Kreger wrote: > We shouldn't forget the classic drip makers that Operators tend to > have in their facilities. > > Granted, I think they will need to send them all through cleaning once > the pandemic is over. > > On Thu, Apr 1, 2021 at 2:37 PM James Penick wrote: > > > > I completely support this. However are you considering other options, > such as pour-over coffee machines? Not every deployer is able to consume > espresso! > > > > On Thu, Apr 1, 2021 at 1:54 PM Jacob Anders wrote: > >> > >> Hi There, > >> > >> I was discussing this RFE with Julia and we decided it would be great > to get some feedback on it from the wider community, ideally by the end of > today. Apologies for the short notice. Here's the story: > >> > >> > https://urldefense.proofpoint.com/v2/url?u=https-3A__storyboard.openstack.org_-23-21_story_2008791&d=DwIBaQ&c=sWW_bEwW_mLyN3Kx2v57Q8e-CRbmiT9yOhqES_g_wVY&r=NKR1jXf8to59hDGraABDUb4djWcsAXM11_v4c7uz0Tg&m=PCP6by2W8jflO0mp6YeO5RmQQmP7uaT6aHvdRZ3C2V0&s=AbjZNK66m1pUKose0j2gZNtDgFUf5LYyS9Qz8JAfqr4&e= > >> > >> What are your thoughts on this? Please comment in the Story or just > reply to thread. > >> > >> Thank you, > >> Jacob > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpenick at gmail.com Thu Apr 1 22:28:20 2021 From: jpenick at gmail.com (James Penick) Date: Thu, 1 Apr 2021 15:28:20 -0700 Subject: [E] Re: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: Message-ID: Perhaps we should break this down into a subset of projects, one for grinding beans, one for extracting coffee, and maybe a microservice for handling milk or dairy based alternatives? I think we can also all agree that cold brew drinks are *completely* out of scope and should be implemented in another service. 
On Thu, Apr 1, 2021 at 2:59 PM Jay Faulkner wrote: > OpenStack is about producing large amounts of homogenous instances. This > sort of espresso-machine pandering will just lead us down a path of > supporting all kinds of lattes, mochas, and macchiatos -- not even to get > started on the flavor syrups and steamed milk. > > We have to focus on managing large pots of coffee, so that people can > drink and know the next cup they get will be the exact same. We're building > a homogenous coffee environment, we can't be supporting every milk, style, > and flavor syrup in the world. > > - > Jay Faulkner > > P.S. Don't even try to sneak any of those dark roast Starbucks beans past > the refcoffee tests. That kind of bitterness exceeds our API spec. > > On Thu, Apr 1, 2021 at 2:50 PM Julia Kreger > wrote: > >> We shouldn't forget the classic drip makers that Operators tend to >> have in their facilities. >> >> Granted, I think they will need to send them all through cleaning once >> the pandemic is over. >> >> On Thu, Apr 1, 2021 at 2:37 PM James Penick wrote: >> > >> > I completely support this. However are you considering other options, >> such as pour-over coffee machines? Not every deployer is able to consume >> espresso! >> > >> > On Thu, Apr 1, 2021 at 1:54 PM Jacob Anders wrote: >> >> >> >> Hi There, >> >> >> >> I was discussing this RFE with Julia and we decided it would be great >> to get some feedback on it from the wider community, ideally by the end of >> today. Apologies for the short notice. Here's the story: >> >> >> >> >> https://urldefense.proofpoint.com/v2/url?u=https-3A__storyboard.openstack.org_-23-21_story_2008791&d=DwIBaQ&c=sWW_bEwW_mLyN3Kx2v57Q8e-CRbmiT9yOhqES_g_wVY&r=NKR1jXf8to59hDGraABDUb4djWcsAXM11_v4c7uz0Tg&m=PCP6by2W8jflO0mp6YeO5RmQQmP7uaT6aHvdRZ3C2V0&s=AbjZNK66m1pUKose0j2gZNtDgFUf5LYyS9Qz8JAfqr4&e= >> >> >> >> What are your thoughts on this? Please comment in the Story or just >> reply to thread. >> >> >> >> Thank you, >> >> Jacob >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Thu Apr 1 22:33:43 2021 From: amy at demarco.com (Amy Marrich) Date: Thu, 1 Apr 2021 17:33:43 -0500 Subject: [E] Re: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: Message-ID: Can we expand the spec for Cappuccinos and Lattes? Oh and maybe Pan Au Chocolate? Amy (spotz) On Thu, Apr 1, 2021 at 5:31 PM James Penick wrote: > Perhaps we should break this down into a subset of projects, one for > grinding beans, one for extracting coffee, and maybe a microservice for > handling milk or dairy based alternatives? > > I think we can also all agree that cold brew drinks are *completely* out > of scope and should be implemented in another service. > > On Thu, Apr 1, 2021 at 2:59 PM Jay Faulkner > wrote: > >> OpenStack is about producing large amounts of homogenous instances. This >> sort of espresso-machine pandering will just lead us down a path of >> supporting all kinds of lattes, mochas, and macchiatos -- not even to get >> started on the flavor syrups and steamed milk. >> >> We have to focus on managing large pots of coffee, so that people can >> drink and know the next cup they get will be the exact same. We're building >> a homogenous coffee environment, we can't be supporting every milk, style, >> and flavor syrup in the world. >> >> - >> Jay Faulkner >> >> P.S. Don't even try to sneak any of those dark roast Starbucks beans past >> the refcoffee tests. That kind of bitterness exceeds our API spec. 
>> >> On Thu, Apr 1, 2021 at 2:50 PM Julia Kreger >> wrote: >> >>> We shouldn't forget the classic drip makers that Operators tend to >>> have in their facilities. >>> >>> Granted, I think they will need to send them all through cleaning once >>> the pandemic is over. >>> >>> On Thu, Apr 1, 2021 at 2:37 PM James Penick wrote: >>> > >>> > I completely support this. However are you considering other options, >>> such as pour-over coffee machines? Not every deployer is able to consume >>> espresso! >>> > >>> > On Thu, Apr 1, 2021 at 1:54 PM Jacob Anders >>> wrote: >>> >> >>> >> Hi There, >>> >> >>> >> I was discussing this RFE with Julia and we decided it would be great >>> to get some feedback on it from the wider community, ideally by the end of >>> today. Apologies for the short notice. Here's the story: >>> >> >>> >> >>> https://urldefense.proofpoint.com/v2/url?u=https-3A__storyboard.openstack.org_-23-21_story_2008791&d=DwIBaQ&c=sWW_bEwW_mLyN3Kx2v57Q8e-CRbmiT9yOhqES_g_wVY&r=NKR1jXf8to59hDGraABDUb4djWcsAXM11_v4c7uz0Tg&m=PCP6by2W8jflO0mp6YeO5RmQQmP7uaT6aHvdRZ3C2V0&s=AbjZNK66m1pUKose0j2gZNtDgFUf5LYyS9Qz8JAfqr4&e= >>> >> >>> >> What are your thoughts on this? Please comment in the Story or just >>> reply to thread. >>> >> >>> >> Thank you, >>> >> Jacob >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Apr 1 22:43:31 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 1 Apr 2021 22:43:31 +0000 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: Message-ID: <20210401224331.losxlxqer6rrgmmj@yuggoth.org> On 2021-04-01 23:40:32 +0200 (+0200), Iury Gregory wrote: > Thanks for raising this Jacob! > This will probably require a spec (since we will have multiple scenarios). > I think we need a generic coffee driver, with support for different > management-coffee-interfaces (expresso, latte, etc) and > deploy-coffee-interfaces (mug, bottle). [...] This is already heading toward an inevitable interoperability nightmare; we should already be planning for the gimme_a_coffee_already porcelain API. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From janders at redhat.com Thu Apr 1 22:53:39 2021 From: janders at redhat.com (Jacob Anders) Date: Fri, 2 Apr 2021 08:53:39 +1000 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: <20210401224331.losxlxqer6rrgmmj@yuggoth.org> References: <20210401224331.losxlxqer6rrgmmj@yuggoth.org> Message-ID: Thank you for your invaluable insights! >From my side, given I've been doing a fair bit of clean_step related work lately, I'm happy to take on the de-scaling challenge which might be a little trickier than usual as it needs to run periodically even if there is a long-living instance provisioned, otherwise the hardware will inevitably end up in rescue mode. I'm also happy to look into HPC (High Performance Coffee) use cases. CCing Stig as he might have some insights there as well. On Fri, Apr 2, 2021 at 8:49 AM Jeremy Stanley wrote: > On 2021-04-01 23:40:32 +0200 (+0200), Iury Gregory wrote: > > Thanks for raising this Jacob! > > This will probably require a spec (since we will have multiple > scenarios). > > I think we need a generic coffee driver, with support for different > > management-coffee-interfaces (expresso, latte, etc) and > > deploy-coffee-interfaces (mug, bottle). > [...] 
> This is already heading toward an inevitable interoperability
> nightmare; we should already be planning for the
> gimme_a_coffee_already porcelain API.
> --
> Jeremy Stanley

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tonyliu0592 at hotmail.com  Fri Apr  2 00:19:20 2021
From: tonyliu0592 at hotmail.com (Tony Liu)
Date: Fri, 2 Apr 2021 00:19:20 +0000
Subject: launch VM on volume vs. image
In-Reply-To: <0670B960225633449A24709C291A52524FBBD136@COM01.performair.local>
References: , <0670B960225633449A24709C291A52524FBBD136@COM01.performair.local>
Message-ID:

Hi Dominic,

What's your image format?

Thanks!
Tony

________________________________________
From: DHilsbos at performair.com
Sent: April 1, 2021 10:47 AM
To: tonyliu0592 at hotmail.com
Cc: openstack-discuss at lists.openstack.org; missile0407 at gmail.com
Subject: RE: launch VM on volume vs. image

Tony / Eddie;

I think this is partially dependent on the version of OpenStack
running. In our Victoria cloud, a volume created from an image is also
done as a snapshot by Ceph, and completes in seconds.

Thank you,

Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International Inc.
DHilsbos at PerformAir.com
www.PerformAir.com

From: Eddie Yen [mailto:missile0407 at gmail.com]
Sent: Wednesday, March 31, 2021 6:00 PM
To: Tony Liu
Cc: openstack-discuss at lists.openstack.org
Subject: Re: launch VM on volume vs. image

[...]

From tonyliu0592 at hotmail.com  Fri Apr  2 00:24:04 2021
From: tonyliu0592 at hotmail.com (Tony Liu)
Date: Fri, 2 Apr 2021 00:24:04 +0000
Subject: launch VM on volume vs. image
In-Reply-To:
References: ,
Message-ID:

I have a 300GB QCOW image (800GB of raw space). If I launch a VM on a
volume, Cinder needs to convert the image first, and that requires at
least 300GB of free disk space on the controller. If I launch a VM on
the image, it takes forever; I didn't look into where it's stuck.

Is there any easier way to launch a VM from such an image? Ceph is the
storage backend.

Thanks!
Tony

________________________________________
From: Eddie Yen
Sent: March 31, 2021 10:47 PM
To: Tony Liu
Cc: openstack-discuss at lists.openstack.org
Subject: Re: launch VM on volume vs. image

BTW, if the source image is a compressed or thin-provisioned type (like
VDI, QCOW2, VMDK, etc.), it will take a long time to create no matter
whether you boot on image or on volume. Ceph RBD doesn't support those
formats, so Nova will convert the image first during creation. Make
sure all the images you upload are in RAW format, unless the virtual
size of the image is small. (A conversion sketch follows below.)

Tony Liu wrote on Thursday, April 1, 2021 at 10:18 AM:

Thank you Eddie! It makes sense. Creating a snapshot is much faster
than copying the image to a volume.
Tony
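For reference, a conversion sketch (image names here are made up, and
you need enough scratch space for the fully-allocated raw file):

  qemu-img convert -p -f qcow2 -O raw big-image.qcow2 big-image.raw
  openstack image create --disk-format raw --container-format bare \
      --file big-image.raw big-image-raw

Once the Glance image is raw, a boot-from-image instance shows up in
Ceph as a clone with a parent, which can be checked with something
like:

  rbd -p vms info <instance-uuid>_disk | grep parent

(the "vms" pool name assumes the common Nova ephemeral pool.)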
________________________________________
From: Eddie Yen
Sent: March 31, 2021 05:59 PM
To: Tony Liu
Cc: openstack-discuss at lists.openstack.org
Subject: Re: launch VM on volume vs. image

Hi Tony,

In the Ceph layer, IME, launching a VM on an image creates a snapshot
from the source image in the Nova ephemeral pool. If you check the RBD
images created in the Nova ephemeral pool, all of them have their own
parents from glance images.

For launching a VM on a volume, it will "copy" the image to the volume
pool first, resize it to the specified disk size, then connect and
boot. Because it doesn't create a snapshot from the image, it takes
much longer.

Eddie.

Tony Liu wrote on Thursday, April 1, 2021 at 8:09 AM:

Hi,

With Ceph as the backend storage, launching a VM on a volume takes much
longer than launching on an image. Why is that? Could anyone elaborate
on the high-level workflow for those two cases?

Thanks!
Tony

From gouthampravi at gmail.com  Fri Apr  2 00:32:12 2021
From: gouthampravi at gmail.com (Goutham Pacha Ravi)
Date: Thu, 1 Apr 2021 17:32:12 -0700
Subject: [Ironic][RFE] Enable support for espresso machines in Ironic
In-Reply-To:
References:
Message-ID:

On Thu, Apr 1, 2021 at 1:59 PM Jacob Anders wrote:
>
> Hi There,
>
> [...]
>
> What are your thoughts on this? Please comment in the Story or just
> reply to the thread.

Very neat. I'm hoping we can start thinking about storage sooner rather
than later. We know everyone wants their raw materials in an infinite
conveyor - and they're thinking tape machine, but let's start crawling
before we're walking here. Ground coffee also can't be ephemeral, for
archival and regulatory purposes. Persistent storage matters.

> Thank you,
> Jacob

From cpiercey at icloud.com  Fri Apr  2 01:08:35 2021
From: cpiercey at icloud.com (CHARLES PIERCEY)
Date: Thu, 1 Apr 2021 18:08:35 -0700
Subject: [E] Re: [Ironic][RFE] Enable support for espresso machines in Ironic
In-Reply-To:
References:
Message-ID: <31E452C3-82CE-4F67-B4A3-EE5F16A1BBB0@icloud.com>

Everyone should just accept double espresso macchiatos as the standard
for superior caffeination.

On Apr 1, 2021, at 3:05 PM, Jay Faulkner wrote:

OpenStack is about producing large amounts of homogenous instances.
This sort of espresso-machine pandering will just lead us down a path
of supporting all kinds of lattes, mochas, and macchiatos -- not even
to get started on the flavor syrups and steamed milk.

We have to focus on managing large pots of coffee, so that people can
drink and know the next cup they get will be the exact same. We're
building a homogenous coffee environment; we can't be supporting every
milk, style, and flavor syrup in the world.

-
Jay Faulkner

P.S. Don't even try to sneak any of those dark roast Starbucks beans
past the refcoffee tests. That kind of bitterness exceeds our API spec.

On Thu, Apr 1, 2021 at 2:50 PM Julia Kreger wrote:
> We shouldn't forget the classic drip makers that Operators tend to
> have in their facilities.
>
> Granted, I think they will need to send them all through cleaning once
> the pandemic is over.
>
> On Thu, Apr 1, 2021 at 2:37 PM James Penick wrote:
> >
> > I completely support this. However, are you considering other
> > options, such as pour-over coffee machines? Not every deployer is
> > able to consume espresso!
> > > > On Thu, Apr 1, 2021 at 1:54 PM Jacob Anders wrote: > >> > >> Hi There, > >> > >> I was discussing this RFE with Julia and we decided it would be great to get some feedback on it from the wider community, ideally by the end of today. Apologies for the short notice. Here's the story: > >> > >> https://urldefense.proofpoint.com/v2/url?u=https-3A__storyboard.openstack.org_-23-21_story_2008791&d=DwIBaQ&c=sWW_bEwW_mLyN3Kx2v57Q8e-CRbmiT9yOhqES_g_wVY&r=NKR1jXf8to59hDGraABDUb4djWcsAXM11_v4c7uz0Tg&m=PCP6by2W8jflO0mp6YeO5RmQQmP7uaT6aHvdRZ3C2V0&s=AbjZNK66m1pUKose0j2gZNtDgFUf5LYyS9Qz8JAfqr4&e= > >> > >> What are your thoughts on this? Please comment in the Story or just reply to thread. > >> > >> Thank you, > >> Jacob > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyliu0592 at hotmail.com Fri Apr 2 02:10:14 2021 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Fri, 2 Apr 2021 02:10:14 +0000 Subject: launch VM on volume vs. image In-Reply-To: References: , , Message-ID: I uploaded 800GB RAW image. Then launching VM on either image or volume is in 10 seconds. Tony ________________________________________ From: Tony Liu Sent: April 1, 2021 05:24 PM To: Eddie Yen Cc: openstack-discuss at lists.openstack.org Subject: Re: launch VM on volume vs. image I have a 300GB QCOW image (800GB raw space). If launch VM on volume, Cinder will need to convert it first, and that requires at least 300GB free disk space on controller. If launch VM on image, it takes forever, I didn't look into where it's stuck. Is there any easier way to launch VM from such image? Ceph is the storage backend. Thanks! Tony ________________________________________ From: Eddie Yen Sent: March 31, 2021 10:47 PM To: Tony Liu Cc: openstack-discuss at lists.openstack.org Subject: Re: launch VM on volume vs. image BTW, If the source image is based on compression or thin provision type (like VDI, QCOW2, VMDK, etc.) It will take a long time to create no matter boot on image or volume. Nova will convert the image based on these type first during creation. Because Ceph RBD doesn't support. Make sure all the images you upload is based on RBD format (or RAW format in other word), unless the virtual size of image is small. . Tony Liu > 於 2021年4月1日 週四 上午10:18寫道: Thank you Eddie! It makes sense. Creating a snapshot is much faster than copying image to a volume. Tony ________________________________________ From: Eddie Yen > Sent: March 31, 2021 05:59 PM To: Tony Liu Cc: openstack-discuss at lists.openstack.org Subject: Re: launch VM on volume vs. image Hi Tony, In Ceph layer, IME, launching VM on image is creating a snapshot from source image in Nova ephemeral pool. If you check the RBD image created in Nova ephemeral pool, all images have their own parents from glance images. For launching VM on volume, it will "copy" the image to volume pool first, resize to specified disk size, then connect and boot. Because it's not create a snapshot from image, so it will take much longer. Eddie. Tony Liu >> 於 2021年4月1日 週四 上午8:09寫道: Hi, With Ceph as the backend storage, launching a VM on volume takes much longer than launching on image. Why is that? Could anyone elaborate the high level workflow for those two cases? Thanks! 
Tony

From hberaud at redhat.com  Fri Apr  2 05:31:17 2021
From: hberaud at redhat.com (Herve Beraud)
Date: Fri, 2 Apr 2021 07:31:17 +0200
Subject: [release] Release countdown for week R-1 Apr 05 - Apr 09
Message-ID:

Development Focus
-----------------

We are on the final mile of the Wallaby development cycle! Remember
that the Wallaby final release will include the latest release
candidate (for cycle-with-rc deliverables) or the latest intermediary
release (for cycle-with-intermediary deliverables) available.

April 8 is the deadline for final Wallaby release candidates as well as
any last cycle-with-intermediary deliverables. We will then enter a
quiet period until we tag the final release on 14 April, 2021. Teams
should be prioritizing fixing release-critical bugs before that
deadline.

Otherwise it's time to start planning the Xena development cycle,
including discussing Forum and PTG session content, in preparation for
the PTG on the week of April 19.

Actions
-------

Watch for any translation patches coming through on the stable/wallaby
branch and merge them quickly. If you discover a release-critical
issue, please make sure to fix it on the master branch first, then
backport the bugfix to the stable/wallaby branch before triggering a
new release.

Please drop by #openstack-release with any questions or concerns about
the upcoming release!

Upcoming Deadlines & Dates
--------------------------

Final Wallaby release: 14 April, 2021
Xena virtual PTG: 19 - 23 April, 2021

Thanks for your attention

--
Hervé Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/
https://twitter.com/4383hberaud
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tkajinam at redhat.com  Fri Apr  2 05:49:01 2021
From: tkajinam at redhat.com (Takashi Kajinami)
Date: Fri, 2 Apr 2021 14:49:01 +0900
Subject: [infra][puppet] No verified score posted by zuul
Message-ID:

Hi,

I have asked for some help in #openstack-infra but didn't get any
solution so far. Many people might be on holidays now (enjoy the
holidays!), so let me send this email so that the people involved can
find it after getting back.

A few days ago the Wallaby releases of the openstack puppet modules
were created, and the release bot submitted release patches.

However, for some patches zuul doesn't return any CI result (it doesn't
post a Verified score)[1]. I posted +2+A on [2] but it is not merged,
because it is not verified by zuul. I tried "recheck" but it didn't
solve the problem.

[1] https://review.opendev.org/c/openstack/puppet-aodh/+/784213/1
[2] https://review.opendev.org/c/openstack/puppet-cloudkitty/+/784230

Currently we don't have any job triggered for a change that only
touches .gitreview, and I guess that is why we don't get Verified.
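For reference, the usual zuul pattern behind this (a sketch from
memory - the job name here is invented) is an irrelevant-files entry
matching every file touched by the release patch:

  - job:
      name: puppet-openstack-example-job
      irrelevant-files:
        - ^\.gitreview$

When all files in a change match irrelevant-files, no jobs run at all,
so zuul never reports a Verified score.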
Actually I see that the same patch for puppet-oslo got verified +1, because tripleo job was unexpectedly triggered for the change in .gitreview [2] https://review.opendev.org/c/openstack/puppet-oslo/+/784302 The easiest solution would be to manually squash these two patches into one. However I remember that we did get verified when we created the last Victoria release, and I suspect some change in infra side which resulted in this situation. So it would be nice if I can ask some insights from infra team about this situation. Thank you, Takashi Kajinami -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Fri Apr 2 07:49:02 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Fri, 2 Apr 2021 16:49:02 +0900 Subject: [infra][puppet] No verified score posted by zuul In-Reply-To: References: Message-ID: Please ignore my previous email. It turned out the following change disabled all jobs against .gitreview and that is the cause why zuul no longer posts result... https://github.com/openstack/puppet-openstack-integration/commit/1914b7ed1e499d13af4952992d0bf1728ca4db8e I'll fix the gate asap. On Fri, Apr 2, 2021 at 2:49 PM Takashi Kajinami wrote: > Hi > > > I have asked for some help in #openstack-infra but didn't get any solution > so far > Many people might be now on holidays (enjoy holidays!), so let me send > this email > so that people involved can find this mail after getting back. > > A few days ago Wallaby release of openstack puppet modules were created, > and the release bot submitted release patches. > > However for some patches zuul doesn't return any CI result(it doesn't put > verified score)[1]. I posted +2+A on [2] but it is not merged, because > it is not verified by zuul. I tried "recheck" but it didn't solve the > problem. > [1] https://review.opendev.org/c/openstack/puppet-aodh/+/784213/1 > [2] https://review.opendev.org/c/openstack/puppet-cloudkitty/+/784230 > > Currently we don't have any job triggered for the change with .gitreview > and > I guess that is why we don't get verified. > Actually I see that the same patch for puppet-oslo got verified +1, because > tripleo job was unexpectedly triggered for the change in .gitreview > [2] https://review.opendev.org/c/openstack/puppet-oslo/+/784302 > > The easiest solution would be to manually squash these two patches into > one. > However I remember that we did get verified when we created the last > Victoria release, > and I suspect some change in infra side which resulted in this situation. > So it would be nice if I can ask some insights from infra team about this > situation. > > Thank you, > Takashi Kajinami > -- ---------- Takashi Kajinami Principal Software Maintenance Engineer Customer Experience and Engagement Red Hat email: tkajinam at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Fri Apr 2 09:51:07 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 2 Apr 2021 11:51:07 +0200 Subject: [docs][release] Creating Xena's landing pages Message-ID: Hello Docs team, This is a friendly reminder from the release team, I think that it should be safe for you to apply your process to create the new release series landing pages for docs.openstack.org. All stable branches are now created. If you want you can do the work before the final release date to avoid having to synchronize with the release team on that day. Let us know if you have any questions. 
Cheers

--
Hervé Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/
https://twitter.com/4383hberaud
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hberaud at redhat.com  Fri Apr  2 09:54:19 2021
From: hberaud at redhat.com (Herve Beraud)
Date: Fri, 2 Apr 2021 11:54:19 +0200
Subject: PTO Monday April 5
Message-ID:

Hello

Monday is a public holiday in France. I'll be back Tuesday.

Have a nice weekend!

--
Hervé Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/
https://twitter.com/4383hberaud
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hberaud at redhat.com  Fri Apr  2 09:55:30 2021
From: hberaud at redhat.com (Herve Beraud)
Date: Fri, 2 Apr 2021 11:55:30 +0200
Subject: [oslo] Canceled meeting - PTO Monday April 5
In-Reply-To:
References:
Message-ID:

On Fri, Apr 2, 2021 at 11:54 AM Herve Beraud wrote:

> Hello
>
> Monday is a public holiday in France. I'll be back Tuesday.
>
> Have a nice weekend!
> --
> Hervé Beraud
> Senior Software Engineer at Red Hat
> irc: hberaud
> https://github.com/4383/
> https://twitter.com/4383hberaud

--
Hervé Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/
https://twitter.com/4383hberaud
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From C-Albert.Braden at charter.com  Fri Apr  2 12:34:07 2021
From: C-Albert.Braden at charter.com (Braden, Albert)
Date: Fri, 2 Apr 2021 12:34:07 +0000
Subject: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller
Message-ID: <0c28086a5f8a45ca9aab8568d1c0de7e@ncwmexgp009.CORP.CHARTERCOM.com>

I opened a bug for this issue:
https://bugs.launchpad.net/kolla-ansible/+bug/1922269

-----Original Message-----
From: Braden, Albert
Sent: Thursday, April 1, 2021 11:34 AM
To: 'openstack-discuss at lists.openstack.org'
Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller

Sorry, that was a typo. Stopping RMQ during the removal of the *second*
controller is what causes the problem. Is there a way to tell Centos 8
Train to use RMQ 3.7.24 instead of 3.7.28?

-----Original Message-----
From: Braden, Albert
Sent: Thursday, April 1, 2021 9:34 AM
To: 'openstack-discuss at lists.openstack.org'
Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller

I did some experimenting and it looks like stopping RMQ during the
removal of the first controller is what causes the problem. After
deploying the first controller, stopping the RMQ container on any
controller, including the new centos8 controller, will cause the entire
cluster to stop.
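For reference, the RMQ/erlang version strings further down can be
reproduced with something like this (a sketch; in kolla the container
is named rabbitmq):

  docker exec rabbitmq rabbitmqctl status | grep -E 'RabbitMQ|Erlang'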
This crash dump appears on the controllers that stopped in sympathy: https://paste.ubuntu.com/p/ZDgFgKtQTB/ This appears in the RMQ log: https://paste.ubuntu.com/p/5D2Qjv3H8c/ -----Original Message----- From: Braden, Albert Sent: Wednesday, March 31, 2021 8:31 AM To: openstack-discuss at lists.openstack.org Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller Centos7: {rabbit,"RabbitMQ","3.7.24"}, "Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"}, Centos8: {rabbit,"RabbitMQ","3.7.28"}, "Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"}, When I deploy the first Centos8 controller, RMQ comes up with all 3 nodes active and seems to be working fine until I shut down the 2nd controller. The only hint of trouble when I replace the 1st node is this error message the first time I run the deployment: https://paste.ubuntu.com/p/h9HWdfwmrK/ and the crash dump that appears on control2: crash dump log: https://paste.ubuntu.com/p/MpZ8SwTJ2T/ First 1500 lines of the dump: https://paste.ubuntu.com/p/xkCyp2B8j8/ If I wait for a few minutes then RMQ recovers on control2 and the 2nd run of the deployment seems to work, and there is no trouble until I shut down control1. -----Original Message----- From: Mark Goddard Sent: Wednesday, March 31, 2021 4:14 AM To: Braden, Albert Cc: openstack-discuss at lists.openstack.org Subject: [EXTERNAL] Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. On Tue, 30 Mar 2021 at 13:41, Braden, Albert wrote: > > I’ve created a heat stack and installed Openstack Train to test the Centos7->8 upgrade following the document here: > > > > https://docs.openstack.org/kolla-ansible/train/user/centos8.html#migrating-from-centos-7-to-centos-8 > > > > I used the instructions here to successfully remove and replace control0 with a Centos8 box > > > > https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#removing-existing-controllers > > > > After this my RMQ admin page shows all 3 nodes up, including the new control0. The name of the cluster is rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-2 /]# rabbitmqctl cluster_status > > Cluster status of node rabbit at chrnc-void-testupgrade-control-2 ... > > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}, > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > {partitions,[]}, > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-0-replace',[]}, > > {'rabbit at chrnc-void-testupgrade-control-1',[]}, > > {'rabbit at chrnc-void-testupgrade-control-2',[]}]}] > > > > After that I create a new VM to verify that the cluster is still working, and then perform the same procedure on control1. 
When I shut down services on control1, the ansible playbook finishes successfully: > > > > kolla-ansible -i ../multinode stop --yes-i-really-really-mean-it --limit control1 > > … > > control1 : ok=45 changed=22 unreachable=0 failed=0 skipped=105 rescued=0 ignored=0 > > > > After this my RMQ admin page stops responding. When I check RMQ on the new control0 and the existing control2, the container is still up but RMQ is not running: > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status > > Error: this command requires the 'rabbit' app to be running on the target node. Start it with 'rabbitmqctl start_app'. > > > > If I start it on control0 and control2, then the cluster seems normal and the admin page starts working again, and cluster status looks normal: > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status > > Cluster status of node rabbit at chrnc-void-testupgrade-control-0-replace ... > > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-2', > > 'rabbit at chrnc-void-testupgrade-control-0-replace']}, > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > {partitions,[]}, > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-2',[]}, > > {'rabbit at chrnc-void-testupgrade-control-0-replace',[]}]}] > > > > But my hypervisors are down: > > > > (openstack) [root at chrnc-void-testupgrade-build kolla-ansible]# ohll > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | vCPUs Used | vCPUs | Memory MB Used | Memory MB | > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > | 3 | chrnc-void-testupgrade-compute-2.dev.chtrse.com | QEMU | 172.16.2.106 | down | 5 | 8 | 2560 | 30719 | > > | 6 | chrnc-void-testupgrade-compute-0.dev.chtrse.com | QEMU | 172.16.2.31 | down | 5 | 8 | 2560 | 30719 | > > | 9 | chrnc-void-testupgrade-compute-1.dev.chtrse.com | QEMU | 172.16.0.30 | down | 5 | 8 | 2560 | 30719 | > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > > > When I look at the nova-compute.log on a compute node, I see RMQ failures every 10 seconds: > > > > 172.16.2.31 compute0 > > 2021-03-30 03:07:54.893 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. Trying again in 1 seconds.: timeout: timed out > > 2021-03-30 03:07:55.905 7 INFO oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] Reconnected to AMQP server on 172.16.1.132:5672 via [amqp] client with port 56422. > > 2021-03-30 03:08:05.915 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. 
Trying again in 1 seconds.: timeout: timed out > > > > In the RMQ logs I see this every 10 seconds: > > > > 172.16.1.132 control2 > > [root at chrnc-void-testupgrade-control-2 ~]# tail -f /var/log/kolla/rabbitmq/rabbit\@chrnc-void-testupgrade-control-2.log |grep 172.16.2.31 > > 2021-03-30 03:07:54.895 [warning] <0.13247.35> closing AMQP connection <0.13247.35> (172.16.2.31:56420 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): > > client unexpectedly closed TCP connection > > 2021-03-30 03:07:55.901 [info] <0.15288.35> accepting AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) > > 2021-03-30 03:07:55.903 [info] <0.15288.35> Connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) has a client-provided name: nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e > > 2021-03-30 03:07:55.904 [info] <0.15288.35> connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e): user 'openstack' authenticated and granted access to vhost '/' > > 2021-03-30 03:08:05.916 [warning] <0.15288.35> closing AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): > > > > Why does RMQ fail when I shut down the 2nd controller after successfully replacing the first one? Hi Albert, Could you share the versions of RabbitMQ and erlang in both versions of the container? When initially testing this setup, I think we had 3.7.24 on both sides. Perhaps the CentOS 8 version has moved on sufficiently to become incompatible? Mark > > > > I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. > > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. 
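A quick way to compare the RabbitMQ and Erlang versions across all three controllers is a loop like the one below. This is a minimal sketch, assuming kolla's default "rabbitmq" container name, SSH access to each host, and placeholder hostnames; the grep pattern matches the version lines quoted earlier in this thread.

    for host in control0-replace control1 control2; do
        echo "== ${host} =="
        ssh "${host}" 'docker exec rabbitmq rabbitmqctl status' \
            | grep -E '\{rabbit,|Erlang/OTP'
    done

If the replaced CentOS 8 node reports a different RabbitMQ or Erlang version than the remaining CentOS 7 nodes, that mismatch is worth ruling out before digging further into the crash dumps.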
From Arkady.Kanevsky at dell.com Fri Apr 2 12:45:39 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 2 Apr 2021 12:45:39 +0000 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: Message-ID: Dell Customer Communication - Confidential I love this April fool joke From: Jacob Anders Sent: Thursday, April 1, 2021 3:54 PM To: openstack-discuss Subject: [Ironic][RFE] Enable support for espresso machines in Ironic [EXTERNAL EMAIL] Hi There, I was discussing this RFE with Julia and we decided it would be great to get some feedback on it from the wider community, ideally by the end of today. Apologies for the short notice. Here's the story: https://storyboard.openstack.org/#!/story/2008791 [storyboard.openstack.org] What are your thoughts on this? Please comment in the Story or just reply to thread. Thank you, Jacob -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Fri Apr 2 12:47:39 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 2 Apr 2021 12:47:39 +0000 Subject: [Interop][Refstack] No meeting this Friday meeting Message-ID: Team, No meeting this Friday. Happy Holiday! Will update Etherpad. Cheers, Arkady From: prakash RAMCHANDRAN Sent: Thursday, April 1, 2021 1:12 PM To: Jimmy McArthur; Vida Haririan Cc: Ghanshyam Mann; Kanevsky, Arkady; openstack-discuss; Martin Kopec; Goutham Pacha Ravi Subject: Re: [Interop][Refstack] this Friday meeting [EXTERNAL EMAIL] Looks like we can skip this Friday call and sure Arkady - lets cancel it. If you have something urgent we can talk offline - Thanks Prakash On Thursday, April 1, 2021, 11:06:25 AM PDT, Vida Haririan > wrote: Hi Arkady, Friday is a company holiday and I will be ooo. Thanks, Vida On Thu, Apr 1, 2021 at 11:10 AM Jimmy McArthur > wrote: I forgot this is a holiday. Same on my side. Thanks, Jimmy > On Apr 1, 2021, at 9:25 AM, Ghanshyam Mann > wrote: > >  > ---- On Thu, 01 Apr 2021 09:15:21 -0500 Kanevsky, Arkady > wrote ---- >> >> Team, >> This Friday is Good Friday and some people have a day off. >> Should we cancel this week meeting? >> Please, respond so we can see if we will have quorum. > > Thanks Arkady, > > I will be off from work and would not be able to join. > > -gmann > >> Thanks, >> Arkady >> >> Arkady Kanevsky, Ph.D. >> SP Chief Technologist & DE >> Dell Technologies office of CTO >> Dell Inc. One Dell Way, MS PS2-91 >> Round Rock, TX 78682, USA >> Phone: 512 7204955 >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pramchan at yahoo.com Fri Apr 2 13:05:30 2021 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Fri, 2 Apr 2021 13:05:30 +0000 (UTC) Subject: [Interop][Refstack] No meeting this Friday meeting In-Reply-To: References: Message-ID: <1429753648.209610.1617368730565@mail.yahoo.com> Thanks Arkad. A warm springtime and happy Easter to all - Cheers Prakash On Friday, April 2, 2021, 05:47:48 AM PDT, Kanevsky, Arkady wrote: Team, No meeting this Friday. Happy Holiday! Will update Etherpad.   Cheers, Arkady     From: prakash RAMCHANDRAN Sent: Thursday, April 1, 2021 1:12 PM To: Jimmy McArthur; Vida Haririan Cc: Ghanshyam Mann; Kanevsky, Arkady; openstack-discuss; Martin Kopec; Goutham Pacha Ravi Subject: Re: [Interop][Refstack] this Friday meeting   [EXTERNAL EMAIL] Looks like we can skip this Friday call and sure Arkady - lets cancel it. 
If you have something urgent we can talk offline - Thanks Prakash   On Thursday, April 1, 2021, 11:06:25 AM PDT, Vida Haririan wrote:     Hi Arkady, Friday is a company holiday and I will be ooo.   Thanks, Vida   On Thu, Apr 1, 2021 at 11:10 AM Jimmy McArthur wrote: I forgot this is a holiday. Same on my side. Thanks, Jimmy > On Apr 1, 2021, at 9:25 AM, Ghanshyam Mann wrote: > >  > ---- On Thu, 01 Apr 2021 09:15:21 -0500 Kanevsky, Arkady wrote ---- >> >> Team, >> This Friday is Good Friday and some people have a day off. >> Should we cancel this week meeting? >> Please, respond so we can see if we will have quorum. > > Thanks Arkady, > > I will be off from work and would not be able to join. > > -gmann > >> Thanks, >> Arkady >> >> Arkady Kanevsky, Ph.D. >> SP Chief Technologist & DE >> Dell Technologies office of CTO >> Dell Inc. One Dell Way, MS PS2-91 >> Round Rock, TX 78682, USA >> Phone: 512 7204955 >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From C-Albert.Braden at charter.com Fri Apr 2 13:08:01 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Fri, 2 Apr 2021 13:08:01 +0000 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: Message-ID: April fool joke?! Does this mean that I wasted my time last night designing a rack-mount espresso server? From: Kanevsky, Arkady Sent: Friday, April 2, 2021 8:46 AM To: Jacob Anders ; openstack-discuss Subject: [EXTERNAL] RE: [Ironic][RFE] Enable support for espresso machines in Ironic CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Dell Customer Communication - Confidential I love this April fool joke From: Jacob Anders > Sent: Thursday, April 1, 2021 3:54 PM To: openstack-discuss Subject: [Ironic][RFE] Enable support for espresso machines in Ironic [EXTERNAL EMAIL] Hi There, I was discussing this RFE with Julia and we decided it would be great to get some feedback on it from the wider community, ideally by the end of today. Apologies for the short notice. Here's the story: https://storyboard.openstack.org/#!/story/2008791 [storyboard.openstack.org] What are your thoughts on this? Please comment in the Story or just reply to thread. Thank you, Jacob E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Fri Apr 2 13:18:50 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 2 Apr 2021 13:18:50 +0000 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: Message-ID: Dell Customer Communication - Confidential Expresso server is still needed and must be used at least once a day. From: Braden, Albert Sent: Friday, April 2, 2021 8:08 AM To: openstack-discuss Subject: RE: [Ironic][RFE] Enable support for espresso machines in Ironic [EXTERNAL EMAIL] April fool joke?! 
Does this mean that I wasted my time last night designing a rack-mount espresso server? From: Kanevsky, Arkady > Sent: Friday, April 2, 2021 8:46 AM To: Jacob Anders >; openstack-discuss > Subject: [EXTERNAL] RE: [Ironic][RFE] Enable support for espresso machines in Ironic CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Dell Customer Communication - Confidential I love this April fool joke From: Jacob Anders > Sent: Thursday, April 1, 2021 3:54 PM To: openstack-discuss Subject: [Ironic][RFE] Enable support for espresso machines in Ironic [EXTERNAL EMAIL] Hi There, I was discussing this RFE with Julia and we decided it would be great to get some feedback on it from the wider community, ideally by the end of today. Apologies for the short notice. Here's the story: https://storyboard.openstack.org/#!/story/2008791 [storyboard.openstack.org] What are your thoughts on this? Please comment in the Story or just reply to thread. Thank you, Jacob The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From DHilsbos at performair.com Fri Apr 2 15:23:14 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Fri, 2 Apr 2021 15:23:14 +0000 Subject: launch VM on volume vs. image In-Reply-To: References: , <0670B960225633449A24709C291A52524FBBD136@COM01.performair.local> Message-ID: <0670B960225633449A24709C291A52524FBBF96C@COM01.performair.local> Tony; In accordance with the second "Important" note listed here: https://docs.ceph.com/en/nautilus/rbd/rbd-openstack/, we use all RAW images. Thank you, Dominic L. Hilsbos, MBA Director ? Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com -----Original Message----- From: Tony Liu [mailto:tonyliu0592 at hotmail.com] Sent: Thursday, April 1, 2021 5:19 PM To: Dominic Hilsbos Cc: openstack-discuss at lists.openstack.org; missile0407 at gmail.com Subject: Re: launch VM on volume vs. image Hi Dominic, What's your image format? Thanks! Tony ________________________________________ From: DHilsbos at performair.com Sent: April 1, 2021 10:47 AM To: tonyliu0592 at hotmail.com Cc: openstack-discuss at lists.openstack.org; missile0407 at gmail.com Subject: RE: launch VM on volume vs. image Tony / Eddie; I think this is partially dependent on the version of OpenStack running. In our Victoria cloud, a volume created from an image is also done as a snapshot by Ceph, and is completed in seconds. Thank you, Dominic L. Hilsbos, MBA Director ? Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From: Eddie Yen [mailto:missile0407 at gmail.com] Sent: Wednesday, March 31, 2021 6:00 PM To: Tony Liu Cc: openstack-discuss at lists.openstack.org Subject: Re: launch VM on volume vs. image Hi Tony, In Ceph layer, IME, launching VM on image is creating a snapshot from source image in Nova ephemeral pool. 
If you check the RBD image created in Nova ephemeral pool, all images have their own parents from glance images. For launching VM on volume, it will "copy" the image to volume pool first, resize to specified disk size, then connect and boot. Because it's not create a snapshot from image, so it will take much longer. Eddie. Tony Liu 於 2021年4月1日 週四 上午8:09寫道: Hi, With Ceph as the backend storage, launching a VM on volume takes much longer than launching on image. Why is that? Could anyone elaborate the high level workflow for those two cases? Thanks! Tony From DHilsbos at performair.com Fri Apr 2 15:26:05 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Fri, 2 Apr 2021 15:26:05 +0000 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: Message-ID: <0670B960225633449A24709C291A52524FBBF9F1@COM01.performair.local> I agree that an espresso server is necessary, but do we really want it to be rack mounted? What if it starts to leak? Dominic L. Hilsbos, MBA Director - Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From: Kanevsky, Arkady [mailto:Arkady.Kanevsky at dell.com] Sent: Friday, April 2, 2021 6:19 AM To: Braden, Albert; openstack-discuss Subject: [Disarmed] RE: [Ironic][RFE] Enable support for espresso machines in Ironic Dell Customer Communication - Confidential Expresso server is still needed and must be used at least once a day. From: Braden, Albert Sent: Friday, April 2, 2021 8:08 AM To: openstack-discuss Subject: RE: [Ironic][RFE] Enable support for espresso machines in Ironic [EXTERNAL EMAIL] April fool joke?! Does this mean that I wasted my time last night designing a rack-mount espresso server? From: Kanevsky, Arkady > Sent: Friday, April 2, 2021 8:46 AM To: Jacob Anders >; openstack-discuss > Subject: [EXTERNAL] RE: [Ironic][RFE] Enable support for espresso machines in Ironic CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Dell Customer Communication - Confidential I love this April fool joke From: Jacob Anders > Sent: Thursday, April 1, 2021 3:54 PM To: openstack-discuss Subject: [Ironic][RFE] Enable support for espresso machines in Ironic [EXTERNAL EMAIL] Hi There, I was discussing this RFE with Julia and we decided it would be great to get some feedback on it from the wider community, ideally by the end of today. Apologies for the short notice. Here's the story: MailScanner has detected a possible fraud attempt from "urldefense.com" claiming to be https://storyboard.openstack.org/#!/story/2008791 [storyboard.openstack.org] What are your thoughts on this? Please comment in the Story or just reply to thread. Thank you, Jacob The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aadewojo at gmail.com Fri Apr 2 15:32:42 2021 From: aadewojo at gmail.com (Adekunbi Adewojo) Date: Fri, 2 Apr 2021 16:32:42 +0100 Subject: all Octavia LoadBalancer Message-ID: Hi there, I recently deployed a load balancer on our openstack private cloud. I used this manual - https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html to create the load balancer. However, after creating and trying to access it, it returns an error message saying "No server is available to handle this request". Also on the dashboard, "Operating status" shows offline but "provisioning status" shows active. I have two web applications as members of the load balancer and I can individually access those web applications. Could someone please point me in the right direction? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From C-Albert.Braden at charter.com Fri Apr 2 15:48:12 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Fri, 2 Apr 2021 15:48:12 +0000 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: <0670B960225633449A24709C291A52524FBBF9F1@COM01.performair.local> References: <0670B960225633449A24709C291A52524FBBF9F1@COM01.performair.local> Message-ID: In the event of a leak, smart hands will respond with empty cups. Next step is designing the "series of tubes" that will deliver the product to consumers. 507eba0cecad04cd7400002b (900×633) (insider.com) From: DHilsbos at performair.com Sent: Friday, April 2, 2021 11:26 AM To: openstack-discuss at lists.openstack.org Subject: [EXTERNAL] RE: [Ironic][RFE] Enable support for espresso machines in Ironic CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. I agree that an espresso server is necessary, but do we really want it to be rack mounted? What if it starts to leak? Dominic L. Hilsbos, MBA Director - Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From: Kanevsky, Arkady [mailto:Arkady.Kanevsky at dell.com] Sent: Friday, April 2, 2021 6:19 AM To: Braden, Albert; openstack-discuss Subject: [Disarmed] RE: [Ironic][RFE] Enable support for espresso machines in Ironic Dell Customer Communication - Confidential Expresso server is still needed and must be used at least once a day. From: Braden, Albert > Sent: Friday, April 2, 2021 8:08 AM To: openstack-discuss Subject: RE: [Ironic][RFE] Enable support for espresso machines in Ironic [EXTERNAL EMAIL] April fool joke?! Does this mean that I wasted my time last night designing a rack-mount espresso server? From: Kanevsky, Arkady > Sent: Friday, April 2, 2021 8:46 AM To: Jacob Anders >; openstack-discuss > Subject: [EXTERNAL] RE: [Ironic][RFE] Enable support for espresso machines in Ironic CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Dell Customer Communication - Confidential I love this April fool joke From: Jacob Anders > Sent: Thursday, April 1, 2021 3:54 PM To: openstack-discuss Subject: [Ironic][RFE] Enable support for espresso machines in Ironic [EXTERNAL EMAIL] Hi There, I was discussing this RFE with Julia and we decided it would be great to get some feedback on it from the wider community, ideally by the end of today. Apologies for the short notice. 
Here's the story: MailScanner has detected a possible fraud attempt from "urldefense.com" claiming to be https://storyboard.openstack.org/#!/story/2008791 [storyboard.openstack.org] What are your thoughts on this? Please comment in the Story or just reply to thread. Thank you, Jacob The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mthode at mthode.org Fri Apr 2 16:09:40 2021 From: mthode at mthode.org (Matthew Thode) Date: Fri, 2 Apr 2021 11:09:40 -0500 Subject: [Requirements][all] Requirements branched, freeze lifted Message-ID: <20210402160940.naqcizgxc7psbkwt@mthode.org> The requirements freeze is now lifted. If your project has not branched please be aware that master moves on (to xena). -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From juliaashleykreger at gmail.com Sat Apr 3 04:04:12 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Fri, 2 Apr 2021 21:04:12 -0700 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: <0670B960225633449A24709C291A52524FBBF9F1@COM01.performair.local> Message-ID: I do believe any smart hands called to the rackmount espresso server would be greatly appreciative of this important functionality... even should a leak have developed in the system. On Fri, Apr 2, 2021 at 8:50 AM Braden, Albert wrote: > > In the event of a leak, smart hands will respond with empty cups. > > > > Next step is designing the “series of tubes” that will deliver the product to consumers. > > > > 507eba0cecad04cd7400002b (900×633) (insider.com) > > > > From: DHilsbos at performair.com > Sent: Friday, April 2, 2021 11:26 AM > To: openstack-discuss at lists.openstack.org > Subject: [EXTERNAL] RE: [Ironic][RFE] Enable support for espresso machines in Ironic > > > > CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. > > I agree that an espresso server is necessary, but do we really want it to be rack mounted? What if it starts to leak? > > > > Dominic L. Hilsbos, MBA > > Director – Information Technology > > Perform Air International Inc. 
> > DHilsbos at PerformAir.com > > www.PerformAir.com > > > > From: Kanevsky, Arkady [mailto:Arkady.Kanevsky at dell.com] > Sent: Friday, April 2, 2021 6:19 AM > To: Braden, Albert; openstack-discuss > Subject: [Disarmed] RE: [Ironic][RFE] Enable support for espresso machines in Ironic > > > > Dell Customer Communication - Confidential > > > > Expresso server is still needed and must be used at least once a day. > > > > From: Braden, Albert > Sent: Friday, April 2, 2021 8:08 AM > To: openstack-discuss > Subject: RE: [Ironic][RFE] Enable support for espresso machines in Ironic > > > > [EXTERNAL EMAIL] > > April fool joke?! Does this mean that I wasted my time last night designing a rack-mount espresso server? > > > > From: Kanevsky, Arkady > Sent: Friday, April 2, 2021 8:46 AM > To: Jacob Anders ; openstack-discuss > Subject: [EXTERNAL] RE: [Ironic][RFE] Enable support for espresso machines in Ironic > > > > CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. > > Dell Customer Communication - Confidential > > > > I love this April fool joke > > > > From: Jacob Anders > Sent: Thursday, April 1, 2021 3:54 PM > To: openstack-discuss > Subject: [Ironic][RFE] Enable support for espresso machines in Ironic > > > > [EXTERNAL EMAIL] > > Hi There, > > > > I was discussing this RFE with Julia and we decided it would be great to get some feedback on it from the wider community, ideally by the end of today. Apologies for the short notice. Here's the story: > > > > MailScanner has detected a possible fraud attempt from "urldefense.com" claiming to be https://storyboard.openstack.org/#!/story/2008791 [storyboard.openstack.org] > > > > What are your thoughts on this? Please comment in the Story or just reply to thread. > > > > Thank you, > > Jacob > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. From satish.txt at gmail.com Sat Apr 3 05:12:00 2021 From: satish.txt at gmail.com (Satish Patel) Date: Sat, 3 Apr 2021 01:12:00 -0400 Subject: ML2/OVN DVR question Message-ID: Folks, I have deployed openstack using ML2/OVN on 1 controller and 2 compute nodes so far everything is working fine, when i configured router it by default used L3HA and i can see active-backup router on both compute nodes. 
Currently all my SNAT traffic goes out via compute-1.

I don't want a bottleneck in the network, so I am looking at a DVR deployment, and after reading I found that tenant VLANs don't support DVR:
https://bugzilla.redhat.com/show_bug.cgi?id=1704596

After doing more research I found that if I manually set external_mac with the following command, then my VM uses its local compute node to send traffic in/out, just like DVR, instead of the centralized design.

root at os-infra-1-neutron-ovn-northd-container-24eea9c2:~# ovn-nbctl find NAT type=dnat_and_snat
_uuid               : 99bdd866-01ed-425d-853b-9362ae8572c9
external_ids        : {"neutron:fip_external_mac"="fa:16:3e:2d:7e:fa", "neutron:fip_id"="025a912a-c0ee-4f36-98ad-8992bd825cfc", "neutron:fip_network_id"="9cccf39d-edba-4069-91ef-5f30afbb6604", "neutron:fip_port_id"="70ad361a-b42e-403b-a5c1-4ee39ddf5e31", "neutron:revision_number"="6", "neutron:router_name"=neutron-8af10b06-c8de-4166-9ab1-ca2f775b08a8}
external_ip         : "10.40.255.10"
external_mac        : []
logical_ip          : "172.168.0.164"
logical_port        : "70ad361a-b42e-403b-a5c1-4ee39ddf5e31"
options             : {}
type                : dnat_and_snat

_uuid               : c438e7be-5ff4-472e-b053-8d6ed74cd4dc
external_ids        : {"neutron:fip_external_mac"="fa:16:3e:f5:9f:f0", "neutron:fip_id"="31e8cb44-0acd-453b-a4e6-39f6ab3a6da4", "neutron:fip_network_id"="9cccf39d-edba-4069-91ef-5f30afbb6604", "neutron:fip_port_id"="44a677c5-86ff-4b6b-a046-54e79f79c4cd", "neutron:revision_number"="2", "neutron:router_name"=neutron-8af10b06-c8de-4166-9ab1-ca2f775b08a8}
external_ip         : "10.40.255.5"
external_mac        : []
logical_ip          : "172.168.0.67"
logical_port        : "44a677c5-86ff-4b6b-a046-54e79f79c4cd"
options             : {}
type                : dnat_and_snat

This is how I set the external MAC, using the "neutron:fip_external_mac" value ("fa:16:3e:2d:7e:fa") from the output above:

ovn-nbctl set NAT 99bdd866-01ed-425d-853b-9362ae8572c9 external_mac="fa\:16\:3e\:2d\:7e\:fa"

How do I make this behavior the default for every single VM? I don't want to set the external MAC address of each FIP manually.

From missile0407 at gmail.com Sat Apr 3 12:24:54 2021
From: missile0407 at gmail.com (Eddie Yen)
Date: Sat, 3 Apr 2021 20:24:54 +0800
Subject: launch VM on volume vs. image
In-Reply-To: <0670B960225633449A24709C291A52524FBBF96C@COM01.performair.local>
References: <0670B960225633449A24709C291A52524FBBD136@COM01.performair.local>
 <0670B960225633449A24709C291A52524FBBF96C@COM01.performair.local>
Message-ID: 

According to Tony's info, it will take a very long time to create because it needs to convert first. As Dominic said, that not only wastes time but also wastes compute node disk space on every creation.

I still suggest converting to RAW format first; then only the initial upload to Ceph takes a long time.

 wrote on Fri, Apr 2, 2021 at 11:23 PM:

> Tony;
>
> In accordance with the second "Important" note listed here:
> https://docs.ceph.com/en/nautilus/rbd/rbd-openstack/, we use all RAW
> images.
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Director - Information Technology
> Perform Air International Inc.
> DHilsbos at PerformAir.com
> www.PerformAir.com
>
>
> -----Original Message-----
> From: Tony Liu [mailto:tonyliu0592 at hotmail.com]
> Sent: Thursday, April 1, 2021 5:19 PM
> To: Dominic Hilsbos
> Cc: openstack-discuss at lists.openstack.org; missile0407 at gmail.com
> Subject: Re: launch VM on volume vs. image
>
> Hi Dominic,
>
> What's your image format?
>
>
> Thanks!
> Tony > ________________________________________ > From: DHilsbos at performair.com > Sent: April 1, 2021 10:47 AM > To: tonyliu0592 at hotmail.com > Cc: openstack-discuss at lists.openstack.org; missile0407 at gmail.com > Subject: RE: launch VM on volume vs. image > > Tony / Eddie; > > I think this is partially dependent on the version of OpenStack running. > In our Victoria cloud, a volume created from an image is also done as a > snapshot by Ceph, and is completed in seconds. > > Thank you, > > Dominic L. Hilsbos, MBA > Director ? Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > From: Eddie Yen [mailto:missile0407 at gmail.com] > Sent: Wednesday, March 31, 2021 6:00 PM > To: Tony Liu > Cc: openstack-discuss at lists.openstack.org > Subject: Re: launch VM on volume vs. image > > Hi Tony, > > In Ceph layer, IME, launching VM on image is creating a snapshot from > source image in Nova ephemeral pool. > If you check the RBD image created in Nova ephemeral pool, all images have > their own parents from glance images. > > For launching VM on volume, it will "copy" the image to volume pool first, > resize to specified disk size, then connect and boot. > Because it's not create a snapshot from image, so it will take much longer. > > Eddie. > > Tony Liu 於 2021年4月1日 週四 上午8:09寫道: > Hi, > > With Ceph as the backend storage, launching a VM on volume takes much > longer than launching on image. Why is that? > Could anyone elaborate the high level workflow for those two cases? > > > Thanks! > Tony > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Sat Apr 3 12:28:43 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Sat, 3 Apr 2021 14:28:43 +0200 Subject: [TripleO][ussuri] undercloud install # fails on heat launch Message-ID: Hi all, I am trying to understand why undercloud install does not pass the heat step. I feel that it is related to undercloud.conf file, something wrong there? Or some special char which python do not want to understand? How to enable heat-launcher more verbose output? as last line is starting engine and that's it... VERY HELPFULLL!!! AND IT CANNOT CONNECT?! when I launch it manuallly it is running for long time... ?!>!???>?>?>?><>?!>?!!>!?!!>?!!?>!?! AAAAAA PAIN!!!!!! openstack undercloud install last lines: http://paste.openstack.org/show/RCUVMaHa4XwXwRM6IS67/ undercloud.conf: https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf heat.log: http://paste.openstack.org/show/C5yP3iO8VDk7ugLOHTo6/ -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Sat Apr 3 13:48:50 2021 From: satish.txt at gmail.com (Satish Patel) Date: Sat, 3 Apr 2021 09:48:50 -0400 Subject: ML2/OVN DVR question In-Reply-To: References: Message-ID: Update: This is what I am experiencing: https://bugzilla.redhat.com/show_bug.cgi?id=1700043 I have deployed openstack using openstack-ansible on ubuntu 20.04 (openvswitch version 2.13.1), Is there anything i need to do? or this is real BUG? On Sat, Apr 3, 2021 at 1:12 AM Satish Patel wrote: > > Folks, > > I have deployed openstack using ML2/OVN on 1 controller and 2 compute > nodes so far everything is working fine, when i configured router it > by default used L3HA and i can see active-backup router on both > compute nodes. 
currently my all SNAT traffic going out using compute-1 > > I don't want bottleneck in network so i am looking for DVR deployment > and after reading i found tenant VLAN doesn't support DVR > https://bugzilla.redhat.com/show_bug.cgi?id=1704596 > > After doing more research i found that if i set manually external_mac > using the following command then my vm using local compute node to > send traffic in/out just like DVR instead of centralized design. > > > root at os-infra-1-neutron-ovn-northd-container-24eea9c2:~# ovn-nbctl > find NAT type=dnat_and_snat > _uuid : 99bdd866-01ed-425d-853b-9362ae8572c9 > external_ids : {"neutron:fip_external_mac"="fa:16:3e:2d:7e:fa", > "neutron:fip_id"="025a912a-c0ee-4f36-98ad-8992bd825cfc", > "neutron:fip_network_id"="9cccf39d-edba-4069-91ef-5f30afbb6604", > "neutron:fip_port_id"="70ad361a-b42e-403b-a5c1-4ee39ddf5e31", > "neutron:revision_number"="6", > "neutron:router_name"=neutron-8af10b06-c8de-4166-9ab1-ca2f775b08a8} > external_ip : "10.40.255.10" > external_mac : [] > logical_ip : "172.168.0.164" > logical_port : "70ad361a-b42e-403b-a5c1-4ee39ddf5e31" > options : {} > type : dnat_and_snat > > _uuid : c438e7be-5ff4-472e-b053-8d6ed74cd4dc > external_ids : {"neutron:fip_external_mac"="fa:16:3e:f5:9f:f0", > "neutron:fip_id"="31e8cb44-0acd-453b-a4e6-39f6ab3a6da4", > "neutron:fip_network_id"="9cccf39d-edba-4069-91ef-5f30afbb6604", > "neutron:fip_port_id"="44a677c5-86ff-4b6b-a046-54e79f79c4cd", > "neutron:revision_number"="2", > "neutron:router_name"=neutron-8af10b06-c8de-4166-9ab1-ca2f775b08a8} > external_ip : "10.40.255.5" > external_mac : [] > logical_ip : "172.168.0.67" > logical_port : "44a677c5-86ff-4b6b-a046-54e79f79c4cd" > options : {} > type : dnat_and_snat > > > This is how i set external mac from > fip_external_mac"="fa:16:3e:2d:7e:fa" in above command. > > ovn-nbctl set NAT 99bdd866-01ed-425d-853b-9362ae8572c9 > external_mac="fa\:16\:3e\:2d\:7e\:fa" > > How do i make this behavior default for every single VM, i don't want > to do this manually to set the external mac address of each FIP? From tolga at etom.cloud Sat Apr 3 18:17:03 2021 From: tolga at etom.cloud (Tolga Kaprol) Date: Sat, 3 Apr 2021 21:17:03 +0300 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: References: <0670B960225633449A24709C291A52524FBBF9F1@COM01.performair.local> Message-ID: <70d42f83-b247-6842-edca-926ba9870086@etom.cloud> We are looking forward for a CoffeeScript SDK too. On 3.04.2021 07:04, Julia Kreger wrote: > I do believe any smart hands called to the rackmount espresso server > would be greatly appreciative of this important functionality... even > should a leak have developed in the system. > > On Fri, Apr 2, 2021 at 8:50 AM Braden, Albert > wrote: >> In the event of a leak, smart hands will respond with empty cups. >> >> >> >> Next step is designing the “series of tubes” that will deliver the product to consumers. >> >> >> >> 507eba0cecad04cd7400002b (900×633) (insider.com) >> >> >> >> From: DHilsbos at performair.com >> Sent: Friday, April 2, 2021 11:26 AM >> To: openstack-discuss at lists.openstack.org >> Subject: [EXTERNAL] RE: [Ironic][RFE] Enable support for espresso machines in Ironic >> >> >> >> CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. >> >> I agree that an espresso server is necessary, but do we really want it to be rack mounted? What if it starts to leak? >> >> >> >> Dominic L. 
Hilsbos, MBA >> >> Director – Information Technology >> >> Perform Air International Inc. >> >> DHilsbos at PerformAir.com >> >> www.PerformAir.com >> >> >> >> From: Kanevsky, Arkady [mailto:Arkady.Kanevsky at dell.com] >> Sent: Friday, April 2, 2021 6:19 AM >> To: Braden, Albert; openstack-discuss >> Subject: [Disarmed] RE: [Ironic][RFE] Enable support for espresso machines in Ironic >> >> >> >> Dell Customer Communication - Confidential >> >> >> >> Expresso server is still needed and must be used at least once a day. >> >> >> >> From: Braden, Albert >> Sent: Friday, April 2, 2021 8:08 AM >> To: openstack-discuss >> Subject: RE: [Ironic][RFE] Enable support for espresso machines in Ironic >> >> >> >> [EXTERNAL EMAIL] >> >> April fool joke?! Does this mean that I wasted my time last night designing a rack-mount espresso server? >> >> >> >> From: Kanevsky, Arkady >> Sent: Friday, April 2, 2021 8:46 AM >> To: Jacob Anders ; openstack-discuss >> Subject: [EXTERNAL] RE: [Ironic][RFE] Enable support for espresso machines in Ironic >> >> >> >> CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. >> >> Dell Customer Communication - Confidential >> >> >> >> I love this April fool joke >> >> >> >> From: Jacob Anders >> Sent: Thursday, April 1, 2021 3:54 PM >> To: openstack-discuss >> Subject: [Ironic][RFE] Enable support for espresso machines in Ironic >> >> >> >> [EXTERNAL EMAIL] >> >> Hi There, >> >> >> >> I was discussing this RFE with Julia and we decided it would be great to get some feedback on it from the wider community, ideally by the end of today. Apologies for the short notice. Here's the story: >> >> >> >> MailScanner has detected a possible fraud attempt from "urldefense.com" claiming to be https://storyboard.openstack.org/#!/story/2008791 [storyboard.openstack.org] >> >> >> >> What are your thoughts on this? Please comment in the Story or just reply to thread. >> >> >> >> Thank you, >> >> Jacob >> >> The contents of this e-mail message and >> any attachments are intended solely for the >> addressee(s) and may contain confidential >> and/or legally privileged information. If you >> are not the intended recipient of this message >> or if this message has been addressed to you >> in error, please immediately alert the sender >> by reply e-mail and then delete this message >> and any attachments. If you are not the >> intended recipient, you are notified that >> any use, dissemination, distribution, copying, >> or storage of this message or any attachment >> is strictly prohibited. >> >> The contents of this e-mail message and >> any attachments are intended solely for the >> addressee(s) and may contain confidential >> and/or legally privileged information. If you >> are not the intended recipient of this message >> or if this message has been addressed to you >> in error, please immediately alert the sender >> by reply e-mail and then delete this message >> and any attachments. If you are not the >> intended recipient, you are notified that >> any use, dissemination, distribution, copying, >> or storage of this message or any attachment >> is strictly prohibited. 
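A note on the ML2/OVN DVR question above: rather than patching each NAT row by hand with ovn-nbctl, it may be enough to let the OVN driver populate external_mac itself. Neutron's OVN mechanism driver has a distributed floating IP option for exactly this; the sketch below shows the neutron-server configuration, and is worth verifying against the Neutron OVN driver docs for your release. Note that it applies to floating IP traffic only; SNAT-only traffic stays centralized on the gateway chassis.

    [ovn]
    enable_distributed_floating_ip = True

Existing floating IPs may need to be disassociated and re-associated to pick up the change; that last point is an assumption, so test with a single FIP first.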
From donny at fortnebula.com Sun Apr 4 13:57:44 2021 From: donny at fortnebula.com (Donny Davis) Date: Sun, 4 Apr 2021 09:57:44 -0400 Subject: [Ironic][RFE] Enable support for espresso machines in Ironic In-Reply-To: <70d42f83-b247-6842-edca-926ba9870086@etom.cloud> References: <0670B960225633449A24709C291A52524FBBF9F1@COM01.performair.local> <70d42f83-b247-6842-edca-926ba9870086@etom.cloud> Message-ID: What of the front end, what kind of look should we be shooting for in the Bean modal? What kind of point-and-drink experience is expected? We can't expect everyone to handle their caffeine needs based solely on an API. On Sat, Apr 3, 2021 at 2:22 PM Tolga Kaprol wrote: > We are looking forward for a CoffeeScript SDK too. > > On 3.04.2021 07:04, Julia Kreger wrote: > > I do believe any smart hands called to the rackmount espresso server > > would be greatly appreciative of this important functionality... even > > should a leak have developed in the system. > > > > On Fri, Apr 2, 2021 at 8:50 AM Braden, Albert > > wrote: > >> In the event of a leak, smart hands will respond with empty cups. > >> > >> > >> > >> Next step is designing the “series of tubes” that will deliver the > product to consumers. > >> > >> > >> > >> 507eba0cecad04cd7400002b (900×633) (insider.com) > >> > >> > >> > >> From: DHilsbos at performair.com > >> Sent: Friday, April 2, 2021 11:26 AM > >> To: openstack-discuss at lists.openstack.org > >> Subject: [EXTERNAL] RE: [Ironic][RFE] Enable support for espresso > machines in Ironic > >> > >> > >> > >> CAUTION: The e-mail below is from an external source. Please exercise > caution before opening attachments, clicking links, or following guidance. > >> > >> I agree that an espresso server is necessary, but do we really want it > to be rack mounted? What if it starts to leak? > >> > >> > >> > >> Dominic L. Hilsbos, MBA > >> > >> Director – Information Technology > >> > >> Perform Air International Inc. > >> > >> DHilsbos at PerformAir.com > >> > >> www.PerformAir.com > >> > >> > >> > >> From: Kanevsky, Arkady [mailto:Arkady.Kanevsky at dell.com] > >> Sent: Friday, April 2, 2021 6:19 AM > >> To: Braden, Albert; openstack-discuss > >> Subject: [Disarmed] RE: [Ironic][RFE] Enable support for espresso > machines in Ironic > >> > >> > >> > >> Dell Customer Communication - Confidential > >> > >> > >> > >> Expresso server is still needed and must be used at least once a day. > >> > >> > >> > >> From: Braden, Albert > >> Sent: Friday, April 2, 2021 8:08 AM > >> To: openstack-discuss > >> Subject: RE: [Ironic][RFE] Enable support for espresso machines in > Ironic > >> > >> > >> > >> [EXTERNAL EMAIL] > >> > >> April fool joke?! Does this mean that I wasted my time last night > designing a rack-mount espresso server? > >> > >> > >> > >> From: Kanevsky, Arkady > >> Sent: Friday, April 2, 2021 8:46 AM > >> To: Jacob Anders ; openstack-discuss < > openstack-discuss at lists.openstack.org> > >> Subject: [EXTERNAL] RE: [Ironic][RFE] Enable support for espresso > machines in Ironic > >> > >> > >> > >> CAUTION: The e-mail below is from an external source. Please exercise > caution before opening attachments, clicking links, or following guidance. 
> >> > >> Dell Customer Communication - Confidential > >> > >> > >> > >> I love this April fool joke > >> > >> > >> > >> From: Jacob Anders > >> Sent: Thursday, April 1, 2021 3:54 PM > >> To: openstack-discuss > >> Subject: [Ironic][RFE] Enable support for espresso machines in Ironic > >> > >> > >> > >> [EXTERNAL EMAIL] > >> > >> Hi There, > >> > >> > >> > >> I was discussing this RFE with Julia and we decided it would be great > to get some feedback on it from the wider community, ideally by the end of > today. Apologies for the short notice. Here's the story: > >> > >> > >> > >> MailScanner has detected a possible fraud attempt from "urldefense.com" > claiming to be https://storyboard.openstack.org/#!/story/2008791 [ > storyboard.openstack.org] > >> > >> > >> > >> What are your thoughts on this? Please comment in the Story or just > reply to thread. > >> > >> > >> > >> Thank you, > >> > >> Jacob > >> > >> The contents of this e-mail message and > >> any attachments are intended solely for the > >> addressee(s) and may contain confidential > >> and/or legally privileged information. If you > >> are not the intended recipient of this message > >> or if this message has been addressed to you > >> in error, please immediately alert the sender > >> by reply e-mail and then delete this message > >> and any attachments. If you are not the > >> intended recipient, you are notified that > >> any use, dissemination, distribution, copying, > >> or storage of this message or any attachment > >> is strictly prohibited. > >> > >> The contents of this e-mail message and > >> any attachments are intended solely for the > >> addressee(s) and may contain confidential > >> and/or legally privileged information. If you > >> are not the intended recipient of this message > >> or if this message has been addressed to you > >> in error, please immediately alert the sender > >> by reply e-mail and then delete this message > >> and any attachments. If you are not the > >> intended recipient, you are notified that > >> any use, dissemination, distribution, copying, > >> or storage of this message or any attachment > >> is strictly prohibited. > > -- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. Duty First" -------------- next part -------------- An HTML attachment was scrubbed... URL: From 379035389 at qq.com Sat Apr 3 05:42:24 2021 From: 379035389 at qq.com (=?gb18030?B?s6/R9M60wdI=?=) Date: Sat, 3 Apr 2021 13:42:24 +0800 Subject: can't build a instance successfully Message-ID: Folks,   I am deploying OpenStack manually and have completed minimal development of the Ussuri. My controller node can find my compute node and confirm there are compute hosts in the database with the instruction:   ” openstack compute service list --service nova-compute”   But when I want to create an instance on the compute node, the status of the compute node just remains “build”. And I try to look for faults from the “/var/log/nova/nova-compute.log” of the compute node:   2021-04-03 00:59:43.379 1432 INFO os_vif [req-2ece8c1c-a96f-4d91-b704-5598c1166016 98049570d7a54e26b8af4eaec9e2eca2 8342df14fa614ad79a08e68f097e4487 - default default] Successfully unplugged vif VIFBridge(active=True,address=fa:16:3e:55:a5:65,bridge_name='brq3169e77c-99',has_traffic_filtering=True,id=28248ef5-6ad6-44bf-b2ce-3fa7ac2371ef,network=Network(3169e77c-9945-454f-9562-6e9a55e1adce),plugin='linux_bridge',port_profile= -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 5C60710B at 0FA9A32F.40006860.png.jpg
Type: image/jpeg
Size: 38601 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 915DCF8E at 1A5B8D1B.40006860.png.jpg
Type: image/jpeg
Size: 37368 bytes
Desc: not available
URL: 

From berndbausch at mailbox.org Sun Apr 4 14:44:23 2021
From: berndbausch at mailbox.org (Bernd Bausch)
Date: Sun, 4 Apr 2021 23:16:12 +0900
Subject: [Neutron] How to provide internet access to tier 2 instance
Message-ID: <7aa3968f-dceb-43af-2548-e8ed0f7ac9b1@mailbox.org>

I have a pretty standard single-server Victoria Devstack, where I created this network topology:

 public        private       backend
   |              |              |
   |  /-------\   |-- I1         |- I2
   |--|Router1|---|              |
   |  \-------/   |              |
   |              |  /-------\   |
   |              |--|Router2|---|
   |              |  \-------/   |
   |              |              |

I1 and I2 are instances. My question: Is it possible to give I2 access to the external world to install software and download files? I don't need access **to** I2 **from** the external world.

My unsuccessful attempt: After adding a static default route (via Router1) to Router2, I can ping the internet from Router2's namespace, but not from I2. My guess is that Router1 ignores traffic from networks that are not attached to it.

I don't have enough experience to understand the netfilter rules in Router1's namespace, and in any case, rather than tweaking them I need a supported method to give I2 internet access, or the confirmation that it is not possible.

Thanks much for any insights and suggestions.

Bernd

From luke.camilleri at zylacomputing.com Sun Apr 4 18:43:20 2021
From: luke.camilleri at zylacomputing.com (Luke Camilleri)
Date: Sun, 4 Apr 2021 20:43:20 +0200
Subject: [victoria][neutron][horizon]l3-agent+port-forwarding
Message-ID: 

Hello everyone,

I have enabled the L3 extension for port-forwarding and can successfully port-forward traffic after assigning an additional floating IP to the project.

I would like to know if it is possible to enable the corresponding Horizon functionality for this extension (port-forwarding), please?

Regards

From masayuki.igawa at gmail.com Sun Apr 4 23:16:12 2021
From: masayuki.igawa at gmail.com (Masayuki Igawa)
Date: Mon, 05 Apr 2021 08:16:12 +0900
Subject: [qa][hacking] Proposing new core reviewers
In-Reply-To: 
References: 
Message-ID: 

Hi,

On Wed, Mar 31, 2021, at 05:47, Martin Kopec wrote:
> Hi all,
>
> I'd like to propose Sorin Sbarnea (IRC: zbr) and Radosław Piliszek
> (IRC: yoctozepto) to hacking
> core. They both are doing a great upstream work among multiple
> different projects and
> volunteered to help us with maintenance of hacking project as well.
>
> You can vote/feedback in this email thread. If no objection by 6th of
> April, we will add them
> to the list.
>

+1 !

-- Masayuki

> Regards,
> --
> Martin Kopec
>
>
>

From berndbausch at gmail.com Mon Apr 5 00:48:05 2021
From: berndbausch at gmail.com (Bernd Bausch)
Date: Mon, 5 Apr 2021 09:48:05 +0900
Subject: can't build a instance successfully
In-Reply-To: 
References: 
Message-ID: 

This is where you should start your troubleshooting:

On 4/3/2021 2:42 PM, 朝阳未烈 wrote:
> AMQP server on controller:5672 is unreachable

Something prevents your compute node from reaching the message queue server on the controller. It could be a network problem, a routing problem on the compute node or the controller, the message queue server might be down, a firewall suddenly blocking port 5672... the possibilities are endless.
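Building on the reply above, a minimal first pass from the affected compute node could look like this. It assumes the controller is reachable under the hostname "controller" that appears in the log message, and that nc and ss are installed.

    ping -c2 controller        # does the name resolve and answer at all?
    nc -vz controller 5672     # is the AMQP port reachable?
    ss -tn | grep 5672         # any established connections to RabbitMQ?

If the port check fails while ping succeeds, look at firewalld/iptables on the controller and at the state of the rabbitmq service itself.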
From thierry at openstack.org Mon Apr 5 08:55:36 2021
From: thierry at openstack.org (Thierry Carrez)
Date: Mon, 5 Apr 2021 10:55:36 +0200
Subject: [largescale-sig] Next meeting: April 7, 15utc
Message-ID: 

Hi everyone,

Our next Large Scale SIG meeting will be this Wednesday in #openstack-meeting-3 on IRC, at 15UTC. You can doublecheck how it translates locally at:
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210407T15

A number of topics have already been added to the agenda, including discussing our next video meetings and PTG participation. Feel free to add other topics to our agenda at:
https://etherpad.openstack.org/p/large-scale-sig-meeting

Regards,

-- 
Thierry Carrez

From ykarel at redhat.com Mon Apr 5 12:18:37 2021
From: ykarel at redhat.com (Yatin Karel)
Date: Mon, 5 Apr 2021 17:48:37 +0530
Subject: [TripleO][ussuri] undercloud install # fails on heat launch
In-Reply-To: 
References: 
Message-ID: 

Hi Ruslanas,

Looks like the issue is with the version of python3-six and python3-urllib3 installed in your system; I have seen this issue in the past with this mismatch. Can you check the versions of both on your system and which repo they are installed from (dnf list installed python3-six python3-urllib3)? It seems python3-six is not updated from the Ussuri repo; if that's true, try again after updating python3-six (dnf update python3-six).

Thanks and regards
Yatin Karel

On Sat, Apr 3, 2021 at 6:00 PM Ruslanas Gžibovskis wrote:
>
> Hi all,
>
> I am trying to understand why undercloud install does not pass the heat step.
> I feel that it is related to undercloud.conf file, something wrong there? Or some special char which python do not want to understand?
> How to enable heat-launcher more verbose output? as last line is starting engine and that's it... VERY HELPFULLL!!! AND IT CANNOT CONNECT?! when I launch it manuallly it is running for long time... ?!>!???>?>?>?><>?!>?!!>!?!!>?!!?>!?! AAAAAA PAIN!!!!!!
>
> openstack undercloud install last lines: http://paste.openstack.org/show/RCUVMaHa4XwXwRM6IS67/
> undercloud.conf: https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf
> heat.log: http://paste.openstack.org/show/C5yP3iO8VDk7ugLOHTo6/
>
> --
> Ruslanas Gžibovskis
> +370 6030 7030

From ruslanas at lpic.lt Mon Apr 5 12:50:01 2021
From: ruslanas at lpic.lt (Ruslanas Gžibovskis)
Date: Mon, 5 Apr 2021 15:50:01 +0300
Subject: [TripleO][ussuri] undercloud install # fails on heat launch
In-Reply-To: 
References: 
Message-ID: 

I had:
[stack at undercloud ~]$ dnf list installed | egrep 'six|urllib3'
python3-six.noarch 1.11.0-8.el8 @anaconda
python3-urllib3.noarch 1.25.7-2.el8 @centos-openstack-ussuri
[stack at undercloud ~]$

I believe they were the latest last week, as I had dnf update in the post-install script and a fresh install later last week. BUT maybe dnf update got commented out at some point... I will check; for now I set it to update, as there were some updates (including the six package and urllib3). I will check if this solves my issues.

Yes, I have faced these issues previously, when part of the packages got installed from EPEL... Now I have only Ussuri (by the way, it is now updating from the Ussuri repo). It might be an issue that it did not get overridden during installation of the tripleo package...

I will check it after the undercloud deployment works.

On Mon, 5 Apr 2021 at 15:19, Yatin Karel wrote:
> Hi Ruslanas,
>
> Looks like the issue is with the version of python3-six and
> python3-urllib3 installed in your system, i have seen this issue in
> the past with this mismatch.
> Can you check versions of both on your system and from which repo > those are installed(dnf list installed python3-six python3-urllib3). > Seems python3-six is not updated from Ussuri repo, if that's true try > again after updated python3-six(dnf update python3-six). > > > Thanks and regards > Yatin Karel > > > On Sat, Apr 3, 2021 at 6:00 PM Ruslanas Gžibovskis > wrote: > > > > Hi all, > > > > I am trying to understand why undercloud install does not pass the heat > step. > > I feel that it is related to undercloud.conf file, something wrong > there? Or some special char which python do not want to understand? > > How to enable heat-launcher more verbose output? as last line is > starting engine and that's it... VERY HELPFULLL!!! AND IT CANNOT CONNECT?! > when I launch it manuallly it is running for long time... > ?!>!???>?>?>?><>?!>?!!>!?!!>?!!?>!?! AAAAAA PAIN!!!!!! > > > > openstack undercloud install last lines: > http://paste.openstack.org/show/RCUVMaHa4XwXwRM6IS67/ > > undercloud.conf: > https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf > > heat.log: http://paste.openstack.org/show/C5yP3iO8VDk7ugLOHTo6/ > > > > -- > > Ruslanas Gžibovskis > > +370 6030 7030 > > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Mon Apr 5 12:56:52 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Mon, 5 Apr 2021 15:56:52 +0300 Subject: [TripleO][ussuri] undercloud install # fails on heat launch In-Reply-To: References: Message-ID: Yes, it bypasses this step now, Thank you Yatin! now it is: python3-six.noarch 1.14.0-2.el8 @centos-openstack-ussuri python3-urllib-gssapi.noarch 1.0.1-10.el8 @centos-openstack-ussuri python3-urllib3.noarch 1.25.7-3.el8 @centos-openstack-ussuri [stack at undercloud ~]$ On Mon, 5 Apr 2021 at 15:50, Ruslanas Gžibovskis wrote: > I had: > [stack at undercloud ~]$ dnf list installed | egrep 'six|urllib3' > python3-six.noarch 1.11.0-8.el8 > @anaconda > python3-urllib3.noarch 1.25.7-2.el8 > @centos-openstack-ussuri > [stack at undercloud ~]$ > > I belie they were latest last week, as I had dnf update in post install > script and later fresh install last week. BUT maybe dnf update is commented > out at some point... I will check, now I set it to update, as there were > some updates (including six package and urllib3). > Will check if this solves my issues. > > Yes, I have faced these issues previously, when part of packages got > installed from epel... Now have only ussuri ( by the way it is now updating > from ussuri repo) Might be issue, that it did not override it during > installation of tipleo package... > > I will check it after undercloud deployment works. > > > > On Mon, 5 Apr 2021 at 15:19, Yatin Karel wrote: > >> Hi Ruslanas, >> >> Looks like the issue is with the version of python3-six and >> python3-urllib3 installed in your system, i have seen this issue in >> the past with this mismatch. >> Can you check versions of both on your system and from which repo >> those are installed(dnf list installed python3-six python3-urllib3). >> Seems python3-six is not updated from Ussuri repo, if that's true try >> again after updated python3-six(dnf update python3-six). >> >> >> Thanks and regards >> Yatin Karel >> >> >> On Sat, Apr 3, 2021 at 6:00 PM Ruslanas Gžibovskis >> wrote: >> > >> > Hi all, >> > >> > I am trying to understand why undercloud install does not pass the heat >> step. 
>> > I feel that it is related to undercloud.conf file, something wrong >> there? Or some special char which python do not want to understand? >> > How to enable heat-launcher more verbose output? as last line is >> starting engine and that's it... VERY HELPFULLL!!! AND IT CANNOT CONNECT?! >> when I launch it manuallly it is running for long time... >> ?!>!???>?>?>?><>?!>?!!>!?!!>?!!?>!?! AAAAAA PAIN!!!!!! >> > >> > openstack undercloud install last lines: >> http://paste.openstack.org/show/RCUVMaHa4XwXwRM6IS67/ >> > undercloud.conf: >> https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf >> > heat.log: http://paste.openstack.org/show/C5yP3iO8VDk7ugLOHTo6/ >> > >> > -- >> > Ruslanas Gžibovskis >> > +370 6030 7030 >> >> > > -- > Ruslanas Gžibovskis > +370 6030 7030 > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykarel at redhat.com Mon Apr 5 13:27:23 2021 From: ykarel at redhat.com (Yatin Karel) Date: Mon, 5 Apr 2021 18:57:23 +0530 Subject: [TripleO][ussuri] undercloud install # fails on heat launch In-Reply-To: References: Message-ID: Hi, On Mon, Apr 5, 2021 at 6:27 PM Ruslanas Gžibovskis wrote: > > Yes, it bypasses this step now, Thank you Yatin! > Good to know. > now it is: > python3-six.noarch 1.14.0-2.el8 @centos-openstack-ussuri > python3-urllib-gssapi.noarch 1.0.1-10.el8 @centos-openstack-ussuri > python3-urllib3.noarch 1.25.7-3.el8 @centos-openstack-ussuri > [stack at undercloud ~]$ > > On Mon, 5 Apr 2021 at 15:50, Ruslanas Gžibovskis wrote: >> >> I had: >> [stack at undercloud ~]$ dnf list installed | egrep 'six|urllib3' >> python3-six.noarch 1.11.0-8.el8 @anaconda >> python3-urllib3.noarch 1.25.7-2.el8 @centos-openstack-ussuri >> [stack at undercloud ~]$ >> Yes, this combination will not work and need python3-six updated. >> I belie they were latest last week, as I had dnf update in post install script and later fresh install last week. BUT maybe dnf update is commented out at some point... I will check, now I set it to update, as there were some updates (including six package and urllib3). >> Will check if this solves my issues. >> Yes dnf update is recommended to avoid such issues. >> Yes, I have faced these issues previously, when part of packages got installed from epel... Now have only ussuri ( by the way it is now updating from ussuri repo) Might be issue, that it did not override it during installation of tipleo package... >> Yes mixing EPEL with OpenStack repos can cause issues as can bring untested updates so should be avoided. >> I will check it after undercloud deployment works. >> >> >> >> On Mon, 5 Apr 2021 at 15:19, Yatin Karel wrote: >>> >>> Hi Ruslanas, >>> >>> Looks like the issue is with the version of python3-six and >>> python3-urllib3 installed in your system, i have seen this issue in >>> the past with this mismatch. >>> Can you check versions of both on your system and from which repo >>> those are installed(dnf list installed python3-six python3-urllib3). >>> Seems python3-six is not updated from Ussuri repo, if that's true try >>> again after updated python3-six(dnf update python3-six). >>> >>> >>> Thanks and regards >>> Yatin Karel >>> >>> >>> On Sat, Apr 3, 2021 at 6:00 PM Ruslanas Gžibovskis wrote: >>> > >>> > Hi all, >>> > >>> > I am trying to understand why undercloud install does not pass the heat step. >>> > I feel that it is related to undercloud.conf file, something wrong there? 
Or some special char which python do not want to understand? >>> > How to enable heat-launcher more verbose output? as last line is starting engine and that's it... VERY HELPFULLL!!! AND IT CANNOT CONNECT?! when I launch it manuallly it is running for long time... ?!>!???>?>?>?><>?!>?!!>!?!!>?!!?>!?! AAAAAA PAIN!!!!!! >>> > >>> > openstack undercloud install last lines: http://paste.openstack.org/show/RCUVMaHa4XwXwRM6IS67/ >>> > undercloud.conf: https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf >>> > heat.log: http://paste.openstack.org/show/C5yP3iO8VDk7ugLOHTo6/ >>> > >>> > -- >>> > Ruslanas Gžibovskis >>> > +370 6030 7030 >>> >> >> >> -- >> Ruslanas Gžibovskis >> +370 6030 7030 > > > > -- > Ruslanas Gžibovskis > +370 6030 7030 Thanks and Regards Yatin Karel From whayutin at redhat.com Mon Apr 5 13:28:20 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 5 Apr 2021 07:28:20 -0600 Subject: [tripleo] centos-stream-8 and container-tools 3.0 Message-ID: FYI.. The container-tools 3.0 module has been published by the centos team. We're seeing it land in: http://dashboard-ci.tripleo.org/d/jwDYSidGz/rpm-dependency-pipeline?viewPanel=22&orgId=1 We will be moving ALL the upstream centos-stream-8 jobs to use container-tools 3.0 now. https://review.opendev.org/c/openstack/tripleo-quickstart/+/784770 https://review.opendev.org/c/openstack/tripleo-quickstart/+/784768 0/ happy monday -------------- next part -------------- An HTML attachment was scrubbed... URL: From aadewojo at gmail.com Mon Apr 5 14:12:22 2021 From: aadewojo at gmail.com (Adekunbi Adewojo) Date: Mon, 5 Apr 2021 15:12:22 +0100 Subject: [all] Octavia LoadBalancer Error Message-ID: Hi there, I recently deployed a load balancer on our openstack private cloud. I used this manual - https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html to create the load balancer. However, after creating and trying to access it, it returns an error message saying "No server is available to handle this request". Also on the dashboard, "Operating status" shows offline but "provisioning status" shows active. I have two web applications as members of the load balancer and I can individually access those web applications. Could someone please point me in the right direction? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From C-Albert.Braden at charter.com Mon Apr 5 14:32:48 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Mon, 5 Apr 2021 14:32:48 +0000 Subject: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller Message-ID: <362166bb4d484088b1232725a1ecf0a1@ncwmexgp009.CORP.CHARTERCOM.com> It looks like the problem may be caused by incompatible versions of RMQ. How can I work around that? -----Original Message----- From: Braden, Albert Sent: Friday, April 2, 2021 8:34 AM To: 'openstack-discuss at lists.openstack.org' Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller I opened a bug for this issue: https://bugs.launchpad.net/kolla-ansible/+bug/1922269 -----Original Message----- From: Braden, Albert Sent: Thursday, April 1, 2021 11:34 AM To: 'openstack-discuss at lists.openstack.org' Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller Sorry that was a typo. Stopping RMQ during the removal of the *second* controller is what causes the problem. Is there a way to tell Centos 8 Train to use RMQ 3.7.24 instead of 3.7.28? 
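For reference, a quick way to confirm which broker and Erlang versions each
controller is actually running - a minimal sketch, assuming docker as the
container engine, the kolla container name "rabbitmq", and ssh access to the
controllers (the hostnames below are placeholders):

# print the RabbitMQ and Erlang versions reported by each node's broker
for host in control0 control1 control2; do
    echo "== $host =="
    ssh $host 'docker exec rabbitmq rabbitmqctl status | grep -E "RabbitMQ|Erlang"'
done

On 3.7.x the rabbitmqctl status output includes lines like
{rabbit,"RabbitMQ","3.7.24"} and the Erlang/OTP banner, so the grep picks
out both versions.
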
-----Original Message----- From: Braden, Albert Sent: Thursday, April 1, 2021 9:34 AM To: 'openstack-discuss at lists.openstack.org' Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller I did some experimenting and it looks like stopping RMQ during the removal of the first controller is what causes the problem. After deploying the first controller, stopping the RMQ container on any controller including the new centos8 controller will cause the entire cluster to stop. This crash dump appears on the controllers that stopped in sympathy: https://paste.ubuntu.com/p/ZDgFgKtQTB/ This appears in the RMQ log: https://paste.ubuntu.com/p/5D2Qjv3H8c/ -----Original Message----- From: Braden, Albert Sent: Wednesday, March 31, 2021 8:31 AM To: openstack-discuss at lists.openstack.org Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller Centos7: {rabbit,"RabbitMQ","3.7.24"}, "Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"}, Centos8: {rabbit,"RabbitMQ","3.7.28"}, "Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"}, When I deploy the first Centos8 controller, RMQ comes up with all 3 nodes active and seems to be working fine until I shut down the 2nd controller. The only hint of trouble when I replace the 1st node is this error message the first time I run the deployment: https://paste.ubuntu.com/p/h9HWdfwmrK/ and the crash dump that appears on control2: crash dump log: https://paste.ubuntu.com/p/MpZ8SwTJ2T/ First 1500 lines of the dump: https://paste.ubuntu.com/p/xkCyp2B8j8/ If I wait for a few minutes then RMQ recovers on control2 and the 2nd run of the deployment seems to work, and there is no trouble until I shut down control1. -----Original Message----- From: Mark Goddard Sent: Wednesday, March 31, 2021 4:14 AM To: Braden, Albert Cc: openstack-discuss at lists.openstack.org Subject: [EXTERNAL] Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. On Tue, 30 Mar 2021 at 13:41, Braden, Albert wrote: > > I’ve created a heat stack and installed Openstack Train to test the Centos7->8 upgrade following the document here: > > > > https://docs.openstack.org/kolla-ansible/train/user/centos8.html#migrating-from-centos-7-to-centos-8 > > > > I used the instructions here to successfully remove and replace control0 with a Centos8 box > > > > https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#removing-existing-controllers > > > > After this my RMQ admin page shows all 3 nodes up, including the new control0. The name of the cluster is rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-2 /]# rabbitmqctl cluster_status > > Cluster status of node rabbit at chrnc-void-testupgrade-control-2 ... 
> > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}, > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > {partitions,[]}, > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-0-replace',[]}, > > {'rabbit at chrnc-void-testupgrade-control-1',[]}, > > {'rabbit at chrnc-void-testupgrade-control-2',[]}]}] > > > > After that I create a new VM to verify that the cluster is still working, and then perform the same procedure on control1. When I shut down services on control1, the ansible playbook finishes successfully: > > > > kolla-ansible -i ../multinode stop --yes-i-really-really-mean-it --limit control1 > > … > > control1 : ok=45 changed=22 unreachable=0 failed=0 skipped=105 rescued=0 ignored=0 > > > > After this my RMQ admin page stops responding. When I check RMQ on the new control0 and the existing control2, the container is still up but RMQ is not running: > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status > > Error: this command requires the 'rabbit' app to be running on the target node. Start it with 'rabbitmqctl start_app'. > > > > If I start it on control0 and control2, then the cluster seems normal and the admin page starts working again, and cluster status looks normal: > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status > > Cluster status of node rabbit at chrnc-void-testupgrade-control-0-replace ... 
> > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > 'rabbit at chrnc-void-testupgrade-control-1', > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-2', > > 'rabbit at chrnc-void-testupgrade-control-0-replace']}, > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > {partitions,[]}, > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-2',[]}, > > {'rabbit at chrnc-void-testupgrade-control-0-replace',[]}]}] > > > > But my hypervisors are down: > > > > (openstack) [root at chrnc-void-testupgrade-build kolla-ansible]# ohll > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | vCPUs Used | vCPUs | Memory MB Used | Memory MB | > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > | 3 | chrnc-void-testupgrade-compute-2.dev.chtrse.com | QEMU | 172.16.2.106 | down | 5 | 8 | 2560 | 30719 | > > | 6 | chrnc-void-testupgrade-compute-0.dev.chtrse.com | QEMU | 172.16.2.31 | down | 5 | 8 | 2560 | 30719 | > > | 9 | chrnc-void-testupgrade-compute-1.dev.chtrse.com | QEMU | 172.16.0.30 | down | 5 | 8 | 2560 | 30719 | > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > > > When I look at the nova-compute.log on a compute node, I see RMQ failures every 10 seconds: > > > > 172.16.2.31 compute0 > > 2021-03-30 03:07:54.893 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. Trying again in 1 seconds.: timeout: timed out > > 2021-03-30 03:07:55.905 7 INFO oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] Reconnected to AMQP server on 172.16.1.132:5672 via [amqp] client with port 56422. > > 2021-03-30 03:08:05.915 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. 
Trying again in 1 seconds.: timeout: timed out > > > > In the RMQ logs I see this every 10 seconds: > > > > 172.16.1.132 control2 > > [root at chrnc-void-testupgrade-control-2 ~]# tail -f /var/log/kolla/rabbitmq/rabbit\@chrnc-void-testupgrade-control-2.log |grep 172.16.2.31 > > 2021-03-30 03:07:54.895 [warning] <0.13247.35> closing AMQP connection <0.13247.35> (172.16.2.31:56420 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): > > client unexpectedly closed TCP connection > > 2021-03-30 03:07:55.901 [info] <0.15288.35> accepting AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) > > 2021-03-30 03:07:55.903 [info] <0.15288.35> Connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) has a client-provided name: nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e > > 2021-03-30 03:07:55.904 [info] <0.15288.35> connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e): user 'openstack' authenticated and granted access to vhost '/' > > 2021-03-30 03:08:05.916 [warning] <0.15288.35> closing AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): > > > > Why does RMQ fail when I shut down the 2nd controller after successfully replacing the first one? Hi Albert, Could you share the versions of RabbitMQ and erlang in both versions of the container? When initially testing this setup, I think we had 3.7.24 on both sides. Perhaps the CentOS 8 version has moved on sufficiently to become incompatible? Mark > > > > I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. > > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. From ruslanas at lpic.lt Mon Apr 5 19:42:53 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Mon, 5 Apr 2021 22:42:53 +0300 Subject: [TripleO][CentOS8] container-puppet-neutron Could not find class ::neutron::plugins::ml2::ovs_driver for undercloud Message-ID: Hi all, While deploying undercloud, always fails on puppet-container-neutron configuration, it fails with missing ml2 ovs_driver plugin... 
downloading them using: openstack tripleo container image prepare default --output-env-file containers-prepare-parameters.yaml grep -v Warning /var/log/containers/stdouts/container-puppet-neutron.log http://paste.openstack.org/show/804180/ builddir/install-undercloud.log ( contains info about container-puppet-neutron ) http://paste.openstack.org/show/804181/ undercloud.conf: https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf dnf list installed http://paste.openstack.org/show/804182/ -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Apr 5 21:38:17 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 05 Apr 2021 16:38:17 -0500 Subject: [all][tc] Technical Committee next weekly meeting on April 8th at 1500 UTC. Message-ID: <178a3f8d599.cf94285387564.6978079671458448803@ghanshyammann.com> Hello Everyone, Technical Committee's next weekly meeting is scheduled for April 8th at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, April 7th, at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From fungi at yuggoth.org Mon Apr 5 22:31:17 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 5 Apr 2021 22:31:17 +0000 Subject: [all][elections][tc] TC Vacancy Special Election Nominations end this week Message-ID: <20210405223116.yw7mqfahtsqps6sp@yuggoth.org> Just a reminder, nominations for one vacant OpenStack TC (Technical Committee) position are only open for three more days, until Apr 08, 2021 23:45 UTC. All nominations must be submitted as a text file to the openstack/election repository as explained at https://governance.openstack.org/election/#how-to-submit-a-candidacy Please make sure to follow the candidacy file naming convention: candidates/xena// (for example, "candidates/xena/TC/stacker at example.org"). The name of the file should match an email address for your current OpenStack Foundation Individual Membership. Take this opportunity to ensure that your OSF member profile contains current information: https://www.openstack.org/profile/ Any OpenStack Foundation Individual Member can propose their candidacy for the vacant seat on the Technical Committee. This TC vacancy special election will be held from Apr 8, 2021 23:45 UTC through to Apr 15, 2021 23:45 UTC. The electorate for the TC election are the OpenStack Foundation Individual Members who have a code contribution to one of the official teams over the Victoria to Wallaby timeframe, Apr 24, 2020 00:00 UTC - Mar 08, 2021 00:00 UTC, as well as any Extra ATCs who are acknowledged by the TC. Note that the contribution qualifying period for this special election is being kept the same as what would have been used for the original TC election. The four already elected TC members for this term are listed as candidates in the special election, but will not appear on any resulting poll as they have already been officially elected. Only new candidates in addition to the four elected TC members for this term will appear on a subsequent poll for the TC vacancy special election. 
Please find below the timeline:

nomination starts @ Mar 25, 2021 23:45 UTC
nomination ends @ Apr 08, 2021 23:45 UTC
elections start @ Apr 08, 2021 23:45 UTC
elections end @ Apr 15, 2021 23:45 UTC

Shortly after election officials approve candidates, they will be
listed on the https://governance.openstack.org/election/ page.

If you have any questions please be sure to either ask them on the
mailing list or to the elections officials:
https://governance.openstack.org/election/#election-officials

-- 
Jeremy Stanley
on behalf of the OpenStack Technical Elections Officials
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL: 

From gmann at ghanshyammann.com Tue Apr 6 01:00:24 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Mon, 05 Apr 2021 20:00:24 -0500
Subject: [qa][heat][stable] grenade jobs with tempest plugins on stable/train broken
Message-ID: <178a4b1e326.db78f8f289143.8139427571865552389@ghanshyammann.com>

Hello Everyone,

I capped stable/stein to use Tempest 26.0.0, which means grenade jobs that
run tests from Tempest plugins started using Tempest 26.0.0. But the
constraints used in the Tempest virtual env are mismatched between the time
when the Tempest virtual env is created and the time when tests are run from
the grenade or grenade plugin scripts. Because two different sets of
constraints are used, tox recreates the Tempest virtual env, which removes
all the already-installed Tempest plugins and their deps, and the run then
fails on the smoke tests.

This constraints mismatch issue occurred in stable/train, and I standardized
these for devstack-based jobs -
https://review.opendev.org/q/topic:%2522standardize-tempest-tox-constraints%2522+status:merged

But this issue is occurring for grenade jobs that do not run the tests via
the run-tempest role (the run-tempest role takes care of the constraints).
Rabi observed this in heat grenade jobs today. I have reported this as a bug
in LP [1] and am standardizing it from the master branch so that this kind
of issue does not occur again when any stable branch starts using a
non-master Tempest.

Please don't recheck if your grenade job is failing with the same issue;
wait for updates on this ML thread.

[1] https://bugs.launchpad.net/grenade/+bug/1922597

-gmann

From skaplons at redhat.com Tue Apr 6 06:28:15 2021
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Tue, 06 Apr 2021 08:28:15 +0200
Subject: [victoria][neutron][horizon]l3-agent+port-forwarding
In-Reply-To: 
References: 
Message-ID: <2626442.APKxhzko2K@p1>

Hi,

On Sunday, 4 April 2021 at 20:43:20 CEST, Luke Camilleri wrote:
> Hello everyone, I have enable the L3 extension for port-forwarding and
> can succesfully port-forward traffic after assigning an additional
> floating IP to the project.
>
> I would like to know if it is possible to enable the corresponding
> horizon functionality for this extension (port-forwarding) please?
>
> Regards

I'm not a Horizon expert, so I may be wrong here, but I don't think there
is anything regarding port forwarding support in Horizon currently.

-- 
Slawek Kaplonski
Principal Software Engineer
Red Hat
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: This is a digitally signed message part. 
URL: 

From gagehugo at gmail.com Tue Apr 6 07:13:43 2021
From: gagehugo at gmail.com (Gage Hugo)
Date: Tue, 6 Apr 2021 02:13:43 -0500
Subject: [openstack-helm] Meeting cancelled
Message-ID: 

Hey team,

Since there are no agenda items [0] for the IRC meeting today April 6th,
the meeting is cancelled. Our next IRC meeting will be April 13th. Thanks

[0] https://etherpad.opendev.org/p/openstack-helm-weekly-meeting
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ricolin at ricolky.com Tue Apr 6 07:43:29 2021
From: ricolin at ricolky.com (Rico Lin)
Date: Tue, 6 Apr 2021 15:43:29 +0800
Subject: [Multi-arch SIG] success to run full tempest tests on Arm64 env. What's next?
Message-ID: 

Dear all,

I'm glad to tell everyone that we have finally succeeded in building
Devstack and running full Tempest tests on it [1].
As the test build results show [2], the job is stable enough to run. Of the
first 13+ job results (we will do more rechecks later), there was one
timeout and two failure cases (which are addressed by increasing
`BUILD_TIMEOUT` to 900 secs).
The job `devstack-platform-arm64` runs for around 2.22 to 3.04 hrs, which is
nearly two times slower than in an x86 environment. It's not a solid number,
as the performance might change a lot with different cloud environments and
different hardware. But I think this is a great chance for us to make more
improvements. At least now we have a test job ready (not merged yet) for you
to do experiments with. And we should also add suggestions to the Multi-arch
SIG documentation so that once we make improvements, other architectures can
share the efforts too.
*So please join us if you are also interested in helping tune the
performance :)*
*Also, we need to discuss how we should run this job: should we separate it
into smaller jobs? Should we run it as a periodic job? Voting?*

*On the other hand, I would like to collect more ideas on how we should move
forward.*
*Please provide your ideas for this on our Xena PTG etherpad*
*https://etherpad.opendev.org/p/xena-ptg-multi-arch-sig *

Our PTG is scheduled for Tuesday 4/20, from 07:00-08:00 and 15:00-16:00
(UTC time). If you plan to join our PTG, feel free to update our PTG
etherpad to suggest other topics.
And our meeting time is scheduled biweekly on Tuesday (host on demand).
Please join our IRC channel #openstack-multi-arch.

[1] https://review.opendev.org/c/openstack/devstack/+/708317
[2] https://zuul.openstack.org/builds?job_name=devstack-platform-arm64+

*Rico Lin*
OIF Board director, OpenStack TC, Multi-arch SIG chair, Heat PTL,
Senior Software Engineer at EasyStack
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mark at stackhpc.com Tue Apr 6 08:03:01 2021
From: mark at stackhpc.com (Mark Goddard)
Date: Tue, 6 Apr 2021 09:03:01 +0100
Subject: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller
In-Reply-To: <362166bb4d484088b1232725a1ecf0a1@ncwmexgp009.CORP.CHARTERCOM.com>
References: <362166bb4d484088b1232725a1ecf0a1@ncwmexgp009.CORP.CHARTERCOM.com>
Message-ID: 

On Mon, 5 Apr 2021 at 15:33, Braden, Albert
 wrote:
>
> It looks like the problem may be caused by incompatible versions of RMQ. How can I work around that?

Hi Albert, thanks for testing this procedure and reporting issues. I
suggest we continue the discussion on the bug report. 
https://bugs.launchpad.net/kolla-ansible/+bug/1922269 Mark > > -----Original Message----- > From: Braden, Albert > Sent: Friday, April 2, 2021 8:34 AM > To: 'openstack-discuss at lists.openstack.org' > Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller > > I opened a bug for this issue: > > https://bugs.launchpad.net/kolla-ansible/+bug/1922269 > > -----Original Message----- > From: Braden, Albert > Sent: Thursday, April 1, 2021 11:34 AM > To: 'openstack-discuss at lists.openstack.org' > Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller > > Sorry that was a typo. Stopping RMQ during the removal of the *second* controller is what causes the problem. > > Is there a way to tell Centos 8 Train to use RMQ 3.7.24 instead of 3.7.28? > > -----Original Message----- > From: Braden, Albert > Sent: Thursday, April 1, 2021 9:34 AM > To: 'openstack-discuss at lists.openstack.org' > Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller > > I did some experimenting and it looks like stopping RMQ during the removal of the first controller is what causes the problem. After deploying the first controller, stopping the RMQ container on any controller including the new centos8 controller will cause the entire cluster to stop. This crash dump appears on the controllers that stopped in sympathy: > > https://paste.ubuntu.com/p/ZDgFgKtQTB/ > > This appears in the RMQ log: > > https://paste.ubuntu.com/p/5D2Qjv3H8c/ > > -----Original Message----- > From: Braden, Albert > Sent: Wednesday, March 31, 2021 8:31 AM > To: openstack-discuss at lists.openstack.org > Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller > > Centos7: > > {rabbit,"RabbitMQ","3.7.24"}, > "Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"}, > > Centos8: > > {rabbit,"RabbitMQ","3.7.28"}, > "Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"}, > > When I deploy the first Centos8 controller, RMQ comes up with all 3 nodes active and seems to be working fine until I shut down the 2nd controller. The only hint of trouble when I replace the 1st node is this error message the first time I run the deployment: > > https://paste.ubuntu.com/p/h9HWdfwmrK/ > > and the crash dump that appears on control2: > > crash dump log: > > https://paste.ubuntu.com/p/MpZ8SwTJ2T/ > > First 1500 lines of the dump: > > https://paste.ubuntu.com/p/xkCyp2B8j8/ > > If I wait for a few minutes then RMQ recovers on control2 and the 2nd run of the deployment seems to work, and there is no trouble until I shut down control1. > > -----Original Message----- > From: Mark Goddard > Sent: Wednesday, March 31, 2021 4:14 AM > To: Braden, Albert > Cc: openstack-discuss at lists.openstack.org > Subject: [EXTERNAL] Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller > > CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. 
> > On Tue, 30 Mar 2021 at 13:41, Braden, Albert > wrote: > > > > I’ve created a heat stack and installed Openstack Train to test the Centos7->8 upgrade following the document here: > > > > > > > > https://docs.openstack.org/kolla-ansible/train/user/centos8.html#migrating-from-centos-7-to-centos-8 > > > > > > > > I used the instructions here to successfully remove and replace control0 with a Centos8 box > > > > > > > > https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#removing-existing-controllers > > > > > > > > After this my RMQ admin page shows all 3 nodes up, including the new control0. The name of the cluster is rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com > > > > > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-2 /]# rabbitmqctl cluster_status > > > > Cluster status of node rabbit at chrnc-void-testupgrade-control-2 ... > > > > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > > > 'rabbit at chrnc-void-testupgrade-control-1', > > > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-0-replace', > > > > 'rabbit at chrnc-void-testupgrade-control-1', > > > > 'rabbit at chrnc-void-testupgrade-control-2']}, > > > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > > > {partitions,[]}, > > > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-0-replace',[]}, > > > > {'rabbit at chrnc-void-testupgrade-control-1',[]}, > > > > {'rabbit at chrnc-void-testupgrade-control-2',[]}]}] > > > > > > > > After that I create a new VM to verify that the cluster is still working, and then perform the same procedure on control1. When I shut down services on control1, the ansible playbook finishes successfully: > > > > > > > > kolla-ansible -i ../multinode stop --yes-i-really-really-mean-it --limit control1 > > > > … > > > > control1 : ok=45 changed=22 unreachable=0 failed=0 skipped=105 rescued=0 ignored=0 > > > > > > > > After this my RMQ admin page stops responding. When I check RMQ on the new control0 and the existing control2, the container is still up but RMQ is not running: > > > > > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status > > > > Error: this command requires the 'rabbit' app to be running on the target node. Start it with 'rabbitmqctl start_app'. > > > > > > > > If I start it on control0 and control2, then the cluster seems normal and the admin page starts working again, and cluster status looks normal: > > > > > > > > (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status > > > > Cluster status of node rabbit at chrnc-void-testupgrade-control-0-replace ... 
> > > > [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', > > > > 'rabbit at chrnc-void-testupgrade-control-0-replace', > > > > 'rabbit at chrnc-void-testupgrade-control-1', > > > > 'rabbit at chrnc-void-testupgrade-control-2']}]}, > > > > {running_nodes,['rabbit at chrnc-void-testupgrade-control-2', > > > > 'rabbit at chrnc-void-testupgrade-control-0-replace']}, > > > > {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, > > > > {partitions,[]}, > > > > {alarms,[{'rabbit at chrnc-void-testupgrade-control-2',[]}, > > > > {'rabbit at chrnc-void-testupgrade-control-0-replace',[]}]}] > > > > > > > > But my hypervisors are down: > > > > > > > > (openstack) [root at chrnc-void-testupgrade-build kolla-ansible]# ohll > > > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > > > | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | vCPUs Used | vCPUs | Memory MB Used | Memory MB | > > > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > > > | 3 | chrnc-void-testupgrade-compute-2.dev.chtrse.com | QEMU | 172.16.2.106 | down | 5 | 8 | 2560 | 30719 | > > > > | 6 | chrnc-void-testupgrade-compute-0.dev.chtrse.com | QEMU | 172.16.2.31 | down | 5 | 8 | 2560 | 30719 | > > > > | 9 | chrnc-void-testupgrade-compute-1.dev.chtrse.com | QEMU | 172.16.0.30 | down | 5 | 8 | 2560 | 30719 | > > > > +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ > > > > > > > > When I look at the nova-compute.log on a compute node, I see RMQ failures every 10 seconds: > > > > > > > > 172.16.2.31 compute0 > > > > 2021-03-30 03:07:54.893 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. Trying again in 1 seconds.: timeout: timed out > > > > 2021-03-30 03:07:55.905 7 INFO oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] Reconnected to AMQP server on 172.16.1.132:5672 via [amqp] client with port 56422. > > > > 2021-03-30 03:08:05.915 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. 
Trying again in 1 seconds.: timeout: timed out > > > > > > > > In the RMQ logs I see this every 10 seconds: > > > > > > > > 172.16.1.132 control2 > > > > [root at chrnc-void-testupgrade-control-2 ~]# tail -f /var/log/kolla/rabbitmq/rabbit\@chrnc-void-testupgrade-control-2.log |grep 172.16.2.31 > > > > 2021-03-30 03:07:54.895 [warning] <0.13247.35> closing AMQP connection <0.13247.35> (172.16.2.31:56420 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): > > > > client unexpectedly closed TCP connection > > > > 2021-03-30 03:07:55.901 [info] <0.15288.35> accepting AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) > > > > 2021-03-30 03:07:55.903 [info] <0.15288.35> Connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) has a client-provided name: nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e > > > > 2021-03-30 03:07:55.904 [info] <0.15288.35> connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e): user 'openstack' authenticated and granted access to vhost '/' > > > > 2021-03-30 03:08:05.916 [warning] <0.15288.35> closing AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'): > > > > > > > > Why does RMQ fail when I shut down the 2nd controller after successfully replacing the first one? > > Hi Albert, > > Could you share the versions of RabbitMQ and erlang in both versions > of the container? When initially testing this setup, I think we had > 3.7.24 on both sides. Perhaps the CentOS 8 version has moved on > sufficiently to become incompatible? > > Mark > > > > > > > > I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails. > > > > > > > > The contents of this e-mail message and > > any attachments are intended solely for the > > addressee(s) and may contain confidential > > and/or legally privileged information. If you > > are not the intended recipient of this message > > or if this message has been addressed to you > > in error, please immediately alert the sender > > by reply e-mail and then delete this message > > and any attachments. If you are not the > > intended recipient, you are notified that > > any use, dissemination, distribution, copying, > > or storage of this message or any attachment > > is strictly prohibited. > E-MAIL CONFIDENTIALITY NOTICE: > The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. From missile0407 at gmail.com Tue Apr 6 08:47:50 2021 From: missile0407 at gmail.com (Eddie Yen) Date: Tue, 6 Apr 2021 16:47:50 +0800 Subject: [kolla][glance] Few questions about images. In-Reply-To: References: Message-ID: Update: For question 1, I found that the publicize image feature still work, but need to do this on command line by using openstack image set. 
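For reference, the shape of that CLI call - the image/snapshot name below is
a placeholder (an image ID works as well):

# make an existing private image or instance snapshot public
openstack image set --public my-snapshot

The same command accepts --private, --shared or --community to switch the
visibility back.
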
Also, I found that a "403 Forbidden" error [1] is thrown by Horizon when
trying to modify the visibility and image properties of a private
image/snapshot. Does someone know how to disable this limitation in Horizon?

Question 2 is totally solved. The Sysprep workaround actually works.

[1] The error message shown in Horizon is:
"Forbidden. Insufficient permissions of the requested operation"

Eddie Yen wrote on Fri, Mar 26, 2021 at 7:50 AM:

> Hi everyone, I want to ask about the image operating permission &
> Windows images issue we met since we can't find any answers on the
> internet.
>
> 1. Now we're still using Rocky with Ceph as storage. Sometimes we
> need to re-pack the image on the Openstack. We used to save as
> snapshot (re-pack by Nova ephemeral VM) or upload to image (re-pack
> by volume), then set snapshot/image's visibility to the public. But since
> Rocky, we can't do this anymore because when we try to set public,
> Horizon always shows "not enough permission" error.
> The workaround we're using for now is creating a nova snapshot after
> re-pack, download the snapshot, then upload the snapshot again as
> public. But it's utterly wasting time if the images are huge. So we want to
> know how to unleash this limitation can let us just change snapshot to
> public at least.
>
> 2. Openstack uses virtio as a network device by default, so we always
> install a virtio driver when packing Windows images. As the network
> performance issue in GSO/TSO enablement, we also need to disable
> them in device properties. But since Windows 10 2004 build (my thought)
> device properties always reset these settings after Sysprep. We found
> there's a workaround [1] to solve this issue, but may not work sometimes.
> Is there a better way to solve this issue?
>
> Many thanks,
> Eddie.
>
> [1] PersistAllDeviceInstalls | Microsoft Docs
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bkslash at poczta.onet.pl Tue Apr 6 09:24:26 2021
From: bkslash at poczta.onet.pl (Adam Tomas)
Date: Tue, 6 Apr 2021 11:24:26 +0200
Subject: [keystone][horizon][Victoria] scope-based policy problems
Message-ID: <0562401B-5118-4647-85CC-BC8A26080789@poczta.onet.pl>

Hi,
I have some questions about Horizon and Keystone policies: I'm trying to
achieve a "domain_admin" role with the ability to add/remove/update projects
in a particular domain and add/update/remove users in the same domain (and
of course be able to see instances, networks, etc. in this domain). As
described here
http://lists.openstack.org/pipermail/openstack-discuss/2021-March/021105.html :

"Horizon does not support the system-scoped token yet as of Victoria and
coming Wallaby."

So there is no way to write a JSON/non-scope-based policy for Horizon and a
scope-based policy for Keystone, because it will not work due to the lack of
scope information in Horizon's token? So the question is: what should the
policies look like? Is it possible at all to achieve such a "domain admin"
role? How else can I allow one user to add/remove/update projects and
add/update/remove users?

Another thing is that if I use something like this in the Horizon/Keystone
policy:

"identity:list_users_in_group": "rule:admin_required or (role:domain_admin and domain_id:%(domain_id)s)"

then (besides that domain's users) the admin account also shows up in the
list (so I assume admin "belongs" to all domains) - how can I prevent a
newly created domain_admin from seeing the admin account and making changes
to that account?
It really holds up my whole project; can you help me, guys? 
Best regards Adam -------------- next part -------------- An HTML attachment was scrubbed... URL: From lyarwood at redhat.com Tue Apr 6 09:57:57 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Tue, 6 Apr 2021 10:57:57 +0100 Subject: [all] Gate resources and performance In-Reply-To: References: <53f77238-d77e-4b57-57bc-139065b23595@nemebean.com> Message-ID: On Thu, 1 Apr 2021 at 18:53, Dan Smith wrote: > > > I'll try to circle back and generate a new set of numbers with > > my script, and also see if I can get updated numbers from Clark on the > > overall percentages. > > Okay, I re-ran the numbers this morning and got updated 30-day stats > from Clark. Here's what I've got (delta from the last report in parens): > > Project % of total Node Hours Nodes > ---------------------------------------------- > 1. Neutron 23% 34h (-4) 30 (-2) > 2. TripleO 18% 17h (-14) 14 (-6) > 3. Nova 7% 22h (+1) 25 (-0) > 4. Kolla 6% 10h (-2) 18 (-0) > 5. OSA 6% 19h (-3) 16 (-1) > > Definitely a lot of improvement from tripleo, so thanks for that! > Neutron rose to the top and is still very hefty. I think Nova's 1-hr > rise is probably just noise given the node count didn't change. I think > we're still waiting on zuulv3 conversion of the grenade multinode job so > we can drop the base grenade job, which will make things go down. Thanks Dan, I've recently introduced a standalone nova-live-migration-ceph job that might be the cause of the additional hour for Nova. zuul: Add nova-live-migration-ceph job https://review.opendev.org/c/openstack/nova/+/768466 While this adds extra load it should be easier to maintain over the previous all in one live migration job that restacked the environment by making direct calls into various devstack plugins. Regarding the switch to the multinode grenade job I'm still working through that below and wanted to land it once Xena is formally open: zuul: Replace grenade and nova-grenade-multinode with grenade-multinode https://review.opendev.org/c/openstack/nova/+/778885/ This series also includes some attempted cleanup of our irrelevant-files for a few jobs that will hopefully reduce our numbers further. Plenty of work left to do here throughout Xena but it's a start. Cheers, Lee From aj at suse.com Tue Apr 6 10:08:21 2021 From: aj at suse.com (Andreas Jaeger) Date: Tue, 6 Apr 2021 12:08:21 +0200 Subject: [docs][release] Creating Xena's landing pages In-Reply-To: References: Message-ID: <18be6047-4455-91ec-c4cf-5bff341ba8bc@suse.com> On 02.04.21 11:51, Herve Beraud wrote: > Hello Docs team, > > This is a friendly reminder from the release team, I think that it > should be safe for you to apply your process to create the new release > series landing pages for docs.openstack.org . > > All stable branches are now created. > > If you want youcan do the work before the final release date to avoid > having to synchronize with the release team on that day. I've pushed a change for adding links for Wallaby pages that are already live and adding the Xena pages: https://review.opendev.org/c/openstack/openstack-manuals/+/784909 Note that at release time, we still need to push a change to mark Wallaby as released on docs.o.o: https://review.opendev.org/c/openstack/openstack-manuals/+/784910 Since many projects do not have /wallaby/ docs published, the wallaby index pages miss them. Once projects have published their docs (normally happens with every commit, thus initial merge like the .gitreview change should be fine), they can send updates to openstack-manuals to link to them. 
Andreas
-- 
Andreas Jaeger aj at suse.com Twitter: jaegerandi
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg
(HRB 36809, AG Nürnberg) GF: Felix Imendörffer
GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB

From gthiemonge at redhat.com Tue Apr 6 10:09:06 2021
From: gthiemonge at redhat.com (Gregory Thiemonge)
Date: Tue, 6 Apr 2021 12:09:06 +0200
Subject: [all] Octavia LoadBalancer Error
In-Reply-To: 
References: 
Message-ID: 

On Mon, Apr 5, 2021 at 4:14 PM Adekunbi Adewojo  wrote:

> Hi there,
>
> I recently deployed a load balancer on our openstack private cloud. I used
> this manual -
> https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html
> to create the load balancer. However, after creating and trying to access
> it, it returns an error message saying "No server is available to handle
> this request". Also on the dashboard, "Operating status" shows offline but
> "provisioning status" shows active. I have two web applications as members
> of the load balancer and I can individually access those web applications.
>

Hi,

A provisioning status of ACTIVE shows the load balancer was successfully
created, but an operating status of OFFLINE indicates that the amphora is
unable to communicate with the Octavia health-manager service. Basically,
the amphora should report its status to the health-manager service using
UDP messages (on a controller, you can dump the UDP packets on the o-hm0
interface). Do you see any errors in the health-manager logs?

I would recommend enabling debug messages in the health-manager and
checking the logs; you should see messages about incoming packets:

Apr 06 10:02:20 devstack2 octavia-health-manager[1747820]: DEBUG
octavia.amphorae.drivers.health.heartbeat_udp [-] Received packet from
('192.168.0.73', 11273) {{(pid=1747857) dorecv
/opt/stack/octavia/octavia/amphorae/drivers/health/heartbeat_udp.py:95}}

The "No server is available to handle this request" response from the
amphora indicates that haproxy is correctly running on the VIP interface
but it doesn't have any member servers. Perhaps fixing the health-manager
issue will help to understand why the traffic is not dispatched to the
members.

> Could someone please point me in the right direction?
>
> Thanks.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From smooney at redhat.com Tue Apr 6 10:56:38 2021
From: smooney at redhat.com (Sean Mooney)
Date: Tue, 6 Apr 2021 11:56:38 +0100
Subject: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller
In-Reply-To: 
References: 
Message-ID: 

On 01/04/2021 16:34, Braden, Albert wrote:
> Sorry that was a typo. Stopping RMQ during the removal of the *second* controller is what causes the problem.
>
> Is there a way to tell Centos 8 Train to use RMQ 3.7.24 instead of 3.7.28?
You could presumably hardcode the old images for the rabbitmq container so
that you continue to use the old version, if it's compatible, until you have
done the rest of the upgrade. Then bounce all the rabbit containers from
CentOS 7 to CentOS 8 at a later date; a rough sketch of such a pin is below. 
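A minimal sketch of that pin, assuming kolla-ansible's rabbitmq_image and
rabbitmq_tag overrides in /etc/kolla/globals.yml; the image name and tag
below are placeholders, so check what your registry actually provides
before relying on them:

# append a pin for the last known-good CentOS 7 based image
# (image name and tag are placeholders - verify locally)
cat >> /etc/kolla/globals.yml <<'EOF'
rabbitmq_image: "kolla/centos-source-rabbitmq"
rabbitmq_tag: "train"
EOF

After changing it, redeploying only the rabbitmq service (e.g.
kolla-ansible -i multinode deploy --tags rabbitmq) should bounce just that
container.
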
im not sure what other implications that would have but you can do it by setting rabbitmq_image https://github.com/openstack/kolla-ansible/blob/master/ansible/roles/rabbitmq/defaults/main.yml#L54 > > -----Original Message----- > From: Braden, Albert > Sent: Thursday, April 1, 2021 9:34 AM > To: 'openstack-discuss at lists.openstack.org' > Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller > > I did some experimenting and it looks like stopping RMQ during the removal of the first controller is what causes the problem. After deploying the first controller, stopping the RMQ container on any controller including the new centos8 controller will cause the entire cluster to stop. This crash dump appears on the controllers that stopped in sympathy: > > https://paste.ubuntu.com/p/ZDgFgKtQTB/ > > This appears in the RMQ log: > > https://paste.ubuntu.com/p/5D2Qjv3H8c/ > > -----Original Message----- > From: Braden, Albert > Sent: Wednesday, March 31, 2021 8:31 AM > To: openstack-discuss at lists.openstack.org > Subject: RE: Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller > > Centos7: > > {rabbit,"RabbitMQ","3.7.24"}, > "Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"}, > > Centos8: > > {rabbit,"RabbitMQ","3.7.28"}, > "Erlang/OTP 22 [erts-10.7.2.8] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:128] [hipe]\n"}, > > When I deploy the first Centos8 controller, RMQ comes up with all 3 nodes active and seems to be working fine until I shut down the 2nd controller. The only hint of trouble when I replace the 1st node is this error message the first time I run the deployment: > > https://paste.ubuntu.com/p/h9HWdfwmrK/ > > and the crash dump that appears on control2: > > crash dump log: > > https://paste.ubuntu.com/p/MpZ8SwTJ2T/ > > First 1500 lines of the dump: > > https://paste.ubuntu.com/p/xkCyp2B8j8/ > > If I wait for a few minutes then RMQ recovers on control2 and the 2nd run of the deployment seems to work, and there is no trouble until I shut down control1. > > -----Original Message----- > From: Mark Goddard > Sent: Wednesday, March 31, 2021 4:14 AM > To: Braden, Albert > Cc: openstack-discuss at lists.openstack.org > Subject: [EXTERNAL] Re: [kolla] Train Centos7 -> Centos8 upgrade fails on 2nd controller > > CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. > > On Tue, 30 Mar 2021 at 13:41, Braden, Albert > wrote: >> I’ve created a heat stack and installed Openstack Train to test the Centos7->8 upgrade following the document here: >> >> >> >> https://docs.openstack.org/kolla-ansible/train/user/centos8.html#migrating-from-centos-7-to-centos-8 >> >> >> >> I used the instructions here to successfully remove and replace control0 with a Centos8 box >> >> >> >> https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#removing-existing-controllers >> >> >> >> After this my RMQ admin page shows all 3 nodes up, including the new control0. The name of the cluster is rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com >> >> >> >> (rabbitmq)[root at chrnc-void-testupgrade-control-2 /]# rabbitmqctl cluster_status >> >> Cluster status of node rabbit at chrnc-void-testupgrade-control-2 ... 
>> >> [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', >> >> 'rabbit at chrnc-void-testupgrade-control-0-replace', >> >> 'rabbit at chrnc-void-testupgrade-control-1', >> >> 'rabbit at chrnc-void-testupgrade-control-2']}]}, >> >> {running_nodes,['rabbit at chrnc-void-testupgrade-control-0-replace', >> >> 'rabbit at chrnc-void-testupgrade-control-1', >> >> 'rabbit at chrnc-void-testupgrade-control-2']}, >> >> {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, >> >> {partitions,[]}, >> >> {alarms,[{'rabbit at chrnc-void-testupgrade-control-0-replace',[]}, >> >> {'rabbit at chrnc-void-testupgrade-control-1',[]}, >> >> {'rabbit at chrnc-void-testupgrade-control-2',[]}]}] >> >> >> >> After that I create a new VM to verify that the cluster is still working, and then perform the same procedure on control1. When I shut down services on control1, the ansible playbook finishes successfully: >> >> >> >> kolla-ansible -i ../multinode stop --yes-i-really-really-mean-it --limit control1 >> >> … >> >> control1 : ok=45 changed=22 unreachable=0 failed=0 skipped=105 rescued=0 ignored=0 >> >> >> >> After this my RMQ admin page stops responding. When I check RMQ on the new control0 and the existing control2, the container is still up but RMQ is not running: >> >> >> >> (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status >> >> Error: this command requires the 'rabbit' app to be running on the target node. Start it with 'rabbitmqctl start_app'. >> >> >> >> If I start it on control0 and control2, then the cluster seems normal and the admin page starts working again, and cluster status looks normal: >> >> >> >> (rabbitmq)[root at chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status >> >> Cluster status of node rabbit at chrnc-void-testupgrade-control-0-replace ... 
>> >> [{nodes,[{disc,['rabbit at chrnc-void-testupgrade-control-0', >> >> 'rabbit at chrnc-void-testupgrade-control-0-replace', >> >> 'rabbit at chrnc-void-testupgrade-control-1', >> >> 'rabbit at chrnc-void-testupgrade-control-2']}]}, >> >> {running_nodes,['rabbit at chrnc-void-testupgrade-control-2', >> >> 'rabbit at chrnc-void-testupgrade-control-0-replace']}, >> >> {cluster_name,<<"rabbit at chrnc-void-testupgrade-control-0.dev.chtrse.com">>}, >> >> {partitions,[]}, >> >> {alarms,[{'rabbit at chrnc-void-testupgrade-control-2',[]}, >> >> {'rabbit at chrnc-void-testupgrade-control-0-replace',[]}]}] >> >> >> >> But my hypervisors are down: >> >> >> >> (openstack) [root at chrnc-void-testupgrade-build kolla-ansible]# ohll >> >> +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ >> >> | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | vCPUs Used | vCPUs | Memory MB Used | Memory MB | >> >> +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ >> >> | 3 | chrnc-void-testupgrade-compute-2.dev.chtrse.com | QEMU | 172.16.2.106 | down | 5 | 8 | 2560 | 30719 | >> >> | 6 | chrnc-void-testupgrade-compute-0.dev.chtrse.com | QEMU | 172.16.2.31 | down | 5 | 8 | 2560 | 30719 | >> >> | 9 | chrnc-void-testupgrade-compute-1.dev.chtrse.com | QEMU | 172.16.0.30 | down | 5 | 8 | 2560 | 30719 | >> >> +----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+ >> >> >> >> When I look at the nova-compute.log on a compute node, I see RMQ failures every 10 seconds: >> >> >> >> 172.16.2.31 compute0 >> >> 2021-03-30 03:07:54.893 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. Trying again in 1 seconds.: timeout: timed out >> >> 2021-03-30 03:07:55.905 7 INFO oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] Reconnected to AMQP server on 172.16.1.132:5672 via [amqp] client with port 56422. >> >> 2021-03-30 03:08:05.915 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. 
Trying again in 1 seconds.: timeout: timed out
>> >>
>> >> In the RMQ logs I see this every 10 seconds:
>> >>
>> >> 172.16.1.132 control2
>> >> [root at chrnc-void-testupgrade-control-2 ~]# tail -f /var/log/kolla/rabbitmq/rabbit\@chrnc-void-testupgrade-control-2.log |grep 172.16.2.31
>> >> 2021-03-30 03:07:54.895 [warning] <0.13247.35> closing AMQP connection <0.13247.35> (172.16.2.31:56420 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'):
>> >> client unexpectedly closed TCP connection
>> >> 2021-03-30 03:07:55.901 [info] <0.15288.35> accepting AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672)
>> >> 2021-03-30 03:07:55.903 [info] <0.15288.35> Connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) has a client-provided name: nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e
>> >> 2021-03-30 03:07:55.904 [info] <0.15288.35> connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e): user 'openstack' authenticated and granted access to vhost '/'
>> >> 2021-03-30 03:08:05.916 [warning] <0.15288.35> closing AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'):
>> >>
>> >> Why does RMQ fail when I shut down the 2nd controller after successfully replacing the first one?

> Hi Albert,
>
> Could you share the versions of RabbitMQ and erlang in both versions
> of the container? When initially testing this setup, I think we had
> 3.7.24 on both sides. Perhaps the CentOS 8 version has moved on
> sufficiently to become incompatible?
>
> Mark

>> >> I apologize for the nonsense below. I have not been able to stop it from being attached to my external emails.
>> >>
>> >> The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited.
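For reference, a quick way to gather the versions Mark is asking about (a rough sketch; "rabbitmq" is the default container name in a kolla-ansible deployment and may differ in your environment):

# run on each controller, old image and new image alike
docker exec rabbitmq rabbitmqctl version
docker exec rabbitmq rabbitmqctl eval 'erlang:system_info(otp_release).'

Comparing the RabbitMQ and Erlang/OTP versions side by side between the CentOS 7 and CentOS 8 based containers is a reasonable first step, since mixed-version RabbitMQ clusters are only supported within narrow limits.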
From luke.camilleri at zylacomputing.com Tue Apr 6 11:20:08 2021
From: luke.camilleri at zylacomputing.com (Luke Camilleri)
Date: Tue, 6 Apr 2021 13:20:08 +0200
Subject: [victoria][neutron][horizon]l3-agent+port-forwarding
In-Reply-To: <2626442.APKxhzko2K@p1>
References: <2626442.APKxhzko2K@p1>
Message-ID: <936caa99-a70a-afc1-37d1-3d8f4fe46ec1@zylacomputing.com>

That is what I thought until I saw the image below:

https://openstackdocs.safeswisscloud.ch/en/howto/ht-port-forward.html

https://cdn.discordapp.com/attachments/821001329715183647/826146201774587944/unknown.png

On 06/04/2021 08:28, Slawek Kaplonski wrote:
> Hi,
>
> On Sunday, 4 April 2021 20:43:20 CEST, Luke Camilleri wrote:
>> Hello everyone, I have enabled the L3 extension for port-forwarding and
>> can successfully port-forward traffic after assigning an additional
>> floating IP to the project.
>>
>> I would like to know if it is possible to enable the corresponding
>> horizon functionality for this extension (port-forwarding) please?
>>
>> Regards
> I'm not a Horizon expert so I may be wrong here, but I don't think there is
> anything regarding port forwarding support in Horizon currently.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mkopec at redhat.com Tue Apr 6 11:21:17 2021
From: mkopec at redhat.com (Martin Kopec)
Date: Tue, 6 Apr 2021 13:21:17 +0200
Subject: [devstack][infra] POST_FAILURE on export-devstack-journal : Export journal
Message-ID: 

Hi,

one of our jobs (python-tempestconf project) is frequently failing with POST_FAILURE [1]
during the following task:

export-devstack-journal : Export journal

I'm bringing this to a broader audience as we're not sure where exactly the issue might be.

Did you encounter a similar issue lately or in the past?

[1] https://zuul.opendev.org/t/openstack/builds?job_name=python-tempestconf-tempest-devstack-admin-plugins&project=osf/python-tempestconf

Thanks for any advice,
--
Martin Kopec
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
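For context on the failing task: the post-run step in question is conceptually equivalent to something like the following (a sketch, not the role's exact code; the output file name is an assumption):

sudo journalctl -o export | xz > devstack.journal.xz

xz is fairly memory-hungry on large journals, which is one of the usual suspects when a post task like this dies on a loaded test node.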
From rafaelweingartner at gmail.com Tue Apr 6 11:39:25 2021
From: rafaelweingartner at gmail.com (Rafael Weingärtner)
Date: Tue, 6 Apr 2021 08:39:25 -0300
Subject: [victoria][neutron][horizon]l3-agent+port-forwarding
In-Reply-To: <936caa99-a70a-afc1-37d1-3d8f4fe46ec1@zylacomputing.com>
References: <2626442.APKxhzko2K@p1> <936caa99-a70a-afc1-37d1-3d8f4fe46ec1@zylacomputing.com>
Message-ID: 

It was developed by us. The first step to implementing it is port range
support in Neutron; the spec has been accepted, and now we are working to
create the patches. Afterwards, we will push this Horizon patch as well.

On Tue, Apr 6, 2021 at 8:20 AM Luke Camilleri <luke.camilleri at zylacomputing.com> wrote:

> That is what I thought until I saw the image below:
>
> https://openstackdocs.safeswisscloud.ch/en/howto/ht-port-forward.html
>
> [image: https://cdn.discordapp.com/attachments/821001329715183647/826146201774587944/unknown.png]
> On 06/04/2021 08:28, Slawek Kaplonski wrote:
>
> Hi,
>
> On Sunday, 4 April 2021 20:43:20 CEST, Luke Camilleri wrote:
>
> Hello everyone, I have enabled the L3 extension for port-forwarding and
> can successfully port-forward traffic after assigning an additional
> floating IP to the project.
>
> I would like to know if it is possible to enable the corresponding
> horizon functionality for this extension (port-forwarding) please?
>
> Regards
>
> I'm not a Horizon expert so I may be wrong here, but I don't think there is
> anything regarding port forwarding support in Horizon currently.
>
--
Rafael Weingärtner
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From luke.camilleri at zylacomputing.com Tue Apr 6 11:41:41 2021
From: luke.camilleri at zylacomputing.com (Luke Camilleri)
Date: Tue, 6 Apr 2021 13:41:41 +0200
Subject: [victoria][neutron][horizon]l3-agent+port-forwarding
In-Reply-To: 
References: <2626442.APKxhzko2K@p1> <936caa99-a70a-afc1-37d1-3d8f4fe46ec1@zylacomputing.com> 
Message-ID: <92c2d3be-6563-93ba-48ab-170b7c2c8fac@zylacomputing.com>

Thanks for the update

On 06/04/2021 13:39, Rafael Weingärtner wrote:
> It was developed by us. The first step to implementing it is port range
> support in Neutron; the spec has been accepted, and now we are
> working to create the patches. Afterwards, we will push this
> Horizon patch as well.
>
> On Tue, Apr 6, 2021 at 8:20 AM Luke Camilleri
> <luke.camilleri at zylacomputing.com> wrote:
>
>     That is what I thought until I saw the image below:
>
>     https://openstackdocs.safeswisscloud.ch/en/howto/ht-port-forward.html
>
>     https://cdn.discordapp.com/attachments/821001329715183647/826146201774587944/unknown.png
>
>     On 06/04/2021 08:28, Slawek Kaplonski wrote:
>>     Hi,
>>
>>     On Sunday, 4 April 2021 20:43:20 CEST, Luke Camilleri wrote:
>>>     Hello everyone, I have enabled the L3 extension for port-forwarding and
>>>     can successfully port-forward traffic after assigning an additional
>>>     floating IP to the project.
>>>
>>>     I would like to know if it is possible to enable the corresponding
>>>     horizon functionality for this extension (port-forwarding) please?
>>>
>>>     Regards
>>     I'm not a Horizon expert so I may be wrong here, but I don't think there is
>>     anything regarding port forwarding support in Horizon currently.
>>
>
> --
> Rafael Weingärtner
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From eblock at nde.ag Tue Apr 6 12:10:05 2021
From: eblock at nde.ag (Eugen Block)
Date: Tue, 06 Apr 2021 12:10:05 +0000
Subject: [cinder][horizon] Ussuri: Horizon shows error for managed volume(s)
Message-ID: <20210406121005.Horde.yHFPRIBoNuvRq8hpTixt7kS@webmail.nde.ag>

Hi *,

I'm struggling with a dashboard error that seems to be spurious. I can't find any existing reports, but I could have simply missed them, so please point me to anything that already exists on this.

Anyway, a user imported two volumes from Ceph via 'cinder manage' in our Train version of OpenStack; we upgraded last week to Ussuri. One of those two volumes is bootable and the instance is up and running with both volumes (extended volume group). When clicking on the instance details, the Horizon dashboard shows a red window saying "Failed to get attached volume." The dashboard log says:

[Tue Apr 06 09:29:56.197238 2021] [wsgi:error] [pid 10140] [remote IP:59482] WARNING horizon.exceptions Recoverable error: volume_image_metadata

But I can see both volumes in the details page, so the message seems incorrect. I searched the logs for this error message from before the upgrade but couldn't find any, so this seems to be new in Ussuri, I assume. Does anyone have the same experience with managed volumes?

Thanks and best regards,
Eugen
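If the warning really stems from the managed volumes missing their volume_image_metadata (which 'cinder manage' does not populate, unlike volumes created from an image), one hedged thing to try — an untested assumption, with key names taken from what image-built volumes normally carry — is to set the image metadata on the managed volume by hand:

cinder image-metadata <volume id> set image_id=<glance image uuid> image_name=<glance image name>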
From amotoki at gmail.com Tue Apr 6 13:48:05 2021
From: amotoki at gmail.com (Akihiro Motoki)
Date: Tue, 6 Apr 2021 22:48:05 +0900
Subject: [neutron] bug deputy report (3/29-4/4)
Message-ID: 

Hi,

I was a bug deputy last week and here is a report.
Last week was relatively quiet. Please check my report.

## Unassigned

* OpenStack Metadata API and OVN in Neutron
  https://bugs.launchpad.net/neutron/+bug/1921809
  It would be nice if the OVN folks followed up on it further, though haleyb has replied.

## Medium, Assigned

* [LB] Linux Bridge iptables firewall does not work without "ipset"
  https://bugs.launchpad.net/neutron/+bug/1922127
  Assigned to ralonsoh

## Fix Released

* Strings in tags field is limited to 60 chars
  https://bugs.launchpad.net/neutron/+bug/1921713

## Almost RFE

* allow using tap device on netdev enabled host
  https://bugs.launchpad.net/neutron/+bug/1922222
  I asked the bug author to provide more background information.

## RFE

* [QoS] Add minimum guaranteed packet rate QoS rule
  https://bugs.launchpad.net/neutron/+bug/1922237
* [RFE] BFD for BGP Dynamic Routing
  https://bugs.launchpad.net/neutron/+bug/1922716

From gmann at ghanshyammann.com Tue Apr 6 14:02:31 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Tue, 06 Apr 2021 09:02:31 -0500
Subject: [qa][gate][stable] stable stein|train py2 devstack jobs are broken
Message-ID: <178a77deedb.e105a565136465.4951627500252316851@ghanshyammann.com>

Hello Everyone,

While fixing the grenade issue on stable/train [1], another issue came up for py2 devstack-based jobs. I have logged the bug on the devstack side for now:

- https://bugs.launchpad.net/devstack/+bug/1922736

Please wait for this to be fixed before you recheck failing patches.

[1] http://lists.openstack.org/pipermail/openstack-discuss/2021-April/021597.html

-gmann

From mkopec at redhat.com Tue Apr 6 14:44:42 2021
From: mkopec at redhat.com (Martin Kopec)
Date: Tue, 6 Apr 2021 16:44:42 +0200
Subject: [qa][hacking] Proposing new core reviewers
In-Reply-To: 
References: 
Message-ID: 

Thank you for your feedback. As no objections were raised, we added zbr and yoctozepto to the hacking-core group.

Welcome to the team, both of you!

Regards,

On Mon, 5 Apr 2021 at 01:16, Masayuki Igawa wrote:

> Hi,
>
> On Wed, Mar 31, 2021, at 05:47, Martin Kopec wrote:
> > Hi all,
> >
> > I'd like to propose Sorin Sbarnea (IRC: zbr) and Radosław Piliszek
> > (IRC: yoctozepto) to hacking core. They are both doing great upstream
> > work across multiple projects and volunteered to help us with the
> > maintenance of the hacking project as well.
> >
> > You can vote/give feedback in this email thread. If there is no
> > objection by the 6th of April, we will add them to the list.
> >
>
> +1 !
>
> -- Masayuki
>
> > Regards,
> > --
> > Martin Kopec
> >
-- 
Martin Kopec
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From gianluca.cecchi at gmail.com Tue Apr 6 15:06:50 2021
From: gianluca.cecchi at gmail.com (Gianluca Cecchi)
Date: Tue, 6 Apr 2021 17:06:50 +0200
Subject: How to modify staging-ovirt parameters
Message-ID: 

Hello,
I have a test lab with Queens on CentOS 7 deployed with TripleO, that I use sometimes to mimic an OSP 13 environment. The Openstack nodes are oVirt VMs.
I moved these VMs from one oVirt infrastructure to another one. The Openstack environment starts and operates without problems (3 controllers, 2 computes and 3 cephs), but I don't know how to modify the staging-ovirt parameters.
On the oVirt side I re-created the user ostackpm at internal as before with sufficient privileges, and I also set the same password. But the IP address of the engine cannot be the same.

When I deployed the environment I put all the parameters inside the instackenv.json file to do introspection and deploy.

Where are they put on the undercloud? Is there a way to update them? Perhaps some database entries or some other kind of repository where I can update the IP?

Thanks in advance,
Gianluca
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From johnsomor at gmail.com Tue Apr 6 15:12:47 2021
From: johnsomor at gmail.com (Michael Johnson)
Date: Tue, 6 Apr 2021 08:12:47 -0700
Subject: all Octavia LoadBalancer
In-Reply-To: 
References: 
Message-ID: 

Hi Adekunbi,

It sounds like the backend servers (web servers?) are not passing the health check or are otherwise unreachable. Provisioning status of Active shows that Octavia was able to create and provision the load balancer without error.

Let's look at a few things:

1. Check if the load balancer has statistics for your connections to the VIP:

openstack loadbalancer stats show <lb id>

If these are all zeros, your deployment of Octavia is not working correctly. Most likely the lb-mgmt-net is not passing the required traffic. Please debug in neutron.

Assuming you see a value greater than zero in the "total_connections" column, your deployment is working as expected.

2. Check your health monitor configuration and load balancer status:

openstack loadbalancer status show <lb id>

Check the "operating status" of all of the objects in your load balancer. As a refresher, operating status is the observed status of the object, so do we see the backend member as ONLINE, etc.

openstack loadbalancer member show <pool id> <member id or name>

Also check that the member is configured with the correct subnet that can reach the backend member server. If a subnet was not specified, it will use the VIP subnet to attempt to reach the members.

If the members are in operating status ERROR, this means that the load balancer sees that server as failed. Check your health monitor configuration (if you have one) to make sure it is connecting to the correct IPs and ports and the expected response is correct for your application.

openstack loadbalancer healthmonitor show <health monitor id>

Also, check that the members have security groups or other firewall rules set appropriately to allow the load balancer to access them.

Michael

On Fri, Apr 2, 2021 at 8:36 AM Adekunbi Adewojo wrote:
>
> Hi there,
>
> I recently deployed a load balancer on our openstack private cloud. I used this manual - https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html
> to create the load balancer. However, after creating and trying to access it, it returns an error message saying "No server is available to handle this request". Also on the dashboard, "Operating status" shows offline but "provisioning status" shows active. I have two web applications as members of the load balancer and I can individually access those web applications.
>
> Could someone please point me in the right direction?
>
> Thanks.
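As a concrete illustration of Michael's last point, a minimal sketch of a security group rule allowing the load balancer to reach the members (the CIDR, port and group name are placeholders, and the exact rule depends on your member port and subnet layout):

openstack security group rule create \
  --protocol tcp --dst-port 80 \
  --remote-ip <VIP or member subnet CIDR> \
  <security group of the member servers>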
From radoslaw.piliszek at gmail.com Tue Apr 6 15:14:02 2021
From: radoslaw.piliszek at gmail.com (Radosław Piliszek)
Date: Tue, 6 Apr 2021 17:14:02 +0200
Subject: [devstack][infra] POST_FAILURE on export-devstack-journal : Export journal
In-Reply-To: 
References: 
Message-ID: 

I am testing whether replacing xz with gzip would solve the problem [1] [2].

[1] https://review.opendev.org/c/openstack/devstack/+/784964
[2] https://review.opendev.org/c/osf/python-tempestconf/+/784967

-yoctozepto

On Tue, Apr 6, 2021 at 1:21 PM Martin Kopec wrote:
>
> Hi,
>
> one of our jobs (python-tempestconf project) is frequently failing with POST_FAILURE [1]
> during the following task:
>
> export-devstack-journal : Export journal
>
> I'm bringing this to a broader audience as we're not sure where exactly the issue might be.
>
> Did you encounter a similar issue lately or in the past?
>
> [1] https://zuul.opendev.org/t/openstack/builds?job_name=python-tempestconf-tempest-devstack-admin-plugins&project=osf/python-tempestconf
>
> Thanks for any advice,
> --
> Martin Kopec
>
>
>

From oliver.wenz at dhbw-mannheim.de Tue Apr 6 15:28:19 2021
From: oliver.wenz at dhbw-mannheim.de (Oliver Wenz)
Date: Tue, 6 Apr 2021 17:28:19 +0200 (CEST)
Subject: [glance][openstack-ansible] Snapshots disappear during saving
In-Reply-To: 
References: 
Message-ID: <1707579703.116615.1617722899181@ox.dhbw-mannheim.de>

Following the suggestion from https://docs.openstack.org/swift/latest/overview_auth.html I set 'log_level=debug' for authtoken. Now I'm seeing more errors in the glance-api logs:

Apr 06 15:07:40 infra1-glance-container-99614ac2 glance-wsgi-api[1837]: 2021-04-06 15:07:38.197 1837 WARNING keystonemiddleware.auth_token [req-ad8d0db9-b630-4237-9fff-d7ff282155d2 956806468e9f43dbaad1807a5208de52 ebe0fe5f3893495e82598c07716f5d45 - default default] Identity response: {"error": {"code": 500, "title": "Internal Server Error", "message": "An unexpected error prevented the server from fulfilling your request."}}: keystoneauth1.exceptions.http.InternalServerError: An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-55767c13-a450-4f81-90ab-5a7af6b3f672)
Apr 06 15:07:46 infra1-glance-container-99614ac2 uwsgi[1822]: DAMN ! worker 13 (pid: 1836) died, killed by signal 9 :( trying respawn ...

Here's the full log: http://paste.openstack.org/show/804208/

The options you suggested @Dmitriy are still there in user_variables.yml:

glance_glance_api_conf_overrides:
  keystone_authtoken:
    service_token_roles_required: True
    service_token_roles: service

Kind regards,
Oliver

From aadewojo at gmail.com Tue Apr 6 15:34:03 2021
From: aadewojo at gmail.com (Adekunbi Adewojo)
Date: Tue, 6 Apr 2021 16:34:03 +0100
Subject: all Octavia LoadBalancer
In-Reply-To: 
References: 
Message-ID: 

Thank you very much for your detailed response. I checked my previous load balancer implementation; the members' operating status showed active. However, when I checked the access log of one of the load balancer members, it showed this: "06/Apr/2021:06:25:05 +0000] "GET /healthcheck HTTP/1.0" 404 118".

I then deleted the load balancer and recreated it. I realised that before adding a listener or any other thing, the load balancer wasn't showing an "Online" status as suggested by the cookbook. I also ran the stat command and everything was zero.

I see that you mentioned neutron; I do not have admin access, so I might have to go back to the admin. But from what I said, do you still think it is a neutron issue?

Thanks.

On Tue, Apr 6, 2021 at 4:12 PM Michael Johnson
> wrote:

>> Hi Adekunbi,
>>
>> It sounds like the backend servers (web servers?) are not passing the
>> health check or are otherwise unreachable. Provisioning status of
>> Active shows that Octavia was able to create and provision the load
>> balancer without error.
>>
>> Let's look at a few things:
>>
>> 1. 
Check if the that load balancer has statistics for the your > connections to the VIP: > > openstack loadbalancer stats show > > If these are all zeros, your deployment of Octavia is not working > correctly. Most likely the lb-mgmt-net is not passing the required > traffic. Please debug in neutron. > > Assuming you see a value greater than zero in the "total_connections" > column, your deployment is working as expected. > > 2. Check your health monitor configuration and load balancer status: > > openstack loadbalancer status show > > Check the "operating status" of all of the objects in your load > balancer. As a refresher, operating status is the observed status of > the object, so do we see the backend member as ONLINE, etc. > > openstack loadbalancer member show name> > > Also check that the member is configured with the correct subnet that > can reach the backend member server. If a subnet was not specified, it > will use the VIP subnet to attempt to reach the members. > > If the members are in operating status ERROR, this means that the load > balancer sees that server as failed. Check your health monitor > configuration (If you have one) to make sure it is connecting to the > correct IPs and ports and the expected response is correct for your > application. > > openstack loadbalancer healthmonitor show > > Also, check that the members have security groups or other firewall > runs set appropriately to allow the load balancer to access it. > > Michael > > On Fri, Apr 2, 2021 at 8:36 AM Adekunbi Adewojo > wrote: > > > > Hi there, > > > > I recently deployed a load balancer on our openstack private cloud. I > used this manual - > https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html > > to create the load balancer. However, after creating and trying to > access it, it returns an error message saying "No server is available to > handle this request". Also on the dashboard, "Operating status" shows > offline but "provisioning status" shows active. I have two web applications > as members of the load balancer and I can individually access those web > applications. > > > > Could someone please point me in the right direction? > > > > Thanks. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aadewojo at gmail.com Tue Apr 6 15:36:18 2021 From: aadewojo at gmail.com (Adekunbi Adewojo) Date: Tue, 6 Apr 2021 16:36:18 +0100 Subject: [all] Octavia LoadBalancer Error In-Reply-To: References: Message-ID: Thank you for your response. However, I do not know how to enable debug messages for the health monitor. I do not even know how to access the health monitor log because I can't ssh into the load balancer. On Tue, Apr 6, 2021 at 11:09 AM Gregory Thiemonge wrote: > On Mon, Apr 5, 2021 at 4:14 PM Adekunbi Adewojo > wrote: > >> Hi there, >> >> I recently deployed a load balancer on our openstack private cloud. I >> used this manual - >> https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html >> to create the load balancer. However, after creating and trying to access >> it, it returns an error message saying "No server is available to handle >> this request". Also on the dashboard, "Operating status" shows offline but >> "provisioning status" shows active. I have two web applications as members >> of the load balancer and I can individually access those web applications. 
>> >
> > Hi,
> >
> > provisioning status ACTIVE shows the load balancer was successfully
> > created but operating status OFFLINE indicates that the amphora is unable
> > to communicate with the Octavia health-manager service.
> > Basically, the amphora should report its status to the hm service using
> > UDP messages (on a controller, you can dump the UDP packets on the o-hm0
> > interface), do you see any errors in the hm logs? I would
> > recommend enabling the debug messages in the health-manager, and to check
> > the logs, you should see messages about incoming packets:
> >
> > Apr 06 10:02:20 devstack2 octavia-health-manager[1747820]: DEBUG
> > octavia.amphorae.drivers.health.heartbeat_udp [-] Received packet from
> > ('192.168.0.73', 11273) {{(pid=1747857) dorecv
> > /opt/stack/octavia/octavia/amphorae/drivers/health/heartbeat_udp.py:95}}
> >
> > The "No server is available to handle this request" response from the
> > amphora indicates that haproxy is correctly running on the VIP interface
> > but it doesn't have any member servers. Perhaps fixing the health-manager
> > issue will help to understand why the traffic is not dispatched to the
> > members.
> >
> >
> >> Could someone please point me in the right direction?
> >>
> >> Thanks.
> >
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From whayutin at redhat.com Tue Apr 6 15:39:20 2021
From: whayutin at redhat.com (Wesley Hayutin)
Date: Tue, 6 Apr 2021 09:39:20 -0600
Subject: [tripleo] master and victoria promotions delayed
Message-ID: 

0/
Hey, just want to communicate that promotions for master and victoria are being delayed due to promotion-blockers. If you have a merged patch in a repo that is not gated by tripleo jobs, you will NOT be able to use that patch until we promote.

http://dashboard-ci.tripleo.org/d/HkOLImOMk/upstream-and-rdo-promotions?orgId=1

Please review the following bugs to better understand what is blocking.
https://bugs.launchpad.net/tripleo/+bugs?field.tag=promotion-blocker&orderby=-datecreated&start=0

I'll note, any tempest test results logged in those bugs are most likely already skipped and not blocking upstream via
https://opendev.org/openstack/openstack-tempest-skiplist/src/branch/master/roles/validate-tempest/vars/tempest_skip.yml

Your focus should be on deployment failures, not tempest failures, at this time.

Thank you!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ignaziocassano at gmail.com Tue Apr 6 15:40:38 2021
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Tue, 6 Apr 2021 17:40:38 +0200
Subject: all Octavia LoadBalancer
In-Reply-To: 
References: 
Message-ID: 

Hello, have you tried to use a tcp check rather than an http check?

On Tue, 6 Apr 2021 at 17:38, Adekunbi Adewojo wrote:

> Thank you very much for your detailed response. I checked my previous
> load balancer implementation; the members' operating status showed active.
> However, when I checked the access log of one of the load balancer members,
> it showed this: "06/Apr/2021:06:25:05 +0000] "GET /healthcheck HTTP/1.0" 404
> 118".
>
> I then deleted the load balancer and recreated it. I realised that before
> adding a listener or any other thing, the load balancer wasn't showing an
> "Online" status as suggested by the cookbook. I also ran the stat command
> and everything was zero.
>
> I see that you mentioned neutron; I do not have admin access, so I might have
> to go back to the admin. But from what I said, do you still think it is a
> neutron issue?
>
> Thanks.
>
> On Tue, Apr 6, 2021 at 4:12 PM Michael Johnson
> wrote:
>
>> Hi Adekunbi,
>>
>> It sounds like the backend servers (web servers?) are not passing the
>> health check or are otherwise unreachable. Provisioning status of
>> Active shows that Octavia was able to create and provision the load
>> balancer without error.
>>
>> Let's look at a few things:
>>
>> 1. Check if the load balancer has statistics for your
>> connections to the VIP:
>>
>> openstack loadbalancer stats show <lb id>
>>
>> If these are all zeros, your deployment of Octavia is not working
>> correctly. Most likely the lb-mgmt-net is not passing the required
>> traffic. Please debug in neutron.
>>
>> Assuming you see a value greater than zero in the "total_connections"
>> column, your deployment is working as expected.
>>
>> 2. Check your health monitor configuration and load balancer status:
>>
>> openstack loadbalancer status show <lb id>
>>
>> Check the "operating status" of all of the objects in your load
>> balancer. As a refresher, operating status is the observed status of
>> the object, so do we see the backend member as ONLINE, etc.
>>
>> openstack loadbalancer member show <pool id> <member id or
>> name>
>>
>> Also check that the member is configured with the correct subnet that
>> can reach the backend member server. If a subnet was not specified, it
>> will use the VIP subnet to attempt to reach the members.
>>
>> If the members are in operating status ERROR, this means that the load
>> balancer sees that server as failed. Check your health monitor
>> configuration (if you have one) to make sure it is connecting to the
>> correct IPs and ports and the expected response is correct for your
>> application.
>>
>> openstack loadbalancer healthmonitor show <health monitor id>
>>
>> Also, check that the members have security groups or other firewall
>> rules set appropriately to allow the load balancer to access them.
>>
>> Michael
>>
>> On Fri, Apr 2, 2021 at 8:36 AM Adekunbi Adewojo
>> wrote:
>> >
>> > Hi there,
>> >
>> > I recently deployed a load balancer on our openstack private cloud. I
>> used this manual -
>> https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html
>> > to create the load balancer. However, after creating and trying to
>> access it, it returns an error message saying "No server is available to
>> handle this request". Also on the dashboard, "Operating status" shows
>> offline but "provisioning status" shows active. I have two web applications
>> as members of the load balancer and I can individually access those web
>> applications.
>> >
>> > Could someone please point me in the right direction?
>> >
>> > Thanks.
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
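A minimal sketch of Ignazio's suggestion (IDs are placeholders; the delay/timeout values are just examples):

openstack loadbalancer healthmonitor create \
  --name tcp-hm --delay 5 --timeout 5 --max-retries 3 \
  --type TCP <pool id>

A TCP monitor only checks that the member port accepts connections, so it sidesteps the 404 that an HTTP monitor gets on GET /healthcheck when the members don't serve that URL.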
From ignaziocassano at gmail.com Tue Apr 6 15:47:40 2021
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Tue, 6 Apr 2021 17:47:40 +0200
Subject: [all] Octavia LoadBalancer Error
In-Reply-To: 
References: 
Message-ID: 

I also suggest verifying that the load balancer network used by the amphora has access to the controllers on port udp 5555.
If you want to access the load balancer instances, you can specify in the configuration:

amp_ssh_key_name = your ssh key

Then you can log on to the load balancer instances via ssh.

On Tue, 6 Apr 2021 at 17:40, Adekunbi Adewojo wrote:

> Thank you for your response. However, I do not know how to enable debug
> messages for the health monitor. I do not even know how to access the
> health monitor log because I can't ssh into the load balancer.
> > On Tue, Apr 6, 2021 at 11:09 AM Gregory Thiemonge > wrote: > >> On Mon, Apr 5, 2021 at 4:14 PM Adekunbi Adewojo >> wrote: >> >>> Hi there, >>> >>> I recently deployed a load balancer on our openstack private cloud. I >>> used this manual - >>> https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html >>> to create the load balancer. However, after creating and trying to >>> access it, it returns an error message saying "No server is available to >>> handle this request". Also on the dashboard, "Operating status" shows >>> offline but "provisioning status" shows active. I have two web applications >>> as members of the load balancer and I can individually access those web >>> applications. >>> >> >> Hi, >> >> provisioning status ACTIVE shows the load balancer was successfully >> created but operating status OFFLINE indicates that the amphora is unable >> to communicate with the Octavia health-manager service. >> Basically, the amphora should report its status to the hm service using >> UDP messages (on a controller, you can dump the UDP packets on the o-hm0 >> interface), do you see any errors in the hm logs? I would >> recommend enabling the debug messages in the health-manager, and to check >> the logs, you should see messages about incoming packets: >> >> Apr 06 10:02:20 devstack2 octavia-health-manager[1747820]: DEBUG >> octavia.amphorae.drivers.health.heartbeat_udp [-] Received packet from >> ('192.168.0.73', 11273) {{(pid=1747857) dorecv >> /opt/stack/octavia/octavia/amphorae/drivers/health/heartbeat_udp.py:95}} >> >> The "No server is available to handle this request" response from the >> amphora indicates that haproxy is correctly running on the VIP interface >> but it doesn't have any member servers. Perhaps fixing the health-manager >> issue will help to understand why the traffic is not dispatched to the >> members. >> >> >> >>> Could someone please point me in the right direction? >>> >>> Thanks. >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Tue Apr 6 15:51:19 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 06 Apr 2021 08:51:19 -0700 Subject: =?UTF-8?Q?Re:_[devstack][infra]_POST=5FFAILURE_on_export-devstack-journa?= =?UTF-8?Q?l_:_Export_journal?= In-Reply-To: References: Message-ID: On Tue, Apr 6, 2021, at 8:14 AM, Radosław Piliszek wrote: > I am testing whether replacing xz with gzip would solve the problem [1] [2]. The reason we used xz is that the files are very large and gz compression is very poor compared to xz for these files and these files are not really human readable as is (you need to load them into journald first). Let's test it and see what the gz file sizes look like but if they are still quite large then this is unlikely to be an appropriate fix. > > [1] https://review.opendev.org/c/openstack/devstack/+/784964 > [2] https://review.opendev.org/c/osf/python-tempestconf/+/784967 > > -yoctozepto > > On Tue, Apr 6, 2021 at 1:21 PM Martin Kopec wrote: > > > > Hi, > > > > one of our jobs (python-tempestconf project) is frequently failing with POST_FAILURE [1] > > during the following task: > > > > export-devstack-journal : Export journal > > > > I'm bringing this to a broader audience as we're not sure where exactly the issue might be. > > > > Did you encounter a similar issue lately or in the past? 
> > > > [1] https://zuul.opendev.org/t/openstack/builds?job_name=python-tempestconf-tempest-devstack-admin-plugins&project=osf/python-tempestconf > > > > Thanks for any advice, > > -- > > Martin Kopec From pierre at stackhpc.com Tue Apr 6 15:51:50 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 6 Apr 2021 17:51:50 +0200 Subject: [qa][gate][stable] stable stein|train py2 devstack jobs are broken In-Reply-To: <178a77deedb.e105a565136465.4951627500252316851@ghanshyammann.com> References: <178a77deedb.e105a565136465.4951627500252316851@ghanshyammann.com> Message-ID: We've discussed this issue (which affects non-devstack jobs too) on #opendev and #openstack-qa. If I understood correctly, it is caused by a recent PyPI outage and their fallback infra not providing the data-requires-python metadata. While PyPI appears to be back to normal, some opendev mirrors (proxies really) are still serving indexes without data-requires-python, which suggests that bad PyPI servers may still be handling some of the requests. On Tue, 6 Apr 2021 at 16:10, Ghanshyam Mann wrote: > > Hello Everyone, > > During fixing the grenade issue on stable/train[1], there is another issue that came up for > py2 devstack based jobs. I have logged the bug on devstack side as of now > > - https://bugs.launchpad.net/devstack/+bug/1922736 > > Let's wait for this to fix before you recheck on failing patches. > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-April/021597.html > > -gmann > From fungi at yuggoth.org Tue Apr 6 16:02:48 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 6 Apr 2021 16:02:48 +0000 Subject: [devstack][infra] POST_FAILURE on export-devstack-journal : Export journal In-Reply-To: References: Message-ID: <20210406160247.gevud2hlvodg7jzt@yuggoth.org> On 2021-04-06 13:21:17 +0200 (+0200), Martin Kopec wrote: > one of our jobs (python-tempestconf project) is frequently failing with > POST_FAILURE [1] > during the following task: > > export-devstack-journal : Export journal > > I'm bringing this to a broader audience as we're not sure where exactly the > issue might be. > > Did you encounter a similar issue lately or in the past? > > [1] > https://zuul.opendev.org/t/openstack/builds?job_name=python-tempestconf-tempest-devstack-admin-plugins&project=osf/python-tempestconf Looking at the error, I strongly suspect memory exhaustion. We could try tuning xz to use less memory when compressing. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From radoslaw.piliszek at gmail.com Tue Apr 6 16:11:41 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 6 Apr 2021 18:11:41 +0200 Subject: [devstack][infra] POST_FAILURE on export-devstack-journal : Export journal In-Reply-To: <20210406160247.gevud2hlvodg7jzt@yuggoth.org> References: <20210406160247.gevud2hlvodg7jzt@yuggoth.org> Message-ID: On Tue, Apr 6, 2021 at 6:02 PM Jeremy Stanley wrote: > Looking at the error, I strongly suspect memory exhaustion. We could > try tuning xz to use less memory when compressing. That was my hunch as well, hence why I test using gzip. On Tue, Apr 6, 2021 at 5:51 PM Clark Boylan wrote: > > On Tue, Apr 6, 2021, at 8:14 AM, Radosław Piliszek wrote: > > I am testing whether replacing xz with gzip would solve the problem [1] [2]. 
> > The reason we used xz is that the files are very large and gz compression is very poor compared to xz for these files and these files are not really human readable as is (you need to load them into journald first). Let's test it and see what the gz file sizes look like but if they are still quite large then this is unlikely to be an appropriate fix. Let's see how bad the file sizes are. If they are acceptable, we can keep gzip and be happy. Otherwise we try to tune the params to make xz a better citizen as fungi suggested. -yoctozepto From radoslaw.piliszek at gmail.com Tue Apr 6 16:15:28 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 6 Apr 2021 18:15:28 +0200 Subject: [devstack][infra] POST_FAILURE on export-devstack-journal : Export journal In-Reply-To: References: <20210406160247.gevud2hlvodg7jzt@yuggoth.org> Message-ID: On Tue, Apr 6, 2021 at 6:11 PM Radosław Piliszek wrote: > On Tue, Apr 6, 2021 at 5:51 PM Clark Boylan wrote: > > > > On Tue, Apr 6, 2021, at 8:14 AM, Radosław Piliszek wrote: > > > I am testing whether replacing xz with gzip would solve the problem [1] [2]. > > > > The reason we used xz is that the files are very large and gz compression is very poor compared to xz for these files and these files are not really human readable as is (you need to load them into journald first). Let's test it and see what the gz file sizes look like but if they are still quite large then this is unlikely to be an appropriate fix. > > Let's see how bad the file sizes are. devstack.journal.gz 23.6M Less than all the other logs together, I would not mind. I wonder how it is in other jobs (this is from the failing one). -yoctozepto From juliaashleykreger at gmail.com Tue Apr 6 16:25:57 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 6 Apr 2021 09:25:57 -0700 Subject: How to modify staging-ovirt parameters In-Reply-To: References: Message-ID: Greetings, The parameters get stored in ironic. You can use the "openstack baremetal node set" command to set new parameters into the driver_info field. Specifically it seems like you only need to update the address, so you'll just want to examine the driver_info field contents to see which value you need to update. You can do this with "openstack baremetal node show ". Hope that helps, -Julia On Tue, Apr 6, 2021 at 8:10 AM Gianluca Cecchi wrote: > > Hello, > I have a test lab with Queens on CentOS 7 deployed with TripleO, that I use sometimes to mimic an OSP 13 environment. > The Openstack nodes are oVirt VMS. > I moved these VMs from one oVirt infrastructure to another one. > The Openstack environment starts and operates without problems (3 controllers, 2 computes and 3 cephs), but I don't know how to modify the staging-ovirt parameters. > At oVirt side I re-created as before a user ostackpm at internal with sufficient power and I also set the same password. > But the ip address of the engine cannot be the same. > > When I deployed the environment I put all the parameters inside the instackenv.json file to do introspection and deploy. > > Where are they put on undercloud? > Is there a way to update them? Perhaps some database entries or other kind of repos where I can update the ip? 
> Thanks in advance, > Gianluca > From cboylan at sapwetik.org Tue Apr 6 16:39:04 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 06 Apr 2021 09:39:04 -0700 Subject: =?UTF-8?Q?Re:_[devstack][infra]_POST=5FFAILURE_on_export-devstack-journa?= =?UTF-8?Q?l_:_Export_journal?= In-Reply-To: References: <20210406160247.gevud2hlvodg7jzt@yuggoth.org> Message-ID: On Tue, Apr 6, 2021, at 9:15 AM, Radosław Piliszek wrote: > On Tue, Apr 6, 2021 at 6:11 PM Radosław Piliszek > wrote: > > On Tue, Apr 6, 2021 at 5:51 PM Clark Boylan wrote: > > > > > > On Tue, Apr 6, 2021, at 8:14 AM, Radosław Piliszek wrote: > > > > I am testing whether replacing xz with gzip would solve the problem [1] [2]. > > > > > > The reason we used xz is that the files are very large and gz compression is very poor compared to xz for these files and these files are not really human readable as is (you need to load them into journald first). Let's test it and see what the gz file sizes look like but if they are still quite large then this is unlikely to be an appropriate fix. > > > > Let's see how bad the file sizes are. > > devstack.journal.gz 23.6M > > Less than all the other logs together, I would not mind. > I wonder how it is in other jobs (this is from the failing one). There does seem to be a range (likely due to how much the job workload causes logging to happen in journald) from about a few megabytes to eighty something MB [3]. This is probably acceptable. Just keep an eye out for jobs that end up with much larger file sizes and we can reevaluate if we notice them. [3] https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_038/784964/1/check/tempest-multinode-full-py3/038bd51/controller/logs/index.html From cboylan at sapwetik.org Tue Apr 6 16:46:33 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 06 Apr 2021 09:46:33 -0700 Subject: =?UTF-8?Q?Re:_[devstack][infra]_POST=5FFAILURE_on_export-devstack-journa?= =?UTF-8?Q?l_:_Export_journal?= In-Reply-To: References: <20210406160247.gevud2hlvodg7jzt@yuggoth.org> Message-ID: <7626869f-dab3-41df-a40b-dafa20dcfaf4@www.fastmail.com> On Tue, Apr 6, 2021, at 9:11 AM, Radosław Piliszek wrote: > On Tue, Apr 6, 2021 at 6:02 PM Jeremy Stanley wrote: > > Looking at the error, I strongly suspect memory exhaustion. We could > > try tuning xz to use less memory when compressing. Worth noting that we continue to suspect memory pressure, and in particular diving into swap, for random failures that appear timing or performance related. I still think it would be a helpful exercise for OpenStack to look at its memory consumption (remember end users will experience this too) and see if there are any unexpected areas of memory use. I think the last time i skimmed logs the privsep daemon was a large consumer because we separate instance is run for each service and they all add up. > > That was my hunch as well, hence why I test using gzip. > > On Tue, Apr 6, 2021 at 5:51 PM Clark Boylan wrote: > > > > On Tue, Apr 6, 2021, at 8:14 AM, Radosław Piliszek wrote: > > > I am testing whether replacing xz with gzip would solve the problem [1] [2]. > > > > The reason we used xz is that the files are very large and gz compression is very poor compared to xz for these files and these files are not really human readable as is (you need to load them into journald first). Let's test it and see what the gz file sizes look like but if they are still quite large then this is unlikely to be an appropriate fix. > > Let's see how bad the file sizes are. 
> If they are acceptable, we can keep gzip and be happy. > Otherwise we try to tune the params to make xz a better citizen as > fungi suggested. > > -yoctozepto > > From fungi at yuggoth.org Tue Apr 6 16:47:56 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 6 Apr 2021 16:47:56 +0000 Subject: [qa][gate][stable][infra] stable stein|train py2 devstack jobs are broken In-Reply-To: References: <178a77deedb.e105a565136465.4951627500252316851@ghanshyammann.com> Message-ID: <20210406164756.5otrjrzxv423lpph@yuggoth.org> On 2021-04-06 17:51:50 +0200 (+0200), Pierre Riteau wrote: > We've discussed this issue (which affects non-devstack jobs too) on > #opendev and #openstack-qa. If I understood correctly, it is caused by > a recent PyPI outage and their fallback infra not providing the > data-requires-python metadata. While PyPI appears to be back to > normal, some opendev mirrors (proxies really) are still serving > indexes without data-requires-python, which suggests that bad PyPI > servers may still be handling some of the requests. [...] Still speculation at this point, though the evidence points to that happening (we've seen it several times in the past). Technically yes our proxies are sometimes serving indices without the metadata, but that tends to happen because PyPI's CDN is sometimes serving indices without metadata to our proxies and not because of any actual problem with our proxies. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From tuguoyi at outlook.com Tue Apr 6 12:00:18 2021 From: tuguoyi at outlook.com (Guoyi Tu) Date: Tue, 6 Apr 2021 20:00:18 +0800 Subject: [dev][nova] Problem about vm migration compatibility check Message-ID: hi there, In my test environment, i created a vm and configured the cpu with host-model, when I migrate the vm to another host with the same cpu, it failed the migration compatibility check which complains the cpu definition of domain is incompatible with target host cpu. As we know, when the domain configured as above starts, the host-model cpu definition will automatically converted to custom cpu model and with some addtional features that the KVM supported, these addtional features may contains features that the host doesn't support. In the code, the compatibility of the target host is check by calling compareCPU()(libvirt API). The compareCPU() can only recongnize the features probed by cpuid instruction on the host, but it may not recognize the features of cpu definition of domain xml (virsh dumpxml domainname) when the domain running. So the compatibility check will fail when KVM support one or more features which is considerd as disabled by the cpuid instuction. I think we should call compareHypervisorCPU() or something like that (supported by libvirt since v4.4.0) instead of compareCPU() to check the migration compatibility. 
From luke.camilleri at zylacomputing.com Tue Apr 6 21:51:00 2021
From: luke.camilleri at zylacomputing.com (Luke Camilleri)
Date: Tue, 6 Apr 2021 23:51:00 +0200
Subject: [victoria][magnum]fedora-atomic-27 image
Message-ID: 

We have installed magnum following the installation guide here https://docs.openstack.org/magnum/victoria/install/install-rdo.html and the process was quite smooth, but we have been having some issues with the deployment of the clusters.

The image being used as per the documentation is https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64

Our first issue was that podman was being used even if we specified use_podman=false (since the image above did not include podman), but this was resulting in a timeout and the cluster would fail to deploy. We have then installed podman in the image and the cluster progressed a bit further:

+ echo 'WARNING Attempt 60: Trying to install kubectl. Sleeping 5s'
+ sleep 5s
+ ssh -F /srv/magnum/.ssh/config root at localhost '/usr/bin/podman run     --entrypoint /bin/bash     --name install-kubectl     --net host     --privileged     --rm     --user root     --volume /srv/magnum/bin:/host/srv/magnum/bin     k8s.gcr.io/hyperkube:v1.15.7     -c '\''cp /usr/local/bin/kubectl /host/srv/magnum/bin/kubectl'\'''
bash: /usr/bin/podman: No such file or directory
ERROR Unable to install kubectl. Abort.
+ i=61
+ '[' 61 -gt 60 ']'
+ echo 'ERROR Unable to install kubectl. Abort.'
+ exit 1

The cluster is now failing at "kube_cluster_deploy", and when checking the logs on the master node we noticed the following in the log files:

Starting to run kube-apiserver-to-kubelet-role
Waiting for Kubernetes API...
+ echo 'Waiting for Kubernetes API...'
++ curl --silent http://127.0.0.1:8080/healthz
+ '[' ok = '' ']'
+ sleep 5

This is because the kubernetes API server is not installed either. I have noticed some scripts that should handle the installation, but I would like to know if anyone here has had similar issues with a clean Victoria installation.

Also, should we have to install any packages in the fedora atomic image file, or should the installation requirements be part of the stack?

Thanks in advance for any assistance
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
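A couple of hedged commands that usually help narrow down where a Magnum cluster deploy stalls (resource names are placeholders):

openstack coe cluster show <cluster name>
# the underlying Heat stack usually tells the real story:
openstack stack failures list --long <stack id from the cluster's stack_id field>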
When I run the overcloud deploy I get this in ceph-install-workflow.log 2021-04-06 21:24:57,955 p=32618 u=mistral | TASK [ceph-docker-common : pulling docker.io/ceph/daemon:v3.2.10-stable-3.2-luminous-centos-7-x86_64 image] *** 2021-04-06 21:24:57,955 p=32618 u=mistral | Tuesday 06 April 2021 21:24:57 +0200 (0:00:00.153) 0:06:17.978 ********* 2021-04-06 21:25:13,206 p=32618 u=mistral | FAILED - RETRYING: pulling docker.io/ceph/daemon:v3.2.10-stable-3.2-luminous-centos-7-x86_64 image (3 retries left). . . . 2021-04-06 21:26:03,715 p=32618 u=mistral | FAILED - RETRYING: pulling docker.io/ceph/daemon:v3.2.10-stable-3.2-luminous-centos-7-x86_64 image (1 retries left). 2021-04-06 21:26:28,839 p=32618 u=mistral | fatal: [172.23.0.232]: FAILED! => {"attempts": 3, "changed": false, "cmd": ["timeout", "300s", "docker", "pull", "docker.io/ceph/daemon:v3.2.10-stable-3.2-luminous-centos-7-x86_64"], "delta": "0:00:15.022274", "end": "2021-04-06 21:25:48.701980", "msg": "non-zero return code", "rc": 1, "start": "2021-04-06 21:25:33.679706", "stderr": "Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)", "stderr_lines": ["Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"], "stdout": "Trying to pull repository docker.io/ceph/daemon ... ", "stdout_lines": ["Trying to pull repository docker.io/ceph/daemon ... "]} So it seems the docker images are not there any more. My existing ceph nodes are using that image: [heat-admin at ostack-ceph0 ~]$ sudo docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 7ad4aeb3f4cb docker.io/ceph/daemon:v3.2.10-stable-3.2-luminous-centos-7-x86_64 "/entrypoint.sh" 3 weeks ago Up 3 weeks ceph-osd-0 0dc9a9889283 docker.io/ceph/daemon:v3.2.10-stable-3.2-luminous-centos-7-x86_64 "/entrypoint.sh" 3 weeks ago Up 3 weeks ceph-osd-3 8d543016cce7 docker.io/tripleoqueens/centos-binary-cron:current-tripleo "dumb-init --singl..." 11 months ago Up 3 weeks logrotate_crond [heat-admin at ostack-ceph0 ~]$ Any way or suggestion to manage this? Thanks, Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From feilong at catalyst.net.nz Tue Apr 6 22:03:15 2021 From: feilong at catalyst.net.nz (feilong) Date: Wed, 7 Apr 2021 10:03:15 +1200 Subject: [victoria][magnum]fedora-atomic-27 image In-Reply-To: References: Message-ID: Hi Luke, The Fedora Atomic driver has been deprecated a while since the Fedora Atomic has been deprecated by upstream. For now, I would suggest using Fedora CoreOS 32.20201104.3.0 The latest version of Fedora CoreOS is 33.xxx, but there are something when booting based my testing, see https://github.com/coreos/fedora-coreos-tracker/issues/735 Please feel free to let me know if you have any question about using Magnum. We're using stable/victoria on our public cloud and it works very well. I can share our public templates if you want. Cheers. On 7/04/21 9:51 am, Luke Camilleri wrote: > > We have insatlled magnum following the installation guide here > https://docs.openstack.org/magnum/victoria/install/install-rdo.html > and the process was quite smooth but we have been having some issues > with the deployment of the clusters. 
> > The image being used as per the documentation is > https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64 > > Our first issue was that podman was being used even if we specified > the use_podman=false (since the image above did not include podman) > but this was resulting in a timeout and the cluster would fail to > deploy. We have then installed podman in the image and the cluster > progressed a bit further > > /+ echo 'WARNING Attempt 60: Trying to install kubectl. Sleeping 5s'// > //+ sleep 5s// > //+ ssh -F /srv/magnum/.ssh/config root at localhost '/usr/bin/podman > run     --entrypoint /bin/bash     --name install-kubectl     --net > host     --privileged     --rm     --user root     --volume > /srv/magnum/bin:/host/srv/magnum/bin     > k8s.gcr.io/hyperkube:v1.15.7     -c '\''cp /usr/local/bin/kubectl > /host/srv/magnum/bin/kubectl'\'''// > //bash: /usr/bin/podman: No such file or directory// > //ERROR Unable to install kubectl. Abort.// > //+ i=61// > //+ '[' 61 -gt 60 ']'// > //+ echo 'ERROR Unable to install kubectl. Abort.'// > //+ exit 1/ > > The cluster is now failing here at "kube_cluster_deploy" and when > checking the logs on the master node we noticed the following in the > log files: > > /Starting to run kube-apiserver-to-kubelet-role// > //Waiting for Kubernetes API...// > //+ echo 'Waiting for Kubernetes API...'// > //++ curl --silent http://127.0.0.1:8080/healthz// > //+ '[' ok = '' ']'// > //+ sleep 5/ > > This is because the kubernetes API server is not installed either. I > have noticed some scripts that should handle the installation but I > would like to know if anyone here has had similar issues with a clean > Victoria installation. > > Also should we have to install any packages in the fedora atomic image > file or should the installation requirements be part of the stack? > > Thanks in advance for any asistance > -- Cheers & Best regards, Feilong Wang (王飞龙) ------------------------------------------------------ Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From gianluca.cecchi at gmail.com Tue Apr 6 22:04:55 2021 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Wed, 7 Apr 2021 00:04:55 +0200 Subject: How to modify staging-ovirt parameters In-Reply-To: References: Message-ID: On Tue, Apr 6, 2021 at 6:26 PM Julia Kreger wrote: > Greetings, > > The parameters get stored in ironic. You can use the "openstack > baremetal node set" command to set new parameters into the driver_info > field. Specifically it seems like you only need to update the address, > so you'll just want to examine the driver_info field contents to see > which value you need to update. You can do this with "openstack > baremetal node show ". > > Hope that helps, > > -Julia > > Thanks Julia, it worked setting the new ip and I was able to run openstack baremetal node power off ostack-compute1 and the corresponding VM in oVirt was correctly powered off using the user and the driver. Gianluca -------------- next part -------------- An HTML attachment was scrubbed... 
Gianluca
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From smooney at redhat.com  Tue Apr  6 23:02:58 2021
From: smooney at redhat.com (Sean Mooney)
Date: Wed, 7 Apr 2021 00:02:58 +0100
Subject: [dev][nova] Problem about vm migration compatibility check
In-Reply-To:
References:
Message-ID:

On 06/04/2021 13:00, Guoyi Tu wrote:
> hi there,
>
> In my test environment, I created a vm and configured the cpu with
> host-model. When I migrate the vm to another host with the same cpu,
> it fails the migration compatibility check, which complains that the
> cpu definition of the domain is incompatible with the target host cpu.
>
> As we know, when a domain configured as above starts, the host-model
> cpu definition is automatically converted to a custom cpu model with
> some additional features that KVM supports, and these additional
> features may contain features that the host doesn't support.
>
> In the code, the compatibility of the target host is checked by calling
> compareCPU() (a libvirt API). compareCPU() can only recognize the
> features probed by the cpuid instruction on the host, but it may not
> recognize the features in the cpu definition of the domain xml (virsh
> dumpxml domainname) when the domain is running. So the compatibility
> check will fail when KVM supports one or more features which are
> considered disabled by the cpuid instruction.
>
> I think we should call compareHypervisorCPU() or something like that
> (supported by libvirt since v4.4.0) instead of compareCPU() to check
> the migration compatibility.

There are patches already up for review to move to the newer cpu apis:
https://review.opendev.org/c/openstack/nova/+/762330
uses baseline_hypervisor_cpu and compare_hypervisor_cpu instead of the
old functions.

This work will likely be resumed now that we are past feature freeze and
the release candidates are out, but we tend not to merge any large change
until the release is done. https://review.opendev.org/c/openstack/nova/+/762330
is not particularly big, but changing how we detect cpu features is not
something that is great to merge during the RC stabilisation period.

While this should technically resolve
https://bugs.launchpad.net/nova/+bug/1903822, it's not really a bug, it's
paying down technical debt, so I'm not sure it is something we should
backport. With that said, if you are interested in this you should review
that patch.

>
> My test environment is as follows:
> host cpu: Cascadelake
> libvirt-6.9
> qemu-5.0
>
> host-model cpu:
>   [the cpu XML elements were stripped by the list archive; it was a
>   custom-mode definition with model Cascadelake-Server, vendor Intel,
>   and a list of explicitly enabled feature elements]
>
> The hypervisor, umip, pschange-mc-no features block the compatibility
> check
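as an aside, if you want to sanity-check a host in the meantime, recent libvirt exposes the newer API via virsh as well. something like this (an untested sketch, adjust the domain name) compares the domain's cpu definition against what the hypervisor can actually provide, rather than just what cpuid reports:

# extract the cpu element from the running domain's XML
virsh dumpxml <domain> | sed -n '/<cpu /,/<\/cpu>/p' > cpu.xml
# ask the hypervisor whether that cpu definition is supported
virsh hypervisor-cpu-compare cpu.xml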
From luke.camilleri at zylacomputing.com  Tue Apr  6 23:57:20 2021
From: luke.camilleri at zylacomputing.com (Luke Camilleri)
Date: Wed, 7 Apr 2021 01:57:20 +0200
Subject: [victoria][magnum]fedora-atomic-27 image
In-Reply-To:
References:
Message-ID:

Thanks for your quick reply. Do you have a download link for that image, as I
cannot find an archive for the 32 release?

As for the image upload into openstack, do you still use the fedora-atomic
property for the image to be available for COE deployments?

On 07/04/2021 00:03, feilong wrote:
> Hi Luke,
>
> The Fedora Atomic driver has been deprecated for a while, since Fedora
> Atomic itself has been deprecated upstream. For now, I would suggest
> using Fedora CoreOS 32.20201104.3.0.
>
> The latest version of Fedora CoreOS is 33.xxx, but there are some issues
> when booting, based on my testing; see
> https://github.com/coreos/fedora-coreos-tracker/issues/735
>
> Please feel free to let me know if you have any questions about using
> Magnum. We're using stable/victoria on our public cloud and it works
> very well. I can share our public templates if you want. Cheers.
>
> On 7/04/21 9:51 am, Luke Camilleri wrote:
>> We have installed magnum following the installation guide here
>> https://docs.openstack.org/magnum/victoria/install/install-rdo.html
>> and the process was quite smooth, but we have been having some issues
>> with the deployment of the clusters.
>>
>> The image being used as per the documentation is
>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64
>>
>> Our first issue was that podman was being used even if we specified
>> use_podman=false (since the image above did not include podman), but
>> this resulted in a timeout and the cluster would fail to deploy. We
>> then installed podman in the image and the cluster progressed a bit
>> further:
>>
>> + echo 'WARNING Attempt 60: Trying to install kubectl. Sleeping 5s'
>> + sleep 5s
>> + ssh -F /srv/magnum/.ssh/config root at localhost '/usr/bin/podman run --entrypoint /bin/bash --name install-kubectl --net host --privileged --rm --user root --volume /srv/magnum/bin:/host/srv/magnum/bin k8s.gcr.io/hyperkube:v1.15.7 -c '\''cp /usr/local/bin/kubectl /host/srv/magnum/bin/kubectl'\'''
>> bash: /usr/bin/podman: No such file or directory
>> ERROR Unable to install kubectl. Abort.
>> + i=61
>> + '[' 61 -gt 60 ']'
>> + echo 'ERROR Unable to install kubectl. Abort.'
>> + exit 1
>>
>> The cluster is now failing at "kube_cluster_deploy", and when checking
>> the logs on the master node we noticed the following:
>>
>> Starting to run kube-apiserver-to-kubelet-role
>> Waiting for Kubernetes API...
>> + echo 'Waiting for Kubernetes API...'
>> ++ curl --silent http://127.0.0.1:8080/healthz
>> + '[' ok = '' ']'
>> + sleep 5
>>
>> This is because the kubernetes API server is not installed either. I
>> have noticed some scripts that should handle the installation, but I
>> would like to know if anyone here has had similar issues with a clean
>> Victoria installation.
>>
>> Also, should we have to install any packages in the fedora atomic
>> image file, or should the installation requirements be part of the stack?
>>
>> Thanks in advance for any assistance
>>
> --
> Cheers & Best regards,
> Feilong Wang (王飞龙)
> ------------------------------------------------------
> Senior Cloud Software Engineer
> Tel: +64-48032246
> Email: flwang at catalyst.net.nz
> Catalyst IT Limited
> Level 6, Catalyst House, 150 Willis Street, Wellington
> ------------------------------------------------------
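If it is just a matter of switching the property, I would guess the upload looks something like the below (untested on my side; the file name is a placeholder for whatever the 32.20201104.3.0 OpenStack image ships as), can you confirm?

openstack image create Fedora-CoreOS-32 \
  --disk-format=qcow2 \
  --container-format=bare \
  --property os_distro='fedora-coreos' \
  --file=fedora-coreos-32.20201104.3.0-openstack.x86_64.qcow2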
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From peter.matulis at canonical.com  Wed Apr  7 01:16:20 2021
From: peter.matulis at canonical.com (Peter Matulis)
Date: Tue, 6 Apr 2021 21:16:20 -0400
Subject: [docs] Project guides in PDF format
In-Reply-To: <20210330000704.bsuukwkon2vnint3@yuggoth.org>
References: <20210303194526.cbyj6k43z4cvfgsq@yuggoth.org> <20210303203027.c3pgopms57zf4ehk@yuggoth.org> <20210329215416.ztokyw5iirbu3jhi@yuggoth.org> <20210330000704.bsuukwkon2vnint3@yuggoth.org>
Message-ID:

On Mon, Mar 29, 2021 at 8:09 PM Jeremy Stanley wrote:

> On 2021-03-29 19:47:24 -0400 (-0400), Peter Matulis wrote:
> > I changed the testenv to 'pdf-docs' and the build is still being skipped.
> >
> > Do I need to submit a PR to have this [1] set to 'false'?
> >
> > [1]:
> > https://opendev.org/openstack/openstack-zuul-jobs/src/commit/01746b6df094c25f0cd67690b44adca0fb4ee1fd/zuul.d/jobs.yaml#L970
> [...]
>
> Oh, yep that'll need to be adjusted or overridden as well. I see
> that https://review.opendev.org/678077 explicitly chose not to do
> PDF builds for deploy guides for the original PDF docs
> implementation a couple of years ago. Unfortunately the commit
> message doesn't say why, but maybe this is a good opportunity to
> start.

Any other thoughts before I propose a change to the below?

https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/zuul.d/jobs.yaml#L970

> However, many (most?) API projects seem to include their
> deployment guides in their software Git repos, so switching this on
> for everyone might break their deploy guide builds. If we combine it
> with an expectation for a deploy-guide-specific PDF building tox
> testenv like you had previously, then it would get safely skipped by
> any projects without that testenv defined.
> --
> Jeremy Stanley
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From 379035389 at qq.com  Wed Apr  7 03:46:12 2021
From: 379035389 at qq.com (=?utf-8?B?5pyd6Ziz5pyq54OI?=)
Date: Wed, 7 Apr 2021 11:46:12 +0800
Subject: [victoria]oslo_privsep.daemon.FailedToDropPrivileges
Message-ID:

Hi, everyone:

I tried to build an instance on the compute node but failed. I am sure that
every necessary connection has been set up.

I found the same error information on both the controller node and the
compute node, in /var/log/neutron/linuxbridge-agent.log.

This is the information:

INFO neutron.common.config [-] Logging enabled!
2021-04-07 11:30:52.866 2182 INFO neutron.common.config [-] /usr/bin/neutron-linuxbridge-agent version 17.1.0
2021-04-07 11:30:52.867 2182 INFO neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] Interface mappings: {'provider': 'ens160'}
2021-04-07 11:30:52.867 2182 INFO neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] Bridge mappings: {}
2021-04-07 11:30:52.868 2182 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/usr/share/neutron/neutron-dist.conf', '--config-file', '/etc/neutron/neutron.conf', '--config-file', '/etc/neutron/plugins/ml2/linuxbridge_agent.ini', '--config-dir', '/etc/neutron/conf.d/neutron-linuxbridge-agent', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpm5d0ytiv/privsep.sock']
2021-04-07 11:30:53.346 2182 CRITICAL oslo.privsep.daemon [-] privsep helper command exited non-zero (1)
2021-04-07 11:30:53.346 2182 CRITICAL neutron [-] Unhandled error: oslo_privsep.daemon.FailedToDropPrivileges: privsep helper command exited non-zero (1)
2021-04-07 11:30:53.346 2182 ERROR neutron Traceback (most recent call last):
2021-04-07 11:30:53.346 2182 ERROR neutron   File "/usr/bin/neutron-linuxbridge-agent", line 10, in <module>
2021-04-07 11:30:53.346 2182 ERROR neutron     sys.exit(main())
2021-04-07 11:30:53.346 2182 ERROR neutron   File "/usr/lib/python3.6/site-packages/neutron/cmd/eventlet/plugins/linuxbridge_neutron_agent.py", line 28, in main
2021-04-07 11:30:53.346 2182 ERROR neutron     agent_main.main()
2021-04-07 11:30:53.346 2182 ERROR neutron   File "/usr/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py", line 1052, in main
2021-04-07 11:30:53.346 2182 ERROR neutron     manager = LinuxBridgeManager(bridge_mappings, interface_mappings)
2021-04-07 11:30:53.346 2182 ERROR neutron   File "/usr/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py", line 79, in __init__
2021-04-07 11:30:53.346 2182 ERROR neutron     self.validate_interface_mappings()
2021-04-07 11:30:53.346 2182 ERROR neutron   File "/usr/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py", line 94, in validate_interface_mappings
2021-04-07 11:30:53.346 2182 ERROR neutron     if not ip_lib.device_exists(interface):
2021-04-07 11:30:53.346 2182 ERROR neutron   File "/usr/lib/python3.6/site-packages/neutron/agent/linux/ip_lib.py", line 748, in device_exists
2021-04-07 11:30:53.346 2182 ERROR neutron     return IPDevice(device_name, namespace=namespace).exists()
2021-04-07 11:30:53.346 2182 ERROR neutron   File "/usr/lib/python3.6/site-packages/neutron/agent/linux/ip_lib.py", line 328, in exists
2021-04-07 11:30:53.346 2182 ERROR neutron     return privileged.interface_exists(self.name, self.namespace)
2021-04-07 11:30:53.346 2182 ERROR neutron   File "/usr/lib/python3.6/site-packages/oslo_privsep/priv_context.py", line 246, in _wrap
2021-04-07 11:30:53.346 2182 ERROR neutron     self.start()
2021-04-07 11:30:53.346 2182 ERROR neutron   File "/usr/lib/python3.6/site-packages/oslo_privsep/priv_context.py", line 258, in start
2021-04-07 11:30:53.346 2182 ERROR neutron     channel = daemon.RootwrapClientChannel(context=self)
2021-04-07 11:30:53.346 2182 ERROR neutron   File "/usr/lib/python3.6/site-packages/oslo_privsep/daemon.py", line 367, in __init__
2021-04-07 11:30:53.346 2182 ERROR neutron     raise FailedToDropPrivileges(msg)
2021-04-07 11:30:53.346 2182 ERROR neutron oslo_privsep.daemon.FailedToDropPrivileges: privsep helper command exited non-zero (1)
2021-04-07 11:30:53.346 2182 ERROR neutron

And this is the configuration in /etc/sudoer.d/neutron:

Defaults:neutron !requiretty
neutron ALL = (root) NOPASSWD: /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf *
neutron ALL = (root) NOPASSWD: /usr/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf

I googled for a solution but nothing I found helped. How can I solve this
problem? Thanks for your advice!

From noonedeadpunk at ya.ru  Wed Apr  7 04:09:38 2021
From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov)
Date: Wed, 07 Apr 2021 07:09:38 +0300
Subject: [glance][openstack-ansible] Snapshots disappear during saving
In-Reply-To: <1707579703.116615.1617722899181@ox.dhbw-mannheim.de>
References: <1707579703.116615.1617722899181@ox.dhbw-mannheim.de>
Message-ID: <21731617768405@mail.yandex.ua>

An HTML attachment was scrubbed...
URL:

From aj at suse.com  Wed Apr  7 07:06:38 2021
From: aj at suse.com (Andreas Jaeger)
Date: Wed, 7 Apr 2021 09:06:38 +0200
Subject: [docs] Project guides in PDF format
In-Reply-To:
References: <20210303194526.cbyj6k43z4cvfgsq@yuggoth.org> <20210303203027.c3pgopms57zf4ehk@yuggoth.org> <20210329215416.ztokyw5iirbu3jhi@yuggoth.org> <20210330000704.bsuukwkon2vnint3@yuggoth.org>
Message-ID: <8dd30a4e-ac37-d83e-ce75-e517232d779e@suse.com>

On 07.04.21 03:16, Peter Matulis wrote:
>
> On Mon, Mar 29, 2021 at 8:09 PM Jeremy Stanley wrote:
>
>     On 2021-03-29 19:47:24 -0400 (-0400), Peter Matulis wrote:
>     > I changed the testenv to 'pdf-docs' and the build is still being skipped.
>     >
>     > Do I need to submit a PR to have this [1] set to 'false'?
>     >
>     > [1]:
>     > https://opendev.org/openstack/openstack-zuul-jobs/src/commit/01746b6df094c25f0cd67690b44adca0fb4ee1fd/zuul.d/jobs.yaml#L970
>     [...]
>
>     Oh, yep that'll need to be adjusted or overridden as well. I see
>     that https://review.opendev.org/678077 explicitly chose not to do
>     PDF builds for deploy guides for the original PDF docs
>     implementation a couple of years ago. Unfortunately the commit
>     message doesn't say why, but maybe this is a good opportunity to
>     start.
>
> Any other thoughts before I propose a change to the below?
>
> https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/zuul.d/jobs.yaml#L970

I think it will not be enough; you need to set tox_pdf_envlist as well.

I suggest proposing such a change and doing two tests with Depends-On:

1) one for your repo, to show that the deploy guide is built as PDF and
uploaded with the artifacts;

2) one for another user of the job that builds the normal docs as well,
to check that the PDF for the deploy guide is built and not the normal
docs.
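Untested, but based on what other projects did for the PDF docs goal, the tox testenv would look roughly like this (paths assume the standard deploy-guide/source layout):

[testenv:pdf-docs]
deps = {[testenv:docs]deps}
whitelist_externals =
  make
commands =
  sphinx-build -W --keep-going -b latex deploy-guide/source deploy-guide/build/pdf
  make -C deploy-guide/build/pdf

and the job would then get tox_pdf_envlist set to pdf-docs so that projects without the testenv are skipped safely.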
Andreas -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB From ralonsoh at redhat.com Wed Apr 7 07:24:24 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Wed, 7 Apr 2021 09:24:24 +0200 Subject: [victoria]oslo_privsep.daemon.FailedToDropPrivileges In-Reply-To: References: Message-ID: Hello: This is indeed a problem with the execution privileges of the user running those commands. What deployment tool are you using? What is the user that runs the LB agent? The problem is, I think, that the privsep daemon is not properly starting. Try to execute manually the command you see in the logs. That will start the privsep daemon. If it doesn't work, check the privsep log and fix the permissions. ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/usr/share/neutron/neutron-dist.conf', '--config-file', '/etc/neutron/neutron.conf', '--config-file', '/etc/neutron/plugins/ml2/linuxbridge_agent.ini', '--config-dir', '/etc/neutron/conf.d/neutron-linuxbridge-agent', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpm5d0ytiv/privsep.sock'] Regards. On Wed, Apr 7, 2021 at 5:51 AM 朝阳未烈 <379035389 at qq.com> wrote: > Hi, everyone: > > I tried to build an instance on the* compute node *but failed. I am sure > that every necessary connection has been built. > > And I found the same error information on the *controller node* and the *compute > node* , in */var/log/neutron/linuxbride-agent.log* > > That is information: > > INFO neutron.common.config [-] Logging enabled! > > 2021-04-07 11:30:52.866 2182 INFO neutron.common.config [-] > /usr/bin/neutron-linuxbridge-agent version 17.1.0 > > 2021-04-07 11:30:52.867 2182 INFO > neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] > Interface mappings: {'provider': 'ens160'} > > 2021-04-07 11:30:52.867 2182 INFO > neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] > Bridge mappings: {} > > 2021-04-07 11:30:52.868 2182 INFO oslo.privsep.daemon [-] Running privsep > helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', > 'privsep-helper', '--config-file', '/usr/share/neutron/neutron-dist.conf', > '--config-file', '/etc/neutron/neutron.conf', '--config-file', > '/etc/neutron/plugins/ml2/linuxbridge_agent.ini', '--config-dir', > '/etc/neutron/conf.d/neutron-linuxbridge-agent', '--privsep_context', > 'neutron.privileged.default', '--privsep_sock_path', > '/tmp/tmpm5d0ytiv/privsep.sock'] > > 2021-04-07 11:30:53.346 2182 CRITICAL oslo.privsep.daemon [-] privsep > helper command exited non-zero (1) > > 2021-04-07 11:30:53.346 2182 CRITICAL neutron [-] Unhandled error: > oslo_privsep.daemon.FailedToDropPrivileges: privsep helper command exited > non-zero (1) > > 2021-04-07 11:30:53.346 2182 ERROR neutron Traceback (most recent call > last): > > 2021-04-07 11:30:53.346 2182 ERROR neutron File > "/usr/bin/neutron-linuxbridge-agent", line 10, in > > 2021-04-07 11:30:53.346 2182 ERROR neutron sys.exit(main()) > > 2021-04-07 11:30:53.346 2182 ERROR neutron File > "/usr/lib/python3.6/site-packages/neutron/cmd/eventlet/plugins/linuxbridge_neutron_agent.py", > line 28, in main > > 2021-04-07 11:30:53.346 2182 ERROR neutron agent_main.main() > > 2021-04-07 11:30:53.346 2182 ERROR neutron File > 
"/usr/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py", > line 1052, in main > > 2021-04-07 11:30:53.346 2182 ERROR neutron manager = > LinuxBridgeManager(bridge_mappings, interface_mappings) > > 2021-04-07 11:30:53.346 2182 ERROR neutron File > "/usr/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py", > line 79, in __init__ > > 2021-04-07 11:30:53.346 2182 ERROR neutron > self.validate_interface_mappings() > > 2021-04-07 11:30:53.346 2182 ERROR neutron File > "/usr/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py", > line 94, in validate_interface_mappings > > 2021-04-07 11:30:53.346 2182 ERROR neutron if not > ip_lib.device_exists(interface): > > 2021-04-07 11:30:53.346 2182 ERROR neutron File > "/usr/lib/python3.6/site-packages/neutron/agent/linux/ip_lib.py", line 748, > in device_exists > > 2021-04-07 11:30:53.346 2182 ERROR neutron return > IPDevice(device_name, namespace=namespace).exists() > > 2021-04-07 11:30:53.346 2182 ERROR neutron File > "/usr/lib/python3.6/site-packages/neutron/agent/linux/ip_lib.py", line 328, > in exists > > 2021-04-07 11:30:53.346 2182 ERROR neutron return > privileged.interface_exists(self.name, self.namespace) > > 2021-04-07 11:30:53.346 2182 ERROR neutron File > "/usr/lib/python3.6/site-packages/oslo_privsep/priv_context.py", line 246, > in _wrap > > 2021-04-07 11:30:53.346 2182 ERROR neutron self.start() > > 2021-04-07 11:30:53.346 2182 ERROR neutron File > "/usr/lib/python3.6/site-packages/oslo_privsep/priv_context.py", line 258, > in start > > 2021-04-07 11:30:53.346 2182 ERROR neutron channel = > daemon.RootwrapClientChannel(context=self) > > 2021-04-07 11:30:53.346 2182 ERROR neutron File > "/usr/lib/python3.6/site-packages/oslo_privsep/daemon.py", line 367, in > __init__ > > 2021-04-07 11:30:53.346 2182 ERROR neutron raise > FailedToDropPrivileges(msg) > > 2021-04-07 11:30:53.346 2182 ERROR neutron > oslo_privsep.daemon.FailedToDropPrivileges: privsep helper command exited > non-zero (1) > > 2021-04-07 11:30:53.346 2182 ERROR neutron > > > > > > And it is the configuration in* /etc/sudoer.d/neutron *below: > > > > *Defaults:neutron !requiretty* > > *neutron ALL = (root) NOPASSWD: /usr/bin/neutron-rootwrap > /etc/neutron/rootwrap.conf ** > > *neutron ALL = (root) NOPASSWD: /usr/bin/neutron-rootwrap-daemon > /etc/neutron/rootwrap.conf* > > > > > > I googled for the solution but they didn’t matter. How can I solve this > problem? Thanks for your advicement! > -------------- next part -------------- An HTML attachment was scrubbed... URL: From syedammad83 at gmail.com Wed Apr 7 08:24:54 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Wed, 7 Apr 2021 13:24:54 +0500 Subject: [victoria][magnum]fedora-atomic-27 image In-Reply-To: References: Message-ID: Hi Luke, You may refer to below guide for magnum installation and its template https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=10 It worked pretty well for me. - Ammad On Wed, Apr 7, 2021 at 5:02 AM Luke Camilleri < luke.camilleri at zylacomputing.com> wrote: > Thanks for your quick reply. Do you have a download link for that image as > I cannot find an archive for the 32 release? > > As for the image upload into openstack you still use the fedora-atomic > property right to be available for coe deployments? 
> On 07/04/2021 00:03, feilong wrote: > > Hi Luke, > > The Fedora Atomic driver has been deprecated a while since the Fedora > Atomic has been deprecated by upstream. For now, I would suggest using > Fedora CoreOS 32.20201104.3.0 > > The latest version of Fedora CoreOS is 33.xxx, but there are something > when booting based my testing, see > https://github.com/coreos/fedora-coreos-tracker/issues/735 > > Please feel free to let me know if you have any question about using > Magnum. We're using stable/victoria on our public cloud and it works very > well. I can share our public templates if you want. Cheers. > > > > On 7/04/21 9:51 am, Luke Camilleri wrote: > > We have insatlled magnum following the installation guide here > https://docs.openstack.org/magnum/victoria/install/install-rdo.html and > the process was quite smooth but we have been having some issues with the > deployment of the clusters. > > The image being used as per the documentation is > https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64 > > Our first issue was that podman was being used even if we specified the > use_podman=false (since the image above did not include podman) but this > was resulting in a timeout and the cluster would fail to deploy. We have > then installed podman in the image and the cluster progressed a bit further > > *+ echo 'WARNING Attempt 60: Trying to install kubectl. Sleeping 5s'* > *+ sleep 5s* > *+ ssh -F /srv/magnum/.ssh/config root at localhost '/usr/bin/podman run > --entrypoint /bin/bash --name install-kubectl --net host > --privileged --rm --user root --volume > /srv/magnum/bin:/host/srv/magnum/bin k8s.gcr.io/hyperkube:v1.15.7 > -c '\''cp /usr/local/bin/kubectl > /host/srv/magnum/bin/kubectl'\'''* > *bash: /usr/bin/podman: No such file or directory* > *ERROR Unable to install kubectl. Abort.* > *+ i=61* > *+ '[' 61 -gt 60 ']'* > *+ echo 'ERROR Unable to install kubectl. Abort.'* > *+ exit 1* > > The cluster is now failing here at "kube_cluster_deploy" and when > checking the logs on the master node we noticed the following in the log > files: > > *Starting to run kube-apiserver-to-kubelet-role* > *Waiting for Kubernetes API...* > *+ echo 'Waiting for Kubernetes API...'* > *++ curl --silent http://127.0.0.1:8080/healthz > * > *+ '[' ok = '' ']'* > *+ sleep 5* > > This is because the kubernetes API server is not installed either. I have > noticed some scripts that should handle the installation but I would like > to know if anyone here has had similar issues with a clean Victoria > installation. > Also should we have to install any packages in the fedora atomic image > file or should the installation requirements be part of the stack? > > Thanks in advance for any asistance > > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > ------------------------------------------------------ > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > ------------------------------------------------------ > > -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From thierry at openstack.org  Wed Apr  7 09:05:20 2021
From: thierry at openstack.org (Thierry Carrez)
Date: Wed, 7 Apr 2021 11:05:20 +0200
Subject: [Release-job-failures] Pre-release of openstack/neutron for ref refs/tags/18.0.0.0rc2 failed
In-Reply-To:
References:
Message-ID: <2e0d97ea-e0ca-274c-7b0a-0bf77fd7603b@openstack.org>

zuul at openstack.org wrote:
> Build failed.
>
> - openstack-upload-github-mirror https://zuul.opendev.org/t/openstack/build/4b31dfd25e244e38a346e097c993c519 : SUCCESS in 54s
> - release-openstack-python https://zuul.opendev.org/t/openstack/build/309d80630c8e4086b52a0d83062cd431 : SUCCESS in 4m 43s
> - announce-release https://zuul.opendev.org/t/openstack/build/4bde576261db4943a1dd6610aa52f6be : POST_FAILURE in 8m 33s
> - propose-update-constraints https://zuul.opendev.org/t/openstack/build/18de7b1f60044445a06db64da42bbcc3 : SUCCESS in 5m 20s

We are missing logs, but it looks like the job actually succeeded at
announcing the release:

http://lists.openstack.org/pipermail/release-announce/2021-April/011022.html

So this can be safely ignored.

--
Thierry Carrez (ttx)

From destienne.maxime at gmail.com  Wed Apr  7 09:48:27 2021
From: destienne.maxime at gmail.com (Maxime d'Estienne)
Date: Wed, 7 Apr 2021 11:48:27 +0200
Subject: [neutron][nova] Port binding fails when creating an instance
In-Reply-To: <3930281.aCZO8KT43X@p1>
References: <3930281.aCZO8KT43X@p1>
Message-ID:

As Slawek Kaplonski told me, I enabled neutron debugging, but I didn't find
why the mechanism drivers are refusing to bind ports on that host.

I noticed that the VM can get an IP from DHCP: I see a link on the web
interface (network topology) between my physical network "provider" and the
VM. But this link disappeared when the VM crashed due to the error.

Below are the DEBUG log entries from /neutron/server.log, just before the
ERROR one. I haven't managed to learn anything more from them.

Thank you a lot for your time !
Maxime

`2021-04-07 10:10:30.294 25623 DEBUG neutron.pecan_wsgi.hooks.policy_enforcement [req-a995e8eb-fde4-49be-b822-29f7e98b56d4 9c53e456ca2d4d07a4aecbf91c487cae d26b6143299a47e3a77feff04ae8b7a1 - default default] Attributes excluded by policy engine: ['binding:profile', 'binding:host_id', 'binding:vif_type', 'binding:vif_details'] _exclude_attributes_by_policy /usr/lib/python3/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256

2021-04-07 10:10:30.995 25626 DEBUG neutron.pecan_wsgi.hooks.policy_enforcement [req-706ad36e-31a1-4e5a-b9f6-17951ccb089a 9c53e456ca2d4d07a4aecbf91c487cae d26b6143299a47e3a77feff04ae8b7a1 - default default] Attributes excluded by policy engine: ['binding:profile', 'binding:host_id', 'binding:vif_type', 'binding:vif_details'] _exclude_attributes_by_policy /usr/lib/python3/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256

2021-04-07 10:10:31.105 25626 DEBUG neutron.pecan_wsgi.hooks.policy_enforcement [req-446ed89e-0697-4822-b69b-49b02ad9732d 9c53e456ca2d4d07a4aecbf91c487cae d26b6143299a47e3a77feff04ae8b7a1 - default default] Attributes excluded by policy engine: ['binding:profile', 'binding:host_id', 'binding:vif_type', 'binding:vif_details'] _exclude_attributes_by_policy /usr/lib/python3/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256

2021-04-07 10:10:31.328 25623 DEBUG neutron.api.v2.base [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a b21b8901642c470b8f668965997c7922 0f23d567d2ce4599a1571d8fd5982f9a - default default] Request body: {'port': {'device_id': '6406a1b1-7f0b-4f8e-88dd-81dcded8299d', 'device_owner': 'compute:nova', 'binding:host_id': 'compute1'}} prepare_request_body /usr/lib/python3/dist-packages/neutron/api/v2/base.py:716

2021-04-07 10:10:31.980 25623 DEBUG neutron.plugins.ml2.managers [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a b21b8901642c470b8f668965997c7922 0f23d567d2ce4599a1571d8fd5982f9a - default default] Attempting to bind port 2e702d95-34df-4b85-9206-75c28bbcb9da on host compute1 for vnic_type normal with profile bind_port /usr/lib/python3/dist-packages/neutron/plugins/ml2/managers.py:747

2021-04-07 10:10:31.981 25623 DEBUG neutron.plugins.ml2.managers [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a b21b8901642c470b8f668965997c7922 0f23d567d2ce4599a1571d8fd5982f9a - default default] Attempting to bind port 2e702d95-34df-4b85-9206-75c28bbcb9da on host compute1 at level 0 using segments [{'id': 'a35c88a5-2234-4b2f-bab6-a5a17af42d1e', 'network_type': 'flat', 'physical_network': 'provider', 'segmentation_id': None, 'network_id': '45320836-f1a3-4e96-a3d8-59b95d633d1e'}] _bind_port_level /usr/lib/python3/dist-packages/neutron/plugins/ml2/managers.py:768

2021-04-07 10:10:31.981 25623 ERROR neutron.plugins.ml2.managers [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a b21b8901642c470b8f668965997c7922 0f23d567d2ce4599a1571d8fd5982f9a - default default] Failed to bind port 2e702d95-34df-4b85-9206-75c28bbcb9da on host compute1 for vnic_type normal using segments [{'id': 'a35c88a5-2234-4b2f-bab6-a5a17af42d1e', 'network_type': 'flat', 'physical_network': 'provider', 'segmentation_id': None, 'network_id': '45320836-f1a3-4e96-a3d8-59b95d633d1e'}]`
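For reference, this is the layout I am comparing my files against, taken from the install guide (so the intended shape, with placeholder names, rather than a verified dump of my nodes):

# controller: /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan
mechanism_drivers = linuxbridge

[ml2_type_flat]
flat_networks = provider

# compute1: /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:<provider-interface-name>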
Le jeu. 1 avr. 2021 à 21:36, Slawek Kaplonski a écrit :

> Hi,
>
> Dnia czwartek, 1 kwietnia 2021 14:44:21 CEST Maxime d'Estienne pisze:
> > Hello,
> >
> > I spent a lot of time troubleshooting my issue, which I described here :
> > https://serverfault.com/questions/1058969/cannot-create-an-instance-due-to-failed-binding
> >
> > To summarize, when I want to create an instance, binding fails on compute
> > node, the dhcp agent seems to give an ip to the VM but I have an error.
>
> What do You mean exactly? Failed binding of the port in Neutron? In such
> case nova will not boot the vm, so it can't get an IP from DHCP.
>
> > I don't know where to dig, besides what I have done.
>
> Please enable debug logs in neutron-server and look in its logs for the
> reason why it failed to bind the port on the specific host.
> Usually the reason is a dead L2 agent on the host or a mismatch in the
> bridge mappings configuration of the agent.
>
> > Thanks a lot for your help !
>
> --
> Slawek Kaplonski
> Principal Software Engineer
> Red Hat
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From skaplons at redhat.com  Wed Apr  7 10:03:12 2021
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Wed, 07 Apr 2021 12:03:12 +0200
Subject: [neutron][nova] Port binding fails when creating an instance
In-Reply-To:
References: <3930281.aCZO8KT43X@p1>
Message-ID: <3513595.c0HGFkD9VC@p1>

Hi,

Can You send me the full neutron-server log? I will check if there is
anything more there.

Dnia środa, 7 kwietnia 2021 11:48:27 CEST Maxime d'Estienne pisze:
> As Slawek Kaplonski told me, I enabled neutron debugging and I didn't find
> why specific mechanism drivers are refusing to bind ports
> on that host.
> Maxime > > `2021-04-07 10:10:30.294 25623 DEBUG > > neutron.pecan_wsgi.hooks.policy_enforcement > > [req-a995e8eb-fde4-49be-b822-29f7e98b56d4 9c53e456ca2d4d07a4aecbf91c487cae > > d26b6143299a47e3a77feff04ae8b7a1 - default default] Attributes excluded by > > policy engine: ['binding:profile', 'binding:host_id', 'binding:vif_type', > > 'binding:vif_details'] _exclude_attributes_by_policy > > /usr/lib/python3/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256 > > > > 2021-04-07 10:10:30.995 25626 DEBUG > > neutron.pecan_wsgi.hooks.policy_enforcement > > [req-706ad36e-31a1-4e5a-b9f6-17951ccb089a 9c53e456ca2d4d07a4aecbf91c487cae > > d26b6143299a47e3a77feff04ae8b7a1 - default default] Attributes excluded by > > policy engine: ['binding:profile', 'binding:host_id', 'binding:vif_type', > > 'binding:vif_details'] _exclude_attributes_by_policy > > /usr/lib/python3/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256 > > > > 2021-04-07 10:10:31.105 25626 DEBUG > > neutron.pecan_wsgi.hooks.policy_enforcement > > [req-446ed89e-0697-4822-b69b-49b02ad9732d 9c53e456ca2d4d07a4aecbf91c487cae > > d26b6143299a47e3a77feff04ae8b7a1 - default default] Attributes excluded by > > policy engine: ['binding:profile', 'binding:host_id', 'binding:vif_type', > > 'binding:vif_details'] _exclude_attributes_by_policy > > /usr/lib/python3/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256 > > > > 2021-04-07 10:10:31.328 25623 DEBUG neutron.api.v2.base > > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a b21b8901642c470b8f668965997c7922 > > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Request body: {'port': > > {'device_id': '6406a1b1-7f0b-4f8e-88dd-81dcded8299d', 'device_owner': > > 'compute:nova', 'binding:host_id': 'compute1'}} prepare_request_body > > /usr/lib/python3/dist-packages/neutron/api/v2/base.py:716 > > > > 2021-04-07 10:10:31.980 25623 DEBUG neutron.plugins.ml2.managers > > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a b21b8901642c470b8f668965997c7922 > > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Attempting to bind port > > 2e702d95-34df-4b85-9206-75c28bbcb9da on host compute1 for vnic_type normal > > with profile bind_port > > /usr/lib/python3/dist-packages/neutron/plugins/ml2/managers.py:747 > > > > 2021-04-07 10:10:31.981 25623 DEBUG neutron.plugins.ml2.managers > > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a b21b8901642c470b8f668965997c7922 > > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Attempting to bind port > > 2e702d95-34df-4b85-9206-75c28bbcb9da on host compute1 at level 0 using > > segments [{'id': 'a35c88a5-2234-4b2f-bab6-a5a17af42d1e', 'network_type': > > 'flat', 'physical_network': 'provider', 'segmentation_id': None, > > 'network_id': '45320836-f1a3-4e96-a3d8-59b95d633d1e'}] _bind_port_level > > /usr/lib/python3/dist-packages/neutron/plugins/ml2/managers.py:768 > > > > 2021-04-07 10:10:31.981 25623 ERROR neutron.plugins.ml2.managers > > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a b21b8901642c470b8f668965997c7922 > > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Failed to bind port > > 2e702d95-34df-4b85-9206-75c28bbcb9da on host compute1 for vnic_type normal > > using segments [{'id': 'a35c88a5-2234-4b2f-bab6-a5a17af42d1e', > > 'network_type': 'flat', 'physical_network': 'provider', 'segmentation_id': > > None, 'network_id': '45320836-f1a3-4e96-a3d8-59b95d633d1e'}] ` > > > > > Le jeu. 1 avr. 
2021 à 21:36, Slawek Kaplonski a > écrit : > > > Hi, > > > > Dnia czwartek, 1 kwietnia 2021 14:44:21 CEST Maxime d'Estienne pisze: > > > Hello, > > > > > > I spent a lot of time troubleshooting my issue, which I described here : > > > > > https://serverfault.com/questions/1058969/cannot-create-an-instance-due-to-failed-binding > > > > > > To summarize, when I want to create an instance, binding fails on compute > > > node, the dhcp agent seems to give an ip to the VM but I have an error. > > > > What do You mean exactly? Failed binding of the port in Neutron? In such > > case > > nova will not boot vm so it can't get IP from DHCP. > > > > > > > > I don't know where to dig, besides what I have done. > > > > Please enable debug logs in neutron-server and look in its logs for the > > reason > > why it failed to bind port on specific host. > > Usually reason is dead L2 agent on host or mismatch in the agent's bridge > > mappings configuration in the agent. > > > > > > > > Thanks a lot for your help ! > > > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From hberaud at redhat.com Wed Apr 7 10:14:17 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 7 Apr 2021 12:14:17 +0200 Subject: [release] Meeting Time Poll In-Reply-To: References: Message-ID: Greetings, The poll is now terminated, everybody voted and we reached a consensus, our new meeting time is at 2pm UTC on Thursdays. https://doodle.com/poll/ip6tg4fvznz7p3qx It will take effect from our next meeting, i.e tomorrow. I'm going to update our agenda accordingly. Thanks to everyone for your vote. Le mer. 31 mars 2021 à 17:55, Herve Beraud a écrit : > Hello deliveryers, > > Don't forget to vote for our new meeting time. > > Thank you > > Le ven. 26 mars 2021 à 13:43, Herve Beraud a écrit : > >> Hello >> >> We have a few regular attendees of the Release Management meeting who >> have conflicts >> with the current meeting time. As a result, we would like to find a new >> time to hold the meeting. I've created a Doodle poll[1] for everyone to >> give their input on times. It's mostly limited to times that reasonably >> overlap the working day in the US and Europe since that's where most of >> our attendees are located. >> >> If you attend the Release Management meeting, please fill out the poll >> so we can hopefully find a time that works better for everyone. >> >> For the sake of organization and to allow everyone to schedule his agenda >> accordingly, the poll will be closed on April 5th. On that date, I will >> announce the time of this meeting and the date on which it will take effect >> . >> >> Thanks! 
>> >> [1] https://doodle.com/poll/ip6tg4fvznz7p3qx >> -- >> Hervé Beraud >> Senior Software Engineer at Red Hat >> irc: hberaud >> https://github.com/4383/ >> https://twitter.com/4383hberaud >> -----BEGIN PGP SIGNATURE----- >> >> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> v6rDpkeNksZ9fFSyoY2o >> =ECSj >> -----END PGP SIGNATURE----- >> >> > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Wed Apr 7 11:26:13 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Wed, 07 Apr 2021 14:26:13 +0300 Subject: [openstack-ansible] OSA Meeting Poll Message-ID: <170911617794404@mail.yandex.ru> Hi! We haven't changed OSA meeting time for a while and stick with the current option (Tuesday, 16:00 UTC) for a while. So we decided it's time to make a poll regarding preferred time for OSA meetings since list of the interested parties and circumstances might have changed since picking meeting time. 
You can find the poll via link [1]. Poll is open till Monday, April 12 2021. Please, make sure you vote before this time. [1] https://doodle.com/poll/m554dx4mrsideuzi/ -- Kind Regards, Dmitriy Rabotyagov From christian.rohmann at inovex.de Wed Apr 7 11:53:12 2021 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Wed, 7 Apr 2021 13:53:12 +0200 Subject: [cinder] Review of tiny patch to add Ceph RBD fast-diff to cinder-backup In-Reply-To: References: Message-ID: On 13/01/2021 10:37, Christian Rohmann wrote: > I wrote a tiny patch to add the Ceph RDB feature of fast-diff to > backups created by cinder-backup: > >  * https://review.opendev.org/c/openstack/cinder/+/766856/ > > > Could someone please take a peek and let me know of this is sufficient > to be merged? This change was already merged to master and I now created cherry-picks / backports to victoria (https://review.opendev.org/c/openstack/cinder/+/782917) and ussuri (https://review.opendev.org/c/openstack/cinder/+/782929). Also Andrey Bolgov did create yet another backport of this feaure down to stable/train (https://review.opendev.org/c/openstack/cinder/+/784041). While the cherry-pick onto the stable/victoria branch does verify fine with Zuul (only need review to be merged), the cinder-plugin-ceph-tempest tests fail for ussuri and also train. > Stdout: 'RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable volumes/volume-081e9c22-21f3-4585-a2fe-1caed098052b object-map fast-diff".\nIn some cases useful info is found in syslog - try "dmesg | tail".\n' > Stderr: 'rbd: sysfs write failed\nrbd: map failed: (6) No such device or address\n' Could anybody give me a hint on why this might be? Also is there any other process to follow for backports than to create a cherry-pick from the following release down and wait for review? Regards Christian -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Wed Apr 7 12:09:40 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Wed, 07 Apr 2021 15:09:40 +0300 Subject: [openstack-ansible] OSA Meeting Poll In-Reply-To: <170911617794404@mail.yandex.ru> References: <170911617794404@mail.yandex.ru> Message-ID: <202851617797329@mail.yandex.ru> Sorry for the typo in the link, added extra slash in the end. Correct link is: https://doodle.com/poll/m554dx4mrsideuzi 07.04.2021, 14:31, "Dmitriy Rabotyagov" : > Hi! > > We haven't changed OSA meeting time for a while and stick with the current option (Tuesday, 16:00 UTC) for a while. > > So we decided it's time to make a poll regarding preferred time for OSA meetings since list of the interested parties and circumstances might have changed since picking meeting time. > > You can find the poll via link [1]. Poll is open till Monday, April 12 2021. Please, make sure you vote before this time. 
> > [1] https://doodle.com/poll/m554dx4mrsideuzi/ > > -- > Kind Regards, > Dmitriy Rabotyagov --  Kind Regards, Dmitriy Rabotyagov From rosmaita.fossdev at gmail.com Wed Apr 7 12:28:51 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 7 Apr 2021 08:28:51 -0400 Subject: [cinder] Review of tiny patch to add Ceph RBD fast-diff to cinder-backup In-Reply-To: References: Message-ID: On 4/7/21 7:53 AM, Christian Rohmann wrote: > On 13/01/2021 10:37, Christian Rohmann wrote: >> I wrote a tiny patch to add the Ceph RDB feature of fast-diff to >> backups created by cinder-backup: >> >>  * https://review.opendev.org/c/openstack/cinder/+/766856/ >> >> >> Could someone please take a peek and let me know of this is sufficient >> to be merged? > > > This change was already merged to master and I now created cherry-picks > / backports to victoria > (https://review.opendev.org/c/openstack/cinder/+/782917) and ussuri > (https://review.opendev.org/c/openstack/cinder/+/782929). > Also Andrey Bolgov did create yet another backport of this feaure down > to stable/train (https://review.opendev.org/c/openstack/cinder/+/784041). > > While the cherry-pick onto the stable/victoria branch does verify fine > with Zuul (only need review to be merged), the > cinder-plugin-ceph-tempest tests fail for ussuri and also train. > >> Stdout: 'RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable volumes/volume-081e9c22-21f3-4585-a2fe-1caed098052b object-map fast-diff".\nIn some cases useful info is found in syslog - try "dmesg | tail".\n' >> Stderr: 'rbd: sysfs write failed\nrbd: map failed: (6) No such device or address\n' > > Could anybody give me a hint on why this might be? You have hit https://bugs.launchpad.net/devstack-plugin-ceph/+bug/1921897 Eric has a patch up addressing this: https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/783880 > > Also is there any other process to follow for backports than to create a > cherry-pick from the following release down and wait for review? > You're following the correct procedure. One thing I noticed, though, is that your pick to stable/ussuri should have the cherry pick info for both the cherry-pick from master (which was the wallaby development branch at the time) to stable/victoria and also from victoria to stable/ussuri. > > > Regards > > > Christian > From rafal at pregusia.pl Wed Apr 7 13:18:59 2021 From: rafal at pregusia.pl (pregusia) Date: Wed, 7 Apr 2021 15:18:59 +0200 Subject: [keystone]improvments in mapping models/support for JWT tokens In-Reply-To: <20210401192203.jn2heaicdlwojc7i@yuggoth.org> References: <20210401192203.jn2heaicdlwojc7i@yuggoth.org> Message-ID: Thanks for the answer. I submitted patches to review: https://review.opendev.org/c/openstack/keystone/+/784553 https://review.opendev.org/c/openstack/keystone/+/784558 but it looks like some problem with python version https://zuul.opendev.org/t/openstack/build/45a04fb21bf14806a3a32b83c18b8120 Can You advice what is wrong here ? On 4/1/21 9:22 PM, Jeremy Stanley wrote: > On 2021-04-01 21:05:32 +0200 (+0200), pregusia wrote: >> Please direct your attention to some keystone modyfications - to be more >> precisly two of them: >>  (1) extension to mapping engine in order to support multiple projects and >> assigning project by id >>  (2) extension to authorization mechanisms - add JWT token support > [...] > > This is pretty exciting stuff. 
But please be aware that for an > OpenStack project to merge patches they'll need to be proposed into > the code review system (Gerrit) by someone, preferably by the author > of the patches, which is the easiest place to discuss them as well. > Also we need some way to confirm that the author of the patches has > agreed to the Individual Contributor License Agreement (essentially > asserting that the patches they propose are their own work or that > they have permission from the author or are proposing patches > consisting of existing code distributed under a license compatible > with the Apache License version 2.0), and the usual way to agree to > the ICLA is when creating your account in Gerrit. > > Please see the OpenStack Contributor Guide for a general > introduction to our code proposal and review workflow: > > https://docs.openstack.org/contributors/ > > And feel free to ask questions on this mailing list if you have any. From senrique at redhat.com Wed Apr 7 13:27:14 2021 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 7 Apr 2021 10:27:14 -0300 Subject: [cinder] Bug deputy report for week of 2021-04-07 Message-ID: Hello, This is a bug report from 2021-03-31 to 2021-04-07. You're welcome to join the next Cinder Bug Meeting later today. Weekly on Wednesday at 1500 UTC in #openstack-cinder Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- Critical:- High: - Medium: - https://bugs.launchpad.net/cinder/+bug/1922408 `` Create volume from snapshot will lose encrypted head when source volume is encrypted in RBD''. Assigned to haixin. - https://bugs.launchpad.net/cinder/+bug/1922013 '' IBM SVF driver: GMCV Add vols to a group fails even if rcrel and rccg are in the same state". Unassigned. Low: - https://bugs.launchpad.net/cinder/+bug/1922255 "Dell PowerVault PVMEISCSIDriver driver cannot manage volumes". Unassigned. - https://bugs.launchpad.net/python-cinderclient/+bug/1922749 "Top-level client doesn't support auth v3". Unassigned. Undecided: - https://bugs.launchpad.net/devstack-plugin-ceph/+bug/1921897 "fast-diff in default feature set breaks 'rbd map'". Assigned to Eric Harney. Regards, Sofi -- L. Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Wed Apr 7 14:01:29 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 7 Apr 2021 16:01:29 +0200 Subject: [ironic] APAC-Europe SPUC time? Message-ID: Hi folks! The initial SPUC datetime was for 10am UTC, which was 11am for us in central Europe, now is supposed to be 12pm. On one hand, I find it more convenient to have SPUC at 11am still, on the other - I have German classes at this time for a few months starting mid-April. What do you think? Should we keep it in UTC, i.e. 12pm for us in Europe? Will that work for you Jacob? Dmitry -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From info at jorisengbers.nl Wed Apr 7 14:44:19 2021 From: info at jorisengbers.nl (Joris Engbers) Date: Wed, 7 Apr 2021 16:44:19 +0200 Subject: [Neutron] How to provide internet access to tier 2 instance In-Reply-To: <7aa3968f-dceb-43af-2548-e8ed0f7ac9b1@mailbox.org> References: <7aa3968f-dceb-43af-2548-e8ed0f7ac9b1@mailbox.org> Message-ID: <5757632f-d5cd-921a-ff25-70f095384441@jorisengbers.nl> I have tried a similar set-up and it seems to work here. On Router 2 I have added a static route for 0.0.0.0/0 to the IP of Router1 in the 'private' network. With this addition it is possible to ping 1.1.1.1. Just to be sure, I disabled port security on every intermediate port, but after reenabling them, it still works. I did find that the l3 agent is slow to clean up static routes after removing them in my version from OpenStack, this caused me to do a lot more debugging than necessary. With a fresh router it worked instantly. Joris On 04-04-2021 16:44, Bernd Bausch wrote: > I have a pretty standard single-server Victoria Devstack, where I > created this network topology: > > public       private      backend >   |             |             | >   |  /-------\  |-- I1        |- I2 >   |--|Router1|--|             | >   |  \-------/  |             | >   |             |  /-------\  | >   |             |--|Router2|--| >   |             |  \-------/  | >   |             |             | > > I1 and I2 are instances. > > My question: > > Is it possible to give I2 access to the external world to install > software and download files? I don't need access **to** I2 **from** > the external world. > > My unsuccessful attempt: > > After adding a static default route via Router1 to Router2, I can ping > the internet from Router2's namespace, but not from I2. > > My guess is that Router1 ignores traffic from networks that are not > attached to it. I don't have enough experience to understand the > netfilter rules in Router1's namespace, and in any case, rather than > tweaking them I need a supported method to give I2 internet access, or > the confirmation that it is not possible. > > Thanks much for any insights and suggestions. > > Bernd > From skaplons at redhat.com Wed Apr 7 15:11:14 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 07 Apr 2021 17:11:14 +0200 Subject: [neutron][nova] Port binding fails when creating an instance In-Reply-To: References: <3513595.c0HGFkD9VC@p1> Message-ID: <77243093.lmQnbLH0hy@p1> Hi, Dnia środa, 7 kwietnia 2021 13:35:07 CEST Maxime d'Estienne pisze: > Hi ! > > Here is the log file. First error occurs at line 117. I have couple of questions there: 1. What version of Neutron are You using exactly? It seems from that log that You don't have patch https://github.com/openstack/neutron/commit/74c51a2e5390f258290ee890c9218beb5fdfd29c in Your code. 2. What mechanism drivers do You have enabled in Your ML2 config? In logs there should be lines e.g. like https://github.com/openstack/neutron/blob/34d6fbcc2a67eac45ad6f841903f656ef7118614/neutron/plugins/ml2/drivers/mech_agent.py#L87 but I don't see any line like that in Your log. > > Thank you ! > > Le mer. 7 avr. 2021 à 12:04, Slawek Kaplonski a > écrit : > > > Hi, > > > > Can You send me full neutron-server log? I will check if there is anything > > more there. > > > > Dnia środa, 7 kwietnia 2021 11:48:27 CEST Maxime d'Estienne pisze: > > > As Slawek Kaplonski told me, I enabled neutron debugging and I didn't > > find > > > why specific mechanism drivers are refusing to bind ports > > > on that host. 
> > > > > > I noticed that the VM can get an IP from DHCP, I see a link on the web > > > interface (network topology) between my physical network "provider" and > > the > > > VM. But this link disappeared when the VM crashed due to the error. > > > > > > Here are the previous DEBUG logs, just before the ERROR one. > > > > > > I don't succeed in getting more informed by these logs. > > > (/neutron/server.log) > > > > > > Thank you a lot for your time ! > > > Maxime > > > > > > `2021-04-07 10:10:30.294 25623 DEBUG > > > > neutron.pecan_wsgi.hooks.policy_enforcement > > > > [req-a995e8eb-fde4-49be-b822-29f7e98b56d4 > > 9c53e456ca2d4d07a4aecbf91c487cae > > > > d26b6143299a47e3a77feff04ae8b7a1 - default default] Attributes > > excluded by > > > > policy engine: ['binding:profile', 'binding:host_id', > > 'binding:vif_type', > > > > 'binding:vif_details'] _exclude_attributes_by_policy > > > > > > /usr/lib/python3/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256 > > > > > > > > 2021-04-07 10:10:30.995 25626 DEBUG > > > > neutron.pecan_wsgi.hooks.policy_enforcement > > > > [req-706ad36e-31a1-4e5a-b9f6-17951ccb089a > > 9c53e456ca2d4d07a4aecbf91c487cae > > > > d26b6143299a47e3a77feff04ae8b7a1 - default default] Attributes > > excluded by > > > > policy engine: ['binding:profile', 'binding:host_id', > > 'binding:vif_type', > > > > 'binding:vif_details'] _exclude_attributes_by_policy > > > > > > /usr/lib/python3/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256 > > > > > > > > 2021-04-07 10:10:31.105 25626 DEBUG > > > > neutron.pecan_wsgi.hooks.policy_enforcement > > > > [req-446ed89e-0697-4822-b69b-49b02ad9732d > > 9c53e456ca2d4d07a4aecbf91c487cae > > > > d26b6143299a47e3a77feff04ae8b7a1 - default default] Attributes > > excluded by > > > > policy engine: ['binding:profile', 'binding:host_id', > > 'binding:vif_type', > > > > 'binding:vif_details'] _exclude_attributes_by_policy > > > > > > /usr/lib/python3/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256 > > > > > > > > 2021-04-07 10:10:31.328 25623 DEBUG neutron.api.v2.base > > > > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a > > b21b8901642c470b8f668965997c7922 > > > > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Request body: > > {'port': > > > > {'device_id': '6406a1b1-7f0b-4f8e-88dd-81dcded8299d', 'device_owner': > > > > 'compute:nova', 'binding:host_id': 'compute1'}} prepare_request_body > > > > /usr/lib/python3/dist-packages/neutron/api/v2/base.py:716 > > > > > > > > 2021-04-07 10:10:31.980 25623 DEBUG neutron.plugins.ml2.managers > > > > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a > > b21b8901642c470b8f668965997c7922 > > > > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Attempting to bind > > port > > > > 2e702d95-34df-4b85-9206-75c28bbcb9da on host compute1 for vnic_type > > normal > > > > with profile bind_port > > > > /usr/lib/python3/dist-packages/neutron/plugins/ml2/managers.py:747 > > > > > > > > 2021-04-07 10:10:31.981 25623 DEBUG neutron.plugins.ml2.managers > > > > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a > > b21b8901642c470b8f668965997c7922 > > > > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Attempting to bind > > port > > > > 2e702d95-34df-4b85-9206-75c28bbcb9da on host compute1 at level 0 using > > > > segments [{'id': 'a35c88a5-2234-4b2f-bab6-a5a17af42d1e', > > 'network_type': > > > > 'flat', 'physical_network': 'provider', 'segmentation_id': None, > > > > 'network_id': '45320836-f1a3-4e96-a3d8-59b95d633d1e'}] _bind_port_level > > > > 
/usr/lib/python3/dist-packages/neutron/plugins/ml2/managers.py:768 > > > > > > > > 2021-04-07 10:10:31.981 25623 ERROR neutron.plugins.ml2.managers > > > > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a > > b21b8901642c470b8f668965997c7922 > > > > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Failed to bind port > > > > 2e702d95-34df-4b85-9206-75c28bbcb9da on host compute1 for vnic_type > > normal > > > > using segments [{'id': 'a35c88a5-2234-4b2f-bab6-a5a17af42d1e', > > > > 'network_type': 'flat', 'physical_network': 'provider', > > 'segmentation_id': > > > > None, 'network_id': '45320836-f1a3-4e96-a3d8-59b95d633d1e'}] ` > > > > > > > > > > > Le jeu. 1 avr. 2021 à 21:36, Slawek Kaplonski a > > > écrit : > > > > > > > Hi, > > > > > > > > Dnia czwartek, 1 kwietnia 2021 14:44:21 CEST Maxime d'Estienne pisze: > > > > > Hello, > > > > > > > > > > I spent a lot of time troubleshooting my issue, which I described > > here : > > > > > > > > > > > https://serverfault.com/questions/1058969/cannot-create-an-instance-due-to-failed-binding > > > > > > > > > > To summarize, when I want to create an instance, binding fails on > > compute > > > > > node, the dhcp agent seems to give an ip to the VM but I have an > > error. > > > > > > > > What do You mean exactly? Failed binding of the port in Neutron? In > > such > > > > case > > > > nova will not boot vm so it can't get IP from DHCP. > > > > > > > > > > > > > > I don't know where to dig, besides what I have done. > > > > > > > > Please enable debug logs in neutron-server and look in its logs for the > > > > reason > > > > why it failed to bind port on specific host. > > > > Usually reason is dead L2 agent on host or mismatch in the agent's > > bridge > > > > mappings configuration in the agent. > > > > > > > > > > > > > > Thanks a lot for your help ! > > > > > > > > > > > > -- > > > > Slawek Kaplonski > > > > Principal Software Engineer > > > > Red Hat > > > > > > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From fungi at yuggoth.org Wed Apr 7 15:52:24 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 7 Apr 2021 15:52:24 +0000 Subject: [Release-job-failures] Pre-release of openstack/neutron for ref refs/tags/18.0.0.0rc2 failed In-Reply-To: <2e0d97ea-e0ca-274c-7b0a-0bf77fd7603b@openstack.org> References: <2e0d97ea-e0ca-274c-7b0a-0bf77fd7603b@openstack.org> Message-ID: <20210407155223.wy63hhj2d54kg7gn@yuggoth.org> On 2021-04-07 11:05:20 +0200 (+0200), Thierry Carrez wrote: [...] > We are missing logs, but it looks like the job actually succeeded at > announcing the release: > > http://lists.openstack.org/pipermail/release-announce/2021-April/011022.html > > So this can be safely ignored. Yes, without looking that closely at it, the most likely cause was a known incident with authentication breakage for log uploads in one of our storage donors' clouds, as mentioned in our status log: https://wiki.openstack.org/wiki/Infrastructure_Status -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From marios at redhat.com Wed Apr 7 16:24:41 2021 From: marios at redhat.com (Marios Andreou) Date: Wed, 7 Apr 2021 19:24:41 +0300 Subject: [TripleO] Xena PTG schedule please review Message-ID: Hello TripleO o/ Thanks again to everybody who has volunteered to lead a session for the coming Xena TripleO project teams gathering. I've had a go at the agenda [1] trying to keep it to max 4 or 5 sessions per day with some breaks. Please review the slot assigned for your session at [1]. If that time is not ok then please let me know as soon as possible and indicate if you want it later or earlier or on any other day. If you've decided the session no longer makes sense then also please tell me and we can move things around accordingly to finish earlier. I'd like to finalise the schedule by next Monday 12 April which is a week before PTG. We can and likely will make changes after this date but last minute changes are best avoided to allow folks to schedule their PTG attendance across projects. Thanks everybody for your help! Looking forward to interesting presentations and discussions as always regards, marios [1] https://etherpad.opendev.org/p/tripleo-ptg-xena From thierry at openstack.org Wed Apr 7 16:45:34 2021 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 7 Apr 2021 18:45:34 +0200 Subject: [largescale-sig] Next meeting: April 7, 15utc In-Reply-To: References: Message-ID: We held our meeting today. We discussed future editions of our video meeting, our PTG presence, and progress on documenting the Scaling journey. Meeting logs at: http://eavesdrop.openstack.org/meetings/large_scale_sig/2021/large_scale_sig.2021-04-07-15.00.html Our next meeting will be Wednesday, April 21 at 15utc as part of the PTG! We'll be discussing the current state of the Large Scale SIG, and pick a topic for our next video meeting, which should be happening May 13. Regards, -- Thierry Carrez (ttx) From johfulto at redhat.com Wed Apr 7 16:54:52 2021 From: johfulto at redhat.com (John Fulton) Date: Wed, 7 Apr 2021 12:54:52 -0400 Subject: [TripleO] Xena PTG schedule please review In-Reply-To: References: Message-ID: On Wed, Apr 7, 2021 at 12:27 PM Marios Andreou wrote: > Hello TripleO o/ > > Thanks again to everybody who has volunteered to lead a session for > the coming Xena TripleO project teams gathering. > > I've had a go at the agenda [1] trying to keep it to max 4 or 5 > sessions per day with some breaks. > > Please review the slot assigned for your session at [1]. If that time > is not ok then please let me know as soon as possible and indicate if > you want it later or earlier or on any other day. On Monday I see: 1. STORAGE: 1430-1510 (ceph) 2. DF: 1510-1550 (ephemeral heat) 3. DF/Networking: 1600-1700 (ports v2 "no heat") If Harald and James are OK with it, could it be changed to the following? A. DF: 1430-1510 (ephemeral heat) B. DF/Networking: 1510-1550 (ports v2 "no heat") C. STORAGE: 1600-1700 (ceph) I ask because a portion of C depends on B, so it would be helpful to have that context first. If the presenters have conflicts however, we don't need this change. Thanks, John > If you've decided > the session no longer makes sense then also please tell me and we can > move things around accordingly to finish earlier. > > I'd like to finalise the schedule by next Monday 12 April which is a > week before PTG. 
We can and likely will make changes after this date > but last minute changes are best avoided to allow folks to schedule > their PTG attendance across projects. > > Thanks everybody for your help! Looking forward to interesting > presentations and discussions as always > > regards, marios > > [1] https://etherpad.opendev.org/p/tripleo-ptg-xena > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Wed Apr 7 18:18:05 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 7 Apr 2021 14:18:05 -0400 Subject: [cinder] final reviews for RC-2 Message-ID: We have 3 patches that need review/revision/approval as soon as possible before we release RC-2 tomorrow (Thursday 8 April). All 3 are updates to the release notes: Release note for mTLS support cinder->glance - https://review.opendev.org/c/openstack/cinder/+/783964 Release note about the cgroups v1 situation - https://review.opendev.org/c/openstack/cinder/+/784179 Add known issue note about RBD encrypted volumes - https://review.opendev.org/c/openstack/cinder/+/785235 Please review and leave comments as soon as you can. From gouthampravi at gmail.com Wed Apr 7 18:38:02 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Wed, 7 Apr 2021 11:38:02 -0700 Subject: [manila] Propose Liron Kuchlani and Vida Haririan to manila-tempest-plugin-core Message-ID: Hello Zorillas, Vida's been our bug czar since the Ussuri release and she's conceptualized and executed our successful bug triage strategy. She has also painstakingly organized several documentation and code bug squash events and kept the pulse on multi-release efforts. She's taught me a lot about project management and you can see tangible results here, I suppose :) Liron's fixed a lot of test code bugs and covered some old and important test gaps over the past few releases. He's driving standardization of the tempest plugin and bringing in best practices from tempest, refstack and elsewhere into our testing. It's always a pleasure to work with Liron since he's happy to provide and welcome feedback. More recently, Liron and Vida have enabled us to work with the InteropWG and define refstack guidelines. They've also gotten us closer to members from the QA community who they work with more closely downstream. In short, they bring in different perspectives while also espousing the team's core values. So I'd like to propose their addition to the manila-tempest-plugin-core team. Please give me your +/- 1s for this proposal. Thanks, Goutham From viroel at gmail.com Wed Apr 7 18:41:01 2021 From: viroel at gmail.com (Douglas) Date: Wed, 7 Apr 2021 15:41:01 -0300 Subject: [manila] Propose Liron Kuchlani and Vida Haririan to manila-tempest-plugin-core In-Reply-To: References: Message-ID: A big +1 for both. Thank you for your contributions so far. On Wed, Apr 7, 2021 at 3:39 PM Goutham Pacha Ravi wrote: > Hello Zorillas, > > Vida's been our bug czar since the Ussuri release and she's > conceptualized and executed our successful bug triage strategy. She > has also painstakingly organized several documentation and code bug > squash events and kept the pulse on multi-release efforts. She's > taught me a lot about project management and you can see tangible > results here, I suppose :) > > Liron's fixed a lot of test code bugs and covered some old and > important test gaps over the past few releases. 
He's driving > standardization of the tempest plugin and bringing in best practices > from tempest, refstack and elsewhere into our testing. It's always a > pleasure to work with Liron since he's happy to provide and welcome > feedback. > > More recently, Liron and Vida have enabled us to work with the > InteropWG and define refstack guidelines. They've also gotten us > closer to members from the QA community who they work with more > closely downstream. In short, they bring in different perspectives > while also espousing the team's core values. So I'd like to propose > their addition to the manila-tempest-plugin-core team. > > Please give me your +/- 1s for this proposal. > > Thanks, > Goutham > > -- Douglas Salles Viroel -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Wed Apr 7 18:53:57 2021 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 7 Apr 2021 13:53:57 -0500 Subject: [ptg] Secure RBAC and Policy Xena PTG sessoins Message-ID: Hey all, Several projects are working through RBAC overhauls and naturally sessions are cropping up for the PTG. I tried bouncing around to various policy sessions during the Wallaby PTG, but I didn't plan things out very well. As a result, I missed sessions, had duplicate conversations with multiple groups, and ended up being more reactive than I'd like. To prevent that, Ghanshyam and I have condensed all the policy/RBAC sessions we know about in a single etherpad [0]. I know most projects are still firming up their schedules, but I've written down the session times that we know of and organized them chronologically. My hope is that this will help us group similar discussions and reach broader consensus on topics easier and quicker. For example, keystone and nova have a cross-project session on Thursday to discuss how nova should handle consuming system-scoped tokens for project-specific operations. This topic certainly isn't exclusive to nova. It'll impact just about every other service and approaching it consistently will be huge for end users and operators. Another good example of this would be the glance refactor to integrate system-scope support we're going to talk about on Wednesday (cinder and barbican are potentially facing very similar refactors). Each session in the etherpad [0] has topics, so if a topic sounds relevant to your service, please feel free to drop into those discussions. A rough outline is that: - Monday we're going to focus on QA and general policy problems (e.g., converting tempest to use system-scope, the JSON->YAML community goal, overall status from Wallaby, etc) - Tuesday we're going to find ways to adopt system-scope in cinder - Wednesday we're going to work through system-scope adoption, the meta definitions API, and test coverage in glance - Thursday we're going to discuss what the experience should be like for operators using system-scoped tokens to do project-specific operations with nova (e.g., rebooting instances) I'm contemplating hosting a 30 minute recap session on Friday that attempts to summarize everything from the week regarding RBAC discussions. If that sounds useful, I'll ask Kristi if I can use one of the keystone sessions for that recap. I know, this feels like a lot of focus for one thing and I appreciate everyone's help working through this stuff. But, I'm hopeful that better organization throughout the PTG week will result in less confusion about what we plan to do in Xena with RBAC so we can deliver something useful to users and operators. 
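For anyone dropping into these sessions without the Wallaby context, the common thread is deciding which APIs should accept system-scoped tokens versus project-scoped ones. A quick way to see the difference from the CLI, purely as an illustration (the credentials are whatever admin account your environment already has):

  # a system-scoped token, for deployment-level operations
  openstack --os-system-scope all token issue

  # a project-scoped token, the style everything uses today
  openstack --os-project-name admin token issue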
Thanks, Lance [0] https://etherpad.opendev.org/p/policy-popup-xena-ptg -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Wed Apr 7 18:59:19 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 7 Apr 2021 11:59:19 -0700 Subject: [ptg] Secure RBAC and Policy Xena PTG sessoins In-Reply-To: References: Message-ID: I think a 30 minute re-cap session would be good on Friday because not everyone is going to be able to attend every session, depending on their own resulting schedule and commitments. -Julia On Wed, Apr 7, 2021 at 11:56 AM Lance Bragstad wrote: > > Hey all, > > Several projects are working through RBAC overhauls and naturally sessions are cropping up for the PTG. > > I tried bouncing around to various policy sessions during the Wallaby PTG, but I didn't plan things out very well. As a result, I missed sessions, had duplicate conversations with multiple groups, and ended up being more reactive than I'd like. > > To prevent that, Ghanshyam and I have condensed all the policy/RBAC sessions we know about in a single etherpad [0]. > > I know most projects are still firming up their schedules, but I've written down the session times that we know of and organized them chronologically. My hope is that this will help us group similar discussions and reach broader consensus on topics easier and quicker. > > For example, keystone and nova have a cross-project session on Thursday to discuss how nova should handle consuming system-scoped tokens for project-specific operations. This topic certainly isn't exclusive to nova. It'll impact just about every other service and approaching it consistently will be huge for end users and operators. Another good example of this would be the glance refactor to integrate system-scope support we're going to talk about on Wednesday (cinder and barbican are potentially facing very similar refactors). Each session in the etherpad [0] has topics, so if a topic sounds relevant to your service, please feel free to drop into those discussions. > > A rough outline is that: > > - Monday we're going to focus on QA and general policy problems (e.g., converting tempest to use system-scope, the JSON->YAML community goal, overall status from Wallaby, etc) > - Tuesday we're going to find ways to adopt system-scope in cinder > - Wednesday we're going to work through system-scope adoption, the meta definitions API, and test coverage in glance > - Thursday we're going to discuss what the experience should be like for operators using system-scoped tokens to do project-specific operations with nova (e.g., rebooting instances) > > I'm contemplating hosting a 30 minute recap session on Friday that attempts to summarize everything from the week regarding RBAC discussions. If that sounds useful, I'll ask Kristi if I can use one of the keystone sessions for that recap. > > I know, this feels like a lot of focus for one thing and I appreciate everyone's help working through this stuff. But, I'm hopeful that better organization throughout the PTG week will result in less confusion about what we plan to do in Xena with RBAC so we can deliver something useful to users and operators. 
> > Thanks, > > Lance > > [0] https://etherpad.opendev.org/p/policy-popup-xena-ptg From ces.eduardo98 at gmail.com Wed Apr 7 19:04:00 2021 From: ces.eduardo98 at gmail.com (Carlos Silva) Date: Wed, 7 Apr 2021 16:04:00 -0300 Subject: [manila] Propose Liron Kuchlani and Vida Haririan to manila-tempest-plugin-core In-Reply-To: References: Message-ID: Big +1! Thank you, Liron and Vida! :) Em qua., 7 de abr. de 2021 às 15:40, Goutham Pacha Ravi < gouthampravi at gmail.com> escreveu: > Hello Zorillas, > > Vida's been our bug czar since the Ussuri release and she's > conceptualized and executed our successful bug triage strategy. She > has also painstakingly organized several documentation and code bug > squash events and kept the pulse on multi-release efforts. She's > taught me a lot about project management and you can see tangible > results here, I suppose :) > > Liron's fixed a lot of test code bugs and covered some old and > important test gaps over the past few releases. He's driving > standardization of the tempest plugin and bringing in best practices > from tempest, refstack and elsewhere into our testing. It's always a > pleasure to work with Liron since he's happy to provide and welcome > feedback. > > More recently, Liron and Vida have enabled us to work with the > InteropWG and define refstack guidelines. They've also gotten us > closer to members from the QA community who they work with more > closely downstream. In short, they bring in different perspectives > while also espousing the team's core values. So I'd like to propose > their addition to the manila-tempest-plugin-core team. > > Please give me your +/- 1s for this proposal. > > Thanks, > Goutham > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Wed Apr 7 19:25:00 2021 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 7 Apr 2021 14:25:00 -0500 Subject: [ptg] Secure RBAC and Policy Xena PTG sessoins In-Reply-To: References: Message-ID: On Wed, Apr 7, 2021 at 1:59 PM Julia Kreger wrote: > I think a 30 minute re-cap session would be good on Friday because not > everyone is going to be able to attend every session, depending on > their own resulting schedule and commitments. > +1 Tentatively added to the keystone schedule for Friday. I'll see what Kristi thinks. > > -Julia > > On Wed, Apr 7, 2021 at 11:56 AM Lance Bragstad > wrote: > > > > Hey all, > > > > Several projects are working through RBAC overhauls and naturally > sessions are cropping up for the PTG. > > > > I tried bouncing around to various policy sessions during the Wallaby > PTG, but I didn't plan things out very well. As a result, I missed > sessions, had duplicate conversations with multiple groups, and ended up > being more reactive than I'd like. > > > > To prevent that, Ghanshyam and I have condensed all the policy/RBAC > sessions we know about in a single etherpad [0]. > > > > I know most projects are still firming up their schedules, but I've > written down the session times that we know of and organized them > chronologically. My hope is that this will help us group similar > discussions and reach broader consensus on topics easier and quicker. > > > > For example, keystone and nova have a cross-project session on Thursday > to discuss how nova should handle consuming system-scoped tokens for > project-specific operations. This topic certainly isn't exclusive to nova. > It'll impact just about every other service and approaching it consistently > will be huge for end users and operators. 
Another good example of this > would be the glance refactor to integrate system-scope support we're going > to talk about on Wednesday (cinder and barbican are potentially facing very > similar refactors). Each session in the etherpad [0] has topics, so if a > topic sounds relevant to your service, please feel free to drop into those > discussions. > > > > A rough outline is that: > > > > - Monday we're going to focus on QA and general policy problems (e.g., > converting tempest to use system-scope, the JSON->YAML community goal, > overall status from Wallaby, etc) > > - Tuesday we're going to find ways to adopt system-scope in cinder > > - Wednesday we're going to work through system-scope adoption, the meta > definitions API, and test coverage in glance > > - Thursday we're going to discuss what the experience should be like for > operators using system-scoped tokens to do project-specific operations with > nova (e.g., rebooting instances) > > > > I'm contemplating hosting a 30 minute recap session on Friday that > attempts to summarize everything from the week regarding RBAC discussions. > If that sounds useful, I'll ask Kristi if I can use one of the keystone > sessions for that recap. > > > > I know, this feels like a lot of focus for one thing and I appreciate > everyone's help working through this stuff. But, I'm hopeful that better > organization throughout the PTG week will result in less confusion about > what we plan to do in Xena with RBAC so we can deliver something useful to > users and operators. > > > > Thanks, > > > > Lance > > > > [0] https://etherpad.opendev.org/p/policy-popup-xena-ptg > -------------- next part -------------- An HTML attachment was scrubbed... URL: From destienne.maxime at gmail.com Wed Apr 7 11:35:07 2021 From: destienne.maxime at gmail.com (Maxime d'Estienne) Date: Wed, 7 Apr 2021 13:35:07 +0200 Subject: [neutron][nova] Port binding fails when creating an instance In-Reply-To: <3513595.c0HGFkD9VC@p1> References: <3930281.aCZO8KT43X@p1> <3513595.c0HGFkD9VC@p1> Message-ID: Hi ! Here is the log file. First error occurs at line 117. Thank you ! Le mer. 7 avr. 2021 à 12:04, Slawek Kaplonski a écrit : > Hi, > > Can You send me full neutron-server log? I will check if there is anything > more there. > > Dnia środa, 7 kwietnia 2021 11:48:27 CEST Maxime d'Estienne pisze: > > As Slawek Kaplonski told me, I enabled neutron debugging and I didn't > find > > why specific mechanism drivers are refusing to bind ports > > on that host. > > > > I noticed that the VM can get an IP from DHCP, I see a link on the web > > interface (network topology) between my physical network "provider" and > the > > VM. But this link disappeared when the VM crashed due to the error. > > > > Here are the previous DEBUG logs, just before the ERROR one. > > > > I don't succeed in getting more informed by these logs. > > (/neutron/server.log) > > > > Thank you a lot for your time ! 
> > Maxime > > > > `2021-04-07 10:10:30.294 25623 DEBUG > > > neutron.pecan_wsgi.hooks.policy_enforcement > > > [req-a995e8eb-fde4-49be-b822-29f7e98b56d4 > 9c53e456ca2d4d07a4aecbf91c487cae > > > d26b6143299a47e3a77feff04ae8b7a1 - default default] Attributes > excluded by > > > policy engine: ['binding:profile', 'binding:host_id', > 'binding:vif_type', > > > 'binding:vif_details'] _exclude_attributes_by_policy > > > > /usr/lib/python3/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256 > > > > > > 2021-04-07 10:10:30.995 25626 DEBUG > > > neutron.pecan_wsgi.hooks.policy_enforcement > > > [req-706ad36e-31a1-4e5a-b9f6-17951ccb089a > 9c53e456ca2d4d07a4aecbf91c487cae > > > d26b6143299a47e3a77feff04ae8b7a1 - default default] Attributes > excluded by > > > policy engine: ['binding:profile', 'binding:host_id', > 'binding:vif_type', > > > 'binding:vif_details'] _exclude_attributes_by_policy > > > > /usr/lib/python3/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256 > > > > > > 2021-04-07 10:10:31.105 25626 DEBUG > > > neutron.pecan_wsgi.hooks.policy_enforcement > > > [req-446ed89e-0697-4822-b69b-49b02ad9732d > 9c53e456ca2d4d07a4aecbf91c487cae > > > d26b6143299a47e3a77feff04ae8b7a1 - default default] Attributes > excluded by > > > policy engine: ['binding:profile', 'binding:host_id', > 'binding:vif_type', > > > 'binding:vif_details'] _exclude_attributes_by_policy > > > > /usr/lib/python3/dist-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py:256 > > > > > > 2021-04-07 10:10:31.328 25623 DEBUG neutron.api.v2.base > > > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a > b21b8901642c470b8f668965997c7922 > > > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Request body: > {'port': > > > {'device_id': '6406a1b1-7f0b-4f8e-88dd-81dcded8299d', 'device_owner': > > > 'compute:nova', 'binding:host_id': 'compute1'}} prepare_request_body > > > /usr/lib/python3/dist-packages/neutron/api/v2/base.py:716 > > > > > > 2021-04-07 10:10:31.980 25623 DEBUG neutron.plugins.ml2.managers > > > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a > b21b8901642c470b8f668965997c7922 > > > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Attempting to bind > port > > > 2e702d95-34df-4b85-9206-75c28bbcb9da on host compute1 for vnic_type > normal > > > with profile bind_port > > > /usr/lib/python3/dist-packages/neutron/plugins/ml2/managers.py:747 > > > > > > 2021-04-07 10:10:31.981 25623 DEBUG neutron.plugins.ml2.managers > > > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a > b21b8901642c470b8f668965997c7922 > > > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Attempting to bind > port > > > 2e702d95-34df-4b85-9206-75c28bbcb9da on host compute1 at level 0 using > > > segments [{'id': 'a35c88a5-2234-4b2f-bab6-a5a17af42d1e', > 'network_type': > > > 'flat', 'physical_network': 'provider', 'segmentation_id': None, > > > 'network_id': '45320836-f1a3-4e96-a3d8-59b95d633d1e'}] _bind_port_level > > > /usr/lib/python3/dist-packages/neutron/plugins/ml2/managers.py:768 > > > > > > 2021-04-07 10:10:31.981 25623 ERROR neutron.plugins.ml2.managers > > > [req-05e6a5c3-f5b4-45f4-8339-fbfa2a7eed8a > b21b8901642c470b8f668965997c7922 > > > 0f23d567d2ce4599a1571d8fd5982f9a - default default] Failed to bind port > > > 2e702d95-34df-4b85-9206-75c28bbcb9da on host compute1 for vnic_type > normal > > > using segments [{'id': 'a35c88a5-2234-4b2f-bab6-a5a17af42d1e', > > > 'network_type': 'flat', 'physical_network': 'provider', > 'segmentation_id': > > > None, 'network_id': '45320836-f1a3-4e96-a3d8-59b95d633d1e'}] ` > > > > > > > > 
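As a sanity check on the agent side, following Slawek's earlier hint about bridge mappings (quoted below), I am now comparing the physical_network "provider" from the error above against the L2 agent configuration on compute1. Assuming the Open vSwitch agent, my understanding is it needs something along these lines in openvswitch_agent.ini (the bridge name br-provider is only an example from my side, and with the Linux bridge agent the equivalent option is physical_interface_mappings):

[ovs]
bridge_mappings = provider:br-provider

And the agent itself should be reported as alive in:

openstack network agent list --host compute1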
> > Le jeu. 1 avr. 2021 à 21:36, Slawek Kaplonski a écrit :
> > > Hi,
> > >
> > > Dnia czwartek, 1 kwietnia 2021 14:44:21 CEST Maxime d'Estienne pisze:
> > > > Hello,
> > > >
> > > > I spent a lot of time troubleshooting my issue, which I described here :
> > > >
> > > > https://serverfault.com/questions/1058969/cannot-create-an-instance-due-to-failed-binding
> > > >
> > > > To summarize, when I want to create an instance, binding fails on compute
> > > > node, the dhcp agent seems to give an ip to the VM but I have an error.
> > >
> > > What do You mean exactly? Failed binding of the port in Neutron? In such case
> > > nova will not boot vm so it can't get IP from DHCP.
> > >
> > > > I don't know where to dig, besides what I have done.
> > >
> > > Please enable debug logs in neutron-server and look in its logs for the reason
> > > why it failed to bind port on specific host.
> > > Usually reason is dead L2 agent on host or mismatch in the agent's bridge
> > > mappings configuration in the agent.
> > >
> > > > Thanks a lot for your help !
> > >
> > > --
> > > Slawek Kaplonski
> > > Principal Software Engineer
> > > Red Hat

> --
> Slawek Kaplonski
> Principal Software Engineer
> Red Hat
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: server.log
Type: application/octet-stream
Size: 454640 bytes
Desc: not available
URL:

From eng.taha1928 at gmail.com Wed Apr 7 17:17:10 2021
From: eng.taha1928 at gmail.com (Taha Adel)
Date: Wed, 7 Apr 2021 19:17:10 +0200
Subject: [Keystone] Managing keystone tokens in high availability environment
Message-ID:

Hello Engineers and Developers,

I'm currently deploying a three-node OpenStack controller cluster: controller-01, controller-02, and controller-03. I have installed the keystone service on the three controllers, generated fernet keys on one node, and distributed the keys to the other nodes of the cluster. I have then configured an HAProxy in front of them that distributes the incoming requests over them.

The issue is: when I try to access the keystone endpoint using the VIP of the load balancer, the service works ONLY on the node that I generated the keys on, and it doesn't work on the nodes that got the keys by distribution. The error message I get is "INTERNAL SERVER ERROR (500)".

In other words, the node that had the keystone-manage fernet_setup command run on it can serve requests properly, but the others can't.

Is the way I'm replicating the keys incorrect? Is there any other way?

Thanks in advance
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From helena at openstack.org Wed Apr 7 18:37:03 2021
From: helena at openstack.org (helena at openstack.org)
Date: Wed, 7 Apr 2021 14:37:03 -0400 (EDT)
Subject: [ptl] Wallaby Release Community Meeting
Message-ID: <1617820623.770226846@apps.rackspace.com>

An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Project Updates Template.pptx Type: application/octet-stream Size: 791921 bytes Desc: not available URL: From tpb at dyncloud.net Wed Apr 7 19:44:36 2021 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 7 Apr 2021 15:44:36 -0400 Subject: [manila] Propose Liron Kuchlani and Vida Haririan to manila-tempest-plugin-core In-Reply-To: References: Message-ID: <20210407194436.vbtmfwts3r7ighh3@barron.net> Ditto, including the big thanks. On 07/04/21 16:04 -0300, Carlos Silva wrote: >Big +1! > >Thank you, Liron and Vida! :) > >Em qua., 7 de abr. de 2021 às 15:40, Goutham Pacha Ravi < >gouthampravi at gmail.com> escreveu: > >> Hello Zorillas, >> >> Vida's been our bug czar since the Ussuri release and she's >> conceptualized and executed our successful bug triage strategy. She >> has also painstakingly organized several documentation and code bug >> squash events and kept the pulse on multi-release efforts. She's >> taught me a lot about project management and you can see tangible >> results here, I suppose :) >> >> Liron's fixed a lot of test code bugs and covered some old and >> important test gaps over the past few releases. He's driving >> standardization of the tempest plugin and bringing in best practices >> from tempest, refstack and elsewhere into our testing. It's always a >> pleasure to work with Liron since he's happy to provide and welcome >> feedback. >> >> More recently, Liron and Vida have enabled us to work with the >> InteropWG and define refstack guidelines. They've also gotten us >> closer to members from the QA community who they work with more >> closely downstream. In short, they bring in different perspectives >> while also espousing the team's core values. So I'd like to propose >> their addition to the manila-tempest-plugin-core team. >> >> Please give me your +/- 1s for this proposal. >> >> Thanks, >> Goutham >> >> From rlandy at redhat.com Wed Apr 7 19:48:19 2021 From: rlandy at redhat.com (Ronelle Landy) Date: Wed, 7 Apr 2021 15:48:19 -0400 Subject: [tripleo][DIB] Second core review/w+ requested on DIB patch to unblock tripleo promotions Message-ID: Hi diskimage-builder cores, OVB jobs have been failing on centos-8 based releases promotion jobs since the move to container-tools:3.0. https://review.opendev.org/c/openstack/diskimage-builder/+/785138 - "Make DIB_DNF_MODULE_STREAMS part of yum element" and https://review.opendev.org/c/openstack/tripleo-ci/+/785087 - "Use dib_dnf_module_streams for enabling modules" patches were added to address the failures. These patches were tested in https://review.rdoproject.org/r/c/testproject/+/33138 - see passing OVB job https://review.rdoproject.org/zuul/build/6582fb8afa2a44c6a806bb2545a9fadf. Thanks to Carlos for reviewing the DIB patch. We are looking for a second core review and w+ on this patch to clear the promotion lines. Thank you, tripleo CI ruck/rovers -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Wed Apr 7 19:50:45 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 7 Apr 2021 12:50:45 -0700 Subject: [all] vPTG April 2021 Registration & Schedule Message-ID: Hello everyone! The April 2021 Project Teams Gathering is right around the corner! The official schedule has now been posted on the PTG website [1], the PTGbot will be up to date by the end of the week [2], and we have also attached it to this email. Please double check your rooms! We did a little consolidation and shifting while maintaining the times you signed up for. 
Friendly reminder, if you have not already registered, please do so [3]. It is important that we get everyone to register for the event as this is how we will contact you about tooling information/passwords and other event details. Please let us know if you have any questions! Cheers, - The Kendalls (diablo_rojo & wendallkaters) [1] PTG Website www.openstack.org/ptg [2] PTGbot: http://ptg.openstack.org/ptg.html [3] PTG Registration: https://april2021-ptg.eventbrite.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Wed Apr 7 20:11:00 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 07 Apr 2021 22:11:00 +0200 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: <1617820623.770226846@apps.rackspace.com> References: <1617820623.770226846@apps.rackspace.com> Message-ID: <6282510.b6OWOPelg3@p1> Hi, Dnia środa, 7 kwietnia 2021 20:37:03 CEST helena at openstack.org pisze: > Hello ptls, > > > > The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. Thx for doing that. > > > > If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. I will do the update for Neutron. > > > > Let me know if you have any other questions! > > > > Thank you for your participation, > > Helena > > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From luke.camilleri at zylacomputing.com Wed Apr 7 20:39:37 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Wed, 7 Apr 2021 22:39:37 +0200 Subject: [victoria][magnum]fedora-atomic-27 image In-Reply-To: References: Message-ID: Hello Ammad and thanks for your assistance. I followed the guide and it has all the details and steps except for one thing, the ssh key is not being passed over to the instance, if I deploy an instance from that image and pass the ssh key it works fine but if I use the image as part of the HOT it lists the key as "-" Did you have this issue by any chance? Never thought I would be asking this question as it is a basic thing but I find it very strange that this is not working. I tried to pass the ssh key in either the template or in the cluster creation command but for both options the Key Name metadata option for the instance remains "None" when the instance is deployed. 
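For reference, this is the shape of what I am running (the keypair, template and cluster names below are from my own environment, so treat them as placeholders):

openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
openstack coe cluster create k8s-test \
    --cluster-template k8s-template \
    --master-count 1 \
    --node-count 1 \
    --keypair mykey

Either way, the resulting master instance still shows its Key Name as "None".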
I then went on and checked the YAML file that the resource uses to load the parameters, /usr/lib/python3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubemaster.yaml, which has the below configuration:

kube-master:
    type: OS::Nova::Server
    condition: image_based
    properties:
      name: {get_param: name}
      image: {get_param: server_image}
      flavor: {get_param: master_flavor}
MISSING ----->   key_name: {get_param: ssh_key_name}
      user_data_format: SOFTWARE_CONFIG
      software_config_transport: POLL_SERVER_HEAT
      user_data: {get_resource: agent_config}
      networks:
        - port: {get_resource: kube_master_eth0}
      scheduler_hints: { group: { get_param: nodes_server_group_id }}
      availability_zone: {get_param: availability_zone}

kube-master-bfv:
    type: OS::Nova::Server
    condition: volume_based
    properties:
      name: {get_param: name}
      flavor: {get_param: master_flavor}
MISSING ----->   key_name: {get_param: ssh_key_name}
      user_data_format: SOFTWARE_CONFIG
      software_config_transport: POLL_SERVER_HEAT
      user_data: {get_resource: agent_config}
      networks:
        - port: {get_resource: kube_master_eth0}
      scheduler_hints: { group: { get_param: nodes_server_group_id }}
      availability_zone: {get_param: availability_zone}
      block_device_mapping_v2:
        - boot_index: 0
          volume_id: {get_resource: kube_node_volume}

If I add the lines which show as missing, then everything works well and the key is actually injected in the kubemaster. Did anyone have this issue?

On 07/04/2021 10:24, Ammad Syed wrote:
> Hi Luke,
>
> You may refer to below guide for magnum installation and its template
>
> https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=10
>
> It worked pretty well for me.
>
> - Ammad
> On Wed, Apr 7, 2021 at 5:02 AM Luke Camilleri wrote:
>
> > Thanks for your quick reply. Do you have a download link for that
> > image as I cannot find an archive for the 32 release?
> >
> > As for the image upload into openstack you still use the
> > fedora-atomic property right to be available for coe deployments?
> >
> > On 07/04/2021 00:03, feilong wrote:
>> Hi Luke,
>>
>> The Fedora Atomic driver has been deprecated a while since the
>> Fedora Atomic has been deprecated by upstream. For now, I would
>> suggest using Fedora CoreOS 32.20201104.3.0
>>
>> The latest version of Fedora CoreOS is 33.xxx, but there are
>> something when booting based my testing, see
>> https://github.com/coreos/fedora-coreos-tracker/issues/735
>>
>> Please feel free to let me know if you have any question about
>> using Magnum. We're using stable/victoria on our public cloud and
>> it works very well. I can share our public templates if you want.
>> Cheers.
>>
>> On 7/04/21 9:51 am, Luke Camilleri wrote:
>>>
>>> We have insatlled magnum following the installation guide here
>>> https://docs.openstack.org/magnum/victoria/install/install-rdo.html
>>> and the process was quite smooth but we have been having some
>>> issues with the deployment of the clusters.
>>> >>> The image being used as per the documentation is >>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64 >>> >>> >>> Our first issue was that podman was being used even if we >>> specified the use_podman=false (since the image above did not >>> include podman) but this was resulting in a timeout and the >>> cluster would fail to deploy. We have then installed podman in >>> the image and the cluster progressed a bit further >>> >>> /+ echo 'WARNING Attempt 60: Trying to install kubectl. Sleeping >>> 5s'// >>> //+ sleep 5s// >>> //+ ssh -F /srv/magnum/.ssh/config root at localhost >>> '/usr/bin/podman run --entrypoint /bin/bash     --name >>> install-kubectl     --net host --privileged     --rm     --user >>> root --volume /srv/magnum/bin:/host/srv/magnum/bin >>> k8s.gcr.io/hyperkube:v1.15.7 >>> -c '\''cp >>> /usr/local/bin/kubectl /host/srv/magnum/bin/kubectl'\'''// >>> //bash: /usr/bin/podman: No such file or directory// >>> //ERROR Unable to install kubectl. Abort.// >>> //+ i=61// >>> //+ '[' 61 -gt 60 ']'// >>> //+ echo 'ERROR Unable to install kubectl. Abort.'// >>> //+ exit 1/ >>> >>> The cluster is now failing here at "kube_cluster_deploy" and >>> when checking the logs on the master node we noticed the >>> following in the log files: >>> >>> /Starting to run kube-apiserver-to-kubelet-role// >>> //Waiting for Kubernetes API...// >>> //+ echo 'Waiting for Kubernetes API...'// >>> //++ curl --silent http://127.0.0.1:8080/healthz >>> // >>> //+ '[' ok = '' ']'// >>> //+ sleep 5/ >>> >>> This is because the kubernetes API server is not installed >>> either. I have noticed some scripts that should handle the >>> installation but I would like to know if anyone here has had >>> similar issues with a clean Victoria installation. >>> >>> Also should we have to install any packages in the fedora atomic >>> image file or should the installation requirements be part of >>> the stack? >>> >>> Thanks in advance for any asistance >>> >> -- >> Cheers & Best regards, >> Feilong Wang (王飞龙) >> ------------------------------------------------------ >> Senior Cloud Software Engineer >> Tel: +64-48032246 >> Email:flwang at catalyst.net.nz >> Catalyst IT Limited >> Level 6, Catalyst House,150 Willis Street, Wellington >> ------------------------------------------------------ > > -- > Regards, > > > Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From bharat at stackhpc.com Wed Apr 7 20:54:05 2021 From: bharat at stackhpc.com (Bharat Kunwar) Date: Wed, 7 Apr 2021 21:54:05 +0100 Subject: [victoria][magnum]fedora-atomic-27 image In-Reply-To: References: Message-ID: <4A94086F-F79A-4EC4-8E3F-A6AE8EDF4C16@stackhpc.com> The ssh key gets injected via ignition which is why it’s not present in the HOT template. You need minimum train release of Heat for this to work however. Sent from my iPhone > On 7 Apr 2021, at 21:45, Luke Camilleri wrote: > >  > Hello Ammad and thanks for your assistance. I followed the guide and it has all the details and steps except for one thing, the ssh key is not being passed over to the instance, if I deploy an instance from that image and pass the ssh key it works fine but if I use the image as part of the HOT it lists the key as "-" > > Did you have this issue by any chance? Never thought I would be asking this question as it is a basic thing but I find it very strange that this is not working. 
I tried to pass the ssh key in either the template or in the cluster creation command but for both options the Key Name metadata option for the instance remains "None" when the instance is deployed. > > I then went on and checked the yaml file the resource uses that loads/gets the parameters /usr/lib/python3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubemaster.yaml has the below yaml configurations: > > kube-master: > type: OS::Nova::Server > condition: image_based > properties: > name: {get_param: name} > image: {get_param: server_image} > flavor: {get_param: master_flavor} > MISSING -----> key_name: {get_param: ssh_key_name} > user_data_format: SOFTWARE_CONFIG > software_config_transport: POLL_SERVER_HEAT > user_data: {get_resource: agent_config} > networks: > - port: {get_resource: kube_master_eth0} > scheduler_hints: { group: { get_param: nodes_server_group_id }} > availability_zone: {get_param: availability_zone} > > kube-master-bfv: > type: OS::Nova::Server > condition: volume_based > properties: > name: {get_param: name} > flavor: {get_param: master_flavor} > MISSING -----> key_name: {get_param: ssh_key_name} > user_data_format: SOFTWARE_CONFIG > software_config_transport: POLL_SERVER_HEAT > user_data: {get_resource: agent_config} > networks: > - port: {get_resource: kube_master_eth0} > scheduler_hints: { group: { get_param: nodes_server_group_id }} > availability_zone: {get_param: availability_zone} > block_device_mapping_v2: > - boot_index: 0 > volume_id: {get_resource: kube_node_volume} > > If i add the lines which show as missing, then everything works well and the key is actually injected in the kubemaster. Did anyone had this issue? > >> On 07/04/2021 10:24, Ammad Syed wrote: >> Hi Luke, >> >> You may refer to below guide for magnum installation and its template >> >> https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=10 >> >> It worked pretty well for me. >> >> - Ammad >> On Wed, Apr 7, 2021 at 5:02 AM Luke Camilleri wrote: >>> Thanks for your quick reply. Do you have a download link for that image as I cannot find an archive for the 32 release? >>> >>> As for the image upload into openstack you still use the fedora-atomic property right to be available for coe deployments? >>> >>>> On 07/04/2021 00:03, feilong wrote: >>>> Hi Luke, >>>> >>>> The Fedora Atomic driver has been deprecated a while since the Fedora Atomic has been deprecated by upstream. For now, I would suggest using Fedora CoreOS 32.20201104.3.0 >>>> >>>> The latest version of Fedora CoreOS is 33.xxx, but there are something when booting based my testing, see https://github.com/coreos/fedora-coreos-tracker/issues/735 >>>> >>>> Please feel free to let me know if you have any question about using Magnum. We're using stable/victoria on our public cloud and it works very well. I can share our public templates if you want. Cheers. >>>> >>>> >>>> >>>> >>>> >>>>> On 7/04/21 9:51 am, Luke Camilleri wrote: >>>>> We have insatlled magnum following the installation guide here https://docs.openstack.org/magnum/victoria/install/install-rdo.html and the process was quite smooth but we have been having some issues with the deployment of the clusters. 
>>>>> The image being used as per the documentation is
>>>>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64
>>>>>
>>>>> Our first issue was that podman was being used even if we specified
>>>>> the use_podman=false (since the image above did not include podman)
>>>>> but this was resulting in a timeout and the cluster would fail to
>>>>> deploy. We have then installed podman in the image and the cluster
>>>>> progressed a bit further
>>>>>
>>>>> + echo 'WARNING Attempt 60: Trying to install kubectl. Sleeping 5s'
>>>>> + sleep 5s
>>>>> + ssh -F /srv/magnum/.ssh/config root at localhost '/usr/bin/podman run --entrypoint /bin/bash --name install-kubectl --net host --privileged --rm --user root --volume /srv/magnum/bin:/host/srv/magnum/bin k8s.gcr.io/hyperkube:v1.15.7 -c '\''cp /usr/local/bin/kubectl /host/srv/magnum/bin/kubectl'\'''
>>>>> bash: /usr/bin/podman: No such file or directory
>>>>> ERROR Unable to install kubectl. Abort.
>>>>> + i=61
>>>>> + '[' 61 -gt 60 ']'
>>>>> + echo 'ERROR Unable to install kubectl. Abort.'
>>>>> + exit 1
>>>>>
>>>>> The cluster is now failing here at "kube_cluster_deploy" and when
>>>>> checking the logs on the master node we noticed the following in
>>>>> the log files:
>>>>>
>>>>> Starting to run kube-apiserver-to-kubelet-role
>>>>> Waiting for Kubernetes API...
>>>>> + echo 'Waiting for Kubernetes API...'
>>>>> ++ curl --silent http://127.0.0.1:8080/healthz
>>>>> + '[' ok = '' ']'
>>>>> + sleep 5
>>>>>
>>>>> This is because the kubernetes API server is not installed either.
>>>>> I have noticed some scripts that should handle the installation but
>>>>> I would like to know if anyone here has had similar issues with a
>>>>> clean Victoria installation.
>>>>>
>>>>> Also should we have to install any packages in the fedora atomic
>>>>> image file or should the installation requirements be part of
>>>>> the stack?
>>>>>
>>>>> Thanks in advance for any asistance
>>>>>
>>>> --
>>>> Cheers & Best regards,
>>>> Feilong Wang (王飞龙)
>>>> ------------------------------------------------------
>>>> Senior Cloud Software Engineer
>>>> Tel: +64-48032246
>>>> Email: flwang at catalyst.net.nz
>>>> Catalyst IT Limited
>>>> Level 6, Catalyst House, 150 Willis Street, Wellington
>>>> ------------------------------------------------------
>> --
>> Regards,
>>
>> Syed Ammad Ali
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From luke.camilleri at zylacomputing.com Wed Apr 7 21:12:44 2021
From: luke.camilleri at zylacomputing.com (Luke Camilleri)
Date: Wed, 7 Apr 2021 23:12:44 +0200
Subject: [victoria][magnum]fedora-atomic-27 image
In-Reply-To: <4A94086F-F79A-4EC4-8E3F-A6AE8EDF4C16@stackhpc.com>
References: <4A94086F-F79A-4EC4-8E3F-A6AE8EDF4C16@stackhpc.com>
Message-ID:

Hi Bharat, I am on Victoria so that should satisfy the requirement:

# rpm -qa | grep -i heat
openstack-heat-api-cfn-15.0.0-1.el8.noarch
openstack-heat-api-15.0.0-1.el8.noarch
python3-heatclient-2.2.1-2.el8.noarch
openstack-heat-common-15.0.0-1.el8.noarch
openstack-heat-engine-15.0.0-1.el8.noarch
openstack-heat-ui-4.0.0-1.el8.noarch

So from what I can see, the stack's OS::Heat::SoftwareConfig step is the step that gets the data in, right?
agent_config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config:
        list_join:
          - "\n"
          -
            - str_replace:
                template: {get_file: user_data.json}
                params:
                  __HOSTNAME__: {get_param: name}
                  __SSH_KEY_VALUE__: {get_param: ssh_public_key}
                  __OPENSTACK_CA__: {get_param: openstack_ca}
                  __CONTAINER_INFRA_PREFIX__:

In the stack I can see that the step below, which corresponds to the agent_config above, has just been initialized:

kube_cluster_config    OS::Heat::SoftwareConfig    46 minutes    Init Complete

My questions here would be:

1- is this user_data.json file the user_data that actually reaches the instance?

2- at which step is this data applied to the instance? From the fedora docs ( https://docs.fedoraproject.org/en-US/fedora-coreos/producing-ign/#_ignition_overview ) this step seems to be at the initial stages of the boot process.

Thanks in advance for any assistance

On 07/04/2021 22:54, Bharat Kunwar wrote:
> The ssh key gets injected via ignition which is why it’s not present
> in the HOT template. You need minimum train release of Heat for this
> to work however.
>
> Sent from my iPhone
>
>> On 7 Apr 2021, at 21:45, Luke Camilleri wrote:
>>
>> Hello Ammad and thanks for your assistance. I followed the guide and
>> it has all the details and steps except for one thing, the ssh key is
>> not being passed over to the instance, if I deploy an instance from
>> that image and pass the ssh key it works fine but if I use the image
>> as part of the HOT it lists the key as "-"
>>
>> Did you have this issue by any chance? Never thought I would be
>> asking this question as it is a basic thing but I find it very
>> strange that this is not working. I tried to pass the ssh key in
>> either the template or in the cluster creation command but for both
>> options the Key Name metadata option for the instance remains "None"
>> when the instance is deployed.
>> >> I then went on and checked the yaml file the resource uses that >> loads/gets the parameters >> /usr/lib/python3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubemaster.yaml >> has the below yaml configurations: >> >> kube-master: >>     type: OS::Nova::Server >>     condition: image_based >>     properties: >>       name: {get_param: name} >>       image: {get_param: server_image} >>       flavor: {get_param: master_flavor} >>                                                 MISSING ----->   >> key_name: {get_param: ssh_key_name} >>       user_data_format: SOFTWARE_CONFIG >>       software_config_transport: POLL_SERVER_HEAT >>       user_data: {get_resource: agent_config} >>       networks: >>         - port: {get_resource: kube_master_eth0} >>       scheduler_hints: { group: { get_param: nodes_server_group_id }} >>       availability_zone: {get_param: availability_zone} >> >> kube-master-bfv: >>     type: OS::Nova::Server >>     condition: volume_based >>     properties: >>       name: {get_param: name} >>       flavor: {get_param: master_flavor} >>                                                 MISSING ----->   >> key_name: {get_param: ssh_key_name} >>       user_data_format: SOFTWARE_CONFIG >>       software_config_transport: POLL_SERVER_HEAT >>       user_data: {get_resource: agent_config} >>       networks: >>         - port: {get_resource: kube_master_eth0} >>       scheduler_hints: { group: { get_param: nodes_server_group_id }} >>       availability_zone: {get_param: availability_zone} >>       block_device_mapping_v2: >>         - boot_index: 0 >>           volume_id: {get_resource: kube_node_volume} >> >> If i add the lines which show as missing, then everything works well >> and the key is actually injected in the kubemaster. Did anyone had >> this issue? >> >> On 07/04/2021 10:24, Ammad Syed wrote: >>> Hi Luke, >>> >>> You may refer to below guide for magnum installation and its template >>> >>> https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=10 >>> >>> >>> It worked pretty well for me. >>> >>> - Ammad >>> On Wed, Apr 7, 2021 at 5:02 AM Luke Camilleri >>> >> > wrote: >>> >>> Thanks for your quick reply. Do you have a download link for >>> that image as I cannot find an archive for the 32 release? >>> >>> As for the image upload into openstack you still use the >>> fedora-atomic property right to be available for coe deployments? >>> >>> On 07/04/2021 00:03, feilong wrote: >>>> >>>> Hi Luke, >>>> >>>> The Fedora Atomic driver has been deprecated a while since the >>>> Fedora Atomic has been deprecated by upstream. For now, I would >>>> suggest using Fedora CoreOS 32.20201104.3.0 >>>> >>>> The latest version of Fedora CoreOS is 33.xxx, but there are >>>> something when booting based my testing, see >>>> https://github.com/coreos/fedora-coreos-tracker/issues/735 >>>> >>>> >>>> Please feel free to let me know if you have any question about >>>> using Magnum. We're using stable/victoria on our public cloud >>>> and it works very well. I can share our public templates if you >>>> want. Cheers. >>>> >>>> >>>> >>>> On 7/04/21 9:51 am, Luke Camilleri wrote: >>>>> >>>>> We have insatlled magnum following the installation guide here >>>>> https://docs.openstack.org/magnum/victoria/install/install-rdo.html >>>>> >>>>> and the process was quite smooth but we have been having some >>>>> issues with the deployment of the clusters. 
>>>>> >>>>> The image being used as per the documentation is >>>>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64 >>>>> >>>>> >>>>> Our first issue was that podman was being used even if we >>>>> specified the use_podman=false (since the image above did not >>>>> include podman) but this was resulting in a timeout and the >>>>> cluster would fail to deploy. We have then installed podman in >>>>> the image and the cluster progressed a bit further >>>>> >>>>> /+ echo 'WARNING Attempt 60: Trying to install kubectl. >>>>> Sleeping 5s'// >>>>> //+ sleep 5s// >>>>> //+ ssh -F /srv/magnum/.ssh/config root at localhost >>>>> '/usr/bin/podman run --entrypoint /bin/bash     --name >>>>> install-kubectl     --net host --privileged     --rm     >>>>> --user root --volume /srv/magnum/bin:/host/srv/magnum/bin >>>>> k8s.gcr.io/hyperkube:v1.15.7 >>>>> -c '\''cp >>>>> /usr/local/bin/kubectl /host/srv/magnum/bin/kubectl'\'''// >>>>> //bash: /usr/bin/podman: No such file or directory// >>>>> //ERROR Unable to install kubectl. Abort.// >>>>> //+ i=61// >>>>> //+ '[' 61 -gt 60 ']'// >>>>> //+ echo 'ERROR Unable to install kubectl. Abort.'// >>>>> //+ exit 1/ >>>>> >>>>> The cluster is now failing here at "kube_cluster_deploy" and >>>>> when checking the logs on the master node we noticed the >>>>> following in the log files: >>>>> >>>>> /Starting to run kube-apiserver-to-kubelet-role// >>>>> //Waiting for Kubernetes API...// >>>>> //+ echo 'Waiting for Kubernetes API...'// >>>>> //++ curl --silent http://127.0.0.1:8080/healthz >>>>> // >>>>> //+ '[' ok = '' ']'// >>>>> //+ sleep 5/ >>>>> >>>>> This is because the kubernetes API server is not installed >>>>> either. I have noticed some scripts that should handle the >>>>> installation but I would like to know if anyone here has had >>>>> similar issues with a clean Victoria installation. >>>>> >>>>> Also should we have to install any packages in the fedora >>>>> atomic image file or should the installation requirements be >>>>> part of the stack? >>>>> >>>>> Thanks in advance for any asistance >>>>> >>>> -- >>>> Cheers & Best regards, >>>> Feilong Wang (王飞龙) >>>> ------------------------------------------------------ >>>> Senior Cloud Software Engineer >>>> Tel: +64-48032246 >>>> Email:flwang at catalyst.net.nz >>>> Catalyst IT Limited >>>> Level 6, Catalyst House,150 Willis Street, Wellington >>>> ------------------------------------------------------ >>> >>> -- >>> Regards, >>> >>> >>> Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Wed Apr 7 21:42:58 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 7 Apr 2021 14:42:58 -0700 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: <1617820623.770226846@apps.rackspace.com> References: <1617820623.770226846@apps.rackspace.com> Message-ID: Related, Is there 2020 user survey data available? On Wed, Apr 7, 2021 at 12:40 PM helena at openstack.org wrote: > > Hello ptls, > > > > The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. > > > > If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. 
Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. > > > > Let me know if you have any other questions! > > > > Thank you for your participation, > > Helena > > From allison at openstack.org Wed Apr 7 22:15:09 2021 From: allison at openstack.org (Allison Price) Date: Wed, 7 Apr 2021 17:15:09 -0500 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: <1617820623.770226846@apps.rackspace.com> Message-ID: <71EDF897-DCCB-4063-81F7-88A8456F6F6B@openstack.org> Hi Julia, I see we haven’t pushed it live to openstack.org/analytics yet. I have pinged our team so that we can, but if you need metrics in the meantime, please let me know. Thanks! Allison > On Apr 7, 2021, at 4:42 PM, Julia Kreger wrote: > > Related, Is there 2020 user survey data available? > > On Wed, Apr 7, 2021 at 12:40 PM helena at openstack.org > wrote: >> >> Hello ptls, >> >> >> >> The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. >> >> >> >> If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. >> >> >> >> Let me know if you have any other questions! >> >> >> >> Thank you for your participation, >> >> Helena >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Apr 7 22:23:30 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 07 Apr 2021 17:23:30 -0500 Subject: [all][tc] Technical Committee next weekly meeting on April 8th at 1500 UTC. In-Reply-To: <178a3f8d599.cf94285387564.6978079671458448803@ghanshyammann.com> References: <178a3f8d599.cf94285387564.6978079671458448803@ghanshyammann.com> Message-ID: <178ae6ef65f.1107939f863588.6814515042321844496@ghanshyammann.com> Hello Everyone, Below is the agenda for tomorrow's TC meeting schedule on April 8th at 1500 UTC in #openstack-tc IRC channel. == Agenda for tomorrow's TC meeting == * Follow up on past action items * PTG ** https://etherpad.opendev.org/p/tc-xena-ptg * Gate performance and heavy job configs (dansmith) ** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/ * PTL assignment for Xena cycle leaderless projects (gmann) ** https://etherpad.opendev.org/p/xena-leaderless * Election for one Vacant TC seat (gmann) ** http://lists.openstack.org/pipermail/openstack-discuss/2021-March/021334.html * Community newsletter: "OpenStack project news" snippets ** https://etherpad.opendev.org/p/newsletter-openstack-news * Open Reviews ** https://review.opendev.org/q/project:openstack/governance+is:open -gmann ---- On Mon, 05 Apr 2021 16:38:17 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > Technical Committee's next weekly meeting is scheduled for April 8th at 1500 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, April 7th, at 2100 UTC. 
> > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > From gouthampravi at gmail.com Wed Apr 7 23:09:51 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Wed, 7 Apr 2021 16:09:51 -0700 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: <1617820623.770226846@apps.rackspace.com> References: <1617820623.770226846@apps.rackspace.com> Message-ID: On Wed, Apr 7, 2021 at 12:46 PM helena at openstack.org wrote: > > Hello ptls, > > > > The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. > > > > If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. > > > > Let me know if you have any other questions! Hi Helena, Thanks for the information. I'd like to sign up on behalf of the Manila project team. Is this a live presentation unlike last time where we pre-recorded ~10 minute updates? Thanks, Goutham > > > > Thank you for your participation, > > Helena > > From rosmaita.fossdev at gmail.com Thu Apr 8 00:20:37 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 7 Apr 2021 20:20:37 -0400 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: <1617820623.770226846@apps.rackspace.com> Message-ID: On 4/7/21 7:09 PM, Goutham Pacha Ravi wrote: > On Wed, Apr 7, 2021 at 12:46 PM helena at openstack.org > wrote: >> >> Hello ptls, >> >> >> >> The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. >> >> >> >> If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. >> >> >> >> Let me know if you have any other questions! > > Hi Helena, > > Thanks for the information. I'd like to sign up on behalf of the > Manila project team. > Is this a live presentation unlike last time where we pre-recorded ~10 > minute updates? > > Thanks, > Goutham I'd like to sign up on behalf of Cinder. Same questions as Goutham, though: will it be "live", and what are your expectations about length of presentation? cheers, brian > > >> >> >> >> Thank you for your participation, >> >> Helena >> >> > From bharat at stackhpc.com Thu Apr 8 06:05:16 2021 From: bharat at stackhpc.com (Bharat Kunwar) Date: Thu, 8 Apr 2021 07:05:16 +0100 Subject: [victoria][magnum]fedora-atomic-27 image In-Reply-To: References: Message-ID: Is your os_distro=fedora-coreos or fedora-atomic? Sent from my iPhone > On 7 Apr 2021, at 22:12, Luke Camilleri wrote: > >  > Hi Bharat, I am on Victoria so that should satisfy the requirement: > > # rpm -qa | grep -i heat > openstack-heat-api-cfn-15.0.0-1.el8.noarch > openstack-heat-api-15.0.0-1.el8.noarch > python3-heatclient-2.2.1-2.el8.noarch > openstack-heat-common-15.0.0-1.el8.noarch > openstack-heat-engine-15.0.0-1.el8.noarch > openstack-heat-ui-4.0.0-1.el8.noarch > > So from what I can see during the stack's step at OS::Heat::SoftwareConfig is the step that gets the data right? 
>
>   agent_config:
>     type: OS::Heat::SoftwareConfig
>     properties:
>       group: ungrouped
>       config:
>         list_join:
>           - "\n"
>           -
>             - str_replace:
>                 template: {get_file: user_data.json}
>                 params:
>                   __HOSTNAME__: {get_param: name}
>                   __SSH_KEY_VALUE__: {get_param: ssh_public_key}
>                   __OPENSTACK_CA__: {get_param: openstack_ca}
>                   __CONTAINER_INFRA_PREFIX__:
>
> In the stack I can see that the step below, which corresponds to the agent_config above, has just been initialized:
>
> kube_cluster_config
> OS::Heat::SoftwareConfig 46 minutes Init Complete
>
> My question here would be:
>
> 1- is the file the user_data
>
> 2- at which step is this data applied to the instance, as from the fedora docs ( https://docs.fedoraproject.org/en-US/fedora-coreos/producing-ign/#_ignition_overview ) this step seems to be at the initial stages of the boot process
>
> Thanks in advance for any assistance
>
> On 07/04/2021 22:54, Bharat Kunwar wrote:
>> The ssh key gets injected via ignition which is why it’s not present in the HOT template. You need at minimum the Train release of Heat for this to work, however.
>>
>> Sent from my iPhone
>>
>>> On 7 Apr 2021, at 21:45, Luke Camilleri wrote:
>>>
>>> Hello Ammad and thanks for your assistance. I followed the guide and it has all the details and steps except for one thing: the ssh key is not being passed over to the instance. If I deploy an instance from that image and pass the ssh key it works fine, but if I use the image as part of the HOT it lists the key as "-"
>>>
>>> Did you have this issue by any chance? Never thought I would be asking this question as it is a basic thing, but I find it very strange that this is not working. I tried to pass the ssh key in either the template or in the cluster creation command, but for both options the Key Name metadata option for the instance remains "None" when the instance is deployed.
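For context on question 2 above: the user_data.json that the str_replace templates is an Ignition document, and Ignition applies it in the initramfs on first boot, before the OS is fully up. A minimal sketch of the part that carries the key (assuming Ignition spec v3 and the default "core" user; the layout of Magnum's actual user_data.json may contain more than this):

    {
      "ignition": { "version": "3.0.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["__SSH_KEY_VALUE__"] }
        ]
      }
    }

The __SSH_KEY_VALUE__ placeholder is filled from the ssh_public_key parameter by the str_replace shown above, so the key is injected by Ignition rather than by Nova, which is why an empty key_name on the server resource is expected.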
>>>
>>> I then went on and checked the yaml file the resource uses that loads/gets the parameters. /usr/lib/python3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubemaster.yaml has the below yaml configurations:
>>>
>>> kube-master:
>>>     type: OS::Nova::Server
>>>     condition: image_based
>>>     properties:
>>>       name: {get_param: name}
>>>       image: {get_param: server_image}
>>>       flavor: {get_param: master_flavor}
>>> MISSING ----->   key_name: {get_param: ssh_key_name}
>>>       user_data_format: SOFTWARE_CONFIG
>>>       software_config_transport: POLL_SERVER_HEAT
>>>       user_data: {get_resource: agent_config}
>>>       networks:
>>>         - port: {get_resource: kube_master_eth0}
>>>       scheduler_hints: { group: { get_param: nodes_server_group_id }}
>>>       availability_zone: {get_param: availability_zone}
>>>
>>> kube-master-bfv:
>>>     type: OS::Nova::Server
>>>     condition: volume_based
>>>     properties:
>>>       name: {get_param: name}
>>>       flavor: {get_param: master_flavor}
>>> MISSING ----->   key_name: {get_param: ssh_key_name}
>>>       user_data_format: SOFTWARE_CONFIG
>>>       software_config_transport: POLL_SERVER_HEAT
>>>       user_data: {get_resource: agent_config}
>>>       networks:
>>>         - port: {get_resource: kube_master_eth0}
>>>       scheduler_hints: { group: { get_param: nodes_server_group_id }}
>>>       availability_zone: {get_param: availability_zone}
>>>       block_device_mapping_v2:
>>>         - boot_index: 0
>>>           volume_id: {get_resource: kube_node_volume}
>>>
>>> If I add the lines which show as missing, then everything works well and the key is actually injected in the kubemaster. Did anyone have this issue?
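For comparison, the keypair is normally supplied through Magnum itself rather than by patching the driver templates. A hedged sketch of the CLI flow (all names here are placeholders, not taken from the thread):

    openstack coe cluster template create k8s-coreos \
      --coe kubernetes --image fedora-coreos-32 \
      --external-network public --master-flavor m1.small --flavor m1.small

    openstack coe cluster create my-k8s \
      --cluster-template k8s-coreos --keypair mykey \
      --master-count 1 --node-count 1

With a fedora-coreos image the key is then delivered via Ignition at boot, so the Nova "Key Name" field staying "None" is expected and not by itself a failure.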
>>>
>>> On 07/04/2021 10:24, Ammad Syed wrote:
>>>> Hi Luke,
>>>>
>>>> You may refer to the below guide for magnum installation and its template
>>>>
>>>> https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=10
>>>>
>>>> It worked pretty well for me.
>>>>
>>>> - Ammad
>>>> On Wed, Apr 7, 2021 at 5:02 AM Luke Camilleri wrote:
>>>>> Thanks for your quick reply. Do you have a download link for that image as I cannot find an archive for the 32 release?
>>>>>
>>>>> As for the image upload into openstack, you still use the fedora-atomic property, right, to be available for coe deployments?
>>>>>
>>>>> On 07/04/2021 00:03, feilong wrote:
>>>>>> Hi Luke,
>>>>>>
>>>>>> The Fedora Atomic driver has been deprecated for a while since Fedora Atomic has been deprecated by upstream. For now, I would suggest using Fedora CoreOS 32.20201104.3.0
>>>>>>
>>>>>> The latest version of Fedora CoreOS is 33.xxx, but there are some issues when booting, based on my testing; see https://github.com/coreos/fedora-coreos-tracker/issues/735
>>>>>>
>>>>>> Please feel free to let me know if you have any question about using Magnum. We're using stable/victoria on our public cloud and it works very well. I can share our public templates if you want. Cheers.
>>>>>>
>>>>>> On 7/04/21 9:51 am, Luke Camilleri wrote:
>>>>>>> We have installed magnum following the installation guide here https://docs.openstack.org/magnum/victoria/install/install-rdo.html and the process was quite smooth, but we have been having some issues with the deployment of the clusters.
>>>>>>>
>>>>>>> The image being used as per the documentation is https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64
>>>>>>>
>>>>>>> Our first issue was that podman was being used even if we specified the use_podman=false (since the image above did not include podman) but this was resulting in a timeout and the cluster would fail to deploy. We then installed podman in the image and the cluster progressed a bit further
>>>>>>>
>>>>>>> + echo 'WARNING Attempt 60: Trying to install kubectl. Sleeping 5s'
>>>>>>> + sleep 5s
>>>>>>> + ssh -F /srv/magnum/.ssh/config root at localhost '/usr/bin/podman run --entrypoint /bin/bash --name install-kubectl --net host --privileged --rm --user root --volume /srv/magnum/bin:/host/srv/magnum/bin k8s.gcr.io/hyperkube:v1.15.7 -c '\''cp /usr/local/bin/kubectl /host/srv/magnum/bin/kubectl'\'''
>>>>>>> bash: /usr/bin/podman: No such file or directory
>>>>>>> ERROR Unable to install kubectl. Abort.
>>>>>>> + i=61
>>>>>>> + '[' 61 -gt 60 ']'
>>>>>>> + echo 'ERROR Unable to install kubectl. Abort.'
>>>>>>> + exit 1
>>>>>>>
>>>>>>> The cluster is now failing here at "kube_cluster_deploy" and when checking the logs on the master node we noticed the following in the log files:
>>>>>>>
>>>>>>> Starting to run kube-apiserver-to-kubelet-role
>>>>>>> Waiting for Kubernetes API...
>>>>>>> + echo 'Waiting for Kubernetes API...'
>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz
>>>>>>> + '[' ok = '' ']'
>>>>>>> + sleep 5
>>>>>>>
>>>>>>> This is because the kubernetes API server is not installed either. I have noticed some scripts that should handle the installation but I would like to know if anyone here has had similar issues with a clean Victoria installation.
>>>>>>>
>>>>>>> Also should we have to install any packages in the fedora atomic image file or should the installation requirements be part of the stack?
>>>> >>>> I then went on and checked the yaml file the resource uses that loads/gets the parameters /usr/lib/python3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubemaster.yaml has the below yaml configurations: >>>> >>>> kube-master: >>>> type: OS::Nova::Server >>>> condition: image_based >>>> properties: >>>> name: {get_param: name} >>>> image: {get_param: server_image} >>>> flavor: {get_param: master_flavor} >>>> MISSING -----> key_name: {get_param: ssh_key_name} >>>> user_data_format: SOFTWARE_CONFIG >>>> software_config_transport: POLL_SERVER_HEAT >>>> user_data: {get_resource: agent_config} >>>> networks: >>>> - port: {get_resource: kube_master_eth0} >>>> scheduler_hints: { group: { get_param: nodes_server_group_id }} >>>> availability_zone: {get_param: availability_zone} >>>> >>>> kube-master-bfv: >>>> type: OS::Nova::Server >>>> condition: volume_based >>>> properties: >>>> name: {get_param: name} >>>> flavor: {get_param: master_flavor} >>>> MISSING -----> key_name: {get_param: ssh_key_name} >>>> user_data_format: SOFTWARE_CONFIG >>>> software_config_transport: POLL_SERVER_HEAT >>>> user_data: {get_resource: agent_config} >>>> networks: >>>> - port: {get_resource: kube_master_eth0} >>>> scheduler_hints: { group: { get_param: nodes_server_group_id }} >>>> availability_zone: {get_param: availability_zone} >>>> block_device_mapping_v2: >>>> - boot_index: 0 >>>> volume_id: {get_resource: kube_node_volume} >>>> >>>> If i add the lines which show as missing, then everything works well and the key is actually injected in the kubemaster. Did anyone had this issue? >>>> >>>> On 07/04/2021 10:24, Ammad Syed wrote: >>>>> Hi Luke, >>>>> >>>>> You may refer to below guide for magnum installation and its template >>>>> >>>>> https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=10 >>>>> >>>>> It worked pretty well for me. >>>>> >>>>> - Ammad >>>>> On Wed, Apr 7, 2021 at 5:02 AM Luke Camilleri wrote: >>>>>> Thanks for your quick reply. Do you have a download link for that image as I cannot find an archive for the 32 release? >>>>>> >>>>>> As for the image upload into openstack you still use the fedora-atomic property right to be available for coe deployments? >>>>>> >>>>>>> On 07/04/2021 00:03, feilong wrote: >>>>>>> Hi Luke, >>>>>>> >>>>>>> The Fedora Atomic driver has been deprecated a while since the Fedora Atomic has been deprecated by upstream. For now, I would suggest using Fedora CoreOS 32.20201104.3.0 >>>>>>> >>>>>>> The latest version of Fedora CoreOS is 33.xxx, but there are something when booting based my testing, see https://github.com/coreos/fedora-coreos-tracker/issues/735 >>>>>>> >>>>>>> Please feel free to let me know if you have any question about using Magnum. We're using stable/victoria on our public cloud and it works very well. I can share our public templates if you want. Cheers. >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>>> On 7/04/21 9:51 am, Luke Camilleri wrote: >>>>>>>> We have insatlled magnum following the installation guide here https://docs.openstack.org/magnum/victoria/install/install-rdo.html and the process was quite smooth but we have been having some issues with the deployment of the clusters. 
>>>>>>>> >>>>>>>> The image being used as per the documentation is https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64 >>>>>>>> >>>>>>>> Our first issue was that podman was being used even if we specified the use_podman=false (since the image above did not include podman) but this was resulting in a timeout and the cluster would fail to deploy. We have then installed podman in the image and the cluster progressed a bit further >>>>>>>> >>>>>>>> + echo 'WARNING Attempt 60: Trying to install kubectl. Sleeping 5s' >>>>>>>> + sleep 5s >>>>>>>> + ssh -F /srv/magnum/.ssh/config root at localhost '/usr/bin/podman run --entrypoint /bin/bash --name install-kubectl --net host --privileged --rm --user root --volume /srv/magnum/bin:/host/srv/magnum/bin k8s.gcr.io/hyperkube:v1.15.7 -c '\''cp /usr/local/bin/kubectl /host/srv/magnum/bin/kubectl'\''' >>>>>>>> bash: /usr/bin/podman: No such file or directory >>>>>>>> ERROR Unable to install kubectl. Abort. >>>>>>>> + i=61 >>>>>>>> + '[' 61 -gt 60 ']' >>>>>>>> + echo 'ERROR Unable to install kubectl. Abort.' >>>>>>>> + exit 1 >>>>>>>> >>>>>>>> The cluster is now failing here at "kube_cluster_deploy" and when checking the logs on the master node we noticed the following in the log files: >>>>>>>> >>>>>>>> Starting to run kube-apiserver-to-kubelet-role >>>>>>>> Waiting for Kubernetes API... >>>>>>>> + echo 'Waiting for Kubernetes API...' >>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>>> + '[' ok = '' ']' >>>>>>>> + sleep 5 >>>>>>>> >>>>>>>> This is because the kubernetes API server is not installed either. I have noticed some scripts that should handle the installation but I would like to know if anyone here has had similar issues with a clean Victoria installation. >>>>>>>> >>>>>>>> Also should we have to install any packages in the fedora atomic image file or should the installation requirements be part of the stack? >>>>>>>> >>>>>>>> Thanks in advance for any asistance >>>>>>>> >>>>>>> -- >>>>>>> Cheers & Best regards, >>>>>>> Feilong Wang (王飞龙) >>>>>>> ------------------------------------------------------ >>>>>>> Senior Cloud Software Engineer >>>>>>> Tel: +64-48032246 >>>>>>> Email: flwang at catalyst.net.nz >>>>>>> Catalyst IT Limited >>>>>>> Level 6, Catalyst House, 150 Willis Street, Wellington >>>>>>> ------------------------------------------------------ >>>>> -- >>>>> Regards, >>>>> >>>>> >>>>> Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Thu Apr 8 06:36:15 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Thu, 8 Apr 2021 12:06:15 +0530 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: <1617820623.770226846@apps.rackspace.com> Message-ID: I would like to sign up for glance but I will not be around April 15th for the session. Let me know if you can just show the presentation during the session or not. Thanks & Best Regards, Abhishek Kekane On Thu, Apr 8, 2021 at 5:54 AM Brian Rosmaita wrote: > On 4/7/21 7:09 PM, Goutham Pacha Ravi wrote: > > On Wed, Apr 7, 2021 at 12:46 PM helena at openstack.org > > wrote: > >> > >> Hello ptls, > >> > >> > >> > >> The community meeting for the Wallaby release will be next Thursday, > April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session > via Zoom as well as live-streamed to YouTube. 
> >> > >> > >> > >> If you are a PTL interested in presenting an update for your project at > the Wallaby community meeting, please let me know by this Friday, April > 9th. Slides will be due next Tuesday, April 13th, and please find a > template attached you may use if you wish. > >> > >> > >> > >> Let me know if you have any other questions! > > > > Hi Helena, > > > > Thanks for the information. I'd like to sign up on behalf of the > > Manila project team. > > Is this a live presentation unlike last time where we pre-recorded ~10 > > minute updates? > > > > Thanks, > > Goutham > > I'd like to sign up on behalf of Cinder. Same questions as Goutham, > though: will it be "live", and what are your expectations about length > of presentation? > > > cheers, > brian > > > > > > >> > >> > >> > >> Thank you for your participation, > >> > >> Helena > >> > >> > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Thu Apr 8 06:37:24 2021 From: eblock at nde.ag (Eugen Block) Date: Thu, 08 Apr 2021 06:37:24 +0000 Subject: [Keystone] Managing keystone tokens in high availability environment In-Reply-To: Message-ID: <20210408063724.Horde.JNPDsvSBrElhLX4emySpDwo@webmail.nde.ag> Hi, my first guess would be permissions. Did you check if the directory and files have the correct permissions? How did you distribute the keys? Zitat von Taha Adel : > Hello Engineers and Developers, > > I'm currently deploying a three-nodes openstack controller cluster, > controller-01, controller-02, anc controller-03. I have installed the > keystone service on the three controllers and generated fernet keys on one > node and distributed the keys to the other nodes of the cluster. Hence, I > have configured an HAProxy in front of them that would distribute the > incoming requests over them. > > The issue is, when I try to access the keystone endpoint from using the VIP > of the loadbalancer, the service works ONLY on the node that I have > generated the keys on, and it doesn't work on the nodes that got the keys > by distribution. the error message I have got is *"INTERNAL SERVER ERROR > (500)"* > > In other words, the node that had* keystone-manage fernet_setup *command > ran on it, it can run the service properly, but the others can't. > > Is the way of replicating the key incorrect? is there any other way? > > Thanks in advance From skaplons at redhat.com Thu Apr 8 07:33:05 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 08 Apr 2021 09:33:05 +0200 Subject: [neutron] Drivers meeting agenda - 09.04.2021 Message-ID: <2516619.xdQ2LmAMnW@p1> Hi, Agenda for the tomorrow's drivers meeting is at [1]. We have 2 RFEs to discuss: https://bugs.launchpad.net/neutron/+bug/1922237 - [RFE][QoS] Add minimum guaranteed packet rate QoS rule https://bugs.launchpad.net/neutron/+bug/1921461 - [RFE] Enhancement to Neutron BGPaaS to directly support Neutron Routers & bgp-peering from such routers over internal & external Neutron Networks [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. 
URL: From hberaud at redhat.com Thu Apr 8 09:00:53 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 8 Apr 2021 11:00:53 +0200 Subject: [cinder] final reviews for RC-2 In-Reply-To: References: Message-ID: Hello, I submitted the final RC patches series so feel free to update the used hash for cinder when these changes will be merged. https://review.opendev.org/c/openstack/releases/+/785343 For now I hold this patch to allow you to release these changes. Le mer. 7 avr. 2021 à 20:22, Brian Rosmaita a écrit : > We have 3 patches that need review/revision/approval as soon as possible > before we release RC-2 tomorrow (Thursday 8 April). All 3 are updates > to the release notes: > > Release note for mTLS support cinder->glance > - https://review.opendev.org/c/openstack/cinder/+/783964 > > Release note about the cgroups v1 situation > - https://review.opendev.org/c/openstack/cinder/+/784179 > > Add known issue note about RBD encrypted volumes > - https://review.opendev.org/c/openstack/cinder/+/785235 > > Please review and leave comments as soon as you can. > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Thu Apr 8 09:54:16 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 8 Apr 2021 11:54:16 +0200 Subject: [Release-job-failures] Release of openstack/eslint-config-openstack for ref refs/tags/4.1.0 failed In-Reply-To: References: Message-ID: FYI Looks similar to that story: - http://lists.openstack.org/pipermail/openstack-discuss/2021-March/021002.html - http://lists.openstack.org/pipermail/openstack-discuss/2021-March/021217.html I proposed a patch to move to nodejs10 all our projects that depend on nodejs: https://review.opendev.org/c/openstack/project-config/+/785353 When this patch will be merged I think that this job could be reenqueued. Le jeu. 8 avr. 2021 à 11:10, a écrit : > Build failed. 
> > - release-openstack-javascript > https://zuul.opendev.org/t/openstack/build/4062ea0df4e74565b9f8b443e550c0fd > : RETRY_LIMIT in 3m 53s > - announce-release https://zuul.opendev.org/t/openstack/build/None : > SKIPPED > - openstack-upload-github-mirror > https://zuul.opendev.org/t/openstack/build/2e9b658e70f340818cafefa88c5044e2 > : SUCCESS in 41s > > _______________________________________________ > Release-job-failures mailing list > Release-job-failures at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From arne.wiebalck at cern.ch Thu Apr 8 10:19:41 2021 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Thu, 8 Apr 2021 12:19:41 +0200 Subject: [baremetal-sig][ironic] Tue Apr 13, 2021, 2pm UTC: Secure RBAC in Ironic Message-ID: <56c36688-95d2-4c35-f4ec-b4a20d884bb8@cern.ch> Dear all, The Bare Metal SIG will meet next week Tue Apr 13, 2021, at 2pm UTC on zoom. There will be two main points on the agenda: - A "topic-of-the-day" presentation by Julia Kreger (TheJulia) on     'Secure RBAC in Ironic'   and a - PTG pre-discussion on a potential integration of Ironic with   Kea DHCP. As usual, all details on https://etherpad.opendev.org/p/bare-metal-sig Everyone is welcome! Cheers,  Arne From luke.camilleri at zylacomputing.com Thu Apr 8 11:06:22 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Thu, 8 Apr 2021 13:06:22 +0200 Subject: [victoria][magnum]fedora-atomic-27 image In-Reply-To: References: Message-ID: Hi Bharat, in fact I had noticed that property when creating the image in OS and made some more research about this. I now have 2 images (atomic and coreos) and have set the different flags in the image creation process. The documentation from Victoria to latest has also changed to this: Victoria (Kubernetes cluster creation) - Create a cluster template for a Kubernetes cluster using the |fedora-atomic-latest| image latest - Create a cluster template for a Kubernetes cluster using the |fedora-coreos-latest| image So in the end it seems that the CoreOS image is now being suggested for the Kubernetes cluster creation. The bootstrapping process seems to be handled by ignition which handles the ssh keys (I need to find out in more detail how the ignition mechanism works to better understand this process) Thanks On 08/04/2021 08:19, Bharat Kunwar wrote: > As in, do you have that label set in the image property? 
> > Sent from my iPhone > >> On 8 Apr 2021, at 07:05, Bharat Kunwar wrote: >> >>  Is your os_distro=fedora-coreos or fedora-atomic? >> >> Sent from my iPhone >> >>> On 7 Apr 2021, at 22:12, Luke Camilleri >>> wrote: >>> >>>  >>> >>> Hi Bharat, I am on Victoria so that should satisfy the requirement: >>> >>> # rpm -qa | grep -i heat >>> openstack-heat-api-cfn-15.0.0-1.el8.noarch >>> openstack-heat-api-15.0.0-1.el8.noarch >>> python3-heatclient-2.2.1-2.el8.noarch >>> openstack-heat-common-15.0.0-1.el8.noarch >>> openstack-heat-engine-15.0.0-1.el8.noarch >>> openstack-heat-ui-4.0.0-1.el8.noarch >>> >>> So from what I can see during the stack's step at >>> OS::Heat::SoftwareConfig is the step that gets the data right? >>> >>> agent_config: >>>     type: OS::Heat::SoftwareConfig >>>     properties: >>>       group: ungrouped >>>       config: >>>         list_join: >>>           - "\n" >>>           - >>>             - str_replace: >>>                 template: {get_file: user_data.json} >>>                 params: >>>                   __HOSTNAME__: {get_param: name} >>>                   __SSH_KEY_VALUE__: {get_param: ssh_public_key} >>>                   __OPENSTACK_CA__: {get_param: openstack_ca} >>>                   __CONTAINER_INFRA_PREFIX__: >>> >>> >>> In the stack I can see that the step below which corresponds to the >>> agent_config above and has just been initialized: >>> >>> kube_cluster_config >>> >>> >>> OS::Heat::SoftwareConfig 46 minutes Init Complete >>> >>> My question here would be: >>> >>> 1- is the file the user_data >>> >>> 2- at which step is this data aplied to the instance as from the >>> fedora docs ( >>> https://docs.fedoraproject.org/en-US/fedora-coreos/producing-ign/#_ignition_overview >>> ) this step seems to be at the initial stages of the boot process >>> >>> Thanks in advance for any assistance >>> >>> On 07/04/2021 22:54, Bharat Kunwar wrote: >>>> The ssh key gets injected via ignition which is why it’s not >>>> present in the HOT template. You need minimum train release of Heat >>>> for this to work however. >>>> >>>> Sent from my iPhone >>>> >>>>> On 7 Apr 2021, at 21:45, Luke Camilleri >>>>> wrote: >>>>> >>>>>  >>>>> >>>>> Hello Ammad and thanks for your assistance. I followed the guide >>>>> and it has all the details and steps except for one thing, the ssh >>>>> key is not being passed over to the instance, if I deploy an >>>>> instance from that image and pass the ssh key it works fine but if >>>>> I use the image as part of the HOT it lists the key as "-" >>>>> >>>>> Did you have this issue by any chance? Never thought I would be >>>>> asking this question as it is a basic thing but I find it very >>>>> strange that this is not working. I tried to pass the ssh key in >>>>> either the template or in the cluster creation command but for >>>>> both options the Key Name metadata option for the instance remains >>>>> "None" when the instance is deployed. 
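On the os_distro question: that property is what Magnum uses to select its driver, so it has to match what is actually inside the image. A hedged example of uploading a CoreOS image with the property set (image and file names are illustrative only):

    openstack image create fedora-coreos-32 \
      --disk-format qcow2 --container-format bare \
      --property os_distro='fedora-coreos' \
      --file fedora-coreos-32.20201104.3.0-openstack.x86_64.qcow2

An image tagged os_distro='fedora-atomic' would instead select the deprecated Atomic driver discussed earlier in the thread.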
>>>>> >>>>> I then went on and checked the yaml file the resource uses that >>>>> loads/gets the parameters >>>>> /usr/lib/python3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubemaster.yaml >>>>> has the below yaml configurations: >>>>> >>>>> kube-master: >>>>>     type: OS::Nova::Server >>>>>     condition: image_based >>>>>     properties: >>>>>       name: {get_param: name} >>>>>       image: {get_param: server_image} >>>>>       flavor: {get_param: master_flavor} >>>>> MISSING ----->   key_name: {get_param: ssh_key_name} >>>>>       user_data_format: SOFTWARE_CONFIG >>>>>       software_config_transport: POLL_SERVER_HEAT >>>>>       user_data: {get_resource: agent_config} >>>>>       networks: >>>>>         - port: {get_resource: kube_master_eth0} >>>>>       scheduler_hints: { group: { get_param: nodes_server_group_id }} >>>>>       availability_zone: {get_param: availability_zone} >>>>> >>>>> kube-master-bfv: >>>>>     type: OS::Nova::Server >>>>>     condition: volume_based >>>>>     properties: >>>>>       name: {get_param: name} >>>>>       flavor: {get_param: master_flavor} >>>>> MISSING ----->   key_name: {get_param: ssh_key_name} >>>>>       user_data_format: SOFTWARE_CONFIG >>>>>       software_config_transport: POLL_SERVER_HEAT >>>>>       user_data: {get_resource: agent_config} >>>>>       networks: >>>>>         - port: {get_resource: kube_master_eth0} >>>>>       scheduler_hints: { group: { get_param: nodes_server_group_id }} >>>>>       availability_zone: {get_param: availability_zone} >>>>>       block_device_mapping_v2: >>>>>         - boot_index: 0 >>>>>           volume_id: {get_resource: kube_node_volume} >>>>> >>>>> If i add the lines which show as missing, then everything works >>>>> well and the key is actually injected in the kubemaster. Did >>>>> anyone had this issue? >>>>> >>>>> On 07/04/2021 10:24, Ammad Syed wrote: >>>>>> Hi Luke, >>>>>> >>>>>> You may refer to below guide for magnum installation and its >>>>>> template >>>>>> >>>>>> https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=10 >>>>>> >>>>>> >>>>>> It worked pretty well for me. >>>>>> >>>>>> - Ammad >>>>>> On Wed, Apr 7, 2021 at 5:02 AM Luke Camilleri >>>>>> >>>>> > wrote: >>>>>> >>>>>> Thanks for your quick reply. Do you have a download link for >>>>>> that image as I cannot find an archive for the 32 release? >>>>>> >>>>>> As for the image upload into openstack you still use the >>>>>> fedora-atomic property right to be available for coe deployments? >>>>>> >>>>>> On 07/04/2021 00:03, feilong wrote: >>>>>>> >>>>>>> Hi Luke, >>>>>>> >>>>>>> The Fedora Atomic driver has been deprecated a while since >>>>>>> the Fedora Atomic has been deprecated by upstream. For now, >>>>>>> I would suggest using Fedora CoreOS 32.20201104.3.0 >>>>>>> >>>>>>> The latest version of Fedora CoreOS is 33.xxx, but there are >>>>>>> something when booting based my testing, see >>>>>>> https://github.com/coreos/fedora-coreos-tracker/issues/735 >>>>>>> >>>>>>> >>>>>>> Please feel free to let me know if you have any question >>>>>>> about using Magnum. We're using stable/victoria on our >>>>>>> public cloud and it works very well. I can share our public >>>>>>> templates if you want. Cheers. 
>>>>>>> >>>>>>> >>>>>>> >>>>>>> On 7/04/21 9:51 am, Luke Camilleri wrote: >>>>>>>> >>>>>>>> We have insatlled magnum following the installation guide >>>>>>>> here >>>>>>>> https://docs.openstack.org/magnum/victoria/install/install-rdo.html >>>>>>>> >>>>>>>> and the process was quite smooth but we have been having >>>>>>>> some issues with the deployment of the clusters. >>>>>>>> >>>>>>>> The image being used as per the documentation is >>>>>>>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64 >>>>>>>> >>>>>>>> >>>>>>>> Our first issue was that podman was being used even if we >>>>>>>> specified the use_podman=false (since the image above did >>>>>>>> not include podman) but this was resulting in a timeout and >>>>>>>> the cluster would fail to deploy. We have then installed >>>>>>>> podman in the image and the cluster progressed a bit further >>>>>>>> >>>>>>>> /+ echo 'WARNING Attempt 60: Trying to install kubectl. >>>>>>>> Sleeping 5s'// >>>>>>>> //+ sleep 5s// >>>>>>>> //+ ssh -F /srv/magnum/.ssh/config root at localhost >>>>>>>> '/usr/bin/podman run     --entrypoint /bin/bash     --name >>>>>>>> install-kubectl     --net host     --privileged --rm     >>>>>>>> --user root --volume /srv/magnum/bin:/host/srv/magnum/bin >>>>>>>> k8s.gcr.io/hyperkube:v1.15.7 >>>>>>>> -c '\''cp >>>>>>>> /usr/local/bin/kubectl /host/srv/magnum/bin/kubectl'\'''// >>>>>>>> //bash: /usr/bin/podman: No such file or directory// >>>>>>>> //ERROR Unable to install kubectl. Abort.// >>>>>>>> //+ i=61// >>>>>>>> //+ '[' 61 -gt 60 ']'// >>>>>>>> //+ echo 'ERROR Unable to install kubectl. Abort.'// >>>>>>>> //+ exit 1/ >>>>>>>> >>>>>>>> The cluster is now failing here at "kube_cluster_deploy" >>>>>>>> and when checking the logs on the master node we noticed >>>>>>>> the following in the log files: >>>>>>>> >>>>>>>> /Starting to run kube-apiserver-to-kubelet-role// >>>>>>>> //Waiting for Kubernetes API...// >>>>>>>> //+ echo 'Waiting for Kubernetes API...'// >>>>>>>> //++ curl --silent http://127.0.0.1:8080/healthz >>>>>>>> // >>>>>>>> //+ '[' ok = '' ']'// >>>>>>>> //+ sleep 5/ >>>>>>>> >>>>>>>> This is because the kubernetes API server is not installed >>>>>>>> either. I have noticed some scripts that should handle the >>>>>>>> installation but I would like to know if anyone here has >>>>>>>> had similar issues with a clean Victoria installation. >>>>>>>> >>>>>>>> Also should we have to install any packages in the fedora >>>>>>>> atomic image file or should the installation requirements >>>>>>>> be part of the stack? >>>>>>>> >>>>>>>> Thanks in advance for any asistance >>>>>>>> >>>>>>> -- >>>>>>> Cheers & Best regards, >>>>>>> Feilong Wang (王飞龙) >>>>>>> ------------------------------------------------------ >>>>>>> Senior Cloud Software Engineer >>>>>>> Tel: +64-48032246 >>>>>>> Email:flwang at catalyst.net.nz >>>>>>> Catalyst IT Limited >>>>>>> Level 6, Catalyst House,150 Willis Street, Wellington >>>>>>> ------------------------------------------------------ >>>>>> >>>>>> -- >>>>>> Regards, >>>>>> >>>>>> >>>>>> Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From helena at openstack.org Thu Apr 8 13:50:06 2021 From: helena at openstack.org (helena at openstack.org) Date: Thu, 8 Apr 2021 09:50:06 -0400 (EDT) Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: <1617820623.770226846@apps.rackspace.com> Message-ID: <1617889806.416816408@apps.rackspace.com> Hi Brian and Goutham, Thank you for participating! Yes, it will be a live session. As for length, we are aiming for 5-10 minutes per presenter (this number kind of depends on how many people we have signup to present, so I can give y'all a better idea tomorrow what the length should be). Cheers, Helena -----Original Message----- From: "Brian Rosmaita" Sent: Wednesday, April 7, 2021 8:20pm To: openstack-discuss at lists.openstack.org Subject: Re: [ptl] Wallaby Release Community Meeting On 4/7/21 7:09 PM, Goutham Pacha Ravi wrote: > On Wed, Apr 7, 2021 at 12:46 PM helena at openstack.org > wrote: >> >> Hello ptls, >> >> >> >> The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. >> >> >> >> If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. >> >> >> >> Let me know if you have any other questions! > > Hi Helena, > > Thanks for the information. I'd like to sign up on behalf of the > Manila project team. > Is this a live presentation unlike last time where we pre-recorded ~10 > minute updates? > > Thanks, > Goutham I'd like to sign up on behalf of Cinder. Same questions as Goutham, though: will it be "live", and what are your expectations about length of presentation? cheers, brian > > >> >> >> >> Thank you for your participation, >> >> Helena >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Thu Apr 8 14:17:22 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Thu, 08 Apr 2021 16:17:22 +0200 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: <1617820623.770226846@apps.rackspace.com> References: <1617820623.770226846@apps.rackspace.com> Message-ID: On Wed, Apr 7, 2021 at 14:37, helena at openstack.org wrote: > Hello ptls, > > > > The community meeting for the Wallaby release will be next Thursday, > April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live > session via Zoom as well as live-streamed to YouTube. > > > > If you are a PTL interested in presenting an update for your project > at the Wallaby community meeting, please let me know by this Friday, > April 9th. Slides will be due next Tuesday, April 13th, and please > find a template attached you may use if you wish. > Please sign me up, I will give a short update from Nova perspective. Cheers, gibi > > > Let me know if you have any other questions! 
> > > > Thank you for your participation, > > Helena > > > From ykarel at redhat.com Thu Apr 8 14:17:14 2021 From: ykarel at redhat.com (Yatin Karel) Date: Thu, 8 Apr 2021 19:47:14 +0530 Subject: [TripleO][CentOS8] container-puppet-neutron Could not find class ::neutron::plugins::ml2::ovs_driver for undercloud In-Reply-To: References: Message-ID: Hi Ruslanas, For the issue see https://lists.centos.org/pipermail/centos-devel/2021-February/076496.html, The puppet-neutron issue in above was specific to victoria but since there is new release for ussuri recently, it also hit there too. Thanks and Regards Yatin Karel On Tue, Apr 6, 2021 at 1:19 AM Ruslanas Gžibovskis wrote: > > Hi all, > > While deploying undercloud, always fails on puppet-container-neutron configuration, it fails with missing ml2 ovs_driver plugin... downloading them using: > openstack tripleo container image prepare default --output-env-file containers-prepare-parameters.yaml > > grep -v Warning /var/log/containers/stdouts/container-puppet-neutron.log http://paste.openstack.org/show/804180/ > > builddir/install-undercloud.log ( contains info about container-puppet-neutron ) > http://paste.openstack.org/show/804181/ > > undercloud.conf: > https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf > > dnf list installed > http://paste.openstack.org/show/804182/ > > -- > Ruslanas Gžibovskis > +370 6030 7030 From helena at openstack.org Thu Apr 8 14:23:58 2021 From: helena at openstack.org (helena at openstack.org) Date: Thu, 8 Apr 2021 10:23:58 -0400 (EDT) Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: <1617820623.770226846@apps.rackspace.com> Message-ID: <1617891838.939328095@apps.rackspace.com> Perfect! Thank you for participating :) Cheers, Helena -----Original Message----- From: "Balazs Gibizer" Sent: Thursday, April 8, 2021 10:17am To: helena at openstack.org Cc: "OpenStack Discuss" Subject: Re: [ptl] Wallaby Release Community Meeting On Wed, Apr 7, 2021 at 14:37, helena at openstack.org wrote: > Hello ptls, > > > > The community meeting for the Wallaby release will be next Thursday, > April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live > session via Zoom as well as live-streamed to YouTube. > > > > If you are a PTL interested in presenting an update for your project > at the Wallaby community meeting, please let me know by this Friday, > April 9th. Slides will be due next Tuesday, April 13th, and please > find a template attached you may use if you wish. > Please sign me up, I will give a short update from Nova perspective. Cheers, gibi > > > Let me know if you have any other questions! > > > > Thank you for your participation, > > Helena > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Thu Apr 8 14:29:06 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 8 Apr 2021 16:29:06 +0200 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: <1617891838.939328095@apps.rackspace.com> References: <1617820623.770226846@apps.rackspace.com> <1617891838.939328095@apps.rackspace.com> Message-ID: Please count me in for Masakari. 
Kind regards, -yoctozepto From helena at openstack.org Thu Apr 8 14:34:50 2021 From: helena at openstack.org (helena at openstack.org) Date: Thu, 8 Apr 2021 10:34:50 -0400 (EDT) Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: <1617820623.770226846@apps.rackspace.com> <1617891838.939328095@apps.rackspace.com> Message-ID: <1617892490.730925429@apps.rackspace.com> Awesome! Thank you Cheers, Helena -----Original Message----- From: "Radosław Piliszek" Sent: Thursday, April 8, 2021 10:29am To: "helena at openstack.org" Cc: "OpenStack Discuss" , "Ashlee Ferguson" , "Erin Disney" Subject: Re: [ptl] Wallaby Release Community Meeting Please count me in for Masakari. Kind regards, -yoctozepto -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Thu Apr 8 14:43:40 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 8 Apr 2021 07:43:40 -0700 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: <71EDF897-DCCB-4063-81F7-88A8456F6F6B@openstack.org> References: <1617820623.770226846@apps.rackspace.com> <71EDF897-DCCB-4063-81F7-88A8456F6F6B@openstack.org> Message-ID: Hey Allison, Metrics would be awesome and I'm just looking for the key high level adoption information as that is good to put into the presentation. -Julia On Wed, Apr 7, 2021 at 3:15 PM Allison Price wrote: > > Hi Julia, > > I see we haven’t pushed it live to openstack.org/analytics yet. I have pinged our team so that we can, but if you need metrics in the meantime, please let me know. > > Thanks! > Allison > > > > > > On Apr 7, 2021, at 4:42 PM, Julia Kreger wrote: > > Related, Is there 2020 user survey data available? > > On Wed, Apr 7, 2021 at 12:40 PM helena at openstack.org > wrote: > > > Hello ptls, > > > > The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. > > > > If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. > > > > Let me know if you have any other questions! > > > > Thank you for your participation, > > Helena > > > > From juliaashleykreger at gmail.com Thu Apr 8 14:45:52 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 8 Apr 2021 07:45:52 -0700 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: <1617820623.770226846@apps.rackspace.com> References: <1617820623.770226846@apps.rackspace.com> Message-ID: Hi Helena, I would be happy to participate on behalf of Ironic. -Julia On Wed, Apr 7, 2021 at 12:40 PM helena at openstack.org wrote: > > Hello ptls, > > > > The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. > > > > If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. > > > > Let me know if you have any other questions! 
> > > > Thank you for your participation, > > Helena > > From helena at openstack.org Thu Apr 8 14:48:20 2021 From: helena at openstack.org (helena at openstack.org) Date: Thu, 8 Apr 2021 10:48:20 -0400 (EDT) Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: <1617820623.770226846@apps.rackspace.com> Message-ID: <1617893300.94195859@apps.rackspace.com> Awesome, thank you! Cheers, Helena -----Original Message----- From: "Julia Kreger" Sent: Thursday, April 8, 2021 10:45am To: "helena at openstack.org" Cc: "OpenStack Discuss" Subject: Re: [ptl] Wallaby Release Community Meeting Hi Helena, I would be happy to participate on behalf of Ironic. -Julia On Wed, Apr 7, 2021 at 12:40 PM helena at openstack.org wrote: > > Hello ptls, > > > > The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. > > > > If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. > > > > Let me know if you have any other questions! > > > > Thank you for your participation, > > Helena > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Thu Apr 8 14:48:15 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Thu, 8 Apr 2021 23:48:15 +0900 Subject: [puppet] Proposing Alan Bishop (abishop) for puppet-cinder core and puppet-glance core In-Reply-To: References: Message-ID: Hello, Thank you, all who shared your feedback ! Because we have only positive responses and I got +2 from Emilien locally, I'll invite Alan to the core team for these two projects based on my proposal. I'll request new groups specific to puppet-cinder and puppet-glance in a few days and add him to these groups once prepared. Thank you, Alan, for your nice work so far, and I'm looking forward to your further contributions ! Thank you, Takashi On Wed, Mar 31, 2021 at 6:24 PM Takashi Kajinami wrote: > Hello, > > > I'd like to propose Alan Bishop (abishop) for the core team of > puppet-cinder > and puppet-glance. > Alan has been actively involved in these 2 modules for a few years > and has implemented some nice features like multiple backend support in > glance, > cinder s3 backup driver and etc, which expanded adoption of > puppet-openstack. > He has also provided good reviews on patches for these 2 repos based > on his understanding about our code, puppet and serverspec. > > He is an active contributor to cinder and has deep knowledge about it. > In addition He is also a core review in TripleO, which consumes our puppet > modules, > and mainly covers storage components like cinder and glance, so he is > familiar > with the way how these two components are deployed and configured. > > I believe adding him to our board helps us improve our review of these two > modules. > > I'll wait for one week to hear any feedback from other core reviewers. > > Thank you, > Takashi > > -- ---------- Takashi Kajinami Principal Software Maintenance Engineer Customer Experience and Engagement Red Hat email: tkajinam at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliver.wenz at dhbw-mannheim.de Thu Apr 8 15:22:01 2021 From: oliver.wenz at dhbw-mannheim.de (Oliver Wenz) Date: Thu, 8 Apr 2021 17:22:01 +0200 (CEST) Subject: [glance][openstack-ansible] Snapshots disappear during saving In-Reply-To: References: Message-ID: <512255613.133816.1617895321756@ox.dhbw-mannheim.de> Hi Dmitriy, > I'm wondering if you see also stack trace in keystone logs? Running 'journalctl' on the keystone container, I don't see any tracebacks. Or is there a specific service I should check? Kind regards, Oliver From ruslanas at lpic.lt Thu Apr 8 16:11:09 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Thu, 8 Apr 2021 18:11:09 +0200 Subject: [TripleO][CentOS8] container-puppet-neutron Could not find class ::neutron::plugins::ml2::ovs_driver for undercloud In-Reply-To: References: Message-ID: Hi Yatin, I have spotted that version of puppet-tripleo, but even after downgrade I had/have same issue. should I downgrade even more? :) OR You know when fixed version might get in for production centos ussuri release repo? As you know now that it is affected also :) On Thu, 8 Apr 2021 at 16:18, Yatin Karel wrote: > Hi Ruslanas, > > For the issue see > https://lists.centos.org/pipermail/centos-devel/2021-February/076496.html, > The puppet-neutron issue in above was specific to victoria but since > there is new release for ussuri recently, it also hit there too. > > > Thanks and Regards > Yatin Karel > > On Tue, Apr 6, 2021 at 1:19 AM Ruslanas Gžibovskis > wrote: > > > > Hi all, > > > > While deploying undercloud, always fails on puppet-container-neutron > configuration, it fails with missing ml2 ovs_driver plugin... downloading > them using: > > openstack tripleo container image prepare default --output-env-file > containers-prepare-parameters.yaml > > > > grep -v Warning /var/log/containers/stdouts/container-puppet-neutron.log > http://paste.openstack.org/show/804180/ > > > > builddir/install-undercloud.log ( contains info about > container-puppet-neutron ) > > http://paste.openstack.org/show/804181/ > > > > undercloud.conf: > > > https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf > > > > dnf list installed > > http://paste.openstack.org/show/804182/ > > > > -- > > Ruslanas Gžibovskis > > +370 6030 7030 > > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonanderson at uchicago.edu Thu Apr 8 16:20:37 2021 From: jasonanderson at uchicago.edu (Jason Anderson) Date: Thu, 8 Apr 2021 16:20:37 +0000 Subject: [zun][kuryr][neutron] Missing vxlan ports in br-tun for Zun containers? Message-ID: Hello stackers, I’m interested in using zun to launch containers and assign floating IPs via neutron to those containers. I am deploying zun, kuryr-libnetwork, and neutron with kolla-ansible on the Train release. I’ve configured neutron with one physical network and I’d like to use a VXLAN overlay for tenant networks. What works: - I can launch containers on a neutron tenant network, they start successfully, they get an IP and can reach each other if they’re co-located on a single host. - I can create all my neutron networks, routers, subnets, without (obvious) errors. - I can update security groups on the container and see the iptables rules updated appropriately. - I can directly create Docker networks using the kuryr driver/type. 
What doesn’t work: - I can’t see any vxlan ports on the br-tun OVS bridge - I can’t access the exposed container ports from the control/network node via the router netns - Because of that, I can’t assign floating IPs because NAT effectively won’t work to reach the containers The fact that there are no ports on br-tun is supicious, but I’m not sure how this is supposed to work. I don’t see anything weird in neutron-openvswitch-agent logs but those logs are quite noisy and I’m not sure what to look for. Has anybody deployed such a setup / are there limitations I should know about? Thank you! Jason Anderson DevOps Lead, Chameleon --- Department of Computer Science, University of Chicago Mathematics and Computer Science, Argonne National Laboratory jasonanderson at uchicago.edu From marios at redhat.com Thu Apr 8 16:20:44 2021 From: marios at redhat.com (Marios Andreou) Date: Thu, 8 Apr 2021 19:20:44 +0300 Subject: [TripleO] Xena PTG schedule please review In-Reply-To: References: Message-ID: On Wed, Apr 7, 2021 at 7:55 PM John Fulton wrote: > > On Wed, Apr 7, 2021 at 12:27 PM Marios Andreou wrote: >> >> Hello TripleO o/ >> >> Thanks again to everybody who has volunteered to lead a session for >> the coming Xena TripleO project teams gathering. >> >> I've had a go at the agenda [1] trying to keep it to max 4 or 5 >> sessions per day with some breaks. >> >> Please review the slot assigned for your session at [1]. If that time >> is not ok then please let me know as soon as possible and indicate if >> you want it later or earlier or on any other day. > > > On Monday I see: > > 1. STORAGE: 1430-1510 (ceph) > 2. DF: 1510-1550 (ephemeral heat) > 3. DF/Networking: 1600-1700 (ports v2 "no heat") > > If Harald and James are OK with it, could it be changed to the following? > > A. DF: 1430-1510 (ephemeral heat) > B. DF/Networking: 1510-1550 (ports v2 "no heat") > C. STORAGE: 1600-1700 (ceph) > > I ask because a portion of C depends on B, so it would be helpful to have that context first. If the presenters have conflicts however, we don't need this change. > ACK thanks John that totally makes sense... as just discussed on irc [1] I've updated the schedule to reflect your proposal. I haven't heard back from slagle yet but cc'ing him here and if there are any issues we can work them out thanks [1] http://eavesdrop.openstack.org/irclogs/%23tripleo/%23tripleo.2021-04-08.log.html#t2021-04-08T15:47:12 > Thanks, > John > > >> >> If you've decided >> the session no longer makes sense then also please tell me and we can >> move things around accordingly to finish earlier. >> >> I'd like to finalise the schedule by next Monday 12 April which is a >> week before PTG. We can and likely will make changes after this date >> but last minute changes are best avoided to allow folks to schedule >> their PTG attendance across projects. >> >> Thanks everybody for your help! 
>> Looking forward to interesting presentations and discussions as always
>>
>> regards, marios
>>
>> [1] https://etherpad.opendev.org/p/tripleo-ptg-xena
>>

From james.slagle at gmail.com Thu Apr 8 16:32:16 2021
From: james.slagle at gmail.com (James Slagle)
Date: Thu, 8 Apr 2021 12:32:16 -0400
Subject: [TripleO] Xena PTG schedule please review
In-Reply-To: 
References: 
Message-ID: 

On Thu, Apr 8, 2021 at 12:24 PM Marios Andreou wrote:
> On Wed, Apr 7, 2021 at 7:55 PM John Fulton wrote:
> >
> > On Wed, Apr 7, 2021 at 12:27 PM Marios Andreou wrote:
> >>
> >> Hello TripleO o/
> >>
> >> Thanks again to everybody who has volunteered to lead a session for
> >> the coming Xena TripleO project teams gathering.
> >>
> >> I've had a go at the agenda [1] trying to keep it to max 4 or 5
> >> sessions per day with some breaks.
> >>
> >> Please review the slot assigned for your session at [1]. If that time
> >> is not ok then please let me know as soon as possible and indicate if
> >> you want it later or earlier or on any other day.
> >
> > On Monday I see:
> >
> > 1. STORAGE: 1430-1510 (ceph)
> > 2. DF: 1510-1550 (ephemeral heat)
> > 3. DF/Networking: 1600-1700 (ports v2 "no heat")
> >
> > If Harald and James are OK with it, could it be changed to the following?
> >
> > A. DF: 1430-1510 (ephemeral heat)
> > B. DF/Networking: 1510-1550 (ports v2 "no heat")
> > C. STORAGE: 1600-1700 (ceph)
> >
> > I ask because a portion of C depends on B, so it would be helpful to have that context first. If the presenters have conflicts however, we don't need this change.
>
> ACK, thanks John, that totally makes sense... as just discussed on irc [1] I've updated the schedule to reflect your proposal.
>
> I haven't heard back from slagle yet, but I'm cc'ing him here and if there are any issues we can work them out.

The change wfm, thanks.

--
-- James Slagle
--
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From fungi at yuggoth.org Thu Apr 8 16:50:37 2021
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 8 Apr 2021 16:50:37 +0000
Subject: [Release-job-failures] Release of openstack/eslint-config-openstack for ref refs/tags/4.1.0 failed
In-Reply-To: 
References: 
Message-ID: <20210408165036.6tkcfwwix5ms3ig4@yuggoth.org>

On 2021-04-08 11:54:16 +0200 (+0200), Herve Beraud wrote:
[...]
> I proposed a patch to move to nodejs10 all our projects that depend on
> nodejs:
>
> https://review.opendev.org/c/openstack/project-config/+/785353
>
> When this patch is merged, I think that this job could be reenqueued.

I reenqueued the tag, but release-openstack-javascript failed on a different problem. NPM complains that there's already an eslint-config-openstack 4.0.1 published which can't be overwritten, but the tag is for 4.1.0... someone should probably update the version parameter in eslint-config-openstack's package.json file, which means it'll need another release tagged anyway (4.1.1?).
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL: 
From jasonanderson at uchicago.edu Thu Apr 8 17:00:19 2021
From: jasonanderson at uchicago.edu (Jason Anderson)
Date: Thu, 8 Apr 2021 17:00:19 +0000
Subject: [zun][kuryr][neutron] Missing vxlan ports in br-tun for Zun containers?
In-Reply-To: 
References: 
Message-ID: <0A70CB1A-35AA-4782-8BF3-496080E47341@uchicago.edu>

As usual, “rubber ducking” the openstack-discuss list yielded fruit.
It turns out that I didn’t have the l2population mechanism driver enabled. I thought this was optional for some reason. It looks like enabling this and restarting the neutron-openvswitch-agent has fixed connectivity!

/Jason

> On Apr 8, 2021, at 11:20 AM, Jason Anderson wrote:
>
> Hello stackers,
>
> I’m interested in using zun to launch containers and assign floating IPs via neutron to those containers. I am deploying zun, kuryr-libnetwork, and neutron with kolla-ansible on the Train release. I’ve configured neutron with one physical network and I’d like to use a VXLAN overlay for tenant networks.
>
> What works:
> - I can launch containers on a neutron tenant network; they start successfully, they get an IP and can reach each other if they’re co-located on a single host.
> - I can create all my neutron networks, routers, subnets, without (obvious) errors.
> - I can update security groups on the container and see the iptables rules updated appropriately.
> - I can directly create Docker networks using the kuryr driver/type.
>
> What doesn’t work:
> - I can’t see any vxlan ports on the br-tun OVS bridge
> - I can’t access the exposed container ports from the control/network node via the router netns
> - Because of that, I can’t assign floating IPs because NAT effectively won’t work to reach the containers
>
> The fact that there are no ports on br-tun is suspicious, but I’m not sure how this is supposed to work. I don’t see anything weird in the neutron-openvswitch-agent logs, but those logs are quite noisy and I’m not sure what to look for.
>
> Has anybody deployed such a setup / are there limitations I should know about?
>
> Thank you!
>
> Jason Anderson
>
> DevOps Lead, Chameleon
>
> ---
>
> Department of Computer Science, University of Chicago
> Mathematics and Computer Science, Argonne National Laboratory
> jasonanderson at uchicago.edu

From ildiko.vancsa at gmail.com Thu Apr 8 17:18:54 2021
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Thu, 8 Apr 2021 19:18:54 +0200
Subject: [edge][cinder][manila][swift][tripleo] Storage at the edge discussions at the PTG
Message-ID: <0A7B1EBD-0715-43ED-B388-A2011D437DD0@gmail.com>

Hi,

I’m reaching out to draw your attention to the Edge Computing Group sessions on the PTG in less than two weeks.

We are still formalizing our agenda, but we have storage identified as one of the topics that the working group would like to discuss. It would also be great to have the session as a continuation of the earlier discussions that we had on previous PTGs with relevant OpenStack project contributors.

We have a few cross-community sessions scheduled already, but we still have some flexibility in our agenda to schedule this topic so that the most people who are interested in participating can join.
Our available options are:

* Monday (April 19) between 1400 UTC and 1500 UTC
* Tuesday (April 20) between 1400 UTC and 1600 UTC

__Please let me know if you or your project would like to participate and if you have a time slot difference from the above.__

Thanks and Best Regards,
Ildikó
(IRC ildikov on Freenode)

From johfulto at redhat.com Thu Apr 8 17:39:45 2021
From: johfulto at redhat.com (John Fulton)
Date: Thu, 8 Apr 2021 13:39:45 -0400
Subject: [edge][cinder][manila][swift][tripleo] Storage at the edge discussions at the PTG
In-Reply-To: <0A7B1EBD-0715-43ED-B388-A2011D437DD0@gmail.com>
References: <0A7B1EBD-0715-43ED-B388-A2011D437DD0@gmail.com>
Message-ID: 

On Thu, Apr 8, 2021 at 1:21 PM Ildiko Vancsa wrote:
> Hi,
>
> I’m reaching out to draw your attention to the Edge Computing Group
> sessions on the PTG in less than two weeks.
>
> We are still formalizing our agenda, but we have storage identified as one
> of the topics that the working group would like to discuss. It would also be
> great to have the session as a continuation of the earlier discussions
> that we had on previous PTGs with relevant OpenStack project contributors.
>
> We have a few cross-community sessions scheduled already, but we still
> have some flexibility in our agenda to schedule this topic so that the most
> people who are interested in participating can join. Our available options
> are:
>
> * Monday (April 19) between 1400 UTC and 1500 UTC
> * Tuesday (April 20) between 1400 UTC and 1600 UTC

I'm not available Monday but could join Tuesday. I'd be curious to hear what others are doing with Storage on the Edge and could share some info on how TripleO does it.

John

> __Please let me know if you or your project would like to participate and
> if you have a time slot difference from the above.__
>
> Thanks and Best Regards,
> Ildikó
> (IRC ildikov on Freenode)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From tonyliu0592 at hotmail.com Thu Apr 8 20:04:04 2021
From: tonyliu0592 at hotmail.com (Tony Liu)
Date: Thu, 8 Apr 2021 20:04:04 +0000
Subject: [kolla] RHEL based container image
Message-ID: 

Hi,

Given [1], RHEL-based containers are supported on RHEL 8 by Kolla. Where can I get RHEL-based container images? I see CentOS- and Ubuntu-based images on docker hub, but can't find RHEL-based ones.

[1] https://docs.openstack.org/kolla-ansible/ussuri/user/support-matrix.html

Thanks!
Tony

From stephane.chalansonnet at acoss.fr Thu Apr 8 20:28:30 2021
From: stephane.chalansonnet at acoss.fr (=?utf-8?B?Q0hBTEFOU09OTkVUIFN0w6lwaGFuZSAoQWNvc3Mp?=)
Date: Thu, 8 Apr 2021 20:28:30 +0000
Subject: [kolla] RHEL based container image (Tony Liu)
Message-ID: 

Hello,

You need an active RHOSP subscription for that, but Kolla is not supported by Red Hat, unfortunately...
Stéphane Chalansonnet

-----Original Message-----
From: openstack-discuss-request at lists.openstack.org
Sent: Thursday, 8 April 2021 22:04
To: openstack-discuss at lists.openstack.org
Subject: openstack-discuss Digest, Vol 30, Issue 56

[...]
From sbaker at redhat.com Thu Apr 8 22:17:21 2021
From: sbaker at redhat.com (Steve Baker)
Date: Fri, 9 Apr 2021 10:17:21 +1200
Subject: [TripleO] Xena PTG schedule please review
In-Reply-To: 
References: 
Message-ID: <519a70c1-1401-52e2-ae06-6be47e0e2c96@redhat.com>

My Tuesday Baremetal 1510-1550 slot is ok, but it would be better for me if it was earlier in the day. I'll probably make more sense at 1am than 3am :)

Could I maybe swap with NETWORKING: 1300-1340?

On 8/04/21 4:24 am, Marios Andreou wrote:
> Hello TripleO o/
>
> Thanks again to everybody who has volunteered to lead a session for
> the coming Xena TripleO project teams gathering.
>
> I've had a go at the agenda [1] trying to keep it to max 4 or 5
> sessions per day with some breaks.
>
> Please review the slot assigned for your session at [1]. If that time
> is not ok then please let me know as soon as possible and indicate if
> you want it later or earlier or on any other day. If you've decided
> the session no longer makes sense then also please tell me and we can
> move things around accordingly to finish earlier.
>
> I'd like to finalise the schedule by next Monday 12 April which is a
> week before PTG. We can and likely will make changes after this date
> but last minute changes are best avoided to allow folks to schedule
> their PTG attendance across projects.
>
> Thanks everybody for your help! Looking forward to interesting
> presentations and discussions as always
>
> regards, marios
>
> [1] https://etherpad.opendev.org/p/tripleo-ptg-xena

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From kennelson11 at gmail.com Thu Apr 8 23:07:27 2021
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Thu, 8 Apr 2021 16:07:27 -0700
Subject: [first contact] [SIG] PTG Planning!
Message-ID: 

Hello!

I didn't think we would need a ton of time and I tried to pick a time to balance everyone's timezones so we have an hour at 22 UTC on Monday in the Austin room.
We have an etherpad that was autogenerated: https://etherpad.opendev.org/p/apr2021-ptg-first-contact

Please add topics if you have them and your name if you plan to join us!

-Kendall Nelson (diablo_rojo)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From fungi at yuggoth.org Fri Apr 9 00:36:17 2021
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 9 Apr 2021 00:36:17 +0000
Subject: [all][elections][tc] TC Vacancy Special Election Voting Kickoff
Message-ID: <20210409003616.shwf353mgwlxjmwy@yuggoth.org>

The TC Vacancy Special Election nomination period is now over. The four already elected TC members for this term are listed as candidates in the special election, but will not appear on any resulting poll as they have already been officially elected. Only new candidates who are not the four elected TC members for this term will appear on a subsequent poll for the TC vacancy special election.

The poll for the TC Vacancy Special Election is now open and will remain open until Apr 15, 2021 23:45 UTC.

We are selecting 1 additional TC member; please rank all candidates in your order of preference.

You are eligible to vote if you are a Foundation individual member[1] who has also committed to one of the official project teams' deliverable repositories[2] over the Apr 24, 2020 00:00 UTC - Mar 08, 2021 00:00 UTC timeframe (Victoria to Wallaby) or if you are one of the extra-atcs.[3]

Please note that in order to confirm contributors are foundation members, the preferred address in Gerrit must also be included in the addresses for the corresponding member profile.

What to do if you don't see the email and have a commit in at least one of the official deliverables[2]:
* check the trash or spam folder of your gerrit Preferred Email address[4], in case it went into trash or spam
* wait a bit and check again, in case your email server is a bit slow
* find the sha of at least one commit from an official deliverable repo[2] and email the election officials[5].

If we can confirm that you are entitled to vote, we will add you to the voters list and you will be emailed a ballot.

Our democratic process is important to the health of OpenStack; please exercise your right to vote.

Candidate statements/platforms can be found linked to Candidate names[6].

Happy voting!

Thank you,

[1] https://www.openstack.org/community/members/
[2] https://opendev.org/openstack/governance/src/commit/892c4f3a851428cf41bab57c6c283e82f1df06d8/reference/projects.yaml
[3] Look for the extra-atcs element in [2]
[4] Sign into review.openstack.org: Go to Settings > Contact Information. Look at the email listed as your preferred email. That is where the ballot has been sent.
[5] https://governance.openstack.org/election/#election-officials
[6] https://governance.openstack.org/election/#xena-tc-candidates

--
Jeremy Stanley on behalf of the OpenStack Technical Elections Officials
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL: 
From tonyliu0592 at hotmail.com Fri Apr 9 01:03:45 2021
From: tonyliu0592 at hotmail.com (Tony Liu)
Date: Fri, 9 Apr 2021 01:03:45 +0000
Subject: [kolla] RHEL based container image (Tony Liu)
In-Reply-To: 
References: 
Message-ID: 

I have a RHEL subscription and I want to know if it's possible to use Kolla to deploy OpenStack. It's supposed to be yes, based on the doc. I just want to know where I can get container images. The container images from RedHat are only for TripleO.
Thanks!
Tony

> -----Original Message-----
> From: CHALANSONNET Stéphane (Acoss)
> Sent: Thursday, April 8, 2021 1:29 PM
> To: openstack-discuss at lists.openstack.org
> Subject: RE: [kolla] RHEL based container image (Tony Liu)
>
> Hello,
>
> You need an active RHOSP subscription for that, but Kolla is not
> supported by Red Hat, unfortunately...
>
> Stéphane Chalansonnet
>
> -----Original Message-----
> From: openstack-discuss-request at lists.openstack.org
> Sent: Thursday, 8 April 2021 22:04
> To: openstack-discuss at lists.openstack.org
> Subject: openstack-discuss Digest, Vol 30, Issue 56
>
> [...]
From iwienand at redhat.com Fri Apr 9 04:01:11 2021
From: iwienand at redhat.com (Ian Wienand)
Date: Fri, 9 Apr 2021 14:01:11 +1000
Subject: [Multi-arch SIG] success to run full tempest tests on Arm64 env. What's next?
In-Reply-To: 
References: 
Message-ID: 

On Tue, Apr 06, 2021 at 03:43:29PM +0800, Rico Lin wrote:
> The job `devstack-platform-arm64` runs around 2.22 hrs to 3.04 hrs, which
> is near two times slower than on x86 environment.
> It's not a solid number, as the performance might change a lot with
> different cloud environments and different hardware.

I guess right now we only have one ARM64 cloud so it won't vary that much :) But we're working on it ...

I'd like to use this for nodepool / diskimage-builder end-to-end testing, where we bring up a devstack cloud, build images with dib, upload them to the devstack cloud with nodepool and boot them. But I found that there was no nested virtualisation and the binary translation mode was impractically slow; like I walked away for almost an hour and the serial console was putting out a letter every few seconds like a teletype from 1977 :)

$ qemu-system-aarch64 -M virt -m 2048 -drive if=none,file=./test.qcow2,media=disk,id=hd0 -device virtio-blk-device,drive=hd0 -net none -pflash flash0.img -pflash flash1.img

Maybe I have something wrong there? I couldn't find a lot of info on how to boot. I expected slow, but not that slow. Is binary translation practical? Is booting cirros images, etc. a big part of this much longer runtime?

-i

From noonedeadpunk at ya.ru Fri Apr 9 05:24:53 2021
From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov)
Date: Fri, 09 Apr 2021 08:24:53 +0300
Subject: [glance][openstack-ansible] Snapshots disappear during saving
In-Reply-To: <512255613.133816.1617895321756@ox.dhbw-mannheim.de>
References: <512255613.133816.1617895321756@ox.dhbw-mannheim.de>
Message-ID: <500221617945664@mail.yandex.ru>

An HTML attachment was scrubbed...
URL: 
From cgoncalves at redhat.com Fri Apr 9 06:27:55 2021
From: cgoncalves at redhat.com (Carlos Goncalves)
Date: Fri, 9 Apr 2021 08:27:55 +0200
Subject: [TripleO] Xena PTG schedule please review
In-Reply-To: <519a70c1-1401-52e2-ae06-6be47e0e2c96@redhat.com>
References: <519a70c1-1401-52e2-ae06-6be47e0e2c96@redhat.com>
Message-ID: 

On Fri, Apr 9, 2021 at 12:17 AM Steve Baker wrote:
> My Tuesday Baremetal 1510-1550 slot is ok, but it would be better for me
> if it was earlier in the day. I'll probably make more sense at 1am than 3am
> :)
>
> Could I maybe swap with NETWORKING: 1300-1340?

Fine with me.
Michele, Dan?

> On 8/04/21 4:24 am, Marios Andreou wrote:
>
> Hello TripleO o/
>
> Thanks again to everybody who has volunteered to lead a session for
> the coming Xena TripleO project teams gathering.
>
> I've had a go at the agenda [1] trying to keep it to max 4 or 5
> sessions per day with some breaks.
>
> Please review the slot assigned for your session at [1]. If that time
> is not ok then please let me know as soon as possible and indicate if
> you want it later or earlier or on any other day. If you've decided
> the session no longer makes sense then also please tell me and we can
> move things around accordingly to finish earlier.
>
> I'd like to finalise the schedule by next Monday 12 April which is a
> week before PTG. We can and likely will make changes after this date
> but last minute changes are best avoided to allow folks to schedule
> their PTG attendance across projects.
>
> Thanks everybody for your help! Looking forward to interesting
> presentations and discussions as always
>
> regards, marios
>
> [1] https://etherpad.opendev.org/p/tripleo-ptg-xena

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From michele at acksyn.org Fri Apr 9 06:44:49 2021
From: michele at acksyn.org (Michele Baldessari)
Date: Fri, 9 Apr 2021 08:44:49 +0200
Subject: [TripleO] Xena PTG schedule please review
In-Reply-To: 
References: <519a70c1-1401-52e2-ae06-6be47e0e2c96@redhat.com> 
Message-ID: 

On Fri, Apr 09, 2021 at 08:27:55AM +0200, Carlos Goncalves wrote:
> On Fri, Apr 9, 2021 at 12:17 AM Steve Baker wrote:
>
> > My Tuesday Baremetal 1510-1550 slot is ok, but it would be better for me
> > if it was earlier in the day. I'll probably make more sense at 1am than 3am
> > :)
> >
> > Could I maybe swap with NETWORKING: 1300-1340?
>
> Fine with me.
> Michele, Dan?

Totally fine by me

> > On 8/04/21 4:24 am, Marios Andreou wrote:
> >
> > Hello TripleO o/
> >
> > Thanks again to everybody who has volunteered to lead a session for
> > the coming Xena TripleO project teams gathering.
> >
> > I've had a go at the agenda [1] trying to keep it to max 4 or 5
> > sessions per day with some breaks.
> >
> > Please review the slot assigned for your session at [1]. If that time
> > is not ok then please let me know as soon as possible and indicate if
> > you want it later or earlier or on any other day. If you've decided
> > the session no longer makes sense then also please tell me and we can
> > move things around accordingly to finish earlier.
> >
> > I'd like to finalise the schedule by next Monday 12 April which is a
> > week before PTG. We can and likely will make changes after this date
> > but last minute changes are best avoided to allow folks to schedule
> > their PTG attendance across projects.
> >
> > Thanks everybody for your help! Looking forward to interesting
> > presentations and discussions as always
> >
> > regards, marios
> >
> > [1] https://etherpad.opendev.org/p/tripleo-ptg-xena

--
Michele Baldessari
C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D

From skaplons at redhat.com Fri Apr 9 06:53:42 2021
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Fri, 09 Apr 2021 08:53:42 +0200
Subject: [zun][kuryr][neutron] Missing vxlan ports in br-tun for Zun containers?
In-Reply-To: <0A70CB1A-35AA-4782-8BF3-496080E47341@uchicago.edu>
References: <0A70CB1A-35AA-4782-8BF3-496080E47341@uchicago.edu>
Message-ID: <22354979.Dg0L681ARF@p1>

Hi,

On Thursday, 8 April 2021 at 19:00:19 CEST, Jason Anderson wrote:
> As usual, “rubber ducking” the openstack-discuss list yielded fruit. It turns out that I didn’t have the l2population mechanism driver enabled. I thought this was optional for some reason. It looks like enabling this and restarting the neutron-openvswitch-agent has fixed connectivity!

L2pop should be optional. It's required only when DVR is used. But if you don't want to use it, you should disable it on both the agent and server side. In that case, the neutron-openvswitch-agent should establish vxlan tunnels to all other nodes right after the agent starts, during the first rpc_loop iteration: https://github.com/openstack/neutron/blob/bdd661d21898d573ef39448316860aa4c692b834/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L2604

>
> /Jason
>
> > On Apr 8, 2021, at 11:20 AM, Jason Anderson wrote:
> >
> > Hello stackers,
> >
> > I’m interested in using zun to launch containers and assign floating IPs via neutron to those containers. I am deploying zun, kuryr-libnetwork, and neutron with kolla-ansible on the Train release. I’ve configured neutron with one physical network and I’d like to use a VXLAN overlay for tenant networks.
> >
> > What works:
> > - I can launch containers on a neutron tenant network; they start successfully, they get an IP and can reach each other if they’re co-located on a single host.
> > - I can create all my neutron networks, routers, subnets, without (obvious) errors.
> > - I can update security groups on the container and see the iptables rules updated appropriately.
> > - I can directly create Docker networks using the kuryr driver/type.
> >
> > What doesn’t work:
> > - I can’t see any vxlan ports on the br-tun OVS bridge
> > - I can’t access the exposed container ports from the control/network node via the router netns
> > - Because of that, I can’t assign floating IPs because NAT effectively won’t work to reach the containers
> >
> > The fact that there are no ports on br-tun is suspicious, but I’m not sure how this is supposed to work. I don’t see anything weird in the neutron-openvswitch-agent logs, but those logs are quite noisy and I’m not sure what to look for.
> >
> > Has anybody deployed such a setup / are there limitations I should know about?
> >
> > Thank you!
> >
> > Jason Anderson
> >
> > DevOps Lead, Chameleon
> >
> > ---
> >
> > Department of Computer Science, University of Chicago
> > Mathematics and Computer Science, Argonne National Laboratory
> > jasonanderson at uchicago.edu

--
Slawek Kaplonski
Principal Software Engineer
Red Hat
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: This is a digitally signed message part.
URL: 
From skaplons at redhat.com Fri Apr 9 06:57:33 2021
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Fri, 09 Apr 2021 08:57:33 +0200
Subject: [neutron][all] Tempest jobs running on rocky and queens branches are broken
Message-ID: <4338832.CYQXJBBLPY@p1>

Hi,

I noticed it mostly in the neutron jobs, but it seems that it's also true for other projects, for jobs which still run on Ubuntu 16.04. In Neutron's case those are all jobs on the stable/rocky and stable/queens branches. Due to [1] those jobs will end up with POST_FAILURE. So please don't recheck your patches if you have such errors until that bug is fixed. I think that gmann has a fix or is working on one.

[1] https://bugs.launchpad.net/devstack/+bug/1923042

--
Slawek Kaplonski
Principal Software Engineer
Red Hat
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: This is a digitally signed message part.
URL: 
From cjeanner at redhat.com Fri Apr 9 07:09:36 2021
From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=)
Date: Fri, 9 Apr 2021 09:09:36 +0200
Subject: [TripleO] Xena PTG schedule please review
In-Reply-To: 
References: 
Message-ID: <751d3ecf-b977-0557-de5f-7390e327db1d@redhat.com>

Hey, so far so good, my 2 slots are OK.

Cheers,

C.

On 4/7/21 6:24 PM, Marios Andreou wrote:
> Hello TripleO o/
>
> Thanks again to everybody who has volunteered to lead a session for
> the coming Xena TripleO project teams gathering.
>
> I've had a go at the agenda [1] trying to keep it to max 4 or 5
> sessions per day with some breaks.
>
> Please review the slot assigned for your session at [1]. If that time
> is not ok then please let me know as soon as possible and indicate if
> you want it later or earlier or on any other day. If you've decided
> the session no longer makes sense then also please tell me and we can
> move things around accordingly to finish earlier.
> I'd like to finalise the schedule by next Monday 12 April which is a
> week before PTG. We can and likely will make changes after this date
> but last minute changes are best avoided to allow folks to schedule
> their PTG attendance across projects.
>
> Thanks everybody for your help! Looking forward to interesting
> presentations and discussions as always
>
> regards, marios
>
> [1] https://etherpad.opendev.org/p/tripleo-ptg-xena

--
Cédric Jeanneret (He/Him/His)
Sr. Software Engineer - OpenStack Platform
Deployment Framework TC
Red Hat EMEA
https://www.redhat.com/

-------------- next part --------------
A non-text attachment was scrubbed...
Name: OpenPGP_signature
Type: application/pgp-signature
Size: 840 bytes
Desc: OpenPGP digital signature
URL: 
From marios at redhat.com Fri Apr 9 07:10:11 2021
From: marios at redhat.com (Marios Andreou)
Date: Fri, 9 Apr 2021 10:10:11 +0300
Subject: [TripleO] Xena PTG schedule please review
In-Reply-To: 
References: <519a70c1-1401-52e2-ae06-6be47e0e2c96@redhat.com> 
Message-ID: 

On Fri, Apr 9, 2021 at 9:46 AM Michele Baldessari wrote:
>
> On Fri, Apr 09, 2021 at 08:27:55AM +0200, Carlos Goncalves wrote:
> > On Fri, Apr 9, 2021 at 12:17 AM Steve Baker wrote:
> >
> > > My Tuesday Baremetal 1510-1550 slot is ok, but it would be better for me
> > > if it was earlier in the day. I'll probably make more sense at 1am than 3am
> > > :)

Ouch, sorry Steve, and thank you for participating despite the bad time difference for you! Yes, we can make this change, see below.

> > > Could I maybe swap with NETWORKING: 1300-1340?
> > >
> > Fine with me.
> > Michele, Dan?
>
> Totally fine by me

Great, thanks folks - this works well actually, since Dan S. already indicated (in another reply to me) that your current slot (1300-1340 UTC) is too early (like 5 am), so moving it to the later slot should work better for him too.

I have just updated the schedule, so on Tuesday 20 we have Baremetal sbaker @ 1300-1340 and then the networking/bgp/frr folks at 1510-1550.

thank you!

regards, marios

> > > On 8/04/21 4:24 am, Marios Andreou wrote:
> > >
> > > Hello TripleO o/
> > >
> > > Thanks again to everybody who has volunteered to lead a session for
> > > the coming Xena TripleO project teams gathering.
> > >
> > > I've had a go at the agenda [1] trying to keep it to max 4 or 5
> > > sessions per day with some breaks.
> > >
> > > Please review the slot assigned for your session at [1]. If that time
> > > is not ok then please let me know as soon as possible and indicate if
> > > you want it later or earlier or on any other day. If you've decided
> > > the session no longer makes sense then also please tell me and we can
> > > move things around accordingly to finish earlier.
> > >
> > > I'd like to finalise the schedule by next Monday 12 April which is a
> > > week before PTG. We can and likely will make changes after this date
> > > but last minute changes are best avoided to allow folks to schedule
> > > their PTG attendance across projects.
> > >
> > > Thanks everybody for your help!
> > > Looking forward to interesting presentations and discussions as always
> > >
> > > regards, marios
> > >
> > > [1] https://etherpad.opendev.org/p/tripleo-ptg-xena

> --
> Michele Baldessari
> C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D

From hberaud at redhat.com Fri Apr 9 07:48:56 2021
From: hberaud at redhat.com (Herve Beraud)
Date: Fri, 9 Apr 2021 09:48:56 +0200
Subject: [Release-job-failures] Release of openstack/eslint-config-openstack for ref refs/tags/4.1.0 failed
In-Reply-To: <20210408165036.6tkcfwwix5ms3ig4@yuggoth.org>
References: <20210408165036.6tkcfwwix5ms3ig4@yuggoth.org>
Message-ID: 

Thanks Jeremy for your update. I'll try to discuss with the project team to see if a bugfix (4.1.1) version fits well for them.

Anyway, the previously proposed fix seems to have helped us. We didn't face the max retry issue anymore; indeed, during the latest execution we faced a "post failure", so our job went further.

http://lists.openstack.org/pipermail/release-job-failures/2021-April/001528.html

On Thu, 8 Apr 2021 at 18:53, Jeremy Stanley wrote:
> On 2021-04-08 11:54:16 +0200 (+0200), Herve Beraud wrote:
> [...]
> > I proposed a patch to move to nodejs10 all our projects that depend on
> > nodejs:
> >
> > https://review.opendev.org/c/openstack/project-config/+/785353
> >
> > When this patch is merged, I think that this job could be reenqueued.
>
> I reenqueued the tag, but release-openstack-javascript failed on a
> different problem. NPM complains that there's already an
> eslint-config-openstack 4.0.1 published which can't be overwritten,
> but the tag is for 4.1.0... someone should probably update the
> version parameter in eslint-config-openstack's package.json file,
> which means it'll need another release tagged anyway (4.1.1?).
> --
> Jeremy Stanley

--
Hervé Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/
https://twitter.com/4383hberaud
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From marcin.juszkiewicz at linaro.org Fri Apr 9 07:52:17 2021
From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz)
Date: Fri, 9 Apr 2021 09:52:17 +0200
Subject: [kolla] RHEL based container image
In-Reply-To: 
References: 
Message-ID: 

On 08.04.2021 at 22:04, Tony Liu wrote:

> Given [1], RHEL-based containers are supported on RHEL 8 by Kolla.
> Where can I get RHEL-based container images? I see CentOS- and Ubuntu-
> based images on docker hub, but can't find RHEL-based ones.
>
> I have a RHEL subscription and I want to know if it's possible to use
> Kolla to deploy OpenStack. It's supposed to be yes, based on the doc. I
> just want to know where I can get container images. The container
> images from RedHat are only for TripleO.
We (as a project) do not build RHEL-based container images. During the PTG we will discuss dropping the support from the code [1].

Please use Wallaby CentOS images instead. They are using CentOS Stream 8, so the only difference you would get is which container image was used as a base.

1. https://review.opendev.org/c/openstack/kolla/+/785569

From mark at stackhpc.com Fri Apr 9 07:52:26 2021
From: mark at stackhpc.com (Mark Goddard)
Date: Fri, 9 Apr 2021 08:52:26 +0100
Subject: [kolla] RHEL based container image (Tony Liu)
In-Reply-To: 
References: 
Message-ID: 

On Fri, 9 Apr 2021 at 02:04, Tony Liu wrote:
>
> I have a RHEL subscription and I want to know if it's possible
> to use Kolla to deploy OpenStack. It's supposed to be yes, based
> on the doc. I just want to know where I can get container
> images. The container images from RedHat are only for TripleO.

Hi Tony,

RHEL support is one of those things that was added a long time ago, but is not tested in CI. It is therefore likely to break at any point, especially now that TripleO does not use Kolla images. I know that RH were pushing the UBI images, and I don't think we've actively done anything to move to those.

We've added the future of RHEL support as a discussion topic for the PTG [1]. If you are interested, I recommend that you attend, or at least add some notes to the Etherpad.

Thanks,
Mark

[1] https://etherpad.opendev.org/p/kolla-xena-ptg

> [...]
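(For anyone following along: the published CentOS images live on Docker Hub under the kolla namespace, named <base>-<install_type>-<service>, with the release name as the tag. A rough sketch - the exact image names and tags here are an assumption, so check Docker Hub for what is actually published:

  docker pull kolla/centos-binary-keystone:victoria
  docker pull kolla/centos-source-neutron-server:victoria

In a normal deployment, kolla-ansible resolves these names for you from globals.yml settings such as kolla_base_distro and openstack_release, so pulling by hand is mostly useful for ad-hoc inspection.)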
> > > > John > > > > > > > > > > __Please let me know if you or your project would like to participate > > > and if you have a time slot difference from the above.__ > > > > > > Thanks and Best Regards, > > > Ildikó > > > (IRC ildikov on Freenode) > > > > > > > > > > > > > > -------------- next part -------------- > > An HTML attachment was scrubbed... > > URL: > discuss/attachments/20210408/ac34d099/attachment-0001.html> > > > > ------------------------------ > > > > Message: 5 > > Date: Thu, 8 Apr 2021 20:04:04 +0000 > > From: Tony Liu > > To: "openstack-discuss at lists.openstack.org" > > > > Subject: [kolla] RHEL based container image > > Message-ID: > > > .outlook.com> > > > > Content-Type: text/plain; charset="us-ascii" > > > > Hi, > > > > Given [1], RHEL based container is supported on RHEL 8 by Kolla. > > Where can I get RHEL based container images? I see CentOS and Ubuntu > > based images on docker hub, but can't find RHEL based images. > > > > [1] https://docs.openstack.org/kolla-ansible/ussuri/user/support- > > matrix.html > > > > > > Thanks! > > Tony > > > > > > > > > > ------------------------------ > > > > Subject: Digest Footer > > > > _______________________________________________ > > openstack-discuss mailing list > > openstack-discuss at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss > > > > > > ------------------------------ > > > > End of openstack-discuss Digest, Vol 30, Issue 56 > > ************************************************* From mark at stackhpc.com Fri Apr 9 07:56:52 2021 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 9 Apr 2021 08:56:52 +0100 Subject: [kolla] RHEL based container image In-Reply-To: References: Message-ID: On Fri, 9 Apr 2021 at 08:52, Marcin Juszkiewicz wrote: > > W dniu 08.04.2021 o 22:04, Tony Liu pisze: > > > Given [1], RHEL based container is supported on RHEL 8 by Kolla. > > Where can I get RHEL based container images? I see CentOS and Ubuntu > > based images on docker hub, but can't find RHEL based images. > > > > I have RHEL subscription and I want to know if it's possible to use > > Kolla deploy OpenStack. It's supposed to be yes based on the doc. I > > just want to know where I can get container images. The container > > image on RedHat is only for TripleO. > > We (as a project) do not build RHEL based container images. During PTG > we will discuss dropping it from code [1]. > > Please use Wallaby CentOS images instead. They are using CentOS Stream 8 > so the only difference you would get is what container image was used as > a base. > > 1. https://review.opendev.org/c/openstack/kolla/+/785569 > For those not at the coal face... Wallaby isn't released yet - please use Victoria or earlier! From hberaud at redhat.com Fri Apr 9 08:49:40 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 9 Apr 2021 10:49:40 +0200 Subject: [telemetry][cyborg][heat][monasca][tacker][keystone][release] Last minute RC to land fixes Message-ID: Hello teams listed above, We identified fixes and significant changes in your repos so we proposed last minute RC to allow you to release them before the final release. Your teams patches are available here: https://review.opendev.org/q/topic:%22wallaby-final-rc%22 Deadline is today, please validate them ASAP to have a chance to see these fixes released. Patches without response from PTLs/liaisons will be abandoned. After this point final release for RC projects will be started. 
Notice that RC changes should be on stable/wallaby and not on master, all projects are now branched so your master branches are now for Xena purpose. Thank you for your understanding. -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From janders at redhat.com Fri Apr 9 09:14:19 2021 From: janders at redhat.com (Jacob Anders) Date: Fri, 9 Apr 2021 19:14:19 +1000 Subject: [ironic] APAC-Europe SPUC time? In-Reply-To: References: Message-ID: Hi Dmitry, Thanks for your email and apologies for slow reply. Keeping the APAC SPUC at 10am UTC would work well for me. The only concern is it may fall in the lunch time slot in Europe but that might actually be a good thing - we can do lunch-dinner sessions and talk food if we want to :) @Riccardo what do you reckon? Cheers, Jacob On Thu, Apr 8, 2021 at 12:01 AM Dmitry Tantsur wrote: > Hi folks! > > The initial SPUC datetime was for 10am UTC, which was 11am for us in > central Europe, now is supposed to be 12pm. On one hand, I find it more > convenient to have SPUC at 11am still, on the other - I have German classes > at this time for a few months starting mid-April. > > What do you think? Should we keep it in UTC, i.e. 12pm for us in Europe? > Will that work for you Jacob? > > Dmitry > > -- > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > O'Neill > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Fri Apr 9 09:40:29 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Fri, 9 Apr 2021 11:40:29 +0200 Subject: [neutron] Neutron, nftables support and other fantastic beasts Message-ID: Hello Neutrinos: During Wallaby I've been working on enabling "nftables" support in Neutron. The goal was to use the new Netfilter framework replacing the legacy tools ("iptables", "ip6tables", "arptables" and "ebtables"). Because each namespace has its own Netfilter process, isolated from other namespaces, the migration process could be segmented in several tasks: dnat, fip, router, dhcp, metadata, Linux Bridge FW and OVS hybrid FW (I think I'm not missing anything here). When swapping to the new "nftables" framework, we can use the legacy API tools provided. Those tools provide a smooth transition to the new tooling (we found some differences that are now solved). That means we can keep the current code while using "nftables". 
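As a concrete illustration (mine, not taken from the patches themselves): the compat binaries accept the legacy syntax unchanged, and iptables-translate prints the equivalent native rule without applying it.

```
# Legacy syntax, executed by the nftables-backed compat binary:
iptables-nft -A INPUT -p tcp --dport 22 -j ACCEPT

# iptables-translate shows the native-API equivalent of the same rule:
iptables-translate -A INPUT -p tcp --dport 22 -j ACCEPT
# => nft add rule ip filter INPUT tcp dport 22 counter accept
```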
Please read [3] before the next paragraph; it explains the three available "Netfilter" framework alternatives.

I started creating an "nft" (the "nftables" native binary) parser [1] to implement an NFtablesManager class, analogous to IPtablesManager. But soon I found that the transition to the new API is not that easy. This is not only a matter of creating the equivalent rule in the "nft" API but also of considering how those rules are handled in "nftables". Other problems found when using the new "nft" API:
- The "--checksum-fill" command used in the OVN metadata and DHCP namespaces has no equivalent in "nft". That means old DHCP servers that calculate the packet checksum incorrectly, or DPDK environments, won't work correctly.
- The "ipset" tool, used to group IP addresses and reduce the LB FW rule size, can be converted into a "map" [3]. The problem is that maps are only understood by the new API, not by the "nftables" binaries using the legacy API.

In a nutshell, what is the current status? We support (a) legacy tools and (b) "nftables" binaries with legacy API. This is the list of patches enabling the second option:
- https://review.opendev.org/c/openstack/neutron/+/784913: fixes a problem affecting the LB FW when "ipset" is disabled (merged).
- https://review.opendev.org/c/openstack/neutron/+/785177: reorders the "ebtables" rules and prevents execution error 4 with empty chains.
- https://review.opendev.org/c/openstack/neutron/+/785144: this patch, on top of the other two, creates two new neutron-tempest-plugin CI jobs, based on "linuxbridge" and "openvswitch-iptables_hybrid", to test the execution with the new binaries.
- https://review.opendev.org/c/openstack/neutron/+/775413: this patch tests what is implemented in the previous one, but runs those jobs in the "check" queue (it is a DNM patch just for testing).

As for the third option, supporting the native "nft" API, I don't know if we currently have the resources (time) or the need for it. This could be discussed at the next PTG and in this thread too.

Regards.

[1] https://review.opendev.org/c/openstack/neutron/+/759874
[2] https://review.opendev.org/c/openstack/neutron/+/785137/3/doc/source/admin/deploy-lb.rst
[3] https://review.opendev.org/c/openstack/neutron/+/775413/10/neutron/agent/linux/ipset_manager.py
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zhangbailin at inspur.com Fri Apr 9 10:05:54 2021
From: zhangbailin at inspur.com (Brin Zhang (张百林))
Date: Fri, 9 Apr 2021 10:05:54 +0000
Subject: Re: [telemetry][cyborg][heat][monasca][tacker][keystone][release] Last minute RC to land fixes
In-Reply-To: References: Message-ID:

Thanks Herve Beraud, +1 for this patch.

brinzhang
Inspur Electronic Information Industry Co.,Ltd.

From: Herve Beraud [mailto:hberaud at redhat.com]
Sent: 9 April 2021 16:50
To: openstack-discuss
Subject: [telemetry][cyborg][heat][monasca][tacker][keystone][release] Last minute RC to land fixes

Hello teams listed above, We identified fixes and significant changes in your repos so we proposed last minute RC to allow you to release them before the final release. Your teams patches are available here: https://review.opendev.org/q/topic:%22wallaby-final-rc%22 Deadline is today, please validate them ASAP to have a chance to see these fixes released. Patches without response from PTLs/liaisons will be abandoned. After this point final release for RC projects will be started.
Notice that RC changes should be on stable/wallaby and not on master, all projects are now branched so your master branches are now for Xena purpose. Thank you for your understanding. -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Fri Apr 9 12:28:11 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 9 Apr 2021 14:28:11 +0200 Subject: [QA][Release-job-failures] Release of openstack/eslint-config-openstack for ref refs/tags/4.1.0 failed In-Reply-To: References: <20210408165036.6tkcfwwix5ms3ig4@yuggoth.org> Message-ID: Adding the QA team (they own eslint-config-openstack) to highlight this topic. Le ven. 9 avr. 2021 à 09:48, Herve Beraud a écrit : > Thanks Jeremy for your update. > > I'll try to discuss with the project team to see if a bugfix (4.1.1) > version fits well for them. > > Anyway, the previous proposed fix seems to have helped us. We didn't face > the max retry issue anymore, indeed, during the latest execution we faced a > "post failure" so our job went further. > > > http://lists.openstack.org/pipermail/release-job-failures/2021-April/001528.html > > Le jeu. 8 avr. 2021 à 18:53, Jeremy Stanley a écrit : > >> On 2021-04-08 11:54:16 +0200 (+0200), Herve Beraud wrote: >> [...] >> > I proposed a patch to move to nodejs10 all our projects that depend on >> > nodejs: >> > >> > https://review.opendev.org/c/openstack/project-config/+/785353 >> > >> > When this patch will be merged I think that this job could be >> reenqueued. >> >> I reenqueued the tag, but release-openstack-javascript failed on a >> different problem. NPM complains that there's already a >> eslint-config-openstack 4.0.1 published which can't be overwritten, >> but the tag is for 4.1.0... someone should probably update the >> version parameter in eslint-config-openstack's package.json file, >> which means it'll need another release tagged anyway (4.1.1?). 
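(Illustration only: the bump described above would be a one-line change to the version field in the project's package.json, roughly as follows; the value is the hypothetical next tag, not an actual commit.)

```json
{
  "name": "eslint-config-openstack",
  "version": "4.1.1"
}
```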
>> -- >> Jeremy Stanley >> > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Apr 9 13:27:29 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 09 Apr 2021 08:27:29 -0500 Subject: [neutron][all] Tempest jobs running on rocky and queens branches are broken In-Reply-To: <4338832.CYQXJBBLPY@p1> References: <4338832.CYQXJBBLPY@p1> Message-ID: <178b6d0ef1c.12a602b48166191.5024862720479411475@ghanshyammann.com> ---- On Fri, 09 Apr 2021 01:57:33 -0500 Slawek Kaplonski wrote ---- > Hi, > > I noticed it mostly in the neutron jobs but it seems that it's true also for other projects for jobs which still runs on Ubuntu 16.04. > I Neutron case those are all jobs on stable/rocky and stable/queens branches. > > Due to [1] those jobs will end up with POST_FAILURE. So please don't recheck Your patches if You have such errors until that bug will be fixed. > I think that gmann has or is working on fix for that. Yeah, making stackviz not to fail job is up, please wait until those land. 
- https://review.opendev.org/q/Ifee04f28ecee52e74803f1623aba5cfe5ee5ec90 -gmann > > [1] https://bugs.launchpad.net/devstack/+bug/1923042 > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat From pramchan at yahoo.com Fri Apr 9 14:37:55 2021 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Fri, 9 Apr 2021 14:37:55 +0000 (UTC) Subject: [Interop][Refstack] this Friday meeting In-Reply-To: <1443143388.3156237.1617300716714@mail.yahoo.com> References: <1788dd204ea.d4e4ba611415310.2624563204106293527@ghanshyammann.com> <0B782D91-D8D9-4DED-8606-635E18D6098F@openstack.org> <1443143388.3156237.1617300716714@mail.yahoo.com> Message-ID: <633882697.270095.1617979075678@mail.yahoo.com> Hi all, Have a vaccine apptmt at 9.40 PDT. Depending on the schedule may get late or may miss. Wanted to get some feedback  on testing results but still have 1 more week and next one will depend on where I am and try catch up if I miss on etherpad as what is the results and where we are Wallaby on way  next  week and Xena  release planned for Ocober https://releases.openstack.org/wallaby/schedule.html#w-final What's new in Wallaby and what Tempest testing get impacted in vote and add-ons. ThanksPrakashFor InteropWG Sent from Yahoo Mail on Android On Thu, Apr 1, 2021 at 11:11 AM, prakash RAMCHANDRAN wrote: Looks like we can skip this Friday call and sure Arkady - lets cancel it. If you have something urgent we can talk offline - Thanks Prakash On Thursday, April 1, 2021, 11:06:25 AM PDT, Vida Haririan wrote: Hi Arkady, Friday is a company holiday and I will be ooo. Thanks,Vida On Thu, Apr 1, 2021 at 11:10 AM Jimmy McArthur wrote: I forgot this is a holiday. Same on my side. Thanks, Jimmy > On Apr 1, 2021, at 9:25 AM, Ghanshyam Mann wrote: > >  > ---- On Thu, 01 Apr 2021 09:15:21 -0500 Kanevsky, Arkady wrote ---- >> >> Team, >> This Friday is Good Friday and some people have a day off. >> Should we cancel this week meeting? >> Please, respond so we can see if we will have quorum. > > Thanks Arkady, > > I will be off from work and would not be able to join. > > -gmann > >> Thanks, >> Arkady >> >> Arkady Kanevsky, Ph.D. >> SP Chief Technologist & DE >> Dell Technologies office of CTO >> Dell Inc. One Dell Way, MS PS2-91 >> Round Rock, TX 78682, USA >> Phone: 512 7204955 >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pramchan at yahoo.com Fri Apr 9 14:43:33 2021 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Fri, 9 Apr 2021 14:43:33 +0000 (UTC) Subject: [Interop][Refstack] this Friday meeting In-Reply-To: <633882697.270095.1617979075678@mail.yahoo.com> References: <1788dd204ea.d4e4ba611415310.2624563204106293527@ghanshyammann.com> <0B782D91-D8D9-4DED-8606-635E18D6098F@openstack.org> <1443143388.3156237.1617300716714@mail.yahoo.com> <633882697.270095.1617979075678@mail.yahoo.com> Message-ID: <481410903.269918.1617979413931@mail.yahoo.com> Typo:Wallaby not Vote. Tha's a different topic related to TC voting and who are contesting for TC? Plus PTG plans on 19th and beyond,  coverage and was planning to attend besides Interop, possibly Airship and Triple-O changes wrt Ironic. It's impact on Zun and COEs in OpenStack K over O. ThxPrakash Sent from Yahoo Mail on Android On Fri, Apr 9, 2021 at 7:37 AM, prakash RAMCHANDRAN wrote: Hi all, Have a vaccine apptmt at 9.40 PDT. Depending on the schedule may get late or may miss. 
Wanted to get some feedback  on testing results but still have 1 more week and next one will depend on where I am and try catch up if I miss on etherpad as what is the results and where we are Wallaby on way  next  week and Xena  release planned for Ocober https://releases.openstack.org/wallaby/schedule.html#w-final What's new in Wallaby and what Tempest testing get impacted in vote and add-ons. ThanksPrakashFor InteropWG Sent from Yahoo Mail on Android On Thu, Apr 1, 2021 at 11:11 AM, prakash RAMCHANDRAN wrote: Looks like we can skip this Friday call and sure Arkady - lets cancel it. If you have something urgent we can talk offline - Thanks Prakash On Thursday, April 1, 2021, 11:06:25 AM PDT, Vida Haririan wrote: Hi Arkady, Friday is a company holiday and I will be ooo. Thanks,Vida On Thu, Apr 1, 2021 at 11:10 AM Jimmy McArthur wrote: I forgot this is a holiday. Same on my side. Thanks, Jimmy > On Apr 1, 2021, at 9:25 AM, Ghanshyam Mann wrote: > >  > ---- On Thu, 01 Apr 2021 09:15:21 -0500 Kanevsky, Arkady wrote ---- >> >> Team, >> This Friday is Good Friday and some people have a day off. >> Should we cancel this week meeting? >> Please, respond so we can see if we will have quorum. > > Thanks Arkady, > > I will be off from work and would not be able to join. > > -gmann > >> Thanks, >> Arkady >> >> Arkady Kanevsky, Ph.D. >> SP Chief Technologist & DE >> Dell Technologies office of CTO >> Dell Inc. One Dell Way, MS PS2-91 >> Round Rock, TX 78682, USA >> Phone: 512 7204955 >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Apr 9 15:59:58 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 09 Apr 2021 10:59:58 -0500 Subject: [dev][cinder][keystone] Properly consuming system-scope in cinder In-Reply-To: References: <20210129172347.7wi3cv3gnneb46dj@localhost> Message-ID: <178b75c896c.10c83cd90176713.774777712214605451@ghanshyammann.com> ---- On Thu, 18 Feb 2021 17:53:06 -0600 Lance Bragstad wrote ---- > Brian and I had a discussion about all of this yesterday and we revisited the idea of a project-less URL template. This would allow us to revisit system-scope support for Wallaby under the assumption the client handles project IDs properly for system-scoped requests and cinder relaxes its project ID validation for system-scoped contexts. > > It's possible to get a cinder endpoint in the service catalog if you create a separate endpoint without project ID templating in the URL. I hacked this together in devstack [0] using a couple of changes to python-cinderclient [1] and cinder's API [2]. After that, I was able to list all volumes in a deployment as a system-administrator (using a system-scoped admin token) [3]. > The only hiccup I hit was that I was supplying two endpoints for the volumev3 service. If the endpoint without project ID templating appears first in the catalog for project-scoped tokens, then requests to cinder will fail because the project ID isn't in the URL. Remember, the only cinder endpoint in the catalog for system-scoped tokens was the one without templating, so this issue doesn't appear there. Also, we would need a separate patch to the tempest volume client before we could add any system-scope testing there. > > Thoughts? To solve the issue of which service catalog Tempest service clients should query, We can register the new endpoint with new name. 
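A sketch of what that registration could look like (hostname, port and region below are placeholders, not from the thread):

```
# Sketch only; host/port/region are assumptions.
# New project-less endpoint under the catalog name Tempest queries:
openstack endpoint create --region RegionOne volumev3 public \
    http://controller:8776/v3
# The existing templated endpoint, which would move to a legacy name,
# currently looks like:
#   http://controller:8776/v3/$(project_id)s
```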
In the nova case, when we moved to URLs without the project-id, we moved the old endpoint (with project_id) to the name 'compute_legacy' and added the new endpoint (without project-id) as 'compute', which is the default service catalog entry for Tempest to query[2]. We can do the same for cinder: the new endpoint gets the name 'volumev3' (the default catalog entry for cinder in Tempest) and the old one can be moved to 'volumev3_legacy'. And to keep testing the old endpoint, we can add a separate job that queries the old endpoint, with everything else defaulting to the new one.

[1] https://github.com/openstack/devstack/blob/e53142ed0d314f07d974a104005be2120056d629/lib/nova#L357-L363
[2] https://github.com/openstack/tempest/blob/fa0a40b8bbc4f7e93a976f5575f8ad7c1890e0f4/tempest/config.py#L331

-gmann

> > [0] https://review.opendev.org/c/openstack/devstack/+/776520
> > [1] https://review.opendev.org/c/openstack/python-cinderclient/+/776469
> > [2] https://review.opendev.org/c/openstack/cinder/+/776468
> > [3] http://paste.openstack.org/show/802786/
> >
> > On Wed, Feb 17, 2021 at 12:11 PM Lance Bragstad wrote:
> Circling back on this topic.
> I marked all the patches that incorporate system-scope support as WIP [0]. I think we can come back to these after we have a chance to decouple project IDs from cinder's API in Xena. I imagine that's going to be a pretty big change so we can push those reviews to the back burner for now.
>
> In the meantime, I reproposed all patches that touch the ADMIN_OR_OWNER rule and updated them to use the member and reader roles [1]. I also removed any system-scope policies from those patches. The surface area of these changes is a lot less than what we were originally expecting to get done for Wallaby. These changes should at least allow operators to use the member and reader roles on projects consistently with cinder when Wallaby goes out the door.
>
> To recap, this would mean anyone with the admin role on a project is still considered a system administrator in cinder (we can try and fix this in Xena). Operators can now use the member role to denote owners and give users the reader role on a project and those users shouldn't be able to make writable changes within cinder.
>
> [0] https://review.opendev.org/q/project:openstack/cinder+topic:secure-rbac+label:Workflow%253C0
> [1] https://review.opendev.org/q/project:openstack/cinder+topic:secure-rbac+label:Workflow%253E-1
>
> On Fri, Jan 29, 2021 at 11:24 AM Gorka Eguileor wrote:
> On 28/01, Lance Bragstad wrote:
> > Hey folks,
> >
> > As I'm sure some of the cinder folks are aware, I'm updating cinder
> > policies to include support for some default personas keystone ships with.
> > Some of those personas use system-scope (e.g., system-reader and
> > system-admin) and I've already proposed a series of patches that describe
> > what those changes look like from a policy perspective [0].
> >
> > The question now is how we test those changes. To help guide that decision,
> > I worked on three different testing approaches. The first was to continue
> > testing policy using unit tests in cinder with mocked context objects. The
> > second was to use DDT with keystonemiddleware mocked to remove a dependency
> > on keystone. The third also used DDT, but included changes to update
> > NoAuthMiddleware so that it wasn't as opinionated about authentication or
> > authorization. I brought each approach up in the cinder meeting this week
> > where we discussed a fourth approach, doing everything in tempest.
I > > summarized all of this in an etherpad [1] > > > > Up to yesterday morning, the only approach I hadn't tinkered with manually > > was tempest. I spent some time today figuring that out, resulting in a > > patch to cinderlib [2] to enable a protection test job, and > > cinder_tempest_plugin [3] that adds the plumbing and some example tests. > > > > In the process of implementing support for tempest testing, I noticed that > > service catalogs for system-scoped tokens don't contain cinder endpoints > > [4]. This is because the cinder endpoint contains endpoint templating in > > the URL [5], which keystone will substitute with the project ID of the > > token, if and only if the catalog is built for a project-scoped token. > > System and domain-scoped tokens do not have a reasonable project ID to use > > in this case, so the templating is skipped, resulting in a cinder service > > in the catalog without endpoints [6]. > > > > This cascades in the client, specifically tempest's volume client, because > > it can't find a suitable endpoint for request to the volume service [7]. > > > > Initially, my testing approaches were to provide examples for cinder > > developers to assess the viability of each approach before committing to a > > protection testing strategy. But, the tempest approach highlighted a larger > > issue for how we integrate system-scope support into cinder because of the > > assumption there will always be a project ID in the path (for the majority > > of the cinder API). I can think of two ways to approach the problem, but > > I'm hoping others have more. > > > > Hi Lance, > > Sorry to hear that the Cinder is giving you such trouble. > > > First, we remove project IDs from cinder's API path. > > > > This would be similar to how nova (and I assume other services) moved away > > from project-specific URLs (e.g., /v3/%{project_id}s/volumes would become > > /v3/volumes). This would obviously require refactoring to remove any > > assumptions cinder has about project IDs being supplied on the request > > path. But, this would force all authorization information to come from the > > context object. Once a deployer removes the endpoint URL templating, the > > endpoints will populate in the cinder entry of the service catalog. Brian's > > been helping me understand this and we're unsure if this is something we > > could even do with a microversion. I think nova did it moving from /v2/ to > > /v2.0/, which was technically classified as a major bump? This feels like a > > moon shot. > > > > In my opinion such a change should not be treated as a microversion and > would require us to go into v4, which is not something that is feasible > in the short term. > > > > Second, we update cinder's clients, including tempest, to put the project > > ID on the URL. > > > > After we update the clients to append the project ID for cinder endpoints, > > we should be able to remove the URL templating in keystone, allowing cinder > > endpoints to appear in system-scoped service catalogs (just like the first > > approach). Clients can use the base URL from the catalog and append the > > I'm not familiar with keystone catalog entries, so maybe I'm saying > something stupid, but couldn't we have multiple entries? A > project-specific URL and another one for the project and system scoped > requests? 
> > I know it sounds kind of hackish, but if we add them in the right order, > first the project one and then the new one, it would probably be > backward compatible, as older clients would get the first endpoint and > new clients would be able to select the right one. > > > admin project ID before putting the request on the wire. Even though the > > request has a project ID in the path, cinder would ignore it for > > system-specific APIs. This is already true for users with an admin role on > > a project because cinder will allow you to get volumes in one project if > > you have a token scoped to another with the admin role [8]. One potential > > side-effect is that cinder clients would need *a* project ID to build a > > request, potentially requiring another roundtrip to keystone. > > What would happen in this additional roundtrip? Would we be converting > provided project's name into its UUID? > > If that's the case then it wouldn't happen when UUIDs are being > provided, so for cases where this extra request means a performance > problem they could just provide the UUID. > > > > > Thoughts? > > Truth is that I would love to see the Cinder API move into URLs without > the project id as well as move out everything from contrib, but that > doesn't seem like a realistic piece of work we can bite right now. > > So I think your second proposal is the way to go. > > Thanks for all the work you are putting into this. > > Cheers, > Gorka. > > > > > > [0] https://review.opendev.org/q/project:openstack/cinder+topic:secure-rbac > > [1] https://etherpad.opendev.org/p/cinder-secure-rbac-protection-testing > > [2] https://review.opendev.org/c/openstack/cinderlib/+/772770 > > [3] https://review.opendev.org/c/openstack/cinder-tempest-plugin/+/772915 > > [4] http://paste.openstack.org/show/802117/ > > [5] http://paste.openstack.org/show/802097/ > > [6] > > https://opendev.org/openstack/keystone/src/commit/c239cc66615b41a0c09e031b3e268c82678bac12/keystone/catalog/backends/sql.py > > [7] http://paste.openstack.org/show/802092/ > > [8] http://paste.openstack.org/show/802118/ > > From marios at redhat.com Fri Apr 9 16:02:26 2021 From: marios at redhat.com (Marios Andreou) Date: Fri, 9 Apr 2021 19:02:26 +0300 Subject: [TripleO] stable/wallaby branching Message-ID: Hello TripleO, quick update on the plan for stable/wallaby branching. The goal is to release tripleo stable/wallaby just after PTG i.e. last week of April. The tripleo-ci team have spent the previous sprint preparing and we now have the integration and component pipelines in place [1][2]. As of today we should also have the upstream check/gate multinode branchful jobs. We are planning to use this current sprint to resolve issues and ensure we have the CI coverage in place so we can safely release all the tripleo things. As we usually do, we are going to first branch python-tripleoclient and tripleo-common so we can exercise and sanity check the CI jobs. The stable/wallaby for client and common will appear after we merge [3]. *** PLEASE AVOID *** posting patches to stable/wallaby python-tripleoclient or tripleo-common until the CI team has completed our testing. Basically until we are ready to create a stable/wallaby for all the tripleo things (which will be announced in due course). 
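For context, [3] is an openstack/releases change; a stable branch there is requested with a deliverable-file stanza of roughly the following shape. This is a hypothetical sketch, and the version used as the branch point is a placeholder, not the actual content of [3].

```yaml
# Hypothetical deliverables/wallaby/python-tripleoclient.yaml fragment:
branches:
  - name: stable/wallaby
    location: 15.0.0
```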
Obviously as always please speak up if you disagree with any of the above or if something doesn't make sense or if you have any concerns about the proposed timings regards, marios [1] https://review.rdoproject.org/zuul/builds?pipeline=openstack-periodic-integration-stable1 [2] https://review.rdoproject.org/zuul/builds?pipeline=openstack-component-tripleo [3] https://review.opendev.org/c/openstack/releases/+/785670 From marios at redhat.com Fri Apr 9 16:18:24 2021 From: marios at redhat.com (Marios Andreou) Date: Fri, 9 Apr 2021 19:18:24 +0300 Subject: [TripleO] next irc meeting Tuesday Apr 13 @ 1400 UTC in #tripleo Message-ID: Reminder that the next TripleO irc meeting is: ** Tuesday 13 April at 1400 UTC in #tripleo ** ** https://wiki.openstack.org/wiki/Meetings/TripleO ** ** https://etherpad.opendev.org/p/tripleo-meeting-items ** Please add anything you want to highlight at https://etherpad.opendev.org/p/tripleo-meeting-items This can be recently completed things, ongoing review requests, blocking issues, or anything else tripleo you want to share. Our last meeting was on Mar 30 - you can find the logs there http://eavesdrop.openstack.org/meetings/tripleo/2021/tripleo.2021-03-30-14.00.html Hope you can make it on Tuesday, regards, marios From tonyliu0592 at hotmail.com Fri Apr 9 16:45:45 2021 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Fri, 9 Apr 2021 16:45:45 +0000 Subject: [kolla] RHEL based container image In-Reply-To: References: Message-ID: Thank you Mark and Marcin for clarification! Will stay with Kolla and CentOS Stream. Tony > -----Original Message----- > From: Mark Goddard > Sent: Friday, April 9, 2021 12:57 AM > To: Marcin Juszkiewicz > Cc: openstack-discuss > Subject: Re: [kolla] RHEL based container image > > On Fri, 9 Apr 2021 at 08:52, Marcin Juszkiewicz > wrote: > > > > W dniu 08.04.2021 o 22:04, Tony Liu pisze: > > > > > Given [1], RHEL based container is supported on RHEL 8 by Kolla. > > > Where can I get RHEL based container images? I see CentOS and Ubuntu > > > based images on docker hub, but can't find RHEL based images. > > > > > > I have RHEL subscription and I want to know if it's possible to use > > > Kolla deploy OpenStack. It's supposed to be yes based on the doc. I > > > just want to know where I can get container images. The container > > > image on RedHat is only for TripleO. > > > > We (as a project) do not build RHEL based container images. During PTG > > we will discuss dropping it from code [1]. > > > > Please use Wallaby CentOS images instead. They are using CentOS Stream > > 8 so the only difference you would get is what container image was > > used as a base. > > > > 1. https://review.opendev.org/c/openstack/kolla/+/785569 > > > > For those not at the coal face... Wallaby isn't released yet - please > use Victoria or earlier! From mark at stackhpc.com Fri Apr 9 16:48:29 2021 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 9 Apr 2021 17:48:29 +0100 Subject: [kolla] PTL on holiday next week Message-ID: Hi, I'll be on holiday next week. Please keep in mind the current feature freeze and aim for stabilisation of the code and preparation for RC1 & branching. Let's aim to branch in the week beginning 19th April. Please also remember it's the PTG in the same week. Remember to add topics to the PTG Etherpad [1] as they come up in discussion or your thoughts. 
[1] https://etherpad.opendev.org/p/kolla-xena-ptg Thanks, Mark From mark at stackhpc.com Fri Apr 9 16:50:13 2021 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 9 Apr 2021 17:50:13 +0100 Subject: [kolla] PTG Message-ID: Hi, Just a reminder that it's the Kolla Xena PTG from 19th - 21st April. Anyone is welcome to attend, but please add your name to the Etherpad [1], and follow the instructions to sign up. If there is something you would like to discuss, please add it to the list of topics. Thanks, Mark [1] https://etherpad.opendev.org/p/kolla-xena-ptg From allison at openstack.org Fri Apr 9 19:06:52 2021 From: allison at openstack.org (Allison Price) Date: Fri, 9 Apr 2021 14:06:52 -0500 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: <1617820623.770226846@apps.rackspace.com> <71EDF897-DCCB-4063-81F7-88A8456F6F6B@openstack.org> Message-ID: Hi Julia, It looks like for Ironic, 19% of production environments are running it in production, 15% running it in testing, and 22% are interested. It’s a little down from 2019, but it was also a smaller sample size (2019: 331; 2020: 209). I am hoping to get a bigger turnout this year (tell all your friends!) so that we can get a better picture. Let me know if there is any other data you would like pulled. Thanks! Allison > On Apr 8, 2021, at 9:43 AM, Julia Kreger wrote: > > Hey Allison, > > Metrics would be awesome and I'm just looking for the key high level > adoption information as that is good to put into the presentation. > > -Julia > > On Wed, Apr 7, 2021 at 3:15 PM Allison Price wrote: >> >> Hi Julia, >> >> I see we haven’t pushed it live to openstack.org/analytics yet. I have pinged our team so that we can, but if you need metrics in the meantime, please let me know. >> >> Thanks! >> Allison >> >> >> >> >> >> On Apr 7, 2021, at 4:42 PM, Julia Kreger wrote: >> >> Related, Is there 2020 user survey data available? >> >> On Wed, Apr 7, 2021 at 12:40 PM helena at openstack.org >> wrote: >> >> >> Hello ptls, >> >> >> >> The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. >> >> >> >> If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. >> >> >> >> Let me know if you have any other questions! >> >> >> >> Thank you for your participation, >> >> Helena >> >> >> >> > From juliaashleykreger at gmail.com Fri Apr 9 19:28:00 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Fri, 9 Apr 2021 12:28:00 -0700 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: <1617820623.770226846@apps.rackspace.com> <71EDF897-DCCB-4063-81F7-88A8456F6F6B@openstack.org> Message-ID: Thanks Allison! Even telling friends doesn't really help, since we would be self-skewing. I guess part of the conundrum is it is easy for people not to really be fully aware of the extent of their usage and the mix of various projects under the hood. They know they get a star ship and it has warp engines, but they may not know the factory that turned out the starship. Only the geekiest might know those details. Anyway, I've been down this path before w/r/t the user survey. C'est la vie. Back to work! 
-Julia On Fri, Apr 9, 2021 at 12:07 PM Allison Price wrote: > > Hi Julia, > > It looks like for Ironic, 19% of production environments are running it in production, 15% running it in testing, and 22% are interested. It’s a little down from 2019, but it was also a smaller sample size (2019: 331; 2020: 209). I am hoping to get a bigger turnout this year (tell all your friends!) so that we can get a better picture. > > Let me know if there is any other data you would like pulled. > > Thanks! > Allison > > > > On Apr 8, 2021, at 9:43 AM, Julia Kreger wrote: > > > > Hey Allison, > > > > Metrics would be awesome and I'm just looking for the key high level > > adoption information as that is good to put into the presentation. > > > > -Julia > > > > On Wed, Apr 7, 2021 at 3:15 PM Allison Price wrote: > >> > >> Hi Julia, > >> > >> I see we haven’t pushed it live to openstack.org/analytics yet. I have pinged our team so that we can, but if you need metrics in the meantime, please let me know. > >> > >> Thanks! > >> Allison > >> > >> > >> > >> > >> > >> On Apr 7, 2021, at 4:42 PM, Julia Kreger wrote: > >> > >> Related, Is there 2020 user survey data available? > >> > >> On Wed, Apr 7, 2021 at 12:40 PM helena at openstack.org > >> wrote: > >> > >> > >> Hello ptls, > >> > >> > >> > >> The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. > >> > >> > >> > >> If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. > >> > >> > >> > >> Let me know if you have any other questions! > >> > >> > >> > >> Thank you for your participation, > >> > >> Helena > >> > >> > >> > >> > > > From gmann at ghanshyammann.com Fri Apr 9 19:32:25 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 09 Apr 2021 14:32:25 -0500 Subject: [all][tc] What's happening in Technical Committee: summary 9th April, 21 Message-ID: <178b81f0933.10e4b896f183324.6966564323891095362@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. If you feel this email is lengthy and can take time to read, I tried to categorize the topic for an easy read and should not take more than 5 min of your time. 1. What we completed this week: ========================= Project updates: ------------------- ** Keystone is switched to DPL model[1]. ** Mistral is switched to DPL model[2]. ** Made devstack-plugin-(amqp1|kafka) branchless[3] ** Deprecated project/deliverables: 1. networking-midonet[4] 2. monasca-transform[5] 3. monasca-analytics[6] 4. monasca-ceilometer[7] 5. monasca-log-api[7] Other updates: ------------------ ** PTL assignment for Xena cycle leaderless projects: We have finished the leader assignments for the leaderless project for Xena cycle[8]. Total 8 projects were leaderless in Xena election. PTL assigned to 6 projects, and 2 projects (Keystone and Mistral) adopted DPL model. ** Radosław Piliszek(yoctozepto) is vice-chair of TC for Xena cycle. ** Prepared the Community newsletter: "OpenStack project news" for this month[9]. 2. 
TC Meetings:
============
* The TC held this week's meeting on Thursday; you can find the full meeting logs at the link below:
- http://eavesdrop.openstack.org/meetings/tc/2021/tc.2021-04-08-15.00.log.html
* We will have next week's meeting on April 15th, Thursday 15:00 UTC.

3. Activities In progress:
==================
Open Reviews
-----------------
* No open reviews this week[10]. This is good progress by the TC this week.

Gate performance and heavy job configs
------------------------------------------------
* dansmith sent the progress on the ML[11], and there is a good improvement in gate utilization. Thanks to dansmith for continuing to monitor it and collect the data.

Election for one Vacant TC seat
-------------------------------------
Voting has started for the one open TC seat and remains open until April 15, 2021 23:45 UTC. You should have received an email with the voting link; if not, please read the instructions in the email from fungi[12].

PTG
-----
The TC is planning to meet at the PTG for 2 hrs on Thursday and 4 hrs on Friday; details are in the etherpad[13]. Feel free to add topics you would like to discuss with the TC at the PTG.

4. How to contact the TC:
====================
If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways:
1. Email: you can send the email with tag [tc] on openstack-discuss ML[14].
2. Weekly meeting: The Technical Committee conducts a weekly meeting every Thursday 15 UTC [15]
3. Office hours: The Technical Committee offers two office hours per week in #openstack-tc [16]:
* Tuesday at 0100 UTC
* Wednesday at 1500 UTC
4. Ping us using the 'tc-members' nickname on the #openstack-tc IRC channel.

[1] https://review.opendev.org/c/openstack/governance/+/784102
[2] https://review.opendev.org/c/openstack/governance/+/782195
[3] https://review.opendev.org/c/openstack/governance/+/784544
[4] https://review.opendev.org/c/openstack/governance/+/783799
[5] https://review.opendev.org/c/openstack/governance/+/783624
[6] https://review.opendev.org/c/openstack/governance/+/783659
[7] https://review.opendev.org/c/openstack/governance/+/783657
[8] https://etherpad.opendev.org/p/xena-leaderless
[9] https://etherpad.opendev.org/p/newsletter-openstack-news
[10] https://review.opendev.org/q/project:openstack/governance+status:open
[11] http://lists.openstack.org/pipermail/openstack-discuss/2021-April/021534.html
[12] http://lists.openstack.org/pipermail/openstack-discuss/2021-April/021718.html
[13] https://etherpad.opendev.org/p/tc-xena-ptg
[14] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
[15] http://eavesdrop.openstack.org/#Technical_Committee_Meeting
[16] http://eavesdrop.openstack.org/#Technical_Committee_Office_hours

-gmann

From manteshpatil347 at gmail.com Fri Apr 9 03:18:07 2021
From: manteshpatil347 at gmail.com (Mantesh Patil)
Date: Fri, 9 Apr 2021 08:48:07 +0530
Subject: [Group-based-policy] not able to create the policies
Message-ID:

Hi, I am deploying devstack Ussuri with GBP (stable/ussuri) on Ubuntu 18.04 LTS (with updated packages). After installation I am able to create the policies as mentioned in the wiki. But after creation, I am not able to list the policies. I am using the following command to list the policies and it is giving a warning "/usr/local/lib/python3.6/dist-packages/keystoneauth1/adapter.py:235: UserWarning: Using keystoneclient sessions has been deprecated. Please update your software to use keystoneauth1. warnings.warn('Using keystoneclient sessions has been deprecated.
'" Command1: source admin-openrc.sh *Command2: gbp group-create web * and also getting the following error while creating the group [image: image.png] Command3: *gbp group-list -c name -c tenant_id -f value* Please give the information that how can I get a list of group policies using CLI and new authentication. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 37031 bytes Desc: not available URL: From zhaochenzhou at live.cn Fri Apr 9 03:42:23 2021 From: zhaochenzhou at live.cn (=?utf-8?B?6LW16ZmI5rSy?=) Date: Fri, 9 Apr 2021 11:42:23 +0800 Subject: migration maybe a iaas service for openstack Message-ID: An HTML attachment was scrubbed... URL: From zhaochenzhou at live.cn Fri Apr 9 03:55:21 2021 From: zhaochenzhou at live.cn (=?utf-8?B?6LW16ZmI5rSy?=) Date: Fri, 9 Apr 2021 11:55:21 +0800 Subject: =?utf-8?Q?Between_openstack_and_offline_IDC,_a_large_intranet_an?= =?utf-8?Q?d_a_large_second-tier_network_in_the_world_improve_the_feasibil?= =?utf-8?Q?ity_of_data_migration_and_disaster_recovery.?= Message-ID: An HTML attachment was scrubbed... URL: From rpittau at redhat.com Fri Apr 9 10:24:36 2021 From: rpittau at redhat.com (Riccardo Pittau) Date: Fri, 9 Apr 2021 12:24:36 +0200 Subject: [ironic] APAC-Europe SPUC time? In-Reply-To: References: Message-ID: 10am UTC works for me too, always in to talk about food! :) Thanks, Riccardo On Fri, Apr 9, 2021 at 11:14 AM Jacob Anders wrote: > Hi Dmitry, > > Thanks for your email and apologies for slow reply. > > Keeping the APAC SPUC at 10am UTC would work well for me. > > The only concern is it may fall in the lunch time slot in Europe but that > might actually be a good thing - we can do lunch-dinner sessions and talk > food if we want to :) @Riccardo what do you reckon? > > Cheers, > Jacob > > On Thu, Apr 8, 2021 at 12:01 AM Dmitry Tantsur > wrote: > >> Hi folks! >> >> The initial SPUC datetime was for 10am UTC, which was 11am for us in >> central Europe, now is supposed to be 12pm. On one hand, I find it more >> convenient to have SPUC at 11am still, on the other - I have German classes >> at this time for a few months starting mid-April. >> >> What do you think? Should we keep it in UTC, i.e. 12pm for us in Europe? >> Will that work for you Jacob? >> >> Dmitry >> >> -- >> Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, >> Commercial register: Amtsgericht Muenchen, HRB 153243, >> Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael >> O'Neill >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmeng at uvic.ca Fri Apr 9 21:04:41 2021 From: dmeng at uvic.ca (dmeng) Date: Fri, 09 Apr 2021 14:04:41 -0700 Subject: [sdk]: compute service create_server method, how to create multiple servers Message-ID: Hello there, Hope this email finds you well. We are currently using the openstacksdk for developing our product, and have a question about the openstacksdk compute service create_server() method. We are wondering if the "max_count" attribute is supported by the create_server() method? We tried to create multiple servers by setting the max_count value, but only one server has been created. While when we use the python-novaclient package, novaclient.servers.create() method has the attribute max_count which allows creating multiple servers at once if set the value. 
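For reference, the novaclient call being described would look roughly like this (a sketch: the session, image and flavor references are placeholders, not real values):

```python
# Sketch of the python-novaclient batch-create behaviour described above.
# 'sess' is assumed to be an authenticated keystoneauth1 session.
from novaclient import client as nova_client

nova = nova_client.Client('2', session=sess)
nova.servers.create(
    name='batch-vm',
    image=image_ref,      # placeholder image ID/object
    flavor=flavor_ref,    # placeholder flavor ID/object
    min_count=3,          # boot at least 3 instances in one request
    max_count=3,          # and at most 3
)
```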
So we would like to know is there a similar attribute like "max_count" that could allow us to create multiple servers at once in openstacksdk? Thanks and have a great day! Catherine -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Sat Apr 10 11:25:32 2021 From: hberaud at redhat.com (Herve Beraud) Date: Sat, 10 Apr 2021 13:25:32 +0200 Subject: [release] Release countdown for week R-0 Apr 12 - Apr 16 Message-ID: Development Focus ----------------- We will be releasing the coordinated OpenStack Wallaby release next week, on Wednesday, 14 April, 2021. Thanks to everyone involved in the Wallaby cycle! We are now in pre-release freeze, so no new deliverable will be created until final release, unless a release-critical regression is spotted. Otherwise, teams attending the virtual PTG should start to plan what they will be discussing there, by creating and filling team etherpads. You can access the list of PTG etherpads at: http://ptg.openstack.org/etherpads.html General Information ------------------- On release day, the release team will produce final versions of deliverables following the cycle-with-rc release model, by re-tagging the commit used for the last RC. A patch doing just that will be proposed. PTLs and release liaisons should watch for that final release patch from the release team. While not required, we would appreciate having an ack from each team before we approve it on the 14th, so that their approval is included in the metadata that goes onto the signed tag. Upcoming Deadlines & Dates -------------------------- Final Wallaby release: 14 April, 2021 Xena virtual PTG: 19 - 23 April, 2021 -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Sat Apr 10 16:42:56 2021 From: zigo at debian.org (Thomas Goirand) Date: Sat, 10 Apr 2021 18:42:56 +0200 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: <1617820623.770226846@apps.rackspace.com> References: <1617820623.770226846@apps.rackspace.com> Message-ID: Hi, Thanks a lot for the initiative, I very much enjoy the updates each cycle. However... On 4/7/21 8:37 PM, helena at openstack.org wrote: > Hello ptls, > > The community meeting for the Wallaby release will be next Thursday, > April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live > session via Zoom as well as live-streamed to YouTube. I'm sorry, but as nothing changes, I have to state it again. https://www.zdnet.com/article/critical-zoom-vulnerability-triggers-remote-code-execution-without-user-input/ And that's not the first time. 
There are free software alternatives (like Jitsi), and the tooling to deploy them is also available [1]. It has been proven to work very well and to scale nicely to thousands of viewers. I regret that I'm the only person protesting about Zoom...

Cheers,

Thomas Goirand (zigo)

[1] https://debconf-video-team.pages.debian.net/docs/

From Arkady.Kanevsky at dell.com Sun Apr 11 00:33:46 2021
From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady)
Date: Sun, 11 Apr 2021 00:33:46 +0000
Subject: [ptl] Wallaby Release Community Meeting
In-Reply-To: References: <1617820623.770226846@apps.rackspace.com> <71EDF897-DCCB-4063-81F7-88A8456F6F6B@openstack.org>
Message-ID:

Dell Customer Communication - Confidential

Suggest that for the next survey we also ask which protocol(s) customers are using in Ironic.

-----Original Message-----
From: Julia Kreger
Sent: Friday, April 9, 2021 2:28 PM
To: Allison Price
Cc: helena at openstack.org; OpenStack Discuss
Subject: Re: [ptl] Wallaby Release Community Meeting

[EXTERNAL EMAIL]

Thanks Allison!

Even telling friends doesn't really help, since we would be self-skewing. I guess part of the conundrum is it is easy for people not to really be fully aware of the extent of their usage and the mix of various projects under the hood. They know they get a star ship and it has warp engines, but they may not know the factory that turned out the starship. Only the geekiest might know those details.

Anyway, I've been down this path before w/r/t the user survey. C'est la vie. Back to work!

-Julia

On Fri, Apr 9, 2021 at 12:07 PM Allison Price wrote:
>
> Hi Julia,
>
> It looks like for Ironic, 19% of production environments are running it in production, 15% running it in testing, and 22% are interested. It’s a little down from 2019, but it was also a smaller sample size (2019: 331; 2020: 209). I am hoping to get a bigger turnout this year (tell all your friends!) so that we can get a better picture.
>
> Let me know if there is any other data you would like pulled.
>
> Thanks!
> Allison
>
> > On Apr 8, 2021, at 9:43 AM, Julia Kreger wrote:
> >
> > Hey Allison,
> >
> > Metrics would be awesome and I'm just looking for the key high level
> > adoption information as that is good to put into the presentation.
> >
> > -Julia
> >
> > On Wed, Apr 7, 2021 at 3:15 PM Allison Price wrote:
> >>
> >> Hi Julia,
> >>
> >> I see we haven’t pushed it live to openstack.org/analytics yet. I have pinged our team so that we can, but if you need metrics in the meantime, please let me know.
> >>
> >> Thanks!
> >> Allison
> >>
> >> On Apr 7, 2021, at 4:42 PM, Julia Kreger wrote:
> >>
> >> Related, Is there 2020 user survey data available?
> >>
> >> On Wed, Apr 7, 2021 at 12:40 PM helena at openstack.org
> >> wrote:
> >>
> >> Hello ptls,
> >>
> >> The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube.
> >>
> >> If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish.
> >>
> >> Let me know if you have any other questions!
> >> > >> > >> Thank you for your participation, > >> Helena > >> > >> > >> > > >
From Arkady.Kanevsky at dell.com Sun Apr 11 20:23:06 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Sun, 11 Apr 2021 20:23:06 +0000 Subject: [Cinder][Interop] request for 15-30 min on Xena PTG for Interop Message-ID: Brian, As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for cinder tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL:
From Arkady.Kanevsky at dell.com Sun Apr 11 20:25:13 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Sun, 11 Apr 2021 20:25:13 +0000 Subject: FW: [Designate][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Adding community From: Kanevsky, Arkady Sent: Sunday, April 11, 2021 3:25 PM To: 'johnsomor at gmail.com' Subject: [Designate][Interop] request for 15-30 min on Xena PTG for Interop John, As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Designate tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL:
From Arkady.Kanevsky at dell.com Sun Apr 11 20:26:50 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Sun, 11 Apr 2021 20:26:50 +0000 Subject: [Glance][Interop] request for 15-30 min on Xena PTG for Interop Message-ID: As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for glance tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL:
From Arkady.Kanevsky at dell.com Sun Apr 11 20:27:58 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Sun, 11 Apr 2021 20:27:58 +0000 Subject: [Heat][Interop] request for 15-30 min on Xena PTG for Interop Message-ID: Brian, As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Heat tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From Arkady.Kanevsky at dell.com Sun Apr 11 20:30:24 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Sun, 11 Apr 2021 20:30:24 +0000 Subject: [Keystone][Interop] request for 15-30 min on Xena PTG for Interop Message-ID: Kristi, As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Keystone tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Sun Apr 11 20:31:28 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Sun, 11 Apr 2021 20:31:28 +0000 Subject: [Manila][Interop] request for 15-30 min on Xena PTG for Interop Message-ID: Goutham, As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Manila tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Thanks, Arkady Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Sun Apr 11 20:32:55 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Sun, 11 Apr 2021 20:32:55 +0000 Subject: [Neutron][Interop] request for 15-30 min on Xena PTG for Interop Message-ID: Brian, As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for neutron tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Thanks, Arkady Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Sun Apr 11 20:34:09 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Sun, 11 Apr 2021 20:34:09 +0000 Subject: [Nova][Interop] request for 15-30 min on Xena PTG for Interop Message-ID: Balazs, As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Nova tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Thanks, Arkady Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Arkady.Kanevsky at dell.com Sun Apr 11 20:37:19 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Sun, 11 Apr 2021 20:37:19 +0000 Subject: [Swift][Interop] request for 15-30 min on Xena PTG for Interop Message-ID: Tim, As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Swift tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Thanks, Arkady Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Sun Apr 11 23:27:55 2021 From: miguel at mlavalle.com (Miguel Lavalle) Date: Sun, 11 Apr 2021 18:27:55 -0500 Subject: [neutron] bug deputy report Abril 5th to 11th Message-ID: Hi, Here is this week's bugs deputy report: Critical ====== https://bugs.launchpad.net/neutron/+bug/1922563 [UT] py38 CI job failing frequently with TIMED_OUT. In progress. Proposed patch https://review.opendev.org/c/openstack/neutron/+/784771 High ==== https://bugs.launchpad.net/neutron/+bug/1922684 Functional dhcp agent tests fails to spawn metadata proxy. In progress. Proposed fix https://review.opendev.org/c/openstack/neutron/+/784903 https://bugs.launchpad.net/neutron/+bug/1923198 custom kill scripts don't works after migration to privsep. In progress. Proposed fix https://review.opendev.org/c/openstack/neutron/+/785638 https://bugs.launchpad.net/neutron/+bug/1923201 neutron-centos-8-tripleo-standalone in periodic queue runs Neutron from Victroria release. In progress. Proposed fix https://review.opendev.org/c/openstack/neutron/+/785660 Medium ====== https://bugs.launchpad.net/neutron/+bug/1922653 [L3][Port forwarding] multiple floating_ip:port to same internal fixed_ip:port (N-to-1 rule support). Waiting for owner, although it seems Liu Yulong might work on it https://bugs.launchpad.net/neutron/+bug/1922824 [ovn] external port always be scheduled on a single gateway. Needs owner https://bugs.launchpad.net/neutron/+bug/1922892 "ebtables-nft" returns error 4 when a new chain is created. . Needs owner https://bugs.launchpad.net/neutron/+bug/1922919 [FT] BaseOVSTestCase retrieving the wrong min BW queue/qos. In progress. Proposed patch https://review.opendev.org/c/openstack/neutron/+/785158 https://bugs.launchpad.net/neutron/+bug/1922934 [OVN] LSP register race condition with two controllers. In progress. Owner ralonsoh https://bugs.launchpad.net/neutron/+bug/1922923 OVS port issue. Liu Yulong suggested a solution. Awaiting update from submitter https://bugs.launchpad.net/neutron/+bug/1923083 python 3.9 failures. Confirmed. haleyb suggested a work around, which submitter reported as successful. Seems haleyb will work on a fix https://bugs.launchpad.net/neutron/+bug/1923161 DHCP notification could be optimized. In progress. Proposed patch: https://review.opendev.org/c/openstack/neutron/+/785581 RFE === https://bugs.launchpad.net/neutron/+bug/1922716 [RFE] BFD for BGP Dynamic Routing -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kazumasa.nomura.rx at hitachi.com Mon Apr 12 04:52:24 2021 From: kazumasa.nomura.rx at hitachi.com (=?iso-2022-jp?B?GyRCTG5CPE9CQDUbKEIgLyBOT01VUkEbJEIhJBsoQktBWlVNQVNB?=) Date: Mon, 12 Apr 2021 04:52:24 +0000 Subject: [cinder] How to post multiple patches. Message-ID: Hi everyone, Hitachi has developed the out-of-tree driver as Cinder driver. But we want to deprecate the out-of-tree driver and support only the in-tree driver. We need to submit about ten more patches(*1) for full features which the out-of-tree driver has such as Consistency Group and Volume Replication. In that case, we have two options: 1. Submit two or three patches at once. In other words, submit two or three patches to Xena, then submit another two or three patches after previous patches were merged, and so on. This may give reviewers the feeling of endless. 2. Submit all patches at once to Xena. This will give reviewers the information how many patches remains from the beginning, but many pathes may bother them. Does anyone have an opinion as to which option is better? Thanks, Kazumasa Nomura E-mail: kazumasa.nomura.rx at hitachi.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Apr 12 06:21:09 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 12 Apr 2021 08:21:09 +0200 Subject: [Neutron][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: <2752308.ClrQMDxLba@p1> Hi, Dnia niedziela, 11 kwietnia 2021 22:32:55 CEST Kanevsky, Arkady pisze: > Brian, > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for neutron tempest or > tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. I just added it to our etherpad https://etherpad.opendev.org/p/neutron-xena-ptg I will be working on schedule of the sessions later this week and I will let You know what timeslot this session with Interop WG will be. Please let me know if You have any preferences. We have our sessions scheduled: Monday 1300 - 1600 UTC Tuesday 1300 - 1600 UTC Thursday 1300 - 1600 UTC Friday 1300 - 1600 UTC Our time slots which are already booked are: - Monday 15:00 - 16:00 UTC - Thursday 14:00 - 15:30 UTC - Friday 14:00 - 15:00 UTC > > Thanks, > Arkady > > Arkady Kanevsky, Ph.D. > SP Chief Technologist & DE > Dell Technologies office of CTO > Dell Inc. One Dell Way, MS PS2-91 > Round Rock, TX 78682, USA > Phone: 512 7204955 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From balazs.gibizer at est.tech Mon Apr 12 07:11:45 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Mon, 12 Apr 2021 09:11:45 +0200 Subject: [Nova][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Hi Arkady, What about Wednesday 14:00 - 15:00 UTC? We don't have to fill a whole hour of course. Cheers, gibi On Sun, Apr 11, 2021 at 20:34, "Kanevsky, Arkady" wrote: > Balazs, > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 > min on PTG meeting to go over Interop testing and any changes for > Nova tempest or tempest configuration in Wallaby cycle or changes > planned for Xena. 
> > Once on agenda one of the Interop WG person will attend and lead the > discussion. > > > > Thanks, > > Arkady > > > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell Technologies office of CTO > > Dell Inc. One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > > From ildiko.vancsa at gmail.com Mon Apr 12 10:50:32 2021 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 12 Apr 2021 12:50:32 +0200 Subject: [edge][cinder][manila][swift][tripleo] Storage at the edge discussions at the PTG In-Reply-To: References: <0A7B1EBD-0715-43ED-B388-A2011D437DD0@gmail.com> Message-ID: <7361B0B1-10FB-4533-A168-2571BFFE39C7@gmail.com> > On Apr 8, 2021, at 19:39, John Fulton wrote: > > On Thu, Apr 8, 2021 at 1:21 PM Ildiko Vancsa wrote: > Hi, > > I’m reaching out to draw your attention to the Edge Computing Group sessions on the PTG in less than two weeks. > > We are still formalizing our agenda, but we have storage identified as one of the topics that the working group would like to discuss. It would be great to have the session also as a continuation to earlier discussions that we had on previous PTGs with relevant OpenStack project contributors. > > We have a few cross-community sessions scheduled already, but we still have some flexibility in our agenda to schedule this topic so the most people who are interested in participating can join. Our available options are: > > * Monday (April 19) between 1400 UTC and 1500 UTC > * Tuesday (April) between 1400 UTC and 1600 UTC > > I'm not available Monday but could join Tuesday. I'd be curious to hear what others are doing with Storage on the Edge and could share some info on how TripleO does it. Sounds good! We currently have storage scheduled for Tuesday. It may move within the 2-hour slot we have but I think we can consider the day fixed. If you or anyone has a time slot preference for the storage edge discussion next Tuesday please respond to this thread or reach out to me ASAP. Thanks, Ildikó (IRC: ildikov on Freenode) > > John > > > __Please let me know if you or your project would like to participate and if you have a time slot difference from the above.__ > > Thanks and Best Regards, > Ildikó > (IRC ildikov on Freenode) > > > From hjensas at redhat.com Mon Apr 12 11:03:08 2021 From: hjensas at redhat.com (Harald Jensas) Date: Mon, 12 Apr 2021 13:03:08 +0200 Subject: [TripleO] Xena PTG schedule please review In-Reply-To: References: Message-ID: On 4/8/21 6:32 PM, James Slagle wrote: > > > On Thu, Apr 8, 2021 at 12:24 PM Marios Andreou > wrote: > > On Wed, Apr 7, 2021 at 7:55 PM John Fulton > wrote: > > > > On Wed, Apr 7, 2021 at 12:27 PM Marios Andreou > wrote: > >> > >> Hello TripleO o/ > >> > >> Thanks again to everybody who has volunteered to lead a session for > >> the coming Xena TripleO project teams gathering. > >> > >> I've had a go at the agenda [1] trying to keep it to max 4 or 5 > >> sessions per day with some breaks. > >> > >> Please review the slot assigned for your session at [1]. If that > time > >> is not ok then please let me know as soon as possible and > indicate if > >> you want it later or earlier or on any other day. > > > > > > On Monday I see: > > > > 1. STORAGE: 1430-1510 (ceph) > > 2. DF: 1510-1550 (ephemeral heat) > > 3. DF/Networking: 1600-1700 (ports v2 "no heat") > > > > If Harald and James are OK with it, could it be changed to the > following? > > > > A. DF: 1430-1510 (ephemeral heat) > > B. DF/Networking: 1510-1550 (ports v2 "no heat") > > C. 
STORAGE: 1600-1700 (ceph) > > > > I ask because a portion of C depends on B, so it would be helpful > to have that context first. If the presenters have conflicts > however, we don't need this change. > > > > ACK thanks John that totally makes sense... as just discussed on irc > [1] I've updated the schedule to reflect your proposal. > > I haven't heard back from slagle yet but cc'ing him here and if there > are any issues we can work them out > > > The change wfm, thanks. > Works for me too. -- Harald From toky0ghoul at yandex.com Mon Apr 12 12:01:23 2021 From: toky0ghoul at yandex.com (toky0) Date: Mon, 12 Apr 2021 12:01:23 +0000 Subject: MAAS dhcpd issue Message-ID: Hi, Just started a maas+juju deployment on bare metal. I’m facing some errors while PXE booting[1]. Any leads ? Regards, Sami [1] Mar 22 09:57:13 maas kernel: [ 1885.666813] audit: type=1400 audit(1616407033.279:95): apparmor="DENIED" operation="open" profile="snap.maas.supervisor" name="/etc/gss/mech.d/" pid=1131 comm="python3" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 Mar 22 09:57:22 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:0c:bb via eno1: network vlan-5002: no free leases Mar 22 09:57:37 maas systemd[1]: systemd-timedated.service: Succeeded. Mar 22 09:57:43 maas kernel: [ 1915.677372] audit: type=1400 audit(1616407063.291:96): apparmor="DENIED" operation="open" profile="snap.maas.supervisor" name="/etc/gss/mech.d/" pid=1132 comm="python3" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 Mar 22 09:58:13 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:13 maas dhcpd[3861]: DHCPOFFER on 192.168.202.10 to 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:15 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:15 maas dhcpd[3861]: DHCPOFFER on 192.168.202.10 to 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:17 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:17 maas dhcpd[3861]: DHCPOFFER on 192.168.202.10 to 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:21 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:21 maas dhcpd[3861]: DHCPOFFER on 192.168.202.10 to 4c:52:62:44:13:f3 via eno1 Mar 22 09:59:13 maas kernel: [ 2005.666324] audit: type=1400 audit(1616407153.280:97): apparmor="DENIED" operation="open" profile="snap.maas.supervisor" name="/etc/gss/mech.d/" pid=1134 comm="python3" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 Mar 22 09:59:43 maas kernel: [ 2035.667450] audit: type=1400 audit(1616407183.280:98): apparmor="DENIED" operation="open" profile="snap.maas.supervisor" name="/etc/gss/mech.d/" pid=1134 comm="python3" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 From C-Albert.Braden at charter.com Mon Apr 12 12:54:40 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Mon, 12 Apr 2021 12:54:40 +0000 Subject: [EXTERNAL] MAAS dhcpd issue In-Reply-To: References: Message-ID: Apparmor is causing the errors. This Stack Exchange post explains how to read the error message: https://unix.stackexchange.com/questions/116591/why-am-i-getting-apparmor-error-messages-in-the-syslog-about-ntp-and-ldap -----Original Message----- From: toky0 Sent: Monday, April 12, 2021 8:01 AM To: openstack-discuss at lists.openstack.org Subject: [EXTERNAL] MAAS dhcpd issue CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Hi, Just started a maas+juju deployment on bare metal. I’m facing some errors while PXE booting[1]. Any leads ? 
Regards, Sami [1] Mar 22 09:57:13 maas kernel: [ 1885.666813] audit: type=1400 audit(1616407033.279:95): apparmor="DENIED" operation="open" profile="snap.maas.supervisor" name="/etc/gss/mech.d/" pid=1131 comm="python3" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 Mar 22 09:57:22 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:0c:bb via eno1: network vlan-5002: no free leases Mar 22 09:57:37 maas systemd[1]: systemd-timedated.service: Succeeded. Mar 22 09:57:43 maas kernel: [ 1915.677372] audit: type=1400 audit(1616407063.291:96): apparmor="DENIED" operation="open" profile="snap.maas.supervisor" name="/etc/gss/mech.d/" pid=1132 comm="python3" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 Mar 22 09:58:13 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:13 maas dhcpd[3861]: DHCPOFFER on 192.168.202.10 to 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:15 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:15 maas dhcpd[3861]: DHCPOFFER on 192.168.202.10 to 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:17 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:17 maas dhcpd[3861]: DHCPOFFER on 192.168.202.10 to 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:21 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:13:f3 via eno1 Mar 22 09:58:21 maas dhcpd[3861]: DHCPOFFER on 192.168.202.10 to 4c:52:62:44:13:f3 via eno1 Mar 22 09:59:13 maas kernel: [ 2005.666324] audit: type=1400 audit(1616407153.280:97): apparmor="DENIED" operation="open" profile="snap.maas.supervisor" name="/etc/gss/mech.d/" pid=1134 comm="python3" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 Mar 22 09:59:43 maas kernel: [ 2035.667450] audit: type=1400 audit(1616407183.280:98): apparmor="DENIED" operation="open" profile="snap.maas.supervisor" name="/etc/gss/mech.d/" pid=1134 comm="python3" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. From bkslash at poczta.onet.pl Mon Apr 12 13:24:06 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Mon, 12 Apr 2021 15:24:06 +0200 Subject: [ceilometer][octavia][Victoria] No metrics from octavia loadbalancer Message-ID: <46049C20-3C27-46E4-90D1-7FFBBCE96A9B@poczta.onet.pl> Hi, Im trying to get metrics from octavia’s load balancer, but can’t get any (gnocchi metric list | grep loadbalancer not returning anything). How should I configure ceilometer to get metrics from octavia? Ceilometer asks neutron for load balancer metrics, and neutron responses „resource cannot be found” (and that is obvious, because Neutron LBaaS service is deprecated). How to force neutron to get these resources from Octavia? 
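For completeness, the polling side looks configured to me. This is roughly what I have in ceilometer's polling.yaml, just a sketch of my own setup (the source name and interval are my local choices, not anything prescribed):

sources:
    - name: lb_pollsters
      # poll all of the network.services.lb.* meters every 5 minutes
      interval: 300
      meters:
        - network.services.lb.*

So the pollster does run on that interval, it just gets the neutron "resource cannot be found" answer every time.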
I’ve tried to use [service_providers] service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default in neutron.conf, but it doesn’t work either… Best regards Adam From radoslaw.piliszek at gmail.com Mon Apr 12 13:24:08 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 12 Apr 2021 15:24:08 +0200 Subject: [EXTERNAL] MAAS dhcpd issue In-Reply-To: References: Message-ID: Well, to me it looks like the apparmor errors are irrelevant. >From this log, I would assume that the dhcp client (PXE) has issues with the DHCPOFFER that it is receiving (or perhaps it cannot see it) as it sends DHCPDISCOVER again and again. Can you squeeze anything from the faulty PXE node? Did you manage to PXE boot any other machine? -yoctozepto On Mon, Apr 12, 2021 at 2:55 PM Braden, Albert wrote: > > Apparmor is causing the errors. This Stack Exchange post explains how to read the error message: > > https://unix.stackexchange.com/questions/116591/why-am-i-getting-apparmor-error-messages-in-the-syslog-about-ntp-and-ldap > > -----Original Message----- > From: toky0 > Sent: Monday, April 12, 2021 8:01 AM > To: openstack-discuss at lists.openstack.org > Subject: [EXTERNAL] MAAS dhcpd issue > > CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. > > Hi, > > Just started a maas+juju deployment on bare metal. I’m facing some errors while PXE booting[1]. > Any leads ? > > Regards, > Sami > > [1] > Mar 22 09:57:13 maas kernel: [ 1885.666813] audit: type=1400 audit(1616407033.279:95): apparmor="DENIED" operation="open" profile="snap.maas.supervisor" name="/etc/gss/mech.d/" pid=1131 comm="python3" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 > Mar 22 09:57:22 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:0c:bb via eno1: network vlan-5002: no free leases > Mar 22 09:57:37 maas systemd[1]: systemd-timedated.service: Succeeded. 
> Mar 22 09:57:43 maas kernel: [ 1915.677372] audit: type=1400 audit(1616407063.291:96): apparmor="DENIED" operation="open" profile="snap.maas.supervisor" name="/etc/gss/mech.d/" pid=1132 comm="python3" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 > Mar 22 09:58:13 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:13:f3 via eno1 > Mar 22 09:58:13 maas dhcpd[3861]: DHCPOFFER on 192.168.202.10 to 4c:52:62:44:13:f3 via eno1 > Mar 22 09:58:15 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:13:f3 via eno1 > Mar 22 09:58:15 maas dhcpd[3861]: DHCPOFFER on 192.168.202.10 to 4c:52:62:44:13:f3 via eno1 > Mar 22 09:58:17 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:13:f3 via eno1 > Mar 22 09:58:17 maas dhcpd[3861]: DHCPOFFER on 192.168.202.10 to 4c:52:62:44:13:f3 via eno1 > Mar 22 09:58:21 maas dhcpd[3861]: DHCPDISCOVER from 4c:52:62:44:13:f3 via eno1 > Mar 22 09:58:21 maas dhcpd[3861]: DHCPOFFER on 192.168.202.10 to 4c:52:62:44:13:f3 via eno1 > Mar 22 09:59:13 maas kernel: [ 2005.666324] audit: type=1400 audit(1616407153.280:97): apparmor="DENIED" operation="open" profile="snap.maas.supervisor" name="/etc/gss/mech.d/" pid=1134 comm="python3" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 > Mar 22 09:59:43 maas kernel: [ 2035.667450] audit: type=1400 audit(1616407183.280:98): apparmor="DENIED" operation="open" profile="snap.maas.supervisor" name="/etc/gss/mech.d/" pid=1134 comm="python3" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 > > > E-MAIL CONFIDENTIALITY NOTICE: > The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. 
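Following up below the quote with one more idea: it may help to watch the DHCP ports on the MAAS host while the node retries PXE, to confirm whether the offers actually reach the wire and come back. A sketch only; the interface name eno1 is taken from your log, so adjust as needed:

sudo tcpdump -eni eno1 -vv 'udp and (port 67 or port 68)'

If the DHCPOFFERs show up there but the client keeps sending DISCOVERs, I would check the switch/VLAN path between MAAS and the node. The "no free leases" line for the other MAC on vlan-5002 may also hint at a subnet definition mismatch in MAAS.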
From oliver.wenz at dhbw-mannheim.de Mon Apr 12 13:28:15 2021 From: oliver.wenz at dhbw-mannheim.de (Oliver Wenz) Date: Mon, 12 Apr 2021 15:28:15 +0200 (CEST) Subject: [glance][openstack-ansible] Snapshots disappear during saving In-Reply-To: References: Message-ID: <1208016377.154990.1618234095430@ox.dhbw-mannheim.de> Hi Dmitriy, I checked nginx logs on the keystone container and there was no obvious error: ontainer-6f64e2e1 nginx: 192.168.110.211 - - [12/Apr/2021:13:25:56 +0000] "HEAD / HTTP/1.0" 300 0 "-" "osa-haproxy-healthcheck" Apr 12 13:26:07 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:07 +0000] "POST /v3/auth/tokens HTTP/1.1" 201 316 "-" "openstack_auth keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.5" Apr 12 13:26:07 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:07 +0000] "POST /v3/auth/tokens HTTP/1.1" 401 109 "-" "openstack_auth keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.5" Apr 12 13:26:07 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:07 +0000] "GET /v3/users/e4e88b0e800e4a79905a586738c32bf1/projects HTTP/1.1" 200 768 "-" "python-keystoneclient" Apr 12 13:26:08 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:08 +0000] "POST /v3/auth/tokens HTTP/1.1" 201 6779 "-" "openstack_auth keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.5" Apr 12 13:26:08 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:08 +0000] "GET / HTTP/1.1" 300 270 "-" "python-novaclient keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.5" Apr 12 13:26:08 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:08 +0000] "POST /v3/auth/tokens HTTP/1.1" 201 6779 "-" "python-novaclient keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.5" Apr 12 13:26:08 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:08 +0000] "GET / HTTP/1.1" 300 270 "-" "python-novaclient keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.5" Apr 12 13:26:08 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:08 +0000] "POST /v3/auth/tokens HTTP/1.1" 201 6779 "-" "python-novaclient keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.5" Apr 12 13:26:08 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.211 - - [12/Apr/2021:13:26:08 +0000] "HEAD / HTTP/1.0" 300 0 "-" "osa-haproxy-healthcheck" Apr 12 13:26:08 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.100 - - [12/Apr/2021:13:26:08 +0000] "GET /v3/auth/tokens HTTP/1.1" 200 6779 "-" "python-keystoneclient" Apr 12 13:26:18 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.28 - - [12/Apr/2021:13:26:18 +0000] "GET /v3/auth/tokens HTTP/1.1" 200 6779 "-" "python-keystoneclient" Apr 12 13:26:18 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.100 - - [12/Apr/2021:13:26:18 +0000] "GET /v3/auth/tokens HTTP/1.1" 200 6779 
"-" "python-keystoneclient" Apr 12 13:26:19 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:19 +0000] "GET /v3/users/e4e88b0e800e4a79905a586738c32bf1/projects HTTP/1.1" 200 768 "-" "python-keystoneclient" Apr 12 13:26:20 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:20 +0000] "GET /v3 HTTP/1.1" 200 255 "-" "openstack_dashboard keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.5" Apr 12 13:26:20 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.211 - - [12/Apr/2021:13:26:20 +0000] "HEAD / HTTP/1.0" 300 0 "-" "osa-haproxy-healthcheck" Apr 12 13:26:21 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:21 +0000] "GET /v3/users/e4e88b0e800e4a79905a586738c32bf1/projects HTTP/1.1" 200 768 "-" "python-keystoneclient" Apr 12 13:26:24 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:24 +0000] "GET / HTTP/1.1" 300 270 "-" "python-novaclient keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.5" Apr 12 13:26:24 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:24 +0000] "POST /v3/auth/tokens HTTP/1.1" 201 6779 "-" "python-novaclient keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.5" Apr 12 13:26:25 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.100 - - [12/Apr/2021:13:26:25 +0000] "GET /v3/auth/tokens HTTP/1.1" 200 6779 "-" "python-keystoneclient" Apr 12 13:26:27 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:27 +0000] "GET / HTTP/1.1" 300 270 "-" "python-novaclient keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.5" Apr 12 13:26:28 infra1-keystone-container-6f64e2e1 nginx[13641]: infra1-keystone-container-6f64e2e1 nginx: 192.168.110.250 - - [12/Apr/2021:13:26:28 +0000] "POST /v3/auth/tokens HTTP/1.1" 201 6779 "-" "python-novaclient keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.5" I also noticed, that glance logs show the keystone_authtoken message even when I successfully create a snapshot (e.g. for a cirros instance). So could this be a nova problem afterall? I'm confused why there's a NotImplementedError in the nova logs: http://paste.openstack.org/show/804398/ Kind regards, Oliver > ------------------------------ > > Message: 2 > Date: Fri, 09 Apr 2021 08:24:53 +0300 > From: Dmitriy Rabotyagov > To: "openstack-discuss at lists.openstack.org" > > Subject: Re: [glance][openstack-ansible] Snapshots disappear during > saving > Message-ID: <500221617945664 at mail.yandex.ru> > Content-Type: text/plain; charset="utf-8" > > An HTML attachment was scrubbed... > URL: > > From mnaser at vexxhost.com Mon Apr 12 13:33:23 2021 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 12 Apr 2021 09:33:23 -0400 Subject: [sdk]: compute service create_server method, how to create multiple servers In-Reply-To: References: Message-ID: Hi Catherine, Have a look at min_count option :) Thanks Mohammed On Fri, Apr 9, 2021 at 5:12 PM dmeng wrote: > > Hello there, > > Hope this email finds you well. 
> We are currently using the openstacksdk for developing our product, and have a question about the openstacksdk compute service create_server() method. We are wondering if the "max_count" attribute is supported by the create_server() method? We tried to create multiple servers by setting the max_count value, but only one server has been created. > > While when we use the python-novaclient package, novaclient.servers.create() method has the attribute max_count which allows creating multiple servers at once if set the value. So we would like to know is there a similar attribute like "max_count" that could allow us to create multiple servers at once in openstacksdk? > > Thanks and have a great day! > > Catherine -- Mohammed Naser VEXXHOST, Inc.
From smooney at redhat.com Mon Apr 12 13:46:59 2021 From: smooney at redhat.com (Sean Mooney) Date: Mon, 12 Apr 2021 14:46:59 +0100 Subject: [sdk]: compute service create_server method, how to create multiple servers In-Reply-To: References: Message-ID: <213fb67d-e907-401f-ca75-6555004d1261@redhat.com> On 12/04/2021 14:33, Mohammed Naser wrote: > Hi Catherine, > > Have a look at min_count option :)
min_count is what you want, yes, although nova generally discourages use of our server multi create feature and advises people to make multiple independent boot calls instead. The caveat to that is if you are using server groups and affinity or anti affinity. In that case multi create makes sense to use, but if you can boot them serially in separate boot requests that is generally better. If you ask for min_count 4 and 3 boot successfully and the 4th fails, we will remove all 4 instances and set them to error; that is not necessarily the behaviour you want, so you should really orchestrate this yourself and not rely on the basic support in nova if you want anything more complex. > > Thanks > Mohammed > > On Fri, Apr 9, 2021 at 5:12 PM dmeng wrote: >> Hello there, >> >> Hope this email finds you well. >> >> We are currently using the openstacksdk for developing our product, and have a question about the openstacksdk compute service create_server() method. We are wondering if the "max_count" attribute is supported by the create_server() method? We tried to create multiple servers by setting the max_count value, but only one server has been created. >> >> While when we use the python-novaclient package, novaclient.servers.create() method has the attribute max_count which allows creating multiple servers at once if set the value. So we would like to know is there a similar attribute like "max_count" that could allow us to create multiple servers at once in openstacksdk? >> >> Thanks and have a great day! >> >> Catherine > >
From rosmaita.fossdev at gmail.com Mon Apr 12 15:53:43 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 12 Apr 2021 11:53:43 -0400 Subject: [Cinder][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: On 4/11/21 4:23 PM, Kanevsky, Arkady wrote: > Brian, > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min > on PTG meeting to go over Interop testing and any changes for cinder > tempest or tempest configuration in Wallaby cycle or changes planned for > Xena. Hi Arkady, I've virtually penciled you in for 1430-1500 on Tuesday 20 April. > Once on agenda one of the Interop WG person will attend and lead the > discussion. Sounds good.
I've scheduled 30 min instead of 15 because it would be helpful for the cinder team to hear a quick synopsis of the current goals of the Interop WG and what the aim of the project is before we discuss the specifics of W and X. cheers, brian > > Thanks, > > Arkady Kanevsky, Ph.D. > SP Chief Technologist & DE > Dell Technologies office of CTO > Dell Inc. One Dell Way, MS PS2-91 > Round Rock, TX 78682, USA > Phone: 512 7204955 > From johnsomor at gmail.com Mon Apr 12 15:57:24 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 12 Apr 2021 08:57:24 -0700 Subject: FW: [Designate][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Hi Arkady, I have added Interop to the Designate topics list (https://etherpad.opendev.org/p/xena-ptg-designate) and will schedule a slot this week when I put a rough agenda together. Thanks, Michael On Sun, Apr 11, 2021 at 1:36 PM Kanevsky, Arkady wrote: > > Adding comminuty > > > > From: Kanevsky, Arkady > Sent: Sunday, April 11, 2021 3:25 PM > To: 'johnsomor at gmail.com' > Subject: [Designate][Interop] request for 15-30 min on Xena PTG for Interop > > > > John, > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Dsignate tempest or tempest configuration in Wallaby cycle or changes planned for Xena. > > Once on agenda one of the Interop WG person will attend and lead the discussion. > > > > Thanks, > > > > > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell Technologies office of CTO > > Dell Inc. One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > From rosmaita.fossdev at gmail.com Mon Apr 12 16:18:38 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 12 Apr 2021 12:18:38 -0400 Subject: [cinder] How to post multiple patches. In-Reply-To: References: Message-ID: <42af7b9a-73ce-4780-b787-c36901f5cc1a@gmail.com> On 4/12/21 12:52 AM, 野村和正 / NOMURA,KAZUMASA wrote: > Hi everyone, > > Hitachi has developed the out-of-tree driver as Cinder driver. But we > want to deprecate the out-of-tree driver and support only the in-tree > driver. > > We need to submit about ten more patches(*1) for full features which the > out-of-tree driver has such as Consistency Group and Volume Replication. > > In that case, we have two options: > > 1. Submit two or three patches at once. In other words, submit two or > three patches to Xena, then submit another two or three patches after > previous patches were merged, and so on. This may give reviewers the > feeling of endless. > > 2. Submit all patches at once to Xena. This will give reviewers the > information how many patches remains from the beginning, but many pathes > may bother them. > > Does anyone have an opinion as to which option is better? My opinion is that option #1 is better, because as the initial patches are reviewed, issues will come up in review that you will be able to apply proactively to later patches on your own without reviewers having to bring them up, which will result in a better experience for all concerned. Also, we can have an idea of how many patches to expect (without your filing them all at once) if you file blueprints in Launchpad for each feature. Please name them 'hitachi-consistency-group-support', 'hitachi-volume-replication', etc., so it's easy to see what driver they're for. The blueprint doesn't need much detail; it's primarily for tracking purposes. 
You can see some examples here: https://blueprints.launchpad.net/cinder/wallaby cheers, brian > > Thanks, > > Kazumasa Nomura > > E-mail: kazumasa.nomura.rx at hitachi.com > > From rosmaita.fossdev at gmail.com Mon Apr 12 16:34:25 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 12 Apr 2021 12:34:25 -0400 Subject: [ops][cinder][nova] os-brick upcoming releases In-Reply-To: References: Message-ID: <7547ddfe-1182-8f72-fd3a-d13563fceb19@gmail.com> Just wanted to update this message: the os-brick releases mentioned below have occurred. On 3/30/21 11:24 AM, Brian Rosmaita wrote: > Hello operators, > > You may have heard about a potential data-loss bug [0] that was recently > discovered.  It has been fixed in the upcoming wallaby release and we > are planning to backport to all stable branches and do new os-brick > releases from the releasable stable branches. > > In the meantime, the bug occurs if the multipath configuration option on > a compute is changed while volumes are attached to instances on that > compute.  The possible data loss may occur when the volumes are detached > (migration, volume-detach, etc.).  Thus, before the new os-brick > releases are available, the issue can nonetheless be averted by not > making such a configuration change under those circumstances. > > The new os-brick releases will be: > - victoria: 4.0.3 Tagged on 2021-04-01 17:20:07 +0000 > - ussuri: 3.0.6 Tagged on 2021-04-06 13:50:04 +0000 > - train: 2.10.6 Tagged on 2021-04-08 10:49:41 +0000 > The stein, rocky, and queens branches are in Extended Maintenance mode > and are no longer released from, but critical fixes are backported to > them when possible, though it may take a while before these are merged. > > > [0] https://launchpad.net/bugs/1921381 From fungi at yuggoth.org Mon Apr 12 16:36:35 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 12 Apr 2021 16:36:35 +0000 Subject: [oslo][security-sig] Please revisit your open vulnerability report In-Reply-To: <53ba75c8-dd82-c470-e564-d4dedfb5090a@nemebean.com> References: <20210218144904.xeek6zwlyntm24u5@yuggoth.org> <3c103743-898f-79e1-04cc-2f97a52fece3@nemebean.com> <20210218170318.kysdpzibsqnferj5@yuggoth.org> <203cbbfd-9ca8-3f0c-83e4-6d57588103cf@nemebean.com> <20210218191305.5psn6p3kp6tlexoq@yuggoth.org> <53ba75c8-dd82-c470-e564-d4dedfb5090a@nemebean.com> Message-ID: <20210412163635.nq45un4fw25m5cwr@yuggoth.org> On 2021-03-26 16:52:52 -0500 (-0500), Ben Nemec wrote: [...] > I have added the openstack-vuln-mgmt team to most of the Oslo > projects. Great, happy to help there. > I apparently don't have permission to change settings in > oslo.policy, This is maintained by oslo-policy-core which has Adam as its owner and only administrator, so he's currently the only one who can add more members to that group though any one of the group members could help us by switching the oslo.core maintainer to some other group owned by openstack-admins if Adam can't be reached to make openstack-admins the owner of oslo-policy-core. > oslo.windows, Similarly, maintainer is oslo-windows-drivers which has Claudiu as its owner and only administrator, but the project maintainer could optionally be adjusted to another group by Alessandro if Claudiu can't be reached. > and taskflow, Maintained by the taskflow-dev group for which Joshua is the owner and only administrator, but there are a lot of group members one of whom could switch the project maintainer for you. > so I will need help with that. 
After going through all of the > projects, my guess is that the individual people who have access > to the private security bugs are the ones who created the project > in the first place. I guess that's fine, but there's an argument > to be made that some of those should be cleaned up too. In all three cases, I expect the people who have access to these are no longer active in OpenStack, so yes getting them fixed would be a "good idea." > I also noticed that oslo-coresec is not listed in most of the > projects. Is there any sort of global setting that should give > coresec memebers access to private security bugs, or do I need to > add that to each project? You'd have to add it separately to each of them, yes. Though for any with VMT oversight, we suggest you not do that and instead let one of the vulnerability coordinators subscribe your security reviewer group after we've confirmed the report isn't misdirected at the wrong project, in order to minimize unnecessary initial spread of sensitive information. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From rosmaita.fossdev at gmail.com Mon Apr 12 17:00:51 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 12 Apr 2021 13:00:51 -0400 Subject: [infra][cinder][neutron][qa] neutron-grenade failures on stable/train Message-ID: The neutron-grenade job on stable/train has been mostly failing since 8 April: https://zuul.opendev.org/t/openstack/builds?job_name=neutron-grenade&branch=stable%2Ftrain I spot-checked a few and it looks like the culprit is "Could not open requirements file: [Errno 2] No such file or directory: '/opt/stack/requirements/upper-constraints.txt'". See: - https://zuul.opendev.org/t/openstack/build/421f7d57bc234119963edf3e9101ca43/log/logs/grenade.sh.txt#28855 - https://zuul.opendev.org/t/openstack/build/915395d34fa34e99a6ec544ad1d3141b/log/logs/grenade.sh.txt#28849 - https://zuul.opendev.org/t/openstack/build/9eaacc244047408d9e2d9ed09529ff3f/log/logs/grenade.sh.txt#28855 The last 2 neutron-grenade jobs on openstack/devstack have passed, but there don't look to have been any changes in the devstack repo stable/train since 10 March, so I'm not sure if that was luck or if the QA team has made a change to get the job working. Any ideas? thanks, brian From ltoscano at redhat.com Mon Apr 12 17:05:26 2021 From: ltoscano at redhat.com (Luigi Toscano) Date: Mon, 12 Apr 2021 19:05:26 +0200 Subject: [infra][cinder][neutron][qa] neutron-grenade failures on stable/train In-Reply-To: References: Message-ID: <4330364.cEBGB3zze1@whitebase.usersys.redhat.com> On Monday, 12 April 2021 19:00:51 CEST Brian Rosmaita wrote: > The neutron-grenade job on stable/train has been mostly failing since 8 > April: > > https://zuul.opendev.org/t/openstack/builds?job_name=neutron-grenade&branch= > stable%2Ftrain > > I spot-checked a few and it looks like the culprit is "Could not open > requirements file: [Errno 2] No such file or directory: > '/opt/stack/requirements/upper-constraints.txt'". 
See: > - > https://zuul.opendev.org/t/openstack/build/421f7d57bc234119963edf3e9101ca43/ > log/logs/grenade.sh.txt#28855 - > https://zuul.opendev.org/t/openstack/build/915395d34fa34e99a6ec544ad1d3141b/ > log/logs/grenade.sh.txt#28849 - > https://zuul.opendev.org/t/openstack/build/9eaacc244047408d9e2d9ed09529ff3f/ > log/logs/grenade.sh.txt#28855 > > The last 2 neutron-grenade jobs on openstack/devstack have passed, but > there don't look to have been any changes in the devstack repo > stable/train since 10 March, so I'm not sure if that was luck or if the > QA team has made a change to get the job working. > > Any ideas? https://review.opendev.org/c/openstack/grenade/+/785831 and all its backports (actually in reverse order) are in the gate queue. Ciao -- Luigi From iurygregory at gmail.com Mon Apr 12 17:08:00 2021 From: iurygregory at gmail.com (Iury Gregory) Date: Mon, 12 Apr 2021 19:08:00 +0200 Subject: [ironic] No Review Jam Tomorrow Message-ID: Hello ironicers! During the upstream meeting today we decided to skip the Review Jam that will happen tomorrow, since we don't have any topics that would require attention. We also skipped the review jam from today (we totally forgot about it). -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the ironic-core and puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Apr 12 17:09:19 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 12 Apr 2021 12:09:19 -0500 Subject: [infra][cinder][neutron][qa] neutron-grenade failures on stable/train In-Reply-To: References: Message-ID: <178c70f1d0e.f997989d279125.7112266830106959000@ghanshyammann.com> ---- On Mon, 12 Apr 2021 12:00:51 -0500 Brian Rosmaita wrote ---- > The neutron-grenade job on stable/train has been mostly failing since 8 > April: > > https://zuul.opendev.org/t/openstack/builds?job_name=neutron-grenade&branch=stable%2Ftrain > > I spot-checked a few and it looks like the culprit is "Could not open > requirements file: [Errno 2] No such file or directory: > '/opt/stack/requirements/upper-constraints.txt'". See: > - > https://zuul.opendev.org/t/openstack/build/421f7d57bc234119963edf3e9101ca43/log/logs/grenade.sh.txt#28855 > - > https://zuul.opendev.org/t/openstack/build/915395d34fa34e99a6ec544ad1d3141b/log/logs/grenade.sh.txt#28849 > - > https://zuul.opendev.org/t/openstack/build/9eaacc244047408d9e2d9ed09529ff3f/log/logs/grenade.sh.txt#28855 > > The last 2 neutron-grenade jobs on openstack/devstack have passed, but > there don't look to have been any changes in the devstack repo > stable/train since 10 March, so I'm not sure if that was luck or if the > QA team has made a change to get the job working. There were multiple changes merged in devstack and grenade for tempest venv constraints at the same time and there were a few issues in sourcing the stackrc for checking the tempest venv constraints, let's wait for the below fixes to merged to get all cases green -https://review.opendev.org/q/If5f14654ab9aee2a140bbfb869b50d63cb289fdf -gmann > > Any ideas? 
> > > thanks, > brian > > From iurygregory at gmail.com Mon Apr 12 17:13:12 2021 From: iurygregory at gmail.com (Iury Gregory) Date: Mon, 12 Apr 2021 19:13:12 +0200 Subject: [ironic] Meetings next week (19-23 April) Message-ID: Hello ironicers, Since next week is the PTG, we decided during our upstream meeting today to skip the following meetings: - Upstream Meeting (Monday) - Review Jams (Monday/Tuesday) - SPUC in the APAC time (Friday) because it overlaps with the PTG. Thank you =) -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the ironic-core and puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Mon Apr 12 19:40:59 2021 From: zigo at debian.org (Thomas Goirand) Date: Mon, 12 Apr 2021 21:40:59 +0200 Subject: [ceilometer][octavia][Victoria] No metrics from octavia loadbalancer In-Reply-To: <46049C20-3C27-46E4-90D1-7FFBBCE96A9B@poczta.onet.pl> References: <46049C20-3C27-46E4-90D1-7FFBBCE96A9B@poczta.onet.pl> Message-ID: On 4/12/21 3:24 PM, Adam Tomas wrote: > Hi, > Im trying to get metrics from octavia’s load balancer, but can’t get any (gnocchi metric list | grep loadbalancer not returning anything). How should I configure ceilometer to get metrics from octavia? Ceilometer asks neutron for load balancer metrics, and neutron responses „resource cannot be found” (and that is obvious, because Neutron LBaaS service is deprecated). How to force neutron to get these resources from Octavia? > I’ve tried to use > [service_providers] > service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default > in neutron.conf, but it doesn’t work either… > > Best regards > Adam > > Hi Adam, It's up to Ceilometer to report it. Do create the resource types, add this to /etc/ceilometer/gnocchi_resources.yaml (note: if you don't have such a file in /etc/ceilometer, copy it there from somewhere below /usr/lib/python3/dist-packages/ceilometer): - resource_type: loadbalancer metrics: network.services.lb.outgoing.bytes: network.services.lb.incoming.bytes: network.services.lb.pool: network.services.lb.listener: network.services.lb.member: network.services.lb.health_monitor: network.services.lb.loadbalancer: network.services.lb.total.connections: network.services.lb.active.connections: Then do a ceilometer db_sync to populate the Gnocchi resource types. I hope this helps, Cheers, Thomas Goirand (zigo) From allison at openstack.org Mon Apr 12 19:59:45 2021 From: allison at openstack.org (Allison Price) Date: Mon, 12 Apr 2021 14:59:45 -0500 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: <1617820623.770226846@apps.rackspace.com> Message-ID: Hi Thomas, Yes, we have been exploring free software alternatives like Jitsi. We are actually using Jitsi for our upcoming PTG for almost half of the project teams. Researching the best solution that has the most accessibility for the global community is an ongoing initiative and we are trying to identify and implement the tools that make sense for the different use cases based on our experience. As we host more community meetings and continue our search of other tools (including some test runs), the tool may change, but for now we are planning to move forward with Zoom so that we can stream to YouTube and record for community members in different time zones. 
We will continue to keep the mailing list updated if we switch tools / move in a different direction. Appreciate your patience as we continue to navigate virtual meetings and events. Cheers, Allison > On Apr 10, 2021, at 11:42 AM, Thomas Goirand wrote: > > Hi, > > Thanks a lot for the initiative, I very much enjoy the updates each > cycle. However... > > On 4/7/21 8:37 PM, helena at openstack.org wrote: >> Hello ptls, >> >> The community meeting for the Wallaby release will be next Thursday, >> April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live >> session via Zoom as well as live-streamed to YouTube. > > I'm sorry, but as nothing changes, I have to state it again. > > https://www.zdnet.com/article/critical-zoom-vulnerability-triggers-remote-code-execution-without-user-input/ > > And that's not the first time. > > There's free software alternatives (like Jitsi) and the tooling to > deploy them are also available [1]. It has been proven to work very well > and scale nicely with thousands of viewers. > > I regret that I'm the only person protesting about Zoom... > > Cheers, > > Thomas Goirand (zigo) > > [1] https://debconf-video-team.pages.debian.net/docs/ > From zigo at debian.org Mon Apr 12 20:06:33 2021 From: zigo at debian.org (Thomas Goirand) Date: Mon, 12 Apr 2021 22:06:33 +0200 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: <1617820623.770226846@apps.rackspace.com> Message-ID: <6b0c7754-e237-10a4-d924-749aebead2dd@debian.org> Hi Allison, Thanks for the update. On 4/12/21 9:59 PM, Allison Price wrote: > Hi Thomas, > > for now we are planning to move forward with Zoom so that we can stream to YouTube and record for community members in different time zones. FYI, the Debian Video (for online Debconf /Mini-Debconf) can stream to the web or to VLC to a (very) large audience, the output of a Jitsi meeting. That proved to work perfectly for the last summer Debconf. Maybe you could dig into it? Of course, the video gets also recorded, so you may later upload it to Youtube / Peertube... Hoping this helps. > We will continue to keep the mailing list updated if we switch tools / move in a different direction. Appreciate your patience as we continue to navigate virtual meetings and events. > > Cheers, > Allison Cheers, Thomas Goirand (zigo) From allison at openstack.org Mon Apr 12 20:09:20 2021 From: allison at openstack.org (Allison Price) Date: Mon, 12 Apr 2021 15:09:20 -0500 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: <6b0c7754-e237-10a4-d924-749aebead2dd@debian.org> References: <1617820623.770226846@apps.rackspace.com> <6b0c7754-e237-10a4-d924-749aebead2dd@debian.org> Message-ID: <9CBABC7C-18EC-4670-BEF3-B0C0FC391FBB@openstack.org> > On Apr 12, 2021, at 3:06 PM, Thomas Goirand wrote: > > Hi Allison, > > Thanks for the update. > > On 4/12/21 9:59 PM, Allison Price wrote: >> Hi Thomas, >> >> for now we are planning to move forward with Zoom so that we can stream to YouTube and record for community members in different time zones. > > FYI, the Debian Video (for online Debconf /Mini-Debconf) can stream to > the web or to VLC to a (very) large audience, the output of a Jitsi > meeting. That proved to work perfectly for the last summer Debconf. > Maybe you could dig into it? > > Of course, the video gets also recorded, so you may later upload it to > Youtube / Peertube... > > Hoping this helps. That is really helpful - I’ll share with the team and share any questions we may have along the way. 
> >> We will continue to keep the mailing list updated if we switch tools / move in a different direction. Appreciate your patience as we continue to navigate virtual meetings and events. >> >> Cheers, >> Allison > > Cheers, > > Thomas Goirand (zigo) > From victoria at vmartinezdelacruz.com Mon Apr 12 20:49:30 2021 From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=) Date: Mon, 12 Apr 2021 22:49:30 +0200 Subject: [manila] Propose Liron Kuchlani and Vida Haririan to manila-tempest-plugin-core In-Reply-To: <20210407194436.vbtmfwts3r7ighh3@barron.net> References: <20210407194436.vbtmfwts3r7ighh3@barron.net> Message-ID: +1! Happy to see these proposals! Thanks Vida and Liron for all your contributions! On Wed, Apr 7, 2021 at 9:46 PM Tom Barron wrote: > Ditto, including the big thanks. > > On 07/04/21 16:04 -0300, Carlos Silva wrote: > >Big +1! > > > >Thank you, Liron and Vida! :) > > > >Em qua., 7 de abr. de 2021 às 15:40, Goutham Pacha Ravi < > >gouthampravi at gmail.com> escreveu: > > > >> Hello Zorillas, > >> > >> Vida's been our bug czar since the Ussuri release and she's > >> conceptualized and executed our successful bug triage strategy. She > >> has also painstakingly organized several documentation and code bug > >> squash events and kept the pulse on multi-release efforts. She's > >> taught me a lot about project management and you can see tangible > >> results here, I suppose :) > >> > >> Liron's fixed a lot of test code bugs and covered some old and > >> important test gaps over the past few releases. He's driving > >> standardization of the tempest plugin and bringing in best practices > >> from tempest, refstack and elsewhere into our testing. It's always a > >> pleasure to work with Liron since he's happy to provide and welcome > >> feedback. > >> > >> More recently, Liron and Vida have enabled us to work with the > >> InteropWG and define refstack guidelines. They've also gotten us > >> closer to members from the QA community who they work with more > >> closely downstream. In short, they bring in different perspectives > >> while also espousing the team's core values. So I'd like to propose > >> their addition to the manila-tempest-plugin-core team. > >> > >> Please give me your +/- 1s for this proposal. > >> > >> Thanks, > >> Goutham > >> > >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Apr 12 21:35:47 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 12 Apr 2021 16:35:47 -0500 Subject: [all][tc] Technical Committee next weekly meeting on April 15th at 1500 UTC Message-ID: <178c8031241.1163a8d15287294.6003502774058757783@ghanshyammann.com> Hello Everyone, Technical Committee's next weekly meeting is scheduled for April 15th at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, April 14th, at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From jungleboyj at gmail.com Tue Apr 13 01:43:45 2021 From: jungleboyj at gmail.com (Jay Bryant) Date: Mon, 12 Apr 2021 20:43:45 -0500 Subject: [cinder] How to post multiple patches. 
In-Reply-To: <42af7b9a-73ce-4780-b787-c36901f5cc1a@gmail.com>
References: <42af7b9a-73ce-4780-b787-c36901f5cc1a@gmail.com>
Message-ID: <98285557-7db7-592a-c160-b6dad4971a1e@gmail.com>

On 4/12/2021 11:18 AM, Brian Rosmaita wrote:
> On 4/12/21 12:52 AM, 野村和正 / NOMURA,KAZUMASA wrote:
>> Hi everyone,
>>
>> Hitachi has developed the out-of-tree driver as a Cinder driver. But
>> we want to deprecate the out-of-tree driver and support only the
>> in-tree driver.
>>
>> We need to submit about ten more patches(*1) for full features which
>> the out-of-tree driver has, such as Consistency Group and Volume
>> Replication.
>>
>> In that case, we have two options:
>>
>> 1. Submit two or three patches at once. In other words, submit two
>> or three patches to Xena, then submit another two or three patches
>> after previous patches were merged, and so on. This may give reviewers
>> the feeling of endlessness.
>>
>> 2. Submit all patches at once to Xena. This will give reviewers
>> the information on how many patches remain from the beginning, but many
>> patches may bother them.
>>
>> Does anyone have an opinion as to which option is better?
>
> My opinion is that option #1 is better, because as the initial patches
> are reviewed, issues will come up in review that you will be able to
> apply proactively to later patches on your own without reviewers
> having to bring them up, which will result in a better experience for
> all concerned.
>
> Also, we can have an idea of how many patches to expect (without your
> filing them all at once) if you file blueprints in Launchpad for each
> feature.  Please name them 'hitachi-consistency-group-support',
> 'hitachi-volume-replication', etc., so it's easy to see what driver
> they're for.  The blueprint doesn't need much detail; it's primarily
> for tracking purposes. You can see some examples here:
>   https://blueprints.launchpad.net/cinder/wallaby
>

I concur with Brian.  I think doing a few at a time will be less likely
to overwhelm the review team and it will help to prevent repeated
comments in subsequent patches if you are able to proactively fix the
subsequent patches before they are submitted.

Thanks for seeking input on this!

Jay

> cheers,
> brian
>
>>
>> Thanks,
>>
>> Kazumasa Nomura
>>
>> E-mail: kazumasa.nomura.rx at hitachi.com
>>
>
>

From gouthampravi at gmail.com  Tue Apr 13 05:21:48 2021
From: gouthampravi at gmail.com (Goutham Pacha Ravi)
Date: Mon, 12 Apr 2021 22:21:48 -0700
Subject: [Manila][Interop] request for 15-30 min on Xena PTG for Interop
In-Reply-To: References: Message-ID: 

On Sun, Apr 11, 2021 at 1:31 PM Kanevsky, Arkady wrote:
> Goutham,
>
> As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on
> the PTG meeting to go over Interop testing and any changes for Manila tempest
> or tempest configuration in the Wallaby cycle or changes planned for Xena.
>
> Once on the agenda, one of the Interop WG persons will attend and lead the
> discussion.

Thank you for your email Arkady. We'll add this to the agenda - I'll work
out the schedule in a couple of days, please stay tuned for a specific
time/day slot. In the meantime, happy to accommodate a recommendation if
you have one.

> Thanks,
> Arkady
>
> Arkady Kanevsky, Ph.D.
> SP Chief Technologist & DE
> Dell Technologies office of CTO
> Dell Inc. One Dell Way, MS PS2-91
> Round Rock, TX 78682, USA
> Phone: 512 7204955
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From ykarel at redhat.com  Tue Apr 13 05:57:52 2021
From: ykarel at redhat.com (Yatin Karel)
Date: Tue, 13 Apr 2021 11:27:52 +0530
Subject: [TripleO][CentOS8] container-puppet-neutron Could not find class ::neutron::plugins::ml2::ovs_driver for undercloud
In-Reply-To: References: Message-ID: 

Hi Ruslanas,

On Thu, Apr 8, 2021 at 9:41 PM Ruslanas Gžibovskis wrote:
>
> Hi Yatin,
>
> I have spotted that version of puppet-tripleo, but even after the downgrade I had/have the same issue. Should I downgrade even more? :) Or do you know when the fixed version might get into the production CentOS Ussuri release repo?
>
I have requested the tag release of puppet-neutron to clear this
https://review.opendev.org/c/openstack/releases/+/786006. Once it's
merged it can be included in the CentOS Ussuri release repo, RDO bots will
take care of it. If you want to test before it's released you can pick
puppet-neutron from the RDO trunk repo[1].

[1] https://trunk.rdoproject.org/centos8-ussuri/component/tripleo/current-tripleo/

> As you know now that it is affected also :)
>
>
> On Thu, 8 Apr 2021 at 16:18, Yatin Karel wrote:
>>
>> Hi Ruslanas,
>>
>> For the issue see
>> https://lists.centos.org/pipermail/centos-devel/2021-February/076496.html,
>> The puppet-neutron issue in the above was specific to Victoria but since
>> there is a new release for Ussuri recently, it also hit there too.
>>
>>
>> Thanks and Regards
>> Yatin Karel
>>
>> On Tue, Apr 6, 2021 at 1:19 AM Ruslanas Gžibovskis wrote:
>> >
>> > Hi all,
>> >
>> > While deploying the undercloud, it always fails on the puppet-container-neutron configuration; it fails with a missing ml2 ovs_driver plugin... downloading them using:
>> > openstack tripleo container image prepare default --output-env-file containers-prepare-parameters.yaml
>> >
>> > grep -v Warning /var/log/containers/stdouts/container-puppet-neutron.log http://paste.openstack.org/show/804180/
>> >
>> > builddir/install-undercloud.log ( contains info about container-puppet-neutron )
>> > http://paste.openstack.org/show/804181/
>> >
>> > undercloud.conf:
>> > https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf
>> >
>> > dnf list installed
>> > http://paste.openstack.org/show/804182/
>> >
>> > --
>> > Ruslanas Gžibovskis
>> > +370 6030 7030
>> >
>
> --
> Ruslanas Gžibovskis
> +370 6030 7030

Thanks and Regards
Yatin Karel

From yasufum.o at gmail.com  Tue Apr 13 06:06:26 2021
From: yasufum.o at gmail.com (Yasufumi Ogawa)
Date: Tue, 13 Apr 2021 15:06:26 +0900
Subject: [tacker] irc meeting
Message-ID: <188f15f1-4bc9-5c46-15f8-b73cfb33e353@gmail.com>

Hi team,

I'll be off for today's IRC meeting, so please skip it, or someone can host the meeting if there is any topic.

Thanks,
Yasufumi

From zhangbailin at inspur.com  Tue Apr 13 06:46:26 2021
From: zhangbailin at inspur.com (Brin Zhang(张百林))
Date: Tue, 13 Apr 2021 06:46:26 +0000
Subject: Re: [nova] Nominate sean-k-mooney for nova-specs-core
In-Reply-To: References: Message-ID: <96fd5ea1d09a40288a26810a64bfdc3d@inspur.com>

+1 from me, I saw it late, but I think it's worth +1.

brinzhang
Inspur Electronic Information Industry Co.,Ltd.

-----Original Message-----
From: Stephen Finucane [mailto:stephenfin at redhat.com]
Sent: March 31, 2021, 0:46
To: openstack-discuss
Subject: [nova] Nominate sean-k-mooney for nova-specs-core

Hey,

Sean has been working on nova for what seems like yonks now. Each cycle, they spend a significant amount of time reviewing proposed specs and contributing to discussions at the PTG.
This is important work and their contributions provide everyone with a deep pool of knowledge on all things networking and hardware upon which to draw. I think the nova project would benefit from their addition to the specs core reviewer team and I therefore propose we add Sean to nova-specs-core.

Assuming there are no objections, I'll work with gibi to add Sean to nova-specs-core next week.

Cheers,
Stephen

From sbauza at redhat.com  Tue Apr 13 07:09:23 2021
From: sbauza at redhat.com (Sylvain Bauza)
Date: Tue, 13 Apr 2021 09:09:23 +0200
Subject: [nova] Nominate sean-k-mooney for nova-specs-core
In-Reply-To: References: Message-ID: 

On Tue, Mar 30, 2021 at 6:51 PM Stephen Finucane wrote:

> Hey,
>
> Sean has been working on nova for what seems like yonks now. Each cycle, they
> spend a significant amount of time reviewing proposed specs and contributing to
> discussions at the PTG. This is important work and their contributions provide
> everyone with a deep pool of knowledge on all things networking and hardware
> upon which to draw. I think the nova project would benefit from their addition
> to the specs core reviewer team and I therefore propose we add Sean to
> nova-specs-core.
>
> Assuming there are no objections, I'll work with gibi to add Sean to
> nova-specs-core next week.
>
+1, sorry for the late approval, forgot to reply.

Cheers,

> Stephen
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From bkslash at poczta.onet.pl  Tue Apr 13 07:15:19 2021
From: bkslash at poczta.onet.pl (Adam Tomas)
Date: Tue, 13 Apr 2021 09:15:19 +0200
Subject: [ceilometer][octavia][Victoria] No metrics from octavia loadbalancer
In-Reply-To: References: <46049C20-3C27-46E4-90D1-7FFBBCE96A9B@poczta.onet.pl>
Message-ID: <83D97106-9C93-407E-8F5F-F93BABDAC0EB@poczta.onet.pl>

Hi Thomas, thank you for the answer.
I have this content in my gnocchi_resources.yaml

- resource_type: loadbalancer
  metrics:
    network.services.lb.outgoing.bytes:
    network.services.lb.incoming.bytes:
    network.services.lb.pool:
    network.services.lb.listener:
    network.services.lb.member:
    network.services.lb.health_monitor:
    dynamic.network.services.lb.loadbalancer:
    network.services.lb.total.connections:
    network.services.lb.active.connections:

But to be honest I didn't do db_sync. I'm using kolla-ansible and I have all services in containers, so should I run db_sync inside the ceilometer-central container? It's not automatically synced when a service/container is restarted?
Best regards
Adam

> On 12.04.2021, at 21:40, Thomas Goirand wrote:
>
> On 4/12/21 3:24 PM, Adam Tomas wrote:
>> Hi,
>> I'm trying to get metrics from octavia's load balancer, but can't get any (gnocchi metric list | grep loadbalancer not returning anything). How should I configure ceilometer to get metrics from octavia? Ceilometer asks neutron for load balancer metrics, and neutron responds „resource cannot be found” (and that is obvious, because the Neutron LBaaS service is deprecated). How to force neutron to get these resources from Octavia?
>> I've tried to use
>> [service_providers]
>> service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default
>> in neutron.conf, but it doesn't work either…
>>
>> Best regards
>> Adam
>>
>
> Hi Adam,
>
> It's up to Ceilometer to report it.
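For what it's worth, a sketch of how that sync step could be run under kolla-ansible — assuming kolla's usual ceilometer_central container name, and that ceilometer-upgrade (the current name for the old db_sync step) is available on the image:

    # assumption: kolla names the central agent container "ceilometer_central"
    docker exec ceilometer_central ceilometer-upgrade

A plain container restart would typically not rerun this step, so it would have to be triggered explicitly, or through a kolla-ansible reconfigure/upgrade run.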
Do create the resource types, add > this to /etc/ceilometer/gnocchi_resources.yaml (note: if you don't have > such a file in /etc/ceilometer, copy it there from somewhere below > /usr/lib/python3/dist-packages/ceilometer): > > - resource_type: loadbalancer > metrics: > network.services.lb.outgoing.bytes: > network.services.lb.incoming.bytes: > network.services.lb.pool: > network.services.lb.listener: > network.services.lb.member: > network.services.lb.health_monitor: > network.services.lb.loadbalancer: > network.services.lb.total.connections: > network.services.lb.active.connections: > > Then do a ceilometer db_sync to populate the Gnocchi resource types. > > I hope this helps, > Cheers, > > Thomas Goirand (zigo) From dsneddon at redhat.com Tue Apr 13 09:05:05 2021 From: dsneddon at redhat.com (Dan Sneddon) Date: Tue, 13 Apr 2021 02:05:05 -0700 Subject: [TripleO] Xena PTG schedule please review In-Reply-To: References: <519a70c1-1401-52e2-ae06-6be47e0e2c96@redhat.com> Message-ID: On Fri, Apr 9, 2021 at 12:10 AM Marios Andreou wrote: > On Fri, Apr 9, 2021 at 9:46 AM Michele Baldessari > wrote: > > > > On Fri, Apr 09, 2021 at 08:27:55AM +0200, Carlos Goncalves wrote: > > > On Fri, Apr 9, 2021 at 12:17 AM Steve Baker wrote: > > > > > > > My Tuesday Baremetal 1510-1550 slot is ok, but it would be better > for me > > > > if it was earlier in the day. I'll probably make more sense at 1am > than 3am > > > > :) > > > > > > > ouch sorry Steve and thank you for participating despite the bad > time-difference for you! Yes we can make this change see below > > > > > > Could I maybe swap with NETWORKING: 1300-1340? > > > > > > > Fine with me. > > > Michele, Dan? > > > > Totally fine by me > > Great thanks folks - this works well actually since Dan S. already > indicated (in another reply to me) that your current slot (1300-1340 > UTC) is too early (like 5 am) so moving it to the later slot should > work better for him too. > > I have just updated the schedule so on Tuesday 20 we have Baremetal > sbaker @ 1300-1340 and then the networking/bgp/frr folks at 1510-1550 > > thank you! > > regards, marios > > Thanks, I could do either, but 1510-1550 is better for me. -Dan > > > > > > > > On 8/04/21 4:24 am, Marios Andreou wrote: > > > > > > > > Hello TripleO o/ > > > > > > > > Thanks again to everybody who has volunteered to lead a session for > > > > the coming Xena TripleO project teams gathering. > > > > > > > > I've had a go at the agenda [1] trying to keep it to max 4 or 5 > > > > sessions per day with some breaks. > > > > > > > > Please review the slot assigned for your session at [1]. If that time > > > > is not ok then please let me know as soon as possible and indicate if > > > > you want it later or earlier or on any other day. If you've decided > > > > the session no longer makes sense then also please tell me and we can > > > > move things around accordingly to finish earlier. > > > > > > > > I'd like to finalise the schedule by next Monday 12 April which is a > > > > week before PTG. We can and likely will make changes after this date > > > > but last minute changes are best avoided to allow folks to schedule > > > > their PTG attendance across projects. > > > > > > > > Thanks everybody for your help! 
Looking forward to interesting
> > > > presentations and discussions as always
> > > >
> > > > regards, marios
> > > >
> > > > [1] https://etherpad.opendev.org/p/tripleo-ptg-xena
> > > >
> > > >
> > --
> > Michele Baldessari
> > C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D
> >

--
Dan Sneddon | Senior Principal Software Engineer
dsneddon at redhat.com | redhat.com/cloud
dsneddon:irc | @dxs:twitter
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From zigo at debian.org  Tue Apr 13 10:02:58 2021
From: zigo at debian.org (Thomas Goirand)
Date: Tue, 13 Apr 2021 12:02:58 +0200
Subject: [ceilometer][octavia][Victoria] No metrics from octavia loadbalancer
In-Reply-To: <83D97106-9C93-407E-8F5F-F93BABDAC0EB@poczta.onet.pl>
References: <46049C20-3C27-46E4-90D1-7FFBBCE96A9B@poczta.onet.pl> <83D97106-9C93-407E-8F5F-F93BABDAC0EB@poczta.onet.pl>
Message-ID: <658f3ac3-65dd-5e93-5c44-604ad0e7c0ad@debian.org>

On 4/13/21 9:15 AM, Adam Tomas wrote:
> Hi Thomas, thank you for the answer.
> I have this content in my gnocchi_resources.yaml
>
> - resource_type: loadbalancer
>   metrics:
>     network.services.lb.outgoing.bytes:
>     network.services.lb.incoming.bytes:
>     network.services.lb.pool:
>     network.services.lb.listener:
>     network.services.lb.member:
>     network.services.lb.health_monitor:
>     dynamic.network.services.lb.loadbalancer:
>     network.services.lb.total.connections:
>     network.services.lb.active.connections:
>
> But to be honest I didn't do db_sync. I'm using kolla-ansible and I have all services in containers, so should I run db_sync inside the ceilometer-central container? It's not automatically synced when a service/container is restarted?
> Best regards
> Adam

Hi Adam,

I have zero knowledge with Kolla, and can't help you with it. I'm using my own installer, which is developed for Debian (and released in the soon coming Debian 11, aka Bullseye):
https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer

Cheers,

Thomas Goirand (zigo)

From bkslash at poczta.onet.pl  Tue Apr 13 10:17:16 2021
From: bkslash at poczta.onet.pl (Adam Tomas)
Date: Tue, 13 Apr 2021 12:17:16 +0200
Subject: [ceilometer][octavia][Victoria] No metrics from octavia loadbalancer
In-Reply-To: References: <46049C20-3C27-46E4-90D1-7FFBBCE96A9B@poczta.onet.pl>
Message-ID: <7714915B-3B68-44CC-8270-412CDBAC3350@poczta.onet.pl>

OK. I have some progress. I created meters.d and pollsters.d (in pollsters.d I've created octavia.yaml with sample type gauge and unit "load balancer") and now I can see some measures, but they only show whether the load balancer exists or not. Is there any way to force the dynamic pollster to ask for the URL v2/lbaas/loadbalancers/[ID]/stats?
Best regards
Adam

> On 12.04.2021, at 21:40, Thomas Goirand wrote:
>
> On 4/12/21 3:24 PM, Adam Tomas wrote:
>> Hi,
>> I'm trying to get metrics from octavia's load balancer, but can't get any (gnocchi metric list | grep loadbalancer not returning anything). How should I configure ceilometer to get metrics from octavia? Ceilometer asks neutron for load balancer metrics, and neutron responds „resource cannot be found” (and that is obvious, because the Neutron LBaaS service is deprecated). How to force neutron to get these resources from Octavia?
>> I've tried to use
>> [service_providers]
>> service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default
>> in neutron.conf, but it doesn't work either…
>>
>> Best regards
>> Adam
>>
>
> Hi Adam,
>
> It's up to Ceilometer to report it.
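As a sketch of what such a pollster definition could look like — built only from the documented dynamic pollster keys, polling the loadbalancer list endpoint; whether ceilometer can template the per-loadbalancer /stats URL is left as an open question rather than something this example answers:

    - name: "dynamic.network.services.lb.loadbalancer"
      sample_type: "gauge"
      unit: "loadbalancer"
      value_attribute: "operating_status"
      endpoint_type: "load-balancer"
      url_path: "v2/lbaas/loadbalancers"
      metadata_fields:
        - "name"
      value_mapping:
        ONLINE: "1"
        OFFLINE: "0"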
> Do create the resource types, add
> this to /etc/ceilometer/gnocchi_resources.yaml (note: if you don't have
> such a file in /etc/ceilometer, copy it there from somewhere below
> /usr/lib/python3/dist-packages/ceilometer):
>
> - resource_type: loadbalancer
>   metrics:
>     network.services.lb.outgoing.bytes:
>     network.services.lb.incoming.bytes:
>     network.services.lb.pool:
>     network.services.lb.listener:
>     network.services.lb.member:
>     network.services.lb.health_monitor:
>     network.services.lb.loadbalancer:
>     network.services.lb.total.connections:
>     network.services.lb.active.connections:
>
> Then do a ceilometer db_sync to populate the Gnocchi resource types.
>
> I hope this helps,
> Cheers,
>
> Thomas Goirand (zigo)

From dtantsur at redhat.com  Tue Apr 13 11:48:32 2021
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Tue, 13 Apr 2021 13:48:32 +0200
Subject: [ironic] Meetings next week (19-23 April)
In-Reply-To: References: Message-ID: 

On Mon, Apr 12, 2021 at 7:15 PM Iury Gregory wrote:

> Hello ironicers,
>
> Since next week is the PTG, we decided during our upstream meeting today
> to skip the following meetings:
> - Upstream Meeting (Monday)
> - Review Jams (Monday/Tuesday)
> - SPUC in the APAC time (Friday) because it overlaps with the PTG.
>
A small correction: in the USA time. The APAC one does not seem to overlap with the PTG.

> Thank you =)
>
> --
> Att[]'s
> Iury Gregory Melo Ferreira
> MSc in Computer Science at UFCG
> Part of the ironic-core and puppet-manager-core team in OpenStack
> Software Engineer at Red Hat Czech
> Social: https://www.linkedin.com/in/iurygregory
> E-mail: iurygregory at gmail.com
>
--
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From noonedeadpunk at ya.ru  Tue Apr 13 12:42:13 2021
From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov)
Date: Tue, 13 Apr 2021 15:42:13 +0300
Subject: [openstack-ansible] OSA Meeting Poll
In-Reply-To: <202851617797329@mail.yandex.ru>
References: <170911617794404@mail.yandex.ru> <202851617797329@mail.yandex.ru>
Message-ID: <1324211618317615@mail.yandex.ru>

An HTML attachment was scrubbed...
URL: 
From josephine.seifert at secustack.com  Tue Apr 13 13:26:35 2021
From: josephine.seifert at secustack.com (Josephine Seifert)
Date: Tue, 13 Apr 2021 15:26:35 +0200
Subject: [OSSN-0089] Missing configuration option in Secure Live Migration guide leads to unencrypted traffic
Message-ID: 

Missing configuration option in Secure Live Migration guide leads to unencrypted traffic
--------------------------------------------------------------------------------------------------------------------

### Summary ###
The guide to enable secure live migration with QEMU-native TLS on nova compute nodes missed an important config option. Without this option a hard-coded part in nova is triggered which sets the default route to TCP instead of TLS. This leads to an unencrypted migration of the RAM without throwing any kind of error.

### Affected Services / Software ###
Nova / Victoria, Ussuri, Train, Stein (might also be affected: Rocky, Queens, Pike, Ocata)

### Discussion ###
In the OpenStack guide to set up secure live migration with QEMU-native TLS there are a few configuration options given, which have to be applied to nova compute nodes.
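For reference, the options involved look roughly like this — a sketch only; live_migration_scheme is the option the guide originally omitted (see Recommended Actions below), and live_migration_with_native_tls is the switch the guide itself documents:

    [libvirt]
    live_migration_with_native_tls = true
    live_migration_scheme = tls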
After following the instructions and setting up everything it seems to work as expected. But after checking that libvirt is able to use tls using tcpdump to listen on the port for tls while manually executing libvirt commands, the same check for live migration of an instance through openstack fails. Listening on the port for unencrypted tcp-traffic shows that OpenStack still uses the unencrypted TCP path instead of the TLS one for the migration. The reason for this is a patch from Ocata which adds the calculation of the live-migration-uri in code: https://review.opendev.org/c/openstack/nova/+/410817/ The config parameter ``live_migration_uri`` was deprecated in favor of ``live_migration_scheme`` and the default set to tcp. This leads to the problem that if none of these two config options are set, libvirt will always use the default tcp connection. To enable QEMU-native TLS to be used in nova one of them has to be set so that a TLS connection can be established. Currently the guide does not show that this is necessary and there was no other documentation indicating that these config options are important for the usage of QEMU-native TLS. As there is no documentation which recognizes this and it is hard to find this problem as the migration happens even without those config option set - not stating that it is still unencrypted, it might have been unrecognized in various deployments, which followed the guide. ### Recommended Actions ### For deployments using secure live migration with QEMU-native TLS: 1. Check the config of all nova compute nodes. The ``libvirt`` section needs to have either ``live_migration_uri`` (deprecated) or ``live_migration_scheme`` configured. 2. If neither of those config options are present, add ``live_migration_scheme = tls`` to enable the use of the tls connection. #### Patches #### The guide for secure live migration was updated to reflect the necessary configuration options and now has a note, which warns users that not setting all config options may lead into a seemingly working deployment, which still uses unencrypted traffic for the ram-migration. Master(Wallaby): https://review.opendev.org/c/openstack/nova/+/781030 Victoria: https://review.opendev.org/c/openstack/nova/+/781211 Ussuri: https://review.opendev.org/c/openstack/nova/+/782126 Train: https://review.opendev.org/c/openstack/nova/+/782430 Stein: https://review.opendev.org/c/openstack/nova/+/783199 ### Contacts / References ### Author: Josephine Seifert, secustack GmbH This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0089 Original LaunchPad Bug : https://bugs.launchpad.net/nova/+bug/1919357 Mailing List : [Security] tag on openstack-discuss at lists.openstack.org OpenStack Security Project : https://launchpad.net/~openstack-ossg From whayutin at redhat.com Tue Apr 13 14:22:57 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 13 Apr 2021 08:22:57 -0600 Subject: [tripleo][ci] long queues, ceph / scenario001 issue Message-ID: Greetings, FYI.. https://bugs.launchpad.net/tripleo/+bug/1923529 The ceph folks and CI are working together to successfully transition ceph from octopus to pacific. At the moment any scenario001 job will fail. To ease the transition, we're proposing [1] Please note: If you have a change in the gate that triggers scenario001 I will be rebasing or adding a depends-on w/ [1] to ensure the gate is not reset multiple times today. The gate queue will probably peak well over 26hr today.. so please lay off the workflows a bit. Your patience is appreciated! 
[1] https://review.opendev.org/c/openstack/tripleo-common/+/786053 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Tue Apr 13 14:29:54 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Tue, 13 Apr 2021 17:29:54 +0300 Subject: [TripleO][CentOS8] container-puppet-neutron Could not find class ::neutron::plugins::ml2::ovs_driver for undercloud In-Reply-To: References: Message-ID: Hi Yatin, Thank you for your work on this. Much appreciated! On Tue, 13 Apr 2021, 08:58 Yatin Karel, wrote: > Hi Ruslanas, > > On Thu, Apr 8, 2021 at 9:41 PM Ruslanas Gžibovskis > wrote: > > > > Hi Yatin, > > > > I have spotted that version of puppet-tripleo, but even after downgrade > I had/have same issue. should I downgrade even more? :) OR You know when > fixed version might get in for production centos ussuri release repo? > > > I have requested the tag release of puppet-neutron to clear this > https://review.opendev.org/c/openstack/releases/+/786006. Once it's > merged it can be included in centos ussuri release repo, RDO bots will > take care of it. If you want to test before it's released you can pick > puppet-neutron from RDO trunk repo[1]. > > [1] > https://trunk.rdoproject.org/centos8-ussuri/component/tripleo/current-tripleo/ > > > As you know now that it is affected also :) > > > > > > > > > > On Thu, 8 Apr 2021 at 16:18, Yatin Karel wrote: > >> > >> Hi Ruslanas, > >> > >> For the issue see > >> > https://lists.centos.org/pipermail/centos-devel/2021-February/076496.html, > >> The puppet-neutron issue in above was specific to victoria but since > >> there is new release for ussuri recently, it also hit there too. > >> > >> > >> Thanks and Regards > >> Yatin Karel > >> > >> On Tue, Apr 6, 2021 at 1:19 AM Ruslanas Gžibovskis > wrote: > >> > > >> > Hi all, > >> > > >> > While deploying undercloud, always fails on puppet-container-neutron > configuration, it fails with missing ml2 ovs_driver plugin... downloading > them using: > >> > openstack tripleo container image prepare default --output-env-file > containers-prepare-parameters.yaml > >> > > >> > grep -v Warning > /var/log/containers/stdouts/container-puppet-neutron.log > http://paste.openstack.org/show/804180/ > >> > > >> > builddir/install-undercloud.log ( contains info about > container-puppet-neutron ) > >> > http://paste.openstack.org/show/804181/ > >> > > >> > undercloud.conf: > >> > > https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf > >> > > >> > dnf list installed > >> > http://paste.openstack.org/show/804182/ > >> > > >> > -- > >> > Ruslanas Gžibovskis > >> > +370 6030 7030 > >> > > > > > > -- > > Ruslanas Gžibovskis > > +370 6030 7030 > > Thanks and Regards > Yatin Karel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Tue Apr 13 14:30:44 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Tue, 13 Apr 2021 17:30:44 +0300 Subject: [openstack-ansible] OSA Meeting Poll In-Reply-To: <1324211618317615@mail.yandex.ru> References: <170911617794404@mail.yandex.ru> <202851617797329@mail.yandex.ru> <1324211618317615@mail.yandex.ru> Message-ID: <1304221618324125@mail.yandex.ru> Thanks everyone for participating. New selected OpenStack-Ansible meeting time is: 15:00 UTC, Tuesday. New time is applicable starting from today, Apr 13, 2021. 13.04.2021, 15:48, "Dmitriy Rabotyagov" : > Despite time for vote has passed, I will hold voting opened for several more hours. 
> So it's a final call to vote for the new meeting time for interested parties.
>
> Link to poll is: https://doodle.com/poll/m554dx4mrsideuzi
>
> 07.04.2021, 15:15, "Dmitriy Rabotyagov" :
>> Sorry for the typo in the link, added an extra slash in the end.
>>
>> Correct link is: https://doodle.com/poll/m554dx4mrsideuzi
>>
>> 07.04.2021, 14:31, "Dmitriy Rabotyagov" :
>>> Hi!
>>>
>>> We haven't changed the OSA meeting time in a long while and have stuck with the current option (Tuesday, 16:00 UTC).
>>>
>>> So we decided it's time to make a poll regarding the preferred time for OSA meetings, since the list of interested parties and circumstances might have changed since the meeting time was picked.
>>>
>>> You can find the poll via link [1]. The poll is open till Monday, April 12 2021. Please make sure you vote before this time.
>>>
>>> [1] https://doodle.com/poll/m554dx4mrsideuzi/
>>>
>>> --
>>> Kind Regards,
>>> Dmitriy Rabotyagov
>>
>> --
>> Kind Regards,
>> Dmitriy Rabotyagov
>
> --
> Kind Regards,
> Dmitriy Rabotyagov

--
Kind Regards,
Dmitriy Rabotyagov

From kennelson11 at gmail.com  Tue Apr 13 15:01:44 2021
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Tue, 13 Apr 2021 08:01:44 -0700
Subject: [PTG] vPTG April 2021 PTGBot, Etherpads, & IRC
Message-ID: 

Hello!

We just wanted to take a second to point out a couple things as we all get ready for the PTG.

Firstly, the PTGBot is up to date and ready to go-- as are the autogenerated etherpads! You can see the schedule page, etherpads, etc here[1]. If you/your team have already created an etherpad, please feel free to use the PTGBot[2] to override the default, auto-generated one.

Secondly, but perhaps more importantly, with the migration to being more inclusive of projects outside of just openstack, we will be using the #openinfra-events IRC channel! The redirect is in place so you should automatically get sent to the right channel if you try to join the old one.

And one more plug: Please register[3]! It's free and important for getting the zoom information, etc.

Thanks!

-The Kendalls (diablo_rojo & wendallkaters)

[1] PTG Website www.openstack.org/ptg
[2] PTGbot Etherpad Override Command: https://github.com/openstack/ptgbot#etherpad
[3] PTG Registration: https://april2021-ptg.eventbrite.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From abishop at redhat.com  Tue Apr 13 15:56:37 2021
From: abishop at redhat.com (Alan Bishop)
Date: Tue, 13 Apr 2021 08:56:37 -0700
Subject: [cinder] How to post multiple patches.
In-Reply-To: <98285557-7db7-592a-c160-b6dad4971a1e@gmail.com>
References: <42af7b9a-73ce-4780-b787-c36901f5cc1a@gmail.com> <98285557-7db7-592a-c160-b6dad4971a1e@gmail.com>
Message-ID: 

On Mon, Apr 12, 2021 at 6:47 PM Jay Bryant wrote:
>
> On 4/12/2021 11:18 AM, Brian Rosmaita wrote:
> > On 4/12/21 12:52 AM, 野村和正 / NOMURA,KAZUMASA wrote:
> >> Hi everyone,
> >>
> >> Hitachi has developed the out-of-tree driver as a Cinder driver. But
> >> we want to deprecate the out-of-tree driver and support only the
> >> in-tree driver.
> >>
> >> We need to submit about ten more patches(*1) for full features which
> >> the out-of-tree driver has, such as Consistency Group and Volume
> >> Replication.
> >>
> >> In that case, we have two options:
> >>
> >> 1. Submit two or three patches at once. In other words, submit two
> >> or three patches to Xena, then submit another two or three patches
> >> after previous patches were merged, and so on. This may give reviewers
> >> the feeling of endlessness.
>
I just want to add that you are not limited to submitting a single batch of patches in a cycle. If you can get the first batch accepted in Xena, you are free to submit other batches in Xena. Just continue to bear in mind the date for freezing driver patches. The bottom line is the sooner you submit patches and work on resolving reviewer feedback, the sooner you can propose additional patches.

Alan

> >> 2. Submit all patches at once to Xena. This will give reviewers
> >> the information on how many patches remain from the beginning, but many
> >> patches may bother them.
> >>
> >> Does anyone have an opinion as to which option is better?
> >
> > My opinion is that option #1 is better, because as the initial patches
> > are reviewed, issues will come up in review that you will be able to
> > apply proactively to later patches on your own without reviewers
> > having to bring them up, which will result in a better experience for
> > all concerned.
> >
> > Also, we can have an idea of how many patches to expect (without your
> > filing them all at once) if you file blueprints in Launchpad for each
> > feature. Please name them 'hitachi-consistency-group-support',
> > 'hitachi-volume-replication', etc., so it's easy to see what driver
> > they're for. The blueprint doesn't need much detail; it's primarily
> > for tracking purposes. You can see some examples here:
> > https://blueprints.launchpad.net/cinder/wallaby
> >
> I concur with Brian. I think doing a few at a time will be less likely
> to overwhelm the review team and it will help to prevent repeated
> comments in subsequent patches if you are able to proactively fix the
> subsequent patches before they are submitted.
>
> Thanks for seeking input on this!
>
> Jay
>
> > cheers,
> > brian
> >
> >>
> >> Thanks,
> >>
> >> Kazumasa Nomura
> >>
> >> E-mail: kazumasa.nomura.rx at hitachi.com
> >>
> >
> >

From helena at openstack.org  Tue Apr 13 16:50:27 2021
From: helena at openstack.org (helena at openstack.org)
Date: Tue, 13 Apr 2021 12:50:27 -0400 (EDT)
Subject: Join Us Live for the Wallaby Release Community Meeting
Message-ID: <1618332627.338524703@apps.rackspace.com>

The Wallaby release is here (whoop! whoop!) and the Community Meeting for it will be hosted by Technical Committee members, Ghanshyam Mann and Kendall Nelson, this Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom [1] as well as live-streamed to YouTube [2].

The meeting will be kicked off with some exciting news from the OpenInfra Foundation and will be followed by updates from PTLs. The Community Meeting agenda is as follows:

Wallaby Overview - Ghanshyam Mann & Kendall Nelson
Cinder Update - Brian Rosmaita
Neutron Update - Slawek Kaplonski
Ironic Update - Julia Kreger
Nova Update - Balazs Gibizer
Cyborg Update - Xin-Ran Wang
Masakari Update - Radosław Piliszek
Manila Update - Goutham Pacha Ravi
Live Q&A session - Ghanshyam Mann & Kendall Nelson

PTLs that are presenting: Please make sure your slides are turned in to me EOD (Tuesday, April 13th). Let me know if you have any other questions.

Cheers,
Helena

[1] https://zoom.us/j/94881181840?pwd=cmc2Wk1wYlcwNnVOTk9lYWQxVlRadz09
[2] https://www.youtube.com/channel/UCQ74G2gKXdpwZkXEsclzcrA
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From cboylan at sapwetik.org Tue Apr 13 16:51:25 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 13 Apr 2021 09:51:25 -0700 Subject: [tact-sig][dev][infra][qa] Join OpenDev and the TaCT SIG at the PTG Message-ID: <1f425c10-d677-4935-8db7-247755b3d96d@www.fastmail.com> The PTG is next week, and OpenDev is participating alongside the OpenStack TaCT SIG. We are going to try something a bit different this time around, which is to treat the time as office hours rather than time for our own projects. We will be meeting on April 22 from 14:00 - 16:00 UTC and 22:00 - 00:00 UTC in https://meetpad.opendev.org/apr2021-ptg-opendev. Join us if you would like to: * Start contributing to either OpenDev or the TaCT sig. * Debug a particular job problem. * Learn how to write and review Zuul jobs and related configs. * Learn about specific services or how they are deployed. * And anything else related to OpenDev and our project infrastructure. Feel free to add your topics and suggest preferred times for those topics here: https://etherpad.opendev.org/p/apr2021-ptg-opendev. This etherpad corresponds to the document that will be auto loaded in our meetpad room above. I will also be around next week and will try to keep a flexible schedule. Feel free to reach out if you would like us to join discussions as they happen. See you there, Clark From ildiko.vancsa at gmail.com Tue Apr 13 17:48:54 2021 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 13 Apr 2021 19:48:54 +0200 Subject: [edge][ptg] Edge WG sessions at the PTG Message-ID: <0A4AC929-2304-45BC-9A82-6B2BB5CD73BC@gmail.com> Hi, I’m reaching out to you to draw your attention to the Edge Computing Group’s agenda for the PTG next week. We are having a couple of cross-project and cross-community discussions next week as well as discussing topics around use cases and reference architectures which can be a good input for most OpenStack project teams as well. Our agenda is the following: * Monday (April 19) * 1400 UTC - Intro and Agenda bashing * 1415 UTC - Use cases * 1500 UTC - ETSI MEC cross-community session * Tuesday (April 20) * 1400 UTC - Storage discussion * 1430 UTC - Applications and underlying network transport * 1500 UTC - Reference architectures * Wednesday * 1300 UTC - StarlingX cross-project session * 1400 UTC - Akraino cross-community session * 1500 UTC - GSMA cross-community session For more detailed information about the above topics please see our etherpad: https://etherpad.opendev.org/p/ecg-ptg-april-2021 Please let me know if you have additional topics for the sessions next week or if you have questions to the items on the agenda. Thanks and Best Regards, Ildikó (IRC: ildikov on Freenode) From amy at demarco.com Tue Apr 13 19:15:51 2021 From: amy at demarco.com (Amy Marrich) Date: Tue, 13 Apr 2021 14:15:51 -0500 Subject: Diversity and Inclusion Social Hour Sponsored by RDO at the PTG Message-ID: On behalf of the RDO Community, please join us for an hour of Trivia on Thursday April 22 at 17:00 UTC. We will have trivia related to OpenStack and the other OIF projects as well as the cities we've held events. Time permitting we'll have some Pop Culture trivia as well. Prizes for the first 3 placings and registration is Free! https://eventyay.com/e/5f05de57 Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jimmy at openstack.org Tue Apr 13 19:26:54 2021 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 13 Apr 2021 14:26:54 -0500 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: References: Message-ID: Just a quick follow up that the 2020 User Survey Analytics are up on the openstack.org site: https://www.openstack.org/analytics Cheers, Jimmy On Apr 9 2021, at 2:28 pm, Julia Kreger wrote: > Thanks Allison! > > Even telling friends doesn't really help, since we would be > self-skewing. I guess part of the conundrum is it is easy for people > not to really be fully aware of the extent of their usage and the mix > of various projects under the hood. They know they get a star ship and > it has warp engines, but they may not know the factory that turned out > the starship. Only the geekiest might know those details. Anyway, I've > been down this path before w/r/t the user survey. > > C'est la vie. Back to work! > -Julia > On Fri, Apr 9, 2021 at 12:07 PM Allison Price wrote: > > > > Hi Julia, > > > > It looks like for Ironic, 19% of production environments are running it in production, 15% running it in testing, and 22% are interested. It’s a little down from 2019, but it was also a smaller sample size (2019: 331; 2020: 209). I am hoping to get a bigger turnout this year (tell all your friends!) so that we can get a better picture. > > > > Let me know if there is any other data you would like pulled. > > > > Thanks! > > Allison > > > > > > > On Apr 8, 2021, at 9:43 AM, Julia Kreger wrote: > > > > > > Hey Allison, > > > > > > Metrics would be awesome and I'm just looking for the key high level > > > adoption information as that is good to put into the presentation. > > > > > > -Julia > > > > > > On Wed, Apr 7, 2021 at 3:15 PM Allison Price wrote: > > >> > > >> Hi Julia, > > >> > > >> I see we haven’t pushed it live to openstack.org/analytics yet. I have pinged our team so that we can, but if you need metrics in the meantime, please let me know. > > >> > > >> Thanks! > > >> Allison > > >> > > >> > > >> > > >> > > >> > > >> On Apr 7, 2021, at 4:42 PM, Julia Kreger wrote: > > >> > > >> Related, Is there 2020 user survey data available? > > >> > > >> On Wed, Apr 7, 2021 at 12:40 PM helena at openstack.org > > >> wrote: > > >> > > >> > > >> Hello ptls, > > >> > > >> > > >> > > >> The community meeting for the Wallaby release will be next Thursday, April 15th at 14:00 UTC (9:00 AM CST). The meeting will be a live session via Zoom as well as live-streamed to YouTube. > > >> > > >> > > >> > > >> If you are a PTL interested in presenting an update for your project at the Wallaby community meeting, please let me know by this Friday, April 9th. Slides will be due next Tuesday, April 13th, and please find a template attached you may use if you wish. > > >> > > >> > > >> > > >> Let me know if you have any other questions! > > >> > > >> > > >> > > >> Thank you for your participation, > > >> > > >> Helena > > >> > > >> > > >> > > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 
From zigo at debian.org  Tue Apr 13 21:19:42 2021
From: zigo at debian.org (Thomas Goirand)
Date: Tue, 13 Apr 2021 23:19:42 +0200
Subject: [ptl] Wallaby Release Community Meeting
In-Reply-To: References: Message-ID: <6e6f3449-b621-d327-d12e-d4e29e2920f4@debian.org>

On 4/13/21 9:26 PM, Jimmy McArthur wrote:
> Just a quick follow up that the 2020 User Survey Analytics are up on the
> openstack.org site:
>
> https://www.openstack.org/analytics
> Cheers,
> Jimmy

Hi Jimmy,

Could we get the possibility to choose Wallaby in the marketplace admin please? It's already working for me in Debian (I could spawn VMs) and I'd like to edit the part for Debian.

Cheers,

Thomas Goirand (zigo)

From jimmy at openstack.org  Tue Apr 13 21:27:25 2021
From: jimmy at openstack.org (Jimmy McArthur)
Date: Tue, 13 Apr 2021 16:27:25 -0500
Subject: [ptl] Wallaby Release Community Meeting
In-Reply-To: <6e6f3449-b621-d327-d12e-d4e29e2920f4@debian.org>
References: <1617820623.770226846@apps.rackspace.com> <6e6f3449-b621-d327-d12e-d4e29e2920f4@debian.org>
Message-ID: <21AAF6D4-BEA9-44EC-BC5D-12FB2124D2D3@getmailspring.com>

Hi Thomas -

Working on that one :) Thanks for the heads up. I'll ping you as soon as it's available.

Cheers,
Jimmy

On Apr 13 2021, at 4:19 pm, Thomas Goirand wrote:
> On 4/13/21 9:26 PM, Jimmy McArthur wrote:
> > Just a quick follow up that the 2020 User Survey Analytics are up on the
> > openstack.org site:
> >
> > https://www.openstack.org/analytics
> > Cheers,
> > Jimmy
>
> Hi Jimmy,
> Could we get the possibility to choose Wallaby in the marketplace admin
> please? It's already working for me in Debian (I could spawn VMs) and
> I'd like to edit the part for Debian.
>
> Cheers,
> Thomas Goirand (zigo)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From gmann at ghanshyammann.com  Tue Apr 13 21:59:17 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Tue, 13 Apr 2021 16:59:17 -0500
Subject: [infra][cinder][neutron][qa] neutron-grenade failures on stable/train
In-Reply-To: <178c70f1d0e.f997989d279125.7112266830106959000@ghanshyammann.com>
References: <178c70f1d0e.f997989d279125.7112266830106959000@ghanshyammann.com>
Message-ID: <178cd3ef0ad.dba74c6e357934.970580417774215231@ghanshyammann.com>

---- On Mon, 12 Apr 2021 12:09:19 -0500 Ghanshyam Mann wrote ----
> ---- On Mon, 12 Apr 2021 12:00:51 -0500 Brian Rosmaita wrote ----
> > The neutron-grenade job on stable/train has been mostly failing since 8
> > April:
> >
> > https://zuul.opendev.org/t/openstack/builds?job_name=neutron-grenade&branch=stable%2Ftrain
> >
> > I spot-checked a few and it looks like the culprit is "Could not open
> > requirements file: [Errno 2] No such file or directory:
> > '/opt/stack/requirements/upper-constraints.txt'". See:
> > - https://zuul.opendev.org/t/openstack/build/421f7d57bc234119963edf3e9101ca43/log/logs/grenade.sh.txt#28855
> > - https://zuul.opendev.org/t/openstack/build/915395d34fa34e99a6ec544ad1d3141b/log/logs/grenade.sh.txt#28849
> > - https://zuul.opendev.org/t/openstack/build/9eaacc244047408d9e2d9ed09529ff3f/log/logs/grenade.sh.txt#28855
> >
> > The last 2 neutron-grenade jobs on openstack/devstack have passed, but
> > there don't look to have been any changes in the devstack repo
> > stable/train since 10 March, so I'm not sure if that was luck or if the
> > QA team has made a change to get the job working.
>
> There were multiple changes merged in devstack and grenade for tempest venv constraints at the same time and
> there were a few issues in sourcing the stackrc for checking the tempest venv constraints, let's
> wait for the below fixes to be merged to get all cases green
>
> - https://review.opendev.org/q/If5f14654ab9aee2a140bbfb869b50d63cb289fdf

All patches are merged now along with making stackviz non-failing [1]. All stable branches should be green now for this issue, please recheck.

[1] https://review.opendev.org/q/Ifee04f28ecee52e74803f1623aba5cfe5ee5ec90

-gmann

>
> -gmann
>
> >
> > Any ideas?
> >
> >
> > thanks,
> > brian
> >
> >

From gmann at ghanshyammann.com  Tue Apr 13 22:00:57 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Tue, 13 Apr 2021 17:00:57 -0500
Subject: [qa][heat][stable] grenade jobs with tempest plugins on stable/train broken
In-Reply-To: <178a4b1e326.db78f8f289143.8139427571865552389@ghanshyammann.com>
References: <178a4b1e326.db78f8f289143.8139427571865552389@ghanshyammann.com>
Message-ID: <178cd4077a2.ca9b6368357948.1378438997562868912@ghanshyammann.com>

Just updating the status here too. All fixes are merged on the devstack and grenade side, and those should make the stable branches green now.

-gmann

---- On Mon, 05 Apr 2021 20:00:24 -0500 Ghanshyam Mann wrote ----
> Hello Everyone,
>
> I capped stable/stein to use Tempest 26.0.0, which means grenade jobs that
> run the tests from tempest plugins started using Tempest 26.0.0. But the constraints
> used in the Tempest virtual env are mismatched between when the Tempest virtual env was created
> and when tests are run from grenade or grenade plugins scripts.
>
> Due to these two different constraints used, tox recreates the tempest virtual env, which removes
> all already installed tempest plugins and their deps, and it fails to run the smoke tests.
>
> This constraints mismatch issue occurred in stable/train and I standardized these for devstack based jobs
> - https://review.opendev.org/q/topic:%2522standardize-tempest-tox-constraints%2522+status:merged
>
> But this issue is occurring for grenade jobs that do not run the tests via the run-tempest role (the run-tempest role
> takes care of the constraints things). Rabi observed this in the heat grenade jobs today. I have reported this as a bug
> in LP[1] and am making it standardized from the master branch so that this kind of issue does not occur again when
> any stable branch starts using the non-master Tempest.
>
> Please don't recheck if your grenade job is failing with the same issue and wait for the updates on this ML thread.
> > There were multiple changes merged in devstack and grenade for tempest venv constraints at the same time and > there were a few issues in sourcing the stackrc for checking the tempest venv constraints, let's > wait for the below fixes to merged to get all cases green > > -https://review.opendev.org/q/If5f14654ab9aee2a140bbfb869b50d63cb289fdf All patches are merged now along with the making stackviz non-failing [1]. All stable branch should be green now for this issue, please recheck. [1] https://review.opendev.org/q/Ifee04f28ecee52e74803f1623aba5cfe5ee5ec90 -gmann > > -gmann > > > > > Any ideas? > > > > > > thanks, > > brian > > > > > > From gmann at ghanshyammann.com Tue Apr 13 22:00:57 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 13 Apr 2021 17:00:57 -0500 Subject: [qa][heat][stable] grenade jobs with tempest plugins on stable/train broken In-Reply-To: <178a4b1e326.db78f8f289143.8139427571865552389@ghanshyammann.com> References: <178a4b1e326.db78f8f289143.8139427571865552389@ghanshyammann.com> Message-ID: <178cd4077a2.ca9b6368357948.1378438997562868912@ghanshyammann.com> Just updating the status here too. All fixes are merged on devstack, grenade side, and those should make the stable branch green now. -gmann ---- On Mon, 05 Apr 2021 20:00:24 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > I capped stable/stein to use the Tempest 26.0.0 which means grenade jobs that > run the tests from tempest plugins started using the Tempest 26.0.0. But the constraints > used in Tempest virtual env are mismatched between when Tempest virtual env was created > and when tests are run from grenade or grenade plugins scripts. > > Due to these two different constraint used, tox recreate the tempest virtual env which remove > all already installed tempest plugins and their deps and it fails to run the smoke tests. > > This constraints mismatch issue occurred in stable/train and I standardized these for devstack based jobs > - https://review.opendev.org/q/topic:%2522standardize-tempest-tox-constraints%2522+status:merged > > But this issue is occurring for grenade jobs that do not run the tests via run-tempest role (run-tempest role > take care of constraints things). Rabi observed this in threat grenade jobs today. I have reported this as a bug > in LP[1] and making it standardize from the master branch so that this kind of issue does not occur again when > any stable branch starts using the non-master Tempest. > > Please don't recheck if your grenade job is failing with the same issue and wait for the updates on this ML thread. > > [1] https://bugs.launchpad.net/grenade/+bug/1922597 > > -gmann > > > From jay.faulkner at verizonmedia.com Tue Apr 13 22:48:19 2021 From: jay.faulkner at verizonmedia.com (Jay Faulkner) Date: Tue, 13 Apr 2021 15:48:19 -0700 Subject: [cinder] Requesting reviews for mTLS support in client Message-ID: Hi all, A few of us here at Verizon Media have been working to ensure all services can support authenticating with mTLS certificate and key. We've had success with this, and if you've reviewed patches related to this, thank you! There's still an outstanding patch for python-cinderclient that has not gotten attention. It has +1s from contributors across the community, but hasn't gotten any core review attention. It's a trivial change, adding support for mTLS certificate passing for server version requests. 
The bug is here: https://bugs.launchpad.net/python-cinderclient/+bug/1915996 and the code is here: https://review.opendev.org/c/openstack/python-cinderclient/+/776311. If there are any concerns about this change, please let us know on the gerrit change itself, or feel free to reach out to me on IRC (JayF in #openstack-ironic, among many others). We have been successfully running this code downstream for a while, and hope to share the added mTLS love with the rest of the community. Thanks, Jay Faulkner -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Wed Apr 14 05:04:17 2021 From: ramishra at redhat.com (Rabi Mishra) Date: Wed, 14 Apr 2021 10:34:17 +0530 Subject: [tripleo][ci] long queues, ceph / scenario001 issue In-Reply-To: References: Message-ID: On Tue, Apr 13, 2021 at 7:57 PM Wesley Hayutin wrote: > Greetings, > > FYI.. > https://bugs.launchpad.net/tripleo/+bug/1923529 > > The ceph folks and CI are working together to successfully transition ceph > from octopus to pacific. > At the moment any scenario001 job will fail. > To ease the transition, we're proposing [1] > > Please note: > If you have a change in the gate that triggers scenario001 I will be > rebasing or adding a depends-on w/ [1] to ensure the gate is not reset > multiple times today. > The changes have lost all their votes by adding the depends-on. We could have abandoned/restored the changes like we used to do earlier(?), if they are already in the gate and there was no better way to clear the gate I guess. > The gate queue will probably peak well over 26hr today.. so please lay off > the workflows a bit. Your patience is appreciated! > > [1] https://review.opendev.org/c/openstack/tripleo-common/+/786053 > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Wed Apr 14 06:13:48 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Wed, 14 Apr 2021 11:43:48 +0530 Subject: [glance] Xena PTG schedule Message-ID: Hello All, Greetings!!! Xena PTG is around the corner and if you haven't already registered, please do so as soon as possible [1]. I have created a Virtual PTG planning etherpad [2] and also added day wise topics along with timings we are going to discuss. Kindly let me know if you have any concerns with allotted time slots. We also have some slots open on Tuesday, Thursday and Friday for unplanned discussions. So please feel free to add your topics if you still haven't added yet. As a reminder, these are the time slots for our discussion. Tuesday 20 April 2021 1400 UTC to 1700 UTC Wednesday 21 April 2021 1400 UTC to 1700 UTC Thursday 22 April 2021 1400 UTC to 1700 UTC Friday 23 April 2021 1400 UTC to 1700 UTC We will be using bluejeans for our discussion, kindly try to use it once before the actual discussion. The meeting URL is mentioned in etherpad [2] and will be the same throughout the PTG. [1] https://april2021-ptg.eventbrite.com/ [2] https://etherpad.opendev.org/p/xena-glance-ptg Thank you, Abhishek -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 
From skaplons at redhat.com  Wed Apr 14 08:42:35 2021
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Wed, 14 Apr 2021 10:42:35 +0200
Subject: [Neutron][Interop] request for 15-30 min on Xena PTG for Interop
In-Reply-To: <2752308.ClrQMDxLba@p1>
References: <2752308.ClrQMDxLba@p1>
Message-ID: <4135616.GcyNBQpf4Z@p1>

Hi Arkady,

On Monday, 12 April 2021 at 08:21:09 CEST, Slawek Kaplonski wrote:
> Hi,
>
> On Sunday, 11 April 2021 at 22:32:55 CEST, Kanevsky, Arkady wrote:
> > Brian,
> > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on
> > the PTG meeting to go over Interop testing and any changes for neutron tempest or
> > tempest configuration in the Wallaby cycle or changes planned for Xena. Once on
> > the agenda, one of the Interop WG persons will attend and lead the discussion.
>
> I just added it to our etherpad https://etherpad.opendev.org/p/neutron-xena-ptg
> I will be working on the schedule of the sessions later this week and I will let
> you know what timeslot this session with the Interop WG will be in.
> Please let me know if you have any preferences. We have our sessions
> scheduled:
>
> Monday 1300 - 1600 UTC
> Tuesday 1300 - 1600 UTC
> Thursday 1300 - 1600 UTC
> Friday 1300 - 1600 UTC
>
> Our time slots which are already booked are:
> - Monday 15:00 - 16:00 UTC
> - Thursday 14:00 - 15:30 UTC
> - Friday 14:00 - 15:00 UTC
>
> > Thanks,
> > Arkady
> >
> > Arkady Kanevsky, Ph.D.
> > SP Chief Technologist & DE
> > Dell Technologies office of CTO
> > Dell Inc. One Dell Way, MS PS2-91
> > Round Rock, TX 78682, USA
> > Phone: 512 7204955
>
> --
> Slawek Kaplonski
> Principal Software Engineer
> Red Hat

I scheduled the session with the Interop WG for Friday 13:30 - 14:00 UTC. Please let me know if that isn't a good time slot for you.
Please also add the topics which you want to discuss to our etherpad https://etherpad.opendev.org/p/neutron-xena-ptg

--
Slawek Kaplonski
Principal Software Engineer
Red Hat
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: This is a digitally signed message part.
URL: 

From skaplons at redhat.com  Wed Apr 14 08:48:35 2021
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Wed, 14 Apr 2021 10:48:35 +0200
Subject: [neutron] Xena PTG schedule
Message-ID: <6863474.G8OYYvop51@p1>

Hi neutrinos,

I just prepared the agenda for our PTG sessions. It's available in our etherpad [1].
Please let me know if the topics you are interested in are not in good time slots for you. I will try to move things around if possible.
Also, if you have any other topic to discuss, please let me know too so I can include it in the agenda.

[1] https://etherpad.opendev.org/p/neutron-xena-ptg

--
Slawek Kaplonski
Principal Software Engineer
Red Hat
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: This is a digitally signed message part.
URL: 

From fungi at yuggoth.org  Wed Apr 14 12:40:12 2021
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 14 Apr 2021 12:40:12 +0000
Subject: [all][elections][tc] TC Vacancy Special Election voting ends soon
Message-ID: <20210414120553.fe7r7q5e65mckdiu@yuggoth.org>

We are coming down to the last hours for voting in the TC Vacancy special election. Voting ends Apr 15, 2021 23:45 UTC.
Search your gerrit preferred email address[0] for the following subject: Poll: April 2021 Special Technical Committee Election That is your ballot and links you to the voting application. Please vote. If you have voted, please encourage your colleagues to vote. Candidate statements are linked to the names of all confirmed candidates: https://governance.openstack.org/election/#xena-tc-candidates What to do if you don't see the email and have a commit in at least one of the official project teams' deliverable repositories[1]:
* check the trash of your gerrit Preferred Email address[0], in case it went into trash or spam
* wait a bit and check again, in case your email server is a bit slow
* find the sha of at least one commit from an official deliverable repo[1] and email the election officials[2]. If we can confirm that you are entitled to vote, we will add you to the voters list and you will be emailed a ballot.
Please vote! Thank you, [0] Sign into review.openstack.org: Go to Settings > Contact Information. Look at the email listed as your Preferred Email. That is where the ballot has been sent. [1] https://opendev.org/openstack/governance/src/commit/892c4f3a851428cf41bab57c6c283e82f1df06d8/reference/projects.yaml [2] https://governance.openstack.org/election/#election-officials -- Jeremy Stanley on behalf of the OpenStack Technical Elections Officials -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mephmanx at gmail.com Wed Apr 14 13:05:09 2021 From: mephmanx at gmail.com (Chris Lyons) Date: Wed, 14 Apr 2021 13:05:09 +0000 Subject: Openstack External Network connectivity assistance Message-ID: All, I am working on getting a private OpenStack cloud set up to support a project I am planning. I have the install complete and do not get any errors, and I can connect to hosted VMs, but the VMs cannot connect to the internet. I have checked OVS logs and it looks like packets are being dropped. I have 2 network nodes and 1 compute node. All are using CentOS 8. The nodes have 3 NICs; NIC 1 is for the internal network and has no connectivity outside of the OpenStack cluster; NICs 2 & 3 have external & internet connectivity (behind another router/firewall). The br-int, br-ex, and br-tun bridges exist on all nodes.
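For anyone retracing this, the checks below are the standard first steps for narrowing down where traffic dies (a sketch only; br-ex and eth2 are the names from my layout above, so substitute your own, and on a containerized install prefix each command with "docker exec openvswitch_vswitchd" as in the dump that follows):

  ovs-vsctl show               # confirm eth2 is really attached to br-ex
  ovs-ofctl dump-flows br-ex   # look for drop actions or flows with zero packet counts
  ovs-dpctl show -s            # per-port datapath counters, as dumped below
  tcpdump -ni eth2 icmp        # check whether pings actually leave the external NIC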
Here is where I think I see packets being dropped:

[root@compute01 ~]# docker exec openvswitch_vswitchd ovs-dpctl show -s
system@ovs-system:
  lookups: hit:38597645 missed:256444 lost:0
  flows: 38
  masks: hit:40505463 total:5 hit/pkt:1.04
  port 0: ovs-system (internal)
    RX packets:0 errors:0 dropped:0 overruns:0 frame:0
    TX packets:0 errors:0 dropped:0 aborted:0 carrier:0
    collisions:0
    RX bytes:0 TX bytes:0
  port 1: br-ex (internal)
    RX packets:0 errors:0 dropped:566073 overruns:0 frame:0
    TX packets:0 errors:0 dropped:0 aborted:0 carrier:0
    collisions:0
    RX bytes:0 TX bytes:0
  port 2: eth2
    RX packets:60413543 errors:0 dropped:384 overruns:0 frame:0
    TX packets:11059 errors:0 dropped:0 aborted:0 carrier:0
    collisions:0
    RX bytes:43092601338 (40.1 GiB) TX bytes:1099133 (1.0 MiB)
  port 3: br-int (internal)
    RX packets:0 errors:0 dropped:539653 overruns:0 frame:0
    TX packets:0 errors:0 dropped:0 aborted:0 carrier:0
    collisions:0
    RX bytes:0 TX bytes:0
  port 4: br-tun (internal)
    RX packets:0 errors:0 dropped:0 overruns:0 frame:0
    TX packets:0 errors:0 dropped:0 aborted:0 carrier:0
    collisions:0
    RX bytes:0 TX bytes:0
  port 5: qr-317618cc-cc (internal)
    RX packets:14050 errors:0 dropped:0 overruns:0 frame:0
    TX packets:4164 errors:0 dropped:0 aborted:0 carrier:0
    collisions:0
    RX bytes:900953 (879.8 KiB) TX bytes:318526 (311.1 KiB)
  port 6: vxlan_sys_4789 (vxlan: packet_type=ptap)
    RX packets:0 errors:? dropped:? overruns:? frame:?
    TX packets:0 errors:? dropped:? aborted:? carrier:? collisions:?
    RX bytes:0 TX bytes:0
  port 7: qvoa777fa8d-fb
    RX packets:3259 errors:0 dropped:0 overruns:0 frame:0
    TX packets:13643 errors:0 dropped:0 aborted:0 carrier:0
    collisions:0
    RX bytes:193660 (189.1 KiB) TX bytes:1126219 (1.1 MiB)
  port 8: fg-cbe0bbae-e9 (internal)
    RX packets:518682 errors:0 dropped:24 overruns:0 frame:0
    TX packets:4386 errors:0 dropped:0 aborted:0 carrier:0
    collisions:0
    RX bytes:145164850 (138.4 MiB) TX bytes:328630 (320.9 KiB)
  port 9: qvo83a439a5-52
    RX packets:2642 errors:0 dropped:0 overruns:0 frame:0
    TX packets:9553 errors:0 dropped:0 aborted:0 carrier:0
    collisions:0
    RX bytes:308479 (301.2 KiB) TX bytes:718738 (701.9 KiB)
  port 10: qvo5fe2d158-f0
….

I would appreciate any ideas or assistance. I'd be willing to pay for help as well.

Horizon console is at https://app-external.lyonsgroup.family user: support pwd: default -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykarel at redhat.com Wed Apr 14 13:27:53 2021 From: ykarel at redhat.com (Yatin Karel) Date: Wed, 14 Apr 2021 18:57:53 +0530 Subject: [TripleO][CentOS8] container-puppet-neutron Could not find class ::neutron::plugins::ml2::ovs_driver for undercloud In-Reply-To: References: Message-ID: Hi Ruslanas, On Tue, Apr 13, 2021 at 8:00 PM Ruslanas Gžibovskis wrote: > > Hi Yatin, > > Thank you for your work on this. Much appreciated! > > On Tue, 13 Apr 2021, 08:58 Yatin Karel, wrote: >> >> Hi Ruslanas, >> >> On Thu, Apr 8, 2021 at 9:41 PM Ruslanas Gžibovskis wrote: >> > >> > Hi Yatin, >> > >> > I have spotted that version of puppet-tripleo, but even after downgrade I had/have same issue. should I downgrade even more? :) OR You know when fixed version might get in for production centos ussuri release repo? >> > >> I have requested the tag release of puppet-neutron to clear this >> https://review.opendev.org/c/openstack/releases/+/786006. Once it's >> merged it can be included in centos ussuri release repo, RDO bots will >> take care of it. If you want to test before it's released you can pick >> puppet-neutron from RDO trunk repo[1].
>> >> [1] https://trunk.rdoproject.org/centos8-ussuri/component/tripleo/current-tripleo/ >> It's released, and updated rpm now available at both c8 and c8-stream CloudSIG repos:- - http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/?C=M;O=D - http://mirror.centos.org/centos/8-stream/cloud/x86_64/openstack-ussuri/Packages/p/?C=M;O=D >> > As you know now that it is affected also :) >> > >> > >> > >> > >> > On Thu, 8 Apr 2021 at 16:18, Yatin Karel wrote: >> >> >> >> Hi Ruslanas, >> >> >> >> For the issue see >> >> https://lists.centos.org/pipermail/centos-devel/2021-February/076496.html, >> >> The puppet-neutron issue in above was specific to victoria but since >> >> there is new release for ussuri recently, it also hit there too. >> >> >> >> >> >> Thanks and Regards >> >> Yatin Karel >> >> >> >> On Tue, Apr 6, 2021 at 1:19 AM Ruslanas Gžibovskis wrote: >> >> > >> >> > Hi all, >> >> > >> >> > While deploying undercloud, always fails on puppet-container-neutron configuration, it fails with missing ml2 ovs_driver plugin... downloading them using: >> >> > openstack tripleo container image prepare default --output-env-file containers-prepare-parameters.yaml >> >> > >> >> > grep -v Warning /var/log/containers/stdouts/container-puppet-neutron.log http://paste.openstack.org/show/804180/ >> >> > >> >> > builddir/install-undercloud.log ( contains info about container-puppet-neutron ) >> >> > http://paste.openstack.org/show/804181/ >> >> > >> >> > undercloud.conf: >> >> > https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf >> >> > >> >> > dnf list installed >> >> > http://paste.openstack.org/show/804182/ >> >> > >> >> > -- >> >> > Ruslanas Gžibovskis >> >> > +370 6030 7030 >> >> >> > >> > >> > -- >> > Ruslanas Gžibovskis >> > +370 6030 7030 >> >> Thanks and Regards >> Yatin Karel >> Thanks and Regards Yatin Karel From ruslanas at lpic.lt Wed Apr 14 13:31:46 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Wed, 14 Apr 2021 16:31:46 +0300 Subject: [TripleO][CentOS8] container-puppet-neutron Could not find class ::neutron::plugins::ml2::ovs_driver for undercloud In-Reply-To: References: Message-ID: Thank you, will check in the eve. Will let you know. Thanks 🎉 On Wed, 14 Apr 2021, 16:28 Yatin Karel, wrote: > Hi Ruslanas, > > On Tue, Apr 13, 2021 at 8:00 PM Ruslanas Gžibovskis > wrote: > > > > Hi Yatin, > > > > Thank you for your work on this. Much appreciated! > > > > On Tue, 13 Apr 2021, 08:58 Yatin Karel, wrote: > >> > >> Hi Ruslanas, > >> > >> On Thu, Apr 8, 2021 at 9:41 PM Ruslanas Gžibovskis > wrote: > >> > > >> > Hi Yatin, > >> > > >> > I have spotted that version of puppet-tripleo, but even after > downgrade I had/have same issue. should I downgrade even more? :) OR You > know when fixed version might get in for production centos ussuri release > repo? > >> > > >> I have requested the tag release of puppet-neutron to clear this > >> https://review.opendev.org/c/openstack/releases/+/786006. Once it's > >> merged it can be included in centos ussuri release repo, RDO bots will > >> take care of it. If you want to test before it's released you can pick > >> puppet-neutron from RDO trunk repo[1]. 
> >> > >> [1] > https://trunk.rdoproject.org/centos8-ussuri/component/tripleo/current-tripleo/ > >> > It's released, and updated rpm now available at both c8 and c8-stream > CloudSIG repos:- > - > http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/?C=M;O=D > - > http://mirror.centos.org/centos/8-stream/cloud/x86_64/openstack-ussuri/Packages/p/?C=M;O=D > >> > As you know now that it is affected also :) > >> > > >> > > >> > > >> > > >> > On Thu, 8 Apr 2021 at 16:18, Yatin Karel wrote: > >> >> > >> >> Hi Ruslanas, > >> >> > >> >> For the issue see > >> >> > https://lists.centos.org/pipermail/centos-devel/2021-February/076496.html, > >> >> The puppet-neutron issue in above was specific to victoria but since > >> >> there is new release for ussuri recently, it also hit there too. > >> >> > >> >> > >> >> Thanks and Regards > >> >> Yatin Karel > >> >> > >> >> On Tue, Apr 6, 2021 at 1:19 AM Ruslanas Gžibovskis > wrote: > >> >> > > >> >> > Hi all, > >> >> > > >> >> > While deploying undercloud, always fails on > puppet-container-neutron configuration, it fails with missing ml2 > ovs_driver plugin... downloading them using: > >> >> > openstack tripleo container image prepare default > --output-env-file containers-prepare-parameters.yaml > >> >> > > >> >> > grep -v Warning > /var/log/containers/stdouts/container-puppet-neutron.log > http://paste.openstack.org/show/804180/ > >> >> > > >> >> > builddir/install-undercloud.log ( contains info about > container-puppet-neutron ) > >> >> > http://paste.openstack.org/show/804181/ > >> >> > > >> >> > undercloud.conf: > >> >> > > https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf > >> >> > > >> >> > dnf list installed > >> >> > http://paste.openstack.org/show/804182/ > >> >> > > >> >> > -- > >> >> > Ruslanas Gžibovskis > >> >> > +370 6030 7030 > >> >> > >> > > >> > > >> > -- > >> > Ruslanas Gžibovskis > >> > +370 6030 7030 > >> > >> Thanks and Regards > >> Yatin Karel > >> > Thanks and Regards > Yatin Karel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Apr 14 13:32:21 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 14 Apr 2021 13:32:21 +0000 Subject: Openstack External Network connectivity assistance In-Reply-To: References: Message-ID: <20210414133220.5o2v3yk5yiaevagj@yuggoth.org> On 2021-04-14 13:05:09 +0000 (+0000), Chris Lyons wrote: [...] > Horizon console is at > > https://app-external.lyonsgroup.family > > user: > > support > > pwd: > > default You might want to change that password, you've E-mailed it to a public mailing list for which the archive is published on the World Wide Web. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From senrique at redhat.com Wed Apr 14 13:56:51 2021 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 14 Apr 2021 10:56:51 -0300 Subject: [cinder] Bug deputy report for week of 2021-04-14 Message-ID: Hello, This is a bug report from 2021-04-07 to 2021-04-14. You're welcome to join the next Cinder Bug Meeting later today. 
Weekly on Wednesday at 1500 UTC in #openstack-cinder
Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting
-----------------------------------------------------------------------------------------
Critical: -
High: -
Medium:
- https://bugs.launchpad.net/cinder/+bug/1922920 "Incorrect volume usage notifications on migration". Assigned to Gorka Eguileo.
- https://bugs.launchpad.net/cinder/+bug/1923830 "Backup of in-use volume using temp snapshot messes up quota usage". Assigned to Gorka Eguileo.
- https://bugs.launchpad.net/cinder/+bug/1923829 "Backup of in-use volume using temp snapshot messes up quota usage". Assigned to Gorka Eguileo.
- https://bugs.launchpad.net/cinder/+bug/1923828 "Snapshot quota usage sync counts temporary snapshots". Assigned to Gorka Eguileo.
Low: -
Undecided:
- https://bugs.launchpad.net/cinder/+bug/1922939 "Volume backup deletion leaves orphaned files on object storage". Unassigned
Regards, Sofi -- L. Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Wed Apr 14 14:45:44 2021 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 14 Apr 2021 09:45:44 -0500 Subject: [ptl] Wallaby Release Community Meeting In-Reply-To: <21AAF6D4-BEA9-44EC-BC5D-12FB2124D2D3@getmailspring.com> References: <21AAF6D4-BEA9-44EC-BC5D-12FB2124D2D3@getmailspring.com> Message-ID: <624F5D81-3792-400B-89EF-A8B16AF2A425@getmailspring.com> Wallaby is now an option in the Marketplace Admin. Cheers, Jimmy On Apr 13 2021, at 4:27 pm, Jimmy McArthur wrote: > Hi Thomas - > > Working on that one :) Thanks for the heads up. I'll ping you as soon as available. > Cheers, > Jimmy > > On Apr 13 2021, at 4:19 pm, Thomas Goirand wrote: > > On 4/13/21 9:26 PM, Jimmy McArthur wrote: > > > Just a quick follow up that the 2020 User Survey Analytics are up on the > > > openstack.org site: > > > > > > https://www.openstack.org/analytics > > > Cheers, > > > Jimmy > > > > Hi Jimmy, > > Could we get the possibility to choose Wallaby in the market place admin > > please? It's already working for me in Debian (I could spawn VMs) and > > I'd like to edit the part for Debian. > > > > Cheers, > > Thomas Goirand (zigo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Wed Apr 14 15:30:36 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 14 Apr 2021 17:30:36 +0200 Subject: OpenStack Wallaby is officially released! Message-ID: The official OpenStack Wallaby release announcement has been sent out: http://lists.openstack.org/pipermail/openstack-announce/2021-April/002047.html Thanks to all who were a part of the Wallaby development cycle! This marks the official opening of the releases repo for Xena, and freezes are now lifted. Wallaby is now a fully normal stable branch, and the normal stable policy now applies. Thanks!
Hervé Beraud and the Release Management team -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Wed Apr 14 15:59:02 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 14 Apr 2021 11:59:02 -0400 Subject: [cinder] festival of XS reviews 16 April 2021 Message-ID: <0c73cc15-92ed-2f95-efdd-dead8f60125e@gmail.com> Hello Cinder community members, This is a reminder that the Third Cinder Festival of XS Reviews will be held at the end of this week on Friday 16 April. what: The Cinder Festival of XS Reviews when: Friday 16 April 2021 from 1400-1600 UTC where: https://meetpad.opendev.org/cinder-festival-of-reviews (Note that we've moved to meetpad!) Now that we've made this a recurring meeting, here's an ICS file for your calendar: http://eavesdrop.openstack.org/calendars/cinder-festival-of-reviews.ics See you there! brian From gthiemon at redhat.com Wed Apr 14 16:20:48 2021 From: gthiemon at redhat.com (Gregory Thiemonge) Date: Wed, 14 Apr 2021 18:20:48 +0200 Subject: [octavia] Next week meeting Message-ID: Hi, Next week is the PTG, so we decided during our weekly upstream meeting to cancel the next Octavia meeting (April 21st). Thank you, Gregory -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Wed Apr 14 16:25:44 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 14 Apr 2021 09:25:44 -0700 Subject: Retiring the Infra mqtt service running on firehose Message-ID: <08ecc3f0-4777-4f66-8bec-af472a936342@www.fastmail.com> Hello everyone, This is a short note to announce that we will be retiring the mqtt service that was running on our firehose.openstack.org server. The server itself will also be removed. This should happen in the next day or two. This service never saw production use. It was a great little experiment, and I think several of us learned a lot in the process. Unfortunately, the service needs more care than we can provide (config management updates and upgrades primarily). Considering the maintenance needs and the lack of use(rs), our best option appears to be simply turning it off. As a note, it appears the service may have died at some point anyway and hasn't been functioning. The lack of complaints is another indication that we are fine to turn it off. If you need access to the data the firehose was providing, you should be able to procure it via other methods (like the Gerrit event stream).
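For example, a minimal sketch of consuming that stream (assuming you have a Gerrit account with an SSH key registered; USERNAME below is a placeholder):

  ssh -p 29418 USERNAME@review.opendev.org gerrit stream-events

Each event arrives as one JSON object per line (patchset-created, comment-added, change-merged, and so on), which covers most of what the firehose was republishing.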
Clark From ruslanas at lpic.lt Wed Apr 14 16:52:08 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Wed, 14 Apr 2021 18:52:08 +0200 Subject: [TripleO][CentOS8] container-puppet-neutron Could not find class ::neutron::plugins::ml2::ovs_driver for undercloud In-Reply-To: References: Message-ID: Yatin, I still see the same version. puppet-tripleo noarch 12.5.0-1.el8 centos-openstack-ussuri 278 k Will try to monitor changes. On Wed, 14 Apr 2021 at 15:31, Ruslanas Gžibovskis wrote: > Thank you, will check in the eve. Will let you know. Thanks 🎉 > > On Wed, 14 Apr 2021, 16:28 Yatin Karel, wrote: > >> Hi Ruslanas, >> >> On Tue, Apr 13, 2021 at 8:00 PM Ruslanas Gžibovskis >> wrote: >> > >> > Hi Yatin, >> > >> > Thank you for your work on this. Much appreciated! >> > >> > On Tue, 13 Apr 2021, 08:58 Yatin Karel, wrote: >> >> >> >> Hi Ruslanas, >> >> >> >> On Thu, Apr 8, 2021 at 9:41 PM Ruslanas Gžibovskis >> wrote: >> >> > >> >> > Hi Yatin, >> >> > >> >> > I have spotted that version of puppet-tripleo, but even after >> downgrade I had/have same issue. should I downgrade even more? :) OR You >> know when fixed version might get in for production centos ussuri release >> repo? >> >> > >> >> I have requested the tag release of puppet-neutron to clear this >> >> https://review.opendev.org/c/openstack/releases/+/786006. Once it's >> >> merged it can be included in centos ussuri release repo, RDO bots will >> >> take care of it. If you want to test before it's released you can pick >> >> puppet-neutron from RDO trunk repo[1]. >> >> >> >> [1] >> https://trunk.rdoproject.org/centos8-ussuri/component/tripleo/current-tripleo/ >> >> >> It's released, and updated rpm now available at both c8 and c8-stream >> CloudSIG repos:- >> - >> http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/?C=M;O=D >> - >> http://mirror.centos.org/centos/8-stream/cloud/x86_64/openstack-ussuri/Packages/p/?C=M;O=D >> >> > As you know now that it is affected also :) >> >> > >> >> > >> >> > >> >> > >> >> > On Thu, 8 Apr 2021 at 16:18, Yatin Karel wrote: >> >> >> >> >> >> Hi Ruslanas, >> >> >> >> >> >> For the issue see >> >> >> >> https://lists.centos.org/pipermail/centos-devel/2021-February/076496.html >> , >> >> >> The puppet-neutron issue in above was specific to victoria but since >> >> >> there is new release for ussuri recently, it also hit there too. >> >> >> >> >> >> >> >> >> Thanks and Regards >> >> >> Yatin Karel >> >> >> >> >> >> On Tue, Apr 6, 2021 at 1:19 AM Ruslanas Gžibovskis < >> ruslanas at lpic.lt> wrote: >> >> >> > >> >> >> > Hi all, >> >> >> > >> >> >> > While deploying undercloud, always fails on >> puppet-container-neutron configuration, it fails with missing ml2 >> ovs_driver plugin... 
downloading them using: >> >> >> > openstack tripleo container image prepare default >> --output-env-file containers-prepare-parameters.yaml >> >> >> > >> >> >> > grep -v Warning >> /var/log/containers/stdouts/container-puppet-neutron.log >> http://paste.openstack.org/show/804180/ >> >> >> > >> >> >> > builddir/install-undercloud.log ( contains info about >> container-puppet-neutron ) >> >> >> > http://paste.openstack.org/show/804181/ >> >> >> > >> >> >> > undercloud.conf: >> >> >> > >> https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf >> >> >> > >> >> >> > dnf list installed >> >> >> > http://paste.openstack.org/show/804182/ >> >> >> > >> >> >> > -- >> >> >> > Ruslanas Gžibovskis >> >> >> > +370 6030 7030 >> >> >> >> >> > >> >> > >> >> > -- >> >> > Ruslanas Gžibovskis >> >> > +370 6030 7030 >> >> >> >> Thanks and Regards >> >> Yatin Karel >> >> >> Thanks and Regards >> Yatin Karel >> >> -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Apr 14 17:01:08 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 14 Apr 2021 17:01:08 +0000 Subject: Retiring the Infra mqtt service running on firehose In-Reply-To: <08ecc3f0-4777-4f66-8bec-af472a936342@www.fastmail.com> References: <08ecc3f0-4777-4f66-8bec-af472a936342@www.fastmail.com> Message-ID: <20210414170108.mi6ibxr6kaw4x6n5@yuggoth.org> On 2021-04-14 09:25:44 -0700 (-0700), Clark Boylan wrote: [...] > If you need access to the data the firehose was providing, you > should be able to procure it via other methods (like the Gerrit > event stream). Also, while we are likely to retire the various MQTT bridge projects we developed around it in the near future, you can still fork or ask to have control of them transferred to you if you find them useful for running a similar service yourself. None of the things we were reporting in the firehose required privileged access (well, except for the configuration management update stream, which we stopped publishing there a while back), so there's nothing stopping someone from setting up their own firehose as a fully functional replacement. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From ruslanas at lpic.lt Wed Apr 14 17:24:56 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Wed, 14 Apr 2021 19:24:56 +0200 Subject: [TripleO][CentOS8] container-puppet-neutron Could not find class ::neutron::plugins::ml2::ovs_driver for undercloud In-Reply-To: References: Message-ID: it passed the step it was failing. Thank you Yatin On Wed, 14 Apr 2021 at 18:52, Ruslanas Gžibovskis wrote: > Yatin, I still see the same version. > > puppet-tripleo noarch 12.5.0-1.el8 > centos-openstack-ussuri 278 k > > Will try to monitor changes. > > On Wed, 14 Apr 2021 at 15:31, Ruslanas Gžibovskis > wrote: > >> Thank you, will check in the eve. Will let you know. Thanks 🎉 >> >> On Wed, 14 Apr 2021, 16:28 Yatin Karel, wrote: >> >>> Hi Ruslanas, >>> >>> On Tue, Apr 13, 2021 at 8:00 PM Ruslanas Gžibovskis >>> wrote: >>> > >>> > Hi Yatin, >>> > >>> > Thank you for your work on this. Much appreciated! 
>>> > >>> > On Tue, 13 Apr 2021, 08:58 Yatin Karel, wrote: >>> >> >>> >> Hi Ruslanas, >>> >> >>> >> On Thu, Apr 8, 2021 at 9:41 PM Ruslanas Gžibovskis >>> wrote: >>> >> > >>> >> > Hi Yatin, >>> >> > >>> >> > I have spotted that version of puppet-tripleo, but even after >>> downgrade I had/have same issue. should I downgrade even more? :) OR You >>> know when fixed version might get in for production centos ussuri release >>> repo? >>> >> > >>> >> I have requested the tag release of puppet-neutron to clear this >>> >> https://review.opendev.org/c/openstack/releases/+/786006. Once it's >>> >> merged it can be included in centos ussuri release repo, RDO bots will >>> >> take care of it. If you want to test before it's released you can pick >>> >> puppet-neutron from RDO trunk repo[1]. >>> >> >>> >> [1] >>> https://trunk.rdoproject.org/centos8-ussuri/component/tripleo/current-tripleo/ >>> >> >>> It's released, and updated rpm now available at both c8 and c8-stream >>> CloudSIG repos:- >>> - >>> http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/?C=M;O=D >>> - >>> http://mirror.centos.org/centos/8-stream/cloud/x86_64/openstack-ussuri/Packages/p/?C=M;O=D >>> >> > As you know now that it is affected also :) >>> >> > >>> >> > >>> >> > >>> >> > >>> >> > On Thu, 8 Apr 2021 at 16:18, Yatin Karel wrote: >>> >> >> >>> >> >> Hi Ruslanas, >>> >> >> >>> >> >> For the issue see >>> >> >> >>> https://lists.centos.org/pipermail/centos-devel/2021-February/076496.html >>> , >>> >> >> The puppet-neutron issue in above was specific to victoria but >>> since >>> >> >> there is new release for ussuri recently, it also hit there too. >>> >> >> >>> >> >> >>> >> >> Thanks and Regards >>> >> >> Yatin Karel >>> >> >> >>> >> >> On Tue, Apr 6, 2021 at 1:19 AM Ruslanas Gžibovskis < >>> ruslanas at lpic.lt> wrote: >>> >> >> > >>> >> >> > Hi all, >>> >> >> > >>> >> >> > While deploying undercloud, always fails on >>> puppet-container-neutron configuration, it fails with missing ml2 >>> ovs_driver plugin... downloading them using: >>> >> >> > openstack tripleo container image prepare default >>> --output-env-file containers-prepare-parameters.yaml >>> >> >> > >>> >> >> > grep -v Warning >>> /var/log/containers/stdouts/container-puppet-neutron.log >>> http://paste.openstack.org/show/804180/ >>> >> >> > >>> >> >> > builddir/install-undercloud.log ( contains info about >>> container-puppet-neutron ) >>> >> >> > http://paste.openstack.org/show/804181/ >>> >> >> > >>> >> >> > undercloud.conf: >>> >> >> > >>> https://raw.githubusercontent.com/qw3r3wq/OSP-ussuri/master/undercloud-v3/undercloud.conf >>> >> >> > >>> >> >> > dnf list installed >>> >> >> > http://paste.openstack.org/show/804182/ >>> >> >> > >>> >> >> > -- >>> >> >> > Ruslanas Gžibovskis >>> >> >> > +370 6030 7030 >>> >> >> >>> >> > >>> >> > >>> >> > -- >>> >> > Ruslanas Gžibovskis >>> >> > +370 6030 7030 >>> >> >>> >> Thanks and Regards >>> >> Yatin Karel >>> >> >>> Thanks and Regards >>> Yatin Karel >>> >>> > > -- > Ruslanas Gžibovskis > +370 6030 7030 > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Wed Apr 14 19:29:46 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 14 Apr 2021 15:29:46 -0400 Subject: [cinder] Xena PTG schedule Message-ID: As mentioned at today's cinder meeting, the Xena PTG schedule for cinder next week is available: https://etherpad.opendev.org/p/apr2021-ptg-cinder The sessions will be recorded. 
Connection info is on the etherpad. As usual, there are a few items scheduled for specific times; otherwise, we'll just go through topics in the order listed, giving each one as much time as it needs. If we are running long or short on any given day during the PTG, I'll move one of my topics. So you should be able to figure out the day/time of your topic within a half hour or so. Please look the schedule over and let me know about any conflicts as soon as possible. Also, feel free to start adding any notes or references about your topic to the etherpad. Depending on how things go, there may be room for another topic on Friday. Let me know if there's something you'd like to see discussed. Finally, don't forget to register for the PTG: https://april2021-ptg.eventbrite.com/ See you at the PTG! From gmann at ghanshyammann.com Thu Apr 15 00:40:17 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 14 Apr 2021 19:40:17 -0500 Subject: [all][tc] Technical Committee next weekly meeting on April 15th at 1500 UTC In-Reply-To: <178c8031241.1163a8d15287294.6003502774058757783@ghanshyammann.com> References: <178c8031241.1163a8d15287294.6003502774058757783@ghanshyammann.com> Message-ID: <178d2f8b278.c8d3464f37790.5375231383528536308@ghanshyammann.com> Hello Everyone, Below is the agenda for tomorrow's TC meeting schedule on April 15th at 1500 UTC in #openstack-tc IRC channel. == Agenda for tomorrow's TC meeting == * Roll call * Follow up on past action items * PTG ** https://etherpad.opendev.org/p/tc-xena-ptg * Gate performance and heavy job configs (dansmith) ** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/ * Election for one Vacant TC seat (gmann) ** http://lists.openstack.org/pipermail/openstack-discuss/2021-March/021334.html * Open Reviews ** https://review.opendev.org/q/project:openstack/governance+is:open -gmann ---- On Mon, 12 Apr 2021 16:35:47 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > Technical Committee's next weekly meeting is scheduled for April 15th at 1500 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, April 14th, at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > From gouthampravi at gmail.com Thu Apr 15 07:57:52 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Thu, 15 Apr 2021 00:57:52 -0700 Subject: [manila][ptg] Xena PTG Planning In-Reply-To: References: Message-ID: Hello Zorillas, Thank you for proposing topics for the Xena PTG Discussions. I've assigned time slots to the discussion items proposed. Please see them in the etherpad we'll use on the day [0]. If you'd like to move something around, please let me know. If you have some last minute topics, please let me know, and add them to the planning etherpad [3]. We won't have our regularly scheduled IRC meeting during the PTG week. Hope to see you all virtually! Thanks, Goutham [0] https://etherpad.opendev.org/p/xena-ptg-manila [3] https://etherpad.opendev.org/p/xena-ptg-manila-planning On Wed, Mar 24, 2021 at 11:53 PM Goutham Pacha Ravi wrote: > > Hello Zorillas and Interested Stackers, > > As you're aware, the virtual PTG for the Xena release cycle is between > April 19-23, 2021. If you haven't registered yet, you must do so as > soon as possible! [1]. We've signed up for some slots on the PTG > timeslots ethercalc [2]. > > The PTG Planning etherpad [3] is now live. Please go ahead and add > your name/irc nick and propose any topics. 
You may propose topics even > if you wouldn't like to moderate the discussion. > > Thanks, and hope to see you all there! > Goutham > > [1] https://april2021-ptg.eventbrite.com/ > [2] https://ethercalc.net/oz7q0gds9zfi > [3] https://etherpad.opendev.org/p/xena-ptg-manila-planning From syedammad83 at gmail.com Thu Apr 15 08:51:37 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Thu, 15 Apr 2021 13:51:37 +0500 Subject: Openstack Databases Support Message-ID: Hi, I was working to have high availability of openstack components databases. I have used Percona XtraDB cluster 8.0 for one of my other project and it works pretty good. Is Percona XtraDB cluster 8.0 supported for openstack components databases ? - Ammad -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Apr 15 13:02:42 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 15 Apr 2021 15:02:42 +0200 Subject: [neutron] Drivers meeting agenda - 16.04.2021 Message-ID: <6804092.R4j1StIJWZ@p1> Hi, Agenda for our tomorrow's drivers meeting is at [1]. We have 1 new RFE to discuss: - https://bugs.launchpad.net/neutron/+bug/1922716 - [RFE] BFD for BGP Dynamic Routing [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From Arkady.Kanevsky at dell.com Thu Apr 15 14:03:23 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Thu, 15 Apr 2021 14:03:23 +0000 Subject: [Neutron][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: <4135616.GcyNBQpf4Z@p1> References: <2752308.ClrQMDxLba@p1> <4135616.GcyNBQpf4Z@p1> Message-ID: Thanks Slawek. I will check with the team and will get back to you. For now assume that Friday will work. Thanks, Arkady -----Original Message----- From: Slawek Kaplonski Sent: Wednesday, April 14, 2021 3:43 AM To: openstack-discuss at lists.openstack.org Cc: OpenStack Discuss; Kanevsky, Arkady Subject: Re: [Neutron][Interop] request for 15-30 min on Xena PTG for Interop Hi Arkady, Dnia poniedziałek, 12 kwietnia 2021 08:21:09 CEST Slawek Kaplonski pisze: > Hi, > > Dnia niedziela, 11 kwietnia 2021 22:32:55 CEST Kanevsky, Arkady pisze: > > Brian, > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 > > min on > > PTG meeting to go over Interop testing and any changes for neutron > tempest or > > > tempest configuration in Wallaby cycle or changes planned for Xena. > > Once on > > agenda one of the Interop WG person will attend and lead the discussion. > > I just added it to our etherpad > https://etherpad.opendev.org/p/neutron-xena-ptg > I will be working on schedule of the sessions later this week and I > will let You know what timeslot this session with Interop WG will be. > Please let me know if You have any preferences. We have our sessions > scheduled: > > Monday 1300 - 1600 UTC > Tuesday 1300 - 1600 UTC > Thursday 1300 - 1600 UTC > Friday 1300 - 1600 UTC > > Our time slots which are already booked are: > - Monday 15:00 - 16:00 UTC > - Thursday 14:00 - 15:30 UTC > - Friday 14:00 - 15:00 UTC > > > Thanks, > > Arkady > > > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell Technologies office of CTO > > Dell Inc. 
- Ammad -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Apr 15 13:02:42 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 15 Apr 2021 15:02:42 +0200 Subject: [neutron] Drivers meeting agenda - 16.04.2021 Message-ID: <6804092.R4j1StIJWZ@p1> Hi, Agenda for our tomorrow's drivers meeting is at [1]. We have 1 new RFE to discuss: - https://bugs.launchpad.net/neutron/+bug/1922716 - [RFE] BFD for BGP Dynamic Routing [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From Arkady.Kanevsky at dell.com Thu Apr 15 14:03:23 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Thu, 15 Apr 2021 14:03:23 +0000 Subject: [Neutron][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: <4135616.GcyNBQpf4Z@p1> References: <2752308.ClrQMDxLba@p1> <4135616.GcyNBQpf4Z@p1> Message-ID: Thanks Slawek. I will check with the team and will get back to you. For now assume that Friday will work. Thanks, Arkady -----Original Message----- From: Slawek Kaplonski Sent: Wednesday, April 14, 2021 3:43 AM To: openstack-discuss at lists.openstack.org Cc: OpenStack Discuss; Kanevsky, Arkady Subject: Re: [Neutron][Interop] request for 15-30 min on Xena PTG for Interop Hi Arkady, On Monday, 12 April 2021 08:21:09 CEST Slawek Kaplonski wrote: > Hi, > > On Sunday, 11 April 2021 22:32:55 CEST Kanevsky, Arkady wrote: > > Brian, > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 > > min on > > PTG meeting to go over Interop testing and any changes for neutron > > tempest or > > tempest configuration in Wallaby cycle or changes planned for Xena. > > Once on > > agenda one of the Interop WG person will attend and lead the discussion. > > I just added it to our etherpad > https://etherpad.opendev.org/p/neutron-xena-ptg > I will be working on the schedule of the sessions later this week and I > will let You know what timeslot this session with Interop WG will be. > Please let me know if You have any preferences. We have our sessions > scheduled: > > Monday 1300 - 1600 UTC > Tuesday 1300 - 1600 UTC > Thursday 1300 - 1600 UTC > Friday 1300 - 1600 UTC > > Our time slots which are already booked are: > - Monday 15:00 - 16:00 UTC > - Thursday 14:00 - 15:30 UTC > - Friday 14:00 - 15:00 UTC > > > Thanks, > > Arkady > > > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell Technologies office of CTO > > Dell Inc. One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat I scheduled the session with the Interop WG for Friday 13:30 - 14:00 UTC. Please let me know if that isn't a good time slot for You. Please also add topics which You want to discuss to our etherpad https://etherpad.opendev.org/p/neutron-xena-ptg -- Slawek Kaplonski Principal Software Engineer Red Hat From Arkady.Kanevsky at dell.com Thu Apr 15 14:18:09 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Thu, 15 Apr 2021 14:18:09 +0000 Subject: FW: [Designate][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Thanks Michael. Interop team will have a rep there. If you can schedule us at 14:00 UTC or 14:30, or 14:45 that will be the best. I think 15 min will be enough. Thanks, Arkady -----Original Message----- From: Michael Johnson Sent: Monday, April 12, 2021 10:57 AM To: Kanevsky, Arkady Cc: OpenStack Discuss Subject: Re: FW: [Designate][Interop] request for 15-30 min on Xena PTG for Interop [EXTERNAL EMAIL] Hi Arkady, I have added Interop to the Designate topics list (https://etherpad.opendev.org/p/xena-ptg-designate) and will schedule a slot this week when I put a rough agenda together. Thanks, Michael On Sun, Apr 11, 2021 at 1:36 PM Kanevsky, Arkady wrote: > > Adding community > > > > From: Kanevsky, Arkady > Sent: Sunday, April 11, 2021 3:25 PM > To: 'johnsomor at gmail.com' > Subject: [Designate][Interop] request for 15-30 min on Xena PTG for > Interop > > > > John, > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Designate tempest or tempest configuration in Wallaby cycle or changes planned for Xena. > > Once on agenda one of the Interop WG person will attend and lead the discussion. > > > > Thanks, > > > > > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell Technologies office of CTO > > Dell Inc. One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > From Arkady.Kanevsky at dell.com Thu Apr 15 14:18:59 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Thu, 15 Apr 2021 14:18:59 +0000 Subject: [Cinder][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Thanks Brian. I will be there. -----Original Message----- From: Brian Rosmaita Sent: Monday, April 12, 2021 10:54 AM To: Kanevsky, Arkady Cc: OpenStack Discuss Subject: Re: [Cinder][Interop] request for 15-30 min on Xena PTG for Interop [EXTERNAL EMAIL] On 4/11/21 4:23 PM, Kanevsky, Arkady wrote: > Brian, > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 > min on PTG meeting to go over Interop testing and any changes for > cinder tempest or tempest configuration in Wallaby cycle or changes > planned for Xena. Hi Arkady, I've virtually penciled you in for 1430-1500 on Tuesday 20 April. > Once on agenda one of the Interop WG person will attend and lead the > discussion. Sounds good. I've scheduled 30 min instead of 15 because it would be helpful for the cinder team to hear a quick synopsis of the current goals of the Interop WG and what the aim of the project is before we discuss the specifics of W and X. cheers, brian > > Thanks, > > Arkady Kanevsky, Ph.D.
> SP Chief Technologist & DE > Dell Technologies office of CTO > Dell Inc. One Dell Way, MS PS2-91 > Round Rock, TX 78682, USA > Phone: 512 7204955 > From Arkady.Kanevsky at dell.com Thu Apr 15 14:22:58 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Thu, 15 Apr 2021 14:22:58 +0000 Subject: [Nova][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Gibi, That will work. We will have a person there. Can you provide a pointer to nova PTG agenda etherpad? Thanks, Arkady -----Original Message----- From: Balazs Gibizer Sent: Monday, April 12, 2021 2:12 AM To: Kanevsky, Arkady Cc: OpenStack Discuss Subject: Re: [Nova][Interop] request for 15-30 min on Xena PTG for Interop [EXTERNAL EMAIL] Hi Arkady, What about Wednesday 14:00 - 15:00 UTC? We don't have to fill a whole hour of course. Cheers, gibi On Sun, Apr 11, 2021 at 20:34, "Kanevsky, Arkady" wrote: > Balazs, > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 > min on PTG meeting to go over Interop testing and any changes for Nova > tempest or tempest configuration in Wallaby cycle or changes planned > for Xena. > > Once on agenda one of the Interop WG person will attend and lead the > discussion. > > > > Thanks, > > Arkady > > > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell Technologies office of CTO > > Dell Inc. One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > > From akekane at redhat.com Thu Apr 15 14:29:58 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Thu, 15 Apr 2021 19:59:58 +0530 Subject: [Glance][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Hi Arkady, We have some slots open on Tuesday, Thursday and Friday, you can go through the schedule [1] and decide on which day you want to sync with us. Kindly update the etherpad as well. [1] https://etherpad.opendev.org/p/xena-glance-ptg Thanks & Best Regards, Abhishek Kekane On Mon, Apr 12, 2021 at 2:08 AM Kanevsky, Arkady wrote: > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on > PTG meeting to go over Interop testing and any changes for glance tempest > or tempest configuration in Wallaby cycle or changes planned for Xena. > > Once on agenda one of the Interop WG person will attend and lead the > discussion. > > > > > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell Technologies office of CTO > > Dell Inc. One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsneddon at redhat.com Thu Apr 15 14:32:39 2021 From: dsneddon at redhat.com (Dan Sneddon) Date: Thu, 15 Apr 2021 07:32:39 -0700 Subject: [TripleO] Xena PTG schedule please review In-Reply-To: References: Message-ID: <6d836b7c-a6b8-0cfe-b96a-9ef778ba6ac2@redhat.com> On 4/7/21 9:24 AM, Marios Andreou wrote: > Hello TripleO o/ > > Thanks again to everybody who has volunteered to lead a session for > the coming Xena TripleO project teams gathering. > > I've had a go at the agenda [1] trying to keep it to max 4 or 5 > sessions per day with some breaks. > > Please review the slot assigned for your session at [1]. If that time > is not ok then please let me know as soon as possible and indicate if > you want it later or earlier or on any other day. If you've decided > the session no longer makes sense then also please tell me and we can > move things around accordingly to finish earlier. 
> > I'd like to finalise the schedule by next Monday 12 April which is a > week before PTG. We can and likely will make changes after this date > but last minute changes are best avoided to allow folks to schedule > their PTG attendance across projects. > > Thanks everybody for your help! Looking forward to interesting > presentations and discussions as always > > regards, marios > > [1] https://etherpad.opendev.org/p/tripleo-ptg-xena > > Marios, I have found a conflict between my Tuesday 1510-1550 "BGP Routing with FRR" and another discussion happening in the Neutron room about BGP. Would it be possible to move the "BGP Routing with FRR" talk on Tuesday to Wednesday? Perhaps a direct swap with the "One yaml to rule all tempest tests" discussion that is scheduled for Wednesday 1510-1550? Another time on Wednesday could also work. Thanks, -- Dan Sneddon | Senior Principal Software Engineer dsneddon at redhat.com | redhat.com/cloud dsneddon:irc | @dxs:twitter From Arkady.Kanevsky at dell.com Thu Apr 15 14:43:16 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Thu, 15 Apr 2021 14:43:16 +0000 Subject: [Glance][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Thanks Abhishek. Done. Added for Tuesday From: Abhishek Kekane Sent: Thursday, April 15, 2021 9:30 AM To: Kanevsky, Arkady Cc: OpenStack Discuss Subject: Re: [Glance][Interop] request for 15-30 min on Xena PTG for Interop [EXTERNAL EMAIL] Hi Arkady, We have some slots open on Tuesday, Thursday and Friday, you can go through the schedule [1] and decide on which day you want to sync with us. Kindly update the etherpad as well. [1] https://etherpad.opendev.org/p/xena-glance-ptg [etherpad.opendev.org] Thanks & Best Regards, Abhishek Kekane On Mon, Apr 12, 2021 at 2:08 AM Kanevsky, Arkady > wrote: As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for glance tempest or tempest configuration in Wallaby cycle or changes planned for Xena. Once on agenda one of the Interop WG person will attend and lead the discussion. Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Thu Apr 15 15:04:14 2021 From: marios at redhat.com (Marios Andreou) Date: Thu, 15 Apr 2021 18:04:14 +0300 Subject: [TripleO] Xena PTG schedule please review In-Reply-To: <6d836b7c-a6b8-0cfe-b96a-9ef778ba6ac2@redhat.com> References: <6d836b7c-a6b8-0cfe-b96a-9ef778ba6ac2@redhat.com> Message-ID: On Thu, Apr 15, 2021 at 5:32 PM Dan Sneddon wrote: > > > > On 4/7/21 9:24 AM, Marios Andreou wrote: > > Hello TripleO o/ > > > > Thanks again to everybody who has volunteered to lead a session for > > the coming Xena TripleO project teams gathering. > > > > I've had a go at the agenda [1] trying to keep it to max 4 or 5 > > sessions per day with some breaks. > > > > Please review the slot assigned for your session at [1]. If that time > > is not ok then please let me know as soon as possible and indicate if > > you want it later or earlier or on any other day. If you've decided > > the session no longer makes sense then also please tell me and we can > > move things around accordingly to finish earlier. > > > > I'd like to finalise the schedule by next Monday 12 April which is a > > week before PTG. 
We can and likely will make changes after this date > > but last minute changes are best avoided to allow folks to schedule > > their PTG attendance across projects. > > > > Thanks everybody for your help! Looking forward to interesting > > presentations and discussions as always > > > > regards, marios > > > > [1] https://etherpad.opendev.org/p/tripleo-ptg-xena > > > > > > Marios, > > I have found a conflict between my Tuesday 1510-1550 "BGP Routing with > FRR" and another discussion happening in the Neutron room about BGP. > > Would it be possible to move the "BGP Routing with FRR" talk on Tuesday > to Wednesday? Perhaps a direct swap with the "One yaml to rule all > tempest tests" discussion that is scheduled for Wednesday 1510-1550? > Another time on Wednesday could also work. > ACK I just pinged arx (adding him into cc here too) ... once I hear back from him and if he doesn't have another conflict we can make the change. Arx are you OK with the proposed swap? Your session would move to Tuesday same time. Otherwise we can explore something else, regards, marios > Thanks, > -- > Dan Sneddon | Senior Principal Software Engineer > dsneddon at redhat.com | redhat.com/cloud > dsneddon:irc | @dxs:twitter > From ltomasbo at redhat.com Thu Apr 15 15:12:56 2021 From: ltomasbo at redhat.com (Luis Tomas Bolivar) Date: Thu, 15 Apr 2021 17:12:56 +0200 Subject: [neutron] Xena PTG schedule In-Reply-To: <6863474.G8OYYvop51@p1> References: <6863474.G8OYYvop51@p1> Message-ID: Hi folks, In relation to the "updating OVN to Support BGP Routing" session at the next Neutron-Xena-PTG, I would like to bring up the attention to the next effort for context and discussions during the session. We are working on a solution based on FRR where a (python) agent reads from the OVN SB DB (port binding events) and triggers FRR so that the needed routes get advertised. It leverages host kernel networking to redirect the traffic to the OVN overlay, and therefore does not require any modifications to ovn itself (at least for now) though it won´t work for SR-IOV/DPDK use cases. The PoC code can be found here: https://github.com/luis5tb/bgp-agent There are a series of blog posts related to how to use it on OpenStack and how it works: - OVN-BGP agent introduction: https://ltomasbo.wordpress.com/2021/02/04/openstack-networking-with-bgp/ - How to set ip up on DevStack Environment: https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-testing-setup/ - In-depth traffic flow inspection: https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-in-depth-traffic-flow-inspection/ See you next week! Best regards, Luis On Wed, Apr 14, 2021 at 10:53 AM Slawek Kaplonski wrote: > Hi neutrinos, > > I just prepared agenda for our PTG sessions. It's available in our > etherpad > [1]. > Please let me know if topics You are interested in are in not good time > slots > for You. I will try to move things around if possible. > Also, if You have any other topic to discuss, please let me know too so I > can > include it in the agenda. > > [1] https://etherpad.opendev.org/p/neutron-xena-ptg > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -- LUIS TOMÁS BOLÍVAR Principal Software Engineer Red Hat Madrid, Spain ltomasbo at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balazs.gibizer at est.tech Thu Apr 15 15:30:08 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Thu, 15 Apr 2021 17:30:08 +0200 Subject: [Nova][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: <823MRQ.2KUNTQCR99JH1@est.tech> Hi, Sure, the etherpad is here https://etherpad.opendev.org/p/nova-xena-ptg I noted the interop discussion slot at L41 Cheers, gibi On Thu, Apr 15, 2021 at 14:22, "Kanevsky, Arkady" wrote: > Gibi, > That will work. We will have a person there. > Can you provide a pointer to nova PTG agenda etherpad? > Thanks, > Arkady > > -----Original Message----- > From: Balazs Gibizer > Sent: Monday, April 12, 2021 2:12 AM > To: Kanevsky, Arkady > Cc: OpenStack Discuss > Subject: Re: [Nova][Interop] request for 15-30 min on Xena PTG for > Interop > > > [EXTERNAL EMAIL] > > Hi Arkady, > > What about Wednesday 14:00 - 15:00 UTC? We don't have to fill a whole > hour of course. > > Cheers, > gibi > > On Sun, Apr 11, 2021 at 20:34, "Kanevsky, Arkady" > wrote: >> Balazs, >> >> As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 >> min on PTG meeting to go over Interop testing and any changes for >> Nova >> tempest or tempest configuration in Wallaby cycle or changes planned >> for Xena. >> >> Once on agenda one of the Interop WG person will attend and lead the >> discussion. >> >> >> >> Thanks, >> >> Arkady >> >> >> >> Arkady Kanevsky, Ph.D. >> >> SP Chief Technologist & DE >> >> Dell Technologies office of CTO >> >> Dell Inc. One Dell Way, MS PS2-91 >> >> Round Rock, TX 78682, USA >> >> Phone: 512 7204955 >> >> >> > > From DHilsbos at performair.com Thu Apr 15 15:50:07 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Thu, 15 Apr 2021 15:50:07 +0000 Subject: [ops][victoria][cinder] Import volume? Message-ID: <0670B960225633449A24709C291A52524FBE2F13@COM01.performair.local> All; I'm looking to transfer several VMs from XenServer to an OpenStack Victoria cloud. Finding explanations for importing Glance images is easy, but I haven't been able to find a tutorial on importing Cinder volumes. Since they are currently independent servers / volumes it seems somewhat wasteful and messy to import each VMs disk as an image just to spawn a volume from it. We're using Ceph as the storage provider for Glance and Cinder. Thank you, Dominic L. Hilsbos, MBA Director - Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From eblock at nde.ag Thu Apr 15 16:30:45 2021 From: eblock at nde.ag (Eugen Block) Date: Thu, 15 Apr 2021 16:30:45 +0000 Subject: [ops][victoria][cinder] Import volume? In-Reply-To: <0670B960225633449A24709C291A52524FBE2F13@COM01.performair.local> Message-ID: <20210415163045.Horde.IKa9Iq6-satTI_sMmUk9Ahq@webmail.nde.ag> Hi, there’s a ‚cinder manage‘ command to import an rbd image into openstack. But be aware that if you delete it in openstack it will be removed from ceph, too (like a regular cinder volume). I don’t have the exact command syntax at hand right now, but try ‚cinder help manage‘ Regards Eugen Zitat von DHilsbos at performair.com: > All; > > I'm looking to transfer several VMs from XenServer to an OpenStack > Victoria cloud. Finding explanations for importing Glance images is > easy, but I haven't been able to find a tutorial on importing Cinder > volumes. > > Since they are currently independent servers / volumes it seems > somewhat wasteful and messy to import each VMs disk as an image just > to spawn a volume from it. 
> > We're using Ceph as the storage provider for Glance and Cinder. > > Thank you, > > Dominic L. Hilsbos, MBA > Director - Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com From arxcruz at redhat.com Thu Apr 15 16:35:52 2021 From: arxcruz at redhat.com (Arx Cruz) Date: Thu, 15 Apr 2021 18:35:52 +0200 Subject: [TripleO] Xena PTG schedule please review In-Reply-To: References: <6d836b7c-a6b8-0cfe-b96a-9ef778ba6ac2@redhat.com> Message-ID: Hello, Sure, it's fine with me. Sorry the delay, I'm switching ISP, my internet is terrible today. On Thu, Apr 15, 2021 at 5:04 PM Marios Andreou wrote: > On Thu, Apr 15, 2021 at 5:32 PM Dan Sneddon wrote: > > > > > > > > On 4/7/21 9:24 AM, Marios Andreou wrote: > > > Hello TripleO o/ > > > > > > Thanks again to everybody who has volunteered to lead a session for > > > the coming Xena TripleO project teams gathering. > > > > > > I've had a go at the agenda [1] trying to keep it to max 4 or 5 > > > sessions per day with some breaks. > > > > > > Please review the slot assigned for your session at [1]. If that time > > > is not ok then please let me know as soon as possible and indicate if > > > you want it later or earlier or on any other day. If you've decided > > > the session no longer makes sense then also please tell me and we can > > > move things around accordingly to finish earlier. > > > > > > I'd like to finalise the schedule by next Monday 12 April which is a > > > week before PTG. We can and likely will make changes after this date > > > but last minute changes are best avoided to allow folks to schedule > > > their PTG attendance across projects. > > > > > > Thanks everybody for your help! Looking forward to interesting > > > presentations and discussions as always > > > > > > regards, marios > > > > > > [1] https://etherpad.opendev.org/p/tripleo-ptg-xena > > > > > > > > > > Marios, > > > > I have found a conflict between my Tuesday 1510-1550 "BGP Routing with > > FRR" and another discussion happening in the Neutron room about BGP. > > > > Would it be possible to move the "BGP Routing with FRR" talk on Tuesday > > to Wednesday? Perhaps a direct swap with the "One yaml to rule all > > tempest tests" discussion that is scheduled for Wednesday 1510-1550? > > Another time on Wednesday could also work. > > > > ACK I just pinged arx (adding him into cc here too) ... once I hear > back from him and if he doesn't have another conflict we can make the > change. > Arx are you OK with the proposed swap? Your session would move to > Tuesday same time. > > Otherwise we can explore something else, > > regards, marios > > > Thanks, > > -- > > Dan Sneddon | Senior Principal Software Engineer > > dsneddon at redhat.com | redhat.com/cloud > > dsneddon:irc | @dxs:twitter > > > > -- Arx Cruz Software Engineer Red Hat EMEA arxcruz at redhat.com @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From sunil.kathait at hotmail.com Thu Apr 15 07:27:34 2021 From: sunil.kathait at hotmail.com (Sunil kathait) Date: Thu, 15 Apr 2021 07:27:34 +0000 Subject: Project List Message-ID: hi team, we have several projects created in openstack stein release. we have created all the projects with an additional property (like OU and location) and now we need to list the projects with their OU properties which was set in the projects. openstack project show : This command shows the property field with the value on it. 
openstack project list --long : this only shows ID, Name, description, long, Enabled. How can we list the projects along with their property field value which was set at the time of creation of the project. TIA -------------- next part -------------- An HTML attachment was scrubbed... URL: From eng.taha1928 at gmail.com Thu Apr 15 09:57:40 2021 From: eng.taha1928 at gmail.com (Taha Adel) Date: Thu, 15 Apr 2021 11:57:40 +0200 Subject: [Placement] Weird issue in placement-api Message-ID: Hello, I currently have OpenStack manually deployed by following the official install documentation, but I have faced a weird situation. When I send an api request to placement api service using the following command: *curl -H "X-Auth-Token: $T" http://controller:8778 * I received a status code of "*200*", which indicates a successful operation. But, when I issue the following request: *curl -H "X-Auth-Token: $T" http://controller:8778/resource_providers * I received a status code of "*503*", and when I checked the logs of placement and keystone, they say that the authentication failed. For the same reason, nova-compute can't register itself as a resource provider. I'm sure that the authentication credentials for placement are set properly, but I don't know what's the problem. Any suggestions, please? -------------- next part -------------- An HTML attachment was scrubbed... URL: From manish16054 at gmail.com Thu Apr 15 14:53:17 2021 From: manish16054 at gmail.com (Manish Mahalwal) Date: Thu, 15 Apr 2021 20:23:17 +0530 Subject: dynamic vendor data and cloud-init Message-ID: Hi All, I am working with OpenStack Pike and cloud-init 21.1. I am able to successfully pass dynamic vendor data to the config drive of an instance. However, cloud-init 21.1 just reads all the 'x' bytes of the vendor_data2.json but it doesn't execute the contents of the json. Although, static vendor data works perfectly fine and the YAML file in the JSON is executed as expected by cloud-init 21.1 * Now, the person who wrote the code for handling dynamic vendordata in cloud-init (https://github.com/canonical/cloud-init/pull/777) says that the JSON cloud-init expects is of the form: > {"cloud-init": "#cloud-config\npackage_upgrade: True\npackages:\n - > black\nfqdn: cloud-overridden-by-vendordata2.example.org."} > * I believe that the JSON should have another outer key (as mentioned here https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/vendordata-reboot.html) which is the name of the microservice specified in nova.conf file and that the inner key should be cloud-init. In nova.conf: vendordata_dynamic_targets=name1@ http://example.com,name2 at http://example2.com { > "name1": { > "cloud-init": "#cloud-config\n..." > }, > "name2": { > "cloud-init": "#cloud-config\n..." > } > } >>Who is right and who is wrong? To read more on this please go through the following: https://bugs.launchpad.net/cloud-init/+bug/1841104 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu Apr 15 17:03:29 2021 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 15 Apr 2021 12:03:29 -0500 Subject: [ops][victoria][cinder] Import volume? 
Any suggestions, please? -------------- next part -------------- An HTML attachment was scrubbed... URL: From manish16054 at gmail.com Thu Apr 15 14:53:17 2021 From: manish16054 at gmail.com (Manish Mahalwal) Date: Thu, 15 Apr 2021 20:23:17 +0530 Subject: dynamic vendor data and cloud-init Message-ID: Hi All, I am working with OpenStack Pike and cloud-init 21.1. I am able to successfully pass dynamic vendor data to the config drive of an instance. However, cloud-init 21.1 just reads all the 'x' bytes of the vendor_data2.json; it doesn't execute the contents of the JSON. Static vendor data, on the other hand, works perfectly fine, and the YAML in the JSON is executed as expected by cloud-init 21.1. * Now, the person who wrote the code for handling dynamic vendordata in cloud-init (https://github.com/canonical/cloud-init/pull/777) says that the JSON cloud-init expects is of the form: > {"cloud-init": "#cloud-config\npackage_upgrade: True\npackages:\n - > black\nfqdn: cloud-overridden-by-vendordata2.example.org."} > * I believe that the JSON should have another outer key (as mentioned here https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/vendordata-reboot.html), which is the name of the microservice specified in the nova.conf file, and that the inner key should be cloud-init. In nova.conf: vendordata_dynamic_targets=name1@http://example.com,name2@http://example2.com { > "name1": { > "cloud-init": "#cloud-config\n..." > }, > "name2": { > "cloud-init": "#cloud-config\n..." > } > } Who is right and who is wrong? To read more on this, please go through the following: https://bugs.launchpad.net/cloud-init/+bug/1841104 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu Apr 15 17:03:29 2021 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 15 Apr 2021 12:03:29 -0500 Subject: [ops][victoria][cinder] Import volume? In-Reply-To: <20210415163045.Horde.IKa9Iq6-satTI_sMmUk9Ahq@webmail.nde.ag> References: <0670B960225633449A24709C291A52524FBE2F13@COM01.performair.local> <20210415163045.Horde.IKa9Iq6-satTI_sMmUk9Ahq@webmail.nde.ag> Message-ID: <20210415170329.GA2777639@sm-workstation> On Thu, Apr 15, 2021 at 04:30:45PM +0000, Eugen Block wrote: > Hi, > > there’s a ‚cinder manage‘ command to import an rbd image into openstack. > But be aware that if you delete it in openstack it will be removed from > ceph, too (like a regular cinder volume). > I don’t have the exact command syntax at hand right now, but try ‚cinder > help manage‘ > > Regards > Eugen > Here is the documentation for that command: https://docs.openstack.org/python-cinderclient/latest/cli/details.html#cinder-manage Also note, if you no longer need to manage the volume in Cinder, but you do not want it to be deleted from your storage backend, there is also the inverse command of `cinder unmanage`. Details for that command can be found here: https://docs.openstack.org/python-cinderclient/latest/cli/details.html#cinder-unmanage > > Zitat von DHilsbos at performair.com: > > > All; > > > > I'm looking to transfer several VMs from XenServer to an OpenStack > > Victoria cloud. Finding explanations for importing Glance images is > > easy, but I haven't been able to find a tutorial on importing Cinder > > volumes. > > > > Since they are currently independent servers / volumes it seems somewhat > > wasteful and messy to import each VM's disk as an image just to spawn a > > volume from it. > > > > We're using Ceph as the storage provider for Glance and Cinder. > > > > Thank you, > > > > Dominic L. Hilsbos, MBA > > Director - Information Technology > > Perform Air International Inc. > > DHilsbos at PerformAir.com > > www.PerformAir.com > > > > From ricolin at ricolky.com Thu Apr 15 17:16:38 2021 From: ricolin at ricolky.com (Rico Lin) Date: Fri, 16 Apr 2021 01:16:38 +0800 Subject: [Multi-arch][tc][SIG][all] Multi-arch SIG report just published! Message-ID: Dear all We just published a Multi-arch SIG report to introduce the current multi-arch status in the OpenStack community. You can find the link on superuser [1] or go directly to the full report here [2]. I thank everyone who has given their time to any of this work. If you also work on related stuff, we would reeeeeeeeeeeeeeeeeeeeally love to learn/hear from you!!! There is more work the OpenStack community can do to support multi-arch, but it won't be done fast if we don't have enough resources for it. We currently really need more volunteers, feedback, and more CI resources, and we welcome all kinds of help we can get. So if you have any resources/suggestions regarding multi-arch support in the OpenStack community, please let us know. If you would like to find us, please join #openstack-multi-arch. Also, as PTG is near, I invite you all to join us at the PTG [4]! *Time: 4/20 Tuesday from 07:00-08:00 and 15:00-16:00 (UTC time)* And here is our PTG Etherpad: [3] (feel free to suggest topics). [1] https://superuser.openstack.org/articles/openstack-multi-arch-sig-making-progress-addressing-hardware-diversification-requirements/ [2] https://www.openstack.org/multi-arch-sig-report [3] https://etherpad.opendev.org/p/xena-ptg-multi-arch-sig [4] http://www.openstack.org/ptg *Rico Lin* OIF Board director, OpenStack TC, Multi-arch SIG chair, Heat PTL, Senior Software Engineer at EasyStack -------------- next part -------------- An HTML attachment was scrubbed...
URL: From marios at redhat.com Thu Apr 15 17:31:28 2021 From: marios at redhat.com (Marios Andreou) Date: Thu, 15 Apr 2021 20:31:28 +0300 Subject: [TripleO] Xena PTG schedule please review In-Reply-To: References: <6d836b7c-a6b8-0cfe-b96a-9ef778ba6ac2@redhat.com> Message-ID: Great thank you for confirming I will make the change tomorrow On Thursday, April 15, 2021, Arx Cruz wrote: > Hello, > > Sure, it's fine with me. Sorry the delay, I'm switching ISP, my internet > is terrible today. > > On Thu, Apr 15, 2021 at 5:04 PM Marios Andreou wrote: > >> On Thu, Apr 15, 2021 at 5:32 PM Dan Sneddon wrote: >> > >> > >> > >> > On 4/7/21 9:24 AM, Marios Andreou wrote: >> > > Hello TripleO o/ >> > > >> > > Thanks again to everybody who has volunteered to lead a session for >> > > the coming Xena TripleO project teams gathering. >> > > >> > > I've had a go at the agenda [1] trying to keep it to max 4 or 5 >> > > sessions per day with some breaks. >> > > >> > > Please review the slot assigned for your session at [1]. If that time >> > > is not ok then please let me know as soon as possible and indicate if >> > > you want it later or earlier or on any other day. If you've decided >> > > the session no longer makes sense then also please tell me and we can >> > > move things around accordingly to finish earlier. >> > > >> > > I'd like to finalise the schedule by next Monday 12 April which is a >> > > week before PTG. We can and likely will make changes after this date >> > > but last minute changes are best avoided to allow folks to schedule >> > > their PTG attendance across projects. >> > > >> > > Thanks everybody for your help! Looking forward to interesting >> > > presentations and discussions as always >> > > >> > > regards, marios >> > > >> > > [1] https://etherpad.opendev.org/p/tripleo-ptg-xena >> > > >> > > >> > >> > Marios, >> > >> > I have found a conflict between my Tuesday 1510-1550 "BGP Routing with >> > FRR" and another discussion happening in the Neutron room about BGP. >> > >> > Would it be possible to move the "BGP Routing with FRR" talk on Tuesday >> > to Wednesday? Perhaps a direct swap with the "One yaml to rule all >> > tempest tests" discussion that is scheduled for Wednesday 1510-1550? >> > Another time on Wednesday could also work. >> > >> >> ACK I just pinged arx (adding him into cc here too) ... once I hear >> back from him and if he doesn't have another conflict we can make the >> change. >> Arx are you OK with the proposed swap? Your session would move to >> Tuesday same time. >> >> Otherwise we can explore something else, >> >> regards, marios >> >> > Thanks, >> > -- >> > Dan Sneddon | Senior Principal Software Engineer >> > dsneddon at redhat.com | redhat.com/cloud >> > dsneddon:irc | @dxs:twitter >> > >> >> > > -- > > Arx Cruz > > Software Engineer > > Red Hat EMEA > > arxcruz at redhat.com > @RedHat Red Hat > Red Hat > > > -- _sent from my mobile - sorry for spacing spelling etc_ -------------- next part -------------- An HTML attachment was scrubbed... URL: From DHilsbos at performair.com Thu Apr 15 18:05:38 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Thu, 15 Apr 2021 18:05:38 +0000 Subject: [ops][nova][victoria] Migrate cross CPU? Message-ID: <0670B960225633449A24709C291A52524FBE36A1@COM01.performair.local> All; I seem to have generated another issue for myself... I built our Victoria cloud initially on Intel Atom servers. 
We recently received the first of our AMD Epyc (7002 series) servers, which are intended to take over the Nova Compute responsibilities. I've had success in the past doing live migrates, but live migrating from one of the Atom servers to the new server fails, with an error indicating CPU compatibility problems. Ok, I can understand that. My problem is that I don't seem to understand the openstack server migrate command (non-live). It doesn't seem to do anything, whether the instance is Running or Shut Down. I can't find errors in the logs from the API / conductor / scheduler host. I also can't find an option to pass to the openstack server start command which requests a specific host. Can I get these existing instances moved from the Atom servers to the Epyc server(s), or do I need to recreate them to do this? Thank you, Dominic L. Hilsbos, MBA Director - Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From gouthampravi at gmail.com Thu Apr 15 18:17:06 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Thu, 15 Apr 2021 11:17:06 -0700 Subject: [manila] Propose Liron Kuchlani and Vida Haririan to manila-tempest-plugin-core In-Reply-To: References: Message-ID: On Wed, Apr 7, 2021 at 11:38 AM Goutham Pacha Ravi wrote: > > Hello Zorillas, > > Vida's been our bug czar since the Ussuri release and she's > conceptualized and executed our successful bug triage strategy. She > has also painstakingly organized several documentation and code bug > squash events and kept the pulse on multi-release efforts. She's > taught me a lot about project management and you can see tangible > results here, I suppose :) > > Liron's fixed a lot of test code bugs and covered some old and > important test gaps over the past few releases. He's driving > standardization of the tempest plugin and bringing in best practices > from tempest, refstack and elsewhere into our testing. It's always a > pleasure to work with Liron since he's happy to provide and welcome > feedback. > > More recently, Liron and Vida have enabled us to work with the > InteropWG and define refstack guidelines. They've also gotten us > closer to members from the QA community who they work with more > closely downstream. In short, they bring in different perspectives > while also espousing the team's core values. So I'd like to propose > their addition to the manila-tempest-plugin-core team. > > Please give me your +/- 1s for this proposal. Amazing. Thank you all for your responses. I've added Vida and Liron to the manila-tempest-plugin-core group. > > Thanks, > Goutham From zigo at debian.org Thu Apr 15 19:55:21 2021 From: zigo at debian.org (Thomas Goirand) Date: Thu, 15 Apr 2021 21:55:21 +0200 Subject: [announce][debian][wallaby] general availability of OpenStack Wallaby in Debian Message-ID: <562a8d99-b9fa-6735-9c35-694c0ef327b7@debian.org> Hi! It's my pleasure to announce the general availability of OpenStack Wallaby in Debian. I've just finished uploading everything to Debian Experimental today (not in unstable, as Bullseye is frozen), and the Bullseye backports are available the usual way, for example using extrepo (which is in the official Debian backports): apt-get install extrepo extrepo enable openstack_wallaby apt-get update ... or directly setting-up the http://bullseye-wallaby.debian.net repository the usual way. 
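If you prefer to see what extrepo actually enables, the repository definition boils down to something like this (a sketch taken from one test VM -- double-check the exact suite names and the signing key on the repository page before copying it by hand):

# /etc/apt/sources.list.d/openstack-wallaby.list
deb http://bullseye-wallaby.debian.net/debian bullseye-wallaby-backports main
deb http://bullseye-wallaby.debian.net/debian bullseye-wallaby-backports-nochange main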
Note that while Victoria is available on Buster and Bullseye (to ease the transition), Wallaby is only available on Bullseye, which is due to be released next month (if everything goes as planned, there's no official release date decided yet if I understand well). New in this release =================== Masakari is now packaged. Masakari-dashboard is still waiting on the FTP master NEW queue though. A quick update on openstack-cluster-installer ============================================= Wallaby can already be installed by OCI [1]. Note that OCI is now public-cloud ready, and can deploy specific nodes for the billing (with cloudkitty): - messaging (separate RabbitMQ cluster and Galera for Gnocchi) - billmon / billosd (separate Ceph cluster for Gnocchi) Our tests showed that this setup can scale to 10k+ VMs without any issue (the separate Galera + RabbitMQ bus really helps) reporting 400+ metrics per seconds. As always, OCI is smart enough so the additional nodes are all optional (and not needed for smaller scales), and the cluster reconfigures itself if you decide to add new node types in your deployment. Thanks to new features added to puppet-openstack [2], the number of uwsgi process adapts automatically to the number of cores available in controller nodes. Last, with OCI you may now enjoy a full BGP-to-the-host (over IPv6 un-numbered link local) networking setup. This also works with compute nodes, as long as you decide to not use the DVR mode (if you do with to use DVR, then you need L2 connectivity on the computes: that's a Neutron "feature", unfortunately), or if you decide to use Neutron BGP routed networking [3] (though this mode also still has some limitations at this time, such as no support for virtual router external gateways). In this setup, only the Network nodes need L2 connectivity to the outside world. This also scales very nicely to *a lot* of nodes... without any ARP spanning tree problems. We (at Infomaniak) now only use this deployment mode in production due to its scalability. Final words =========== Please report any issue you may find, on OCI or on the Debian packages. Cheers, Thomas Goirand (zigo) [1] https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer [2] https://review.opendev.org/q/topic:%22debian-uwsgi-support%22+(status:open%20OR%20status:merged) [3] https://docs.openstack.org/neutron/latest/admin/config-bgp-floating-ip-over-l2-segmented-network.html From radoslaw.piliszek at gmail.com Thu Apr 15 20:06:05 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 15 Apr 2021 22:06:05 +0200 Subject: [announce][debian][wallaby] general availability of OpenStack Wallaby in Debian In-Reply-To: <562a8d99-b9fa-6735-9c35-694c0ef327b7@debian.org> References: <562a8d99-b9fa-6735-9c35-694c0ef327b7@debian.org> Message-ID: On Thu, Apr 15, 2021 at 9:56 PM Thomas Goirand wrote: > New in this release > =================== > > Masakari is now packaged. Once again, thank you for packaging Masakari! :-) -yoctozepto From kennelson11 at gmail.com Thu Apr 15 23:56:37 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 15 Apr 2021 16:56:37 -0700 Subject: [all][elections][tc] Technical Committee April 2021 Special Election Results Message-ID: Hello! Please join me in congratulating the 1 newly elected member of the Technical Committee (TC). Amy Marrich (spotz)! 
Full results: https://civs1.civs.us/cgi-bin/results.pl?id=E_69909177d200947c Election process details and results are also available here: https://governance.openstack.org/election/ Thank you to all of the candidates, having a good group of candidates helps engage the community in our democratic process. Thank you to all who voted and who encouraged others to vote. We need to ensure your voice is heard. Thank you for another great round! -Kendall Nelson (diablo_rojo) & the Election Officials -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Fri Apr 16 06:12:47 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 16 Apr 2021 08:12:47 +0200 Subject: [neutron] Drivers meeting agenda - 16.04.2021 - cancelled In-Reply-To: <6804092.R4j1StIJWZ@p1> References: <6804092.R4j1StIJWZ@p1> Message-ID: <5858367.tOFUnugRee@p1> Hi, Dnia czwartek, 15 kwietnia 2021 15:02:42 CEST Slawek Kaplonski pisze: > Hi, > > Agenda for our tomorrow's drivers meeting is at [1]. We have 1 new RFE to > discuss: > > - https://bugs.launchpad.net/neutron/+bug/1922716 - [RFE] BFD for BGP Dynamic > Routing > > [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat I got info from Miguel and Brian that they both can't be on today's meeting. Giving that I worry that we may not have a quorum on today's meeting so I think it may be better to cancel it. Next week there is PTG and that RFE https://bugs.launchpad.net/neutron/+bug/ 1922716 is already in the agenda (to be discussed on Tuesday) so we will discuss it there. In the meantime, please spent some time reading that rfe and maybe ask some questions to the owner so we will have as much info as possible before the PTG. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From skaplons at redhat.com Fri Apr 16 06:14:15 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 16 Apr 2021 08:14:15 +0200 Subject: [Neutron][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: <4135616.GcyNBQpf4Z@p1> Message-ID: <10882430.7SCC9sQoDL@p1> Hi, Dnia czwartek, 15 kwietnia 2021 16:03:23 CEST Kanevsky, Arkady pisze: > Thanks Slawek. > I will check with the team and will get back to you. > For now assume that Friday will work. Sure thing. Thx a lot. Please let me know if You would need to move it to other day/time slot. Also, if it's possible, please add topics which You want to discuss to the etherpad https://etherpad.opendev.org/p/neutron-xena-ptg - our session is under line 163. > Thanks, > Arkady > > -----Original Message----- > From: Slawek Kaplonski > Sent: Wednesday, April 14, 2021 3:43 AM > To: openstack-discuss at lists.openstack.org > Cc: OpenStack Discuss; Kanevsky, Arkady > Subject: Re: [Neutron][Interop] request for 15-30 min on Xena PTG for Interop > > Hi Arkady, > > Dnia poniedziałek, 12 kwietnia 2021 08:21:09 CEST Slawek Kaplonski pisze: > > > Hi, > > > > Dnia niedziela, 11 kwietnia 2021 22:32:55 CEST Kanevsky, Arkady pisze: > > > > > Brian, > > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 > > > min on > > > > > > PTG meeting to go over Interop testing and any changes for neutron > > tempest > > or > > > > > > > > tempest configuration in Wallaby cycle or changes planned for Xena. 
> > > Once on > > > > > > agenda one of the Interop WG person will attend and lead the discussion. > > > > I just added it to our etherpad > > https://etherpad.opendev.org/p/neutron-xena-ptg > > I will be working on schedule of the sessions later this week and I > > will let You know what timeslot this session with Interop WG will be. > > Please let me know if You have any preferences. We have our sessions > > scheduled: > > > > Monday 1300 - 1600 UTC > > Tuesday 1300 - 1600 UTC > > Thursday 1300 - 1600 UTC > > Friday 1300 - 1600 UTC > > > > Our time slots which are already booked are: > > - Monday 15:00 - 16:00 UTC > > - Thursday 14:00 - 15:30 UTC > > - Friday 14:00 - 15:00 UTC > > > > > > > Thanks, > > > Arkady > > > > > > Arkady Kanevsky, Ph.D. > > > SP Chief Technologist & DE > > > Dell Technologies office of CTO > > > Dell Inc. One Dell Way, MS PS2-91 > > > Round Rock, TX 78682, USA > > > Phone: 512 7204955 > > > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > > > I scheduled session with Interop WG for Friday 13:30 - 14:00 UTC. > Please let me know if that isn't good time slot for You. > Please also add topics which You want to discuss to our etherpad https:// etherpad.opendev.org/p/neutron-xena-ptg > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From skaplons at redhat.com Fri Apr 16 06:41:58 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 16 Apr 2021 08:41:58 +0200 Subject: [neutron] Team meeting - Tuesday 20.04.2021 Message-ID: <22887068.HZfpmljPZv@p1> Hi, As we have PTG, let's cancel next week's team meeting. See You on the PTG sessions. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From skaplons at redhat.com Fri Apr 16 06:42:30 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 16 Apr 2021 08:42:30 +0200 Subject: [neutron] CI meeting - Tuesday 20.04.2021 Message-ID: <6147595.Qk4cETbaLc@p1> Hi, As we have PTG, let's cancel next week's CI meeting. See You on the PTG sessions. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From skaplons at redhat.com Fri Apr 16 06:44:13 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 16 Apr 2021 08:44:13 +0200 Subject: [neutron] Drivers meeting - Friday 23.04.2021 cancelled Message-ID: <8794951.JCYDN9oZMe@p1> Hi, As we have PTG, let's cancel next week's drivers meeting. See You on the PTG sessions. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. 
URL: From skaplons at redhat.com Fri Apr 16 10:55:29 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 16 Apr 2021 12:55:29 +0200 Subject: [neutron][all] PTG session about OVN as default backend in Devstack Message-ID: <32241215.rSAhrytHMa@p1> Hi, We have discussed this topic a couple of times in the Neutron team and with the wider community as well. Now we really feel it is a good time to pull the trigger and switch the default Neutron backend in Devstack from ML2/OVS to ML2/OVN. Lucas has already prepared patches for that and everything should already be in good shape. But before we do that, we want to have a PTG session about it. It is scheduled for Thursday 22nd of April at 13:00 UTC in the Neutron sessions. We want to give a short summary of the current status, but we would also like to do something like an "AMA" about it for people from other projects. So if You have any questions/concerns about that, please come to that session on Thursday to discuss it with us. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From smooney at redhat.com Fri Apr 16 13:27:32 2021 From: smooney at redhat.com (Sean Mooney) Date: Fri, 16 Apr 2021 14:27:32 +0100 Subject: [ops][nova][victoria] Migrate cross CPU? In-Reply-To: <0670B960225633449A24709C291A52524FBE36A1@COM01.performair.local> References: <0670B960225633449A24709C291A52524FBE36A1@COM01.performair.local> Message-ID: On 15/04/2021 19:05, DHilsbos at performair.com wrote: > All; > > I seem to have generated another issue for myself... > > [snip -- full original message earlier in this thread] > > Can I get these existing instances moved from the Atom servers to the Epyc server(s), or do I need to recreate them to do this? you should be able to cold migrate them using the migrate command but that should put the servers into resize_verify and then you need to confirm the migration to complete it. we will not clean up the vm on the source node until you do that last step. > > Thank you, > > Dominic L. Hilsbos, MBA > Director - Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > > From nate.johnston at redhat.com Fri Apr 16 14:23:02 2021 From: nate.johnston at redhat.com (Nate Johnston) Date: Fri, 16 Apr 2021 10:23:02 -0400 Subject: [all][elections][tc] Technical Committee April 2021 Special Election Results In-Reply-To: References: Message-ID: <20210416142302.7sewz2ppmr45gp7c@grind.home> Congratulations Amy! Nate On Thu, Apr 15, 2021 at 04:56:37PM -0700, Kendall Nelson wrote: > Hello! > > Please join me in congratulating the 1 newly elected member of the > Technical Committee (TC).
> > Amy Marrich (spotz)! > > Full results: https://civs1.civs.us/cgi-bin/results.pl?id=E_69909177d200947c > > Election process details and results are also available here: > https://governance.openstack.org/election/ > > Thank you to all of the candidates, having a good group of candidates helps > engage the community in our democratic process. > > Thank you to all who voted and who encouraged others to vote. We need to > ensure your voice is heard. > > Thank you for another great round! > > -Kendall Nelson (diablo_rojo) & the Election Officials From smooney at redhat.com Fri Apr 16 14:30:22 2021 From: smooney at redhat.com (Sean Mooney) Date: Fri, 16 Apr 2021 15:30:22 +0100 Subject: dynamic vendor data and cloud-init In-Reply-To: References: Message-ID: On 15/04/2021 15:53, Manish Mahalwal wrote: > Hi All, > > I am working with OpenStack Pike and cloud-init 21.1. I am able to > successfully pass dynamic vendor data to the config drive of an > instance. However, cloud-init 21.1 just reads all the 'x' bytes of the > vendor_data2.json but it doesn't execute the contents of the json. > Although, static vendor data works perfectly fine and the YAML file in > the JSON is executed as expected by cloud-init 21.1 > > * Now, the person who wrote the code for handling dynamic vendordata > in cloud-init (https://github.com/canonical/cloud-init/pull/777 > ) says that the JSON > cloud-init expects is of the form: > > {"cloud-init": "#cloud-config\npackage_upgrade: True\npackages:\n > - black\nfqdn: cloud-overridden-by-vendordata2.example.org."} > > the reference implementation for the dynamic vendor data  backend was https://github.com/mikalstill/vendordata and it was a feature developed specificaly for rackspace. the data format that service should return is # { # "hostname": "foo", # "image-id": "75a74383-f276-4774-8074-8c4e3ff2ca64", # "instance-id": "2ae914e9-f5ab-44ce-b2a2-dcf8373d899d", # "metadata": {}, # "project-id": "039d104b7a5c4631b4ba6524d0b9e981", # "user-data": null # } # An example of this data: https://github.com/mikalstill/vendordata/blob/master/app.py#L34-L42 this blog post explains how it should work https://www.madebymikal.com/nova-vendordata-deployment-an-excessively-detailed-guide/ > * I believe that the JSON should have another outer key (as mentioned > here > https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/vendordata-reboot.html > ) > which is the name of the microservice specified in nova.conf file and > that the inner key should be cloud-init. > > In nova.conf: > vendordata_dynamic_targets=name1 at http://example.com,name2 at http://example2.com > > > { >     "name1": { >  "cloud-init": "#cloud-config\n..." >     }, >     "name2": { >  "cloud-init": "#cloud-config\n..." >     } > } > > > > > >>Who is right and who is wrong? > > To read more on this please go through the following: > https://bugs.launchpad.net/cloud-init/+bug/1841104 > > From Arkady.Kanevsky at dell.com Fri Apr 16 14:33:19 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 16 Apr 2021 14:33:19 +0000 Subject: [all][elections][tc] Technical Committee April 2021 Special Election Results In-Reply-To: References: Message-ID: Hurray to Amy. From: Kendall Nelson Sent: Thursday, April 15, 2021 6:57 PM To: OpenStack Discuss Subject: [all][elections][tc] Technical Committee April 2021 Special Election Results [EXTERNAL EMAIL] Hello! Please join me in congratulating the 1 newly elected member of the Technical Committee (TC). Amy Marrich (spotz)! 
Full results: https://civs1.civs.us/cgi-bin/results.pl?id=E_69909177d200947c Election process details and results are also available here: https://governance.openstack.org/election/ Thank you to all of the candidates, having a good group of candidates helps engage the community in our democratic process. Thank you to all who voted and who encouraged others to vote. We need to ensure your voice is heard. Thank you for another great round! -Kendall Nelson (diablo_rojo) & the Election Officials -------------- next part -------------- An HTML attachment was scrubbed... URL: From bkslash at poczta.onet.pl Fri Apr 16 15:22:35 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Fri, 16 Apr 2021 17:22:35 +0200 Subject: [kolla-ansible][horizon][keystone] policy and token In-Reply-To: References: <98A48E2D-9F27-4125-BF76-CF3992A5990B@poczta.onet.pl> Message-ID: <076D9347-4CF7-4588-97A9-3A960E45537F@poczta.onet.pl> Hi again, After some struggling I modified the policies so most of them work fine. But I have a problem with identity:create_user and identity:create_group. In the case of create group I can do it from Horizon (as the domain_admin user), but I can't do it from the CLI (with the command: openstack group create --domain 3a08xxxx82c1 SOME_GROUP_NAME) and I was wondering why. After analyzing the logs it turned out that the tokens from Horizon and the CLI are different! The one from the CLI does not contain domain_id (which I specify on the CLI???), while the one from Horizon contains it, and there is a match for the policy rules. Token from CLI: DEBUG keystone.server.flask.request_processing.middleware.auth_context [req-b00bccae-c3d2-4a53-a8e2-bd9b0bbdfd84 9adbxxxx02ef 61d4xxxx9c0f <- user default Project_ID here - 3a08xxxxb82c1 3a08xxxx82c1] RBAC: auth_context: {'token': , 'domain_id': None, <- no domain_id 'trust_id': None, 'trustor_id': None, 'trustee_id': None, 'domain_name': None, <- no domain name 'group_ids': [], 'user_id': '9adbxxxx02ef', 'user_domain_id': '3a08xxxx82c1', 'system_scope': None, 'project_id': '61d4xxxx9c0f', <- user default Project_ID here 'project_domain_id': '3a08xxxx82c1', <- default user project domain_id 'roles': ['reader', 'member', 'project_admin', 'domain_admin'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} fill_context /var/lib/kolla/venv/lib/python3.8/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py:478 Token from Horizon: DEBUG keystone.server.flask.request_processing.middleware.auth_context [req-aeeec218-fe13-4048-98a7-3240df0dacae 9adbxxxxb02ef <- no user default Project_ID here - 3a08xxxx82c1 3a08xxxx82c1 -] RBAC: auth_context: {'token': , 'domain_id': '3a08xxxx82c1', <- domain_id 'trust_id': None, 'trustor_id': None, 'trustee_id': None, 'domain_name': 'some_domain', <- domain name 'group_ids': [], 'user_id': '9adbxxxx02ef', 'user_domain_id': '3a08xxxx82c1', 'system_scope': None, 'project_id': None, <- no user default Project_ID here 'project_domain_id': None, <- no default user project domain_id 'roles': ['member', 'domain_admin', 'project_admin', 'reader'], 'is_admin_project': False, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} fill_context /var/lib/kolla/venv/lib/python3.8/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py:478 The policy rules: "identity:create_group": "rule:cloud_admin or rule:admin_and_matching_target_group_domain_id", "admin_and_matching_target_group_domain_id": "rule:admin_required and domain_id:%(target.group.domain_id)s", "admin_required": "role:admin or role:domain_admin or role:project_admin", CLI user openrc file: export OS_AUTH_URL=http://some-fancy-url:5000 export OS_PROJECT_ID=61d4xxxx9c0f export OS_PROJECT_NAME="some_project_name" export OS_USER_DOMAIN_NAME="some_domain" if [ -z "$OS_USER_DOMAIN_NAME" ]; then unset OS_USER_DOMAIN_NAME; fi export OS_PROJECT_DOMAIN_ID="3a08xxxx82c1" if [ -z "$OS_PROJECT_DOMAIN_ID" ]; then unset OS_PROJECT_DOMAIN_ID; fi unset OS_TENANT_ID unset OS_TENANT_NAME export OS_USERNAME="some_user" echo "Please enter your OpenStack Password for project $OS_PROJECT_NAME as user $OS_USERNAME: " read -sr OS_PASSWORD_INPUT export OS_PASSWORD=$OS_PASSWORD_INPUT export OS_REGION_NAME="RegionOne" if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi export OS_INTERFACE=public export OS_IDENTITY_API_VERSION=3 How do I put domain_id into the CLI token, if --domain xxxxx doesn't do that? The same situation occurs with create_user.
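My best guess so far: keystoneauth only puts a domain_id into the token when you explicitly ask for a domain-scoped token, and the --domain argument of "openstack group create" only selects the target resource, it does not change the token scope. So presumably the CLI needs an openrc like this instead (untested sketch, same anonymised names as above):

export OS_AUTH_URL=http://some-fancy-url:5000
export OS_IDENTITY_API_VERSION=3
export OS_USERNAME="some_user"
export OS_USER_DOMAIN_NAME="some_domain"
# scope the token to the domain instead of a project:
export OS_DOMAIN_NAME="some_domain"
unset OS_PROJECT_ID OS_PROJECT_NAME OS_PROJECT_DOMAIN_ID

But then project-scoped operations would need a second openrc, which seems awkward.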
"rule:cloud_admin or rule:admin_and_matching_target_group_domain_id”, "admin_and_matching_target_group_domain_id": "rule:admin_required and domain_id:%(target.group.domain_id)s”, "admin_required": "role:admin or role:domain_admin or role:project_admin", CLI user openrc file: export OS_AUTH_URL=http://some-fancy-url:5000 export OS_PROJECT_ID=61d4xxxx9c0f export OS_PROJECT_NAME=„some_project_name" export OS_USER_DOMAIN_NAME=„some_domain" if [ -z "$OS_USER_DOMAIN_NAME" ]; then unset OS_USER_DOMAIN_NAME; fi export OS_PROJECT_DOMAIN_ID="3a08xxxx82c1" if [ -z "$OS_PROJECT_DOMAIN_ID" ]; then unset OS_PROJECT_DOMAIN_ID; fi unset OS_TENANT_ID unset OS_TENANT_NAME export OS_USERNAME=„some_user" echo "Please enter your OpenStack Password for project $OS_PROJECT_NAME as user $OS_USERNAME: " read -sr OS_PASSWORD_INPUT export OS_PASSWORD=$OS_PASSWORD_INPUT export OS_REGION_NAME="RegionOne" if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi export OS_INTERFACE=public export OS_IDENTITY_API_VERSION=3 How to put domain_id into CLI token if —domain xxxxx doesn’t do that? The same situation is with create_user. And the best part - ofcource cloud_admin=admin is able to do both, because he don’t need to be checked against domain_id. Ofcourse there is also some kind of a bug, that prevents displaying „Create user” button in the horizon interface, but when you eneter direct link (…/users/create) you can create user. After some struggling with horizon (as suggested here: https://review.opendev.org/c/openstack/charm-openstack-dashboard/+/575272/1/templates/mitaka/keystonev3_policy.json#b38 ) „create group” button showed up, but not "create user” - not even for admin user… What’s wrong?? Best regards Adam > Wiadomość napisana przez Mark Goddard w dniu 30.03.2021, o godz. 12:51: > > On Tue, 30 Mar 2021 at 10:52, Adam Tomas wrote: >> >> >> Without any custom policies when I look inside the horizon container I see (in /etc/openstack-dashboard) current/default policies. If I override (for example keystone_policy.json) with a file placed in /etc/kolla/config/horizon which contains only 3 rules, then after kolla-ansible reconfigure inside horizon container there is of course keystone_police.json file, but only with my 3 rules - should I assume, that previously seen default rules (other than the ones overridden by my rules) still works, whether I see them in the file or not? -------------- next part -------------- An HTML attachment was scrubbed... URL: From DHilsbos at performair.com Fri Apr 16 16:04:38 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Fri, 16 Apr 2021 16:04:38 +0000 Subject: [ops][nova][victoria] Migrate cross CPU? In-Reply-To: References: <0670B960225633449A24709C291A52524FBE36A1@COM01.performair.local> Message-ID: <0670B960225633449A24709C291A52524FBE6543@COM01.performair.local> Sean; Thank you very much for your response. I wasn't aware of the state change to resize_verify, that's useful. Unfortunately, at present, the state change is not occurring. 
Here's a series of commands, with output: #openstack server show 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 +-------------------------------------+----------------------------------------------------------+ | Field | Value | +-------------------------------------+----------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | az-elcom-1 | | OS-EXT-SRV-ATTR:host | s700030.463.os.mcgown.enterprises | | OS-EXT-SRV-ATTR:hypervisor_hostname | s700030.463.os.mcgown.enterprises | | OS-EXT-SRV-ATTR:instance_name | instance-00000037 | | OS-EXT-STS:power_state | Shutdown | | OS-EXT-STS:task_state | None | | OS-EXT-STS:vm_state | stopped | | OS-SRV-USG:launched_at | 2021-03-06T04:36:07.000000 | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | it-network=10.255.127.208, 10.0.160.35 | | config_drive | | | created | 2021-03-06T04:35:51Z | | flavor | m4.large (8) | | hostId | 174a83351ac674a25a2bf5131b931fc7a9e16be48b62f37925a66676 | | id | 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 | | image | N/A (booted from volume) | | key_name | None | | name | Java Dev | | project_id | 10dfdfadb7374ea1ba37bee1435d87ad | | properties | | | security_groups | name='allow-ping' | | | name='allow-ssh' | | | name='default' | | status | SHUTOFF | | updated | 2021-04-16T15:52:07Z | | user_id | 69b73ea8f55c46a99021e77ebf70b62a | | volumes_attached | id='ae69c924-60e5-431e-9572-c41a153e720b' | +-------------------------------------+----------------------------------------------------------+ #openstack server migrate --host s700066.463.os.mcgown.enterprises --os-compute-api-version 2.56 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 #openstack server show 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 +-------------------------------------+----------------------------------------------------------+ | Field | Value | +-------------------------------------+----------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | az-elcom-1 | | OS-EXT-SRV-ATTR:host | s700030.463.os.mcgown.enterprises | | OS-EXT-SRV-ATTR:hypervisor_hostname | s700030.463.os.mcgown.enterprises | | OS-EXT-SRV-ATTR:instance_name | instance-00000037 | | OS-EXT-STS:power_state | Shutdown | | OS-EXT-STS:task_state | None | | OS-EXT-STS:vm_state | stopped | | OS-SRV-USG:launched_at | 2021-03-06T04:36:07.000000 | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | it-network=10.255.127.208, 10.0.160.35 | | config_drive | | | created | 2021-03-06T04:35:51Z | | flavor | m4.large (8) | | hostId | 174a83351ac674a25a2bf5131b931fc7a9e16be48b62f37925a66676 | | id | 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 | | image | N/A (booted from volume) | | key_name | None | | name | Java Dev | | project_id | 10dfdfadb7374ea1ba37bee1435d87ad | | properties | | | security_groups | name='allow-ping' | | | name='allow-ssh' | | | name='default' | | status | SHUTOFF | | updated | 2021-04-16T15:53:32Z | | user_id | 69b73ea8f55c46a99021e77ebf70b62a | | volumes_attached | id='ae69c924-60e5-431e-9572-c41a153e720b' | +-------------------------------------+----------------------------------------------------------+ #tail /var/log/nova/nova-conductor.log #tail /var/log/nova/nova-scheduler.log 2021-04-16 08:53:24.870 3773 INFO nova.scheduler.host_manager [req-ff109e53-74e0-40de-8ec7-29aff600b5f7 d7c514813e5d4fe6815f5f59e8e35f2f a008ad02d16f436a9e320882ca497055 - default default] Host filter only checking host s700066.463.os.mcgown.enterprises and 
node s700066.463.os.mcgown.enterprises 2021-04-16 08:53:24.871 3773 INFO nova.scheduler.host_manager [req-ff109e53-74e0-40de-8ec7-29aff600b5f7 d7c514813e5d4fe6815f5f59e8e35f2f a008ad02d16f436a9e320882ca497055 - default default] Host filter ignoring hosts: Both Cinder volume storage, and ephemeral storage are being handled by Ceph. Thank you, Dominic L. Hilsbos, MBA Director – Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com -----Original Message----- From: Sean Mooney [mailto:smooney at redhat.com] Sent: Friday, April 16, 2021 6:28 AM To: openstack-discuss at lists.openstack.org Subject: Re: [ops][nova][victoria] Migrate cross CPU? On 15/04/2021 19:05, DHilsbos at performair.com wrote: > All; > > I seem to have generated another issue for myself... > > I built our Victoria cloud initially on Intel Atom servers. We recently received the first of our AMD Epyc (7002 series) servers, which are intended to take over the Nova Compute responsibilities. > > I've had success in the past doing live migrates, but live migrating from one of the Atom servers to the new server fails, with an error indicating CPU compatibility problems. Ok, I can understand that. > > My problem is that I don't seem to understand the openstack server migrate command (non-live). It doesn't seem to do anything, whether the instance is Running or Shut Down. I can't find errors in the logs from the API / conductor / scheduler host. > > I also can't find an option to pass to the openstack server start command which requests a specific host. > > Can I get these existing instances moved from the Atom servers to the Epyc server(s), or do I need to recreate them to do this? you should be able to cold migrate them using the migrate command but that should put the servers into resize_verify and then you need to confirm the migration to complte it. we will not clean up the vm on the source node until you do that last step. > > Thank you, > > Dominic L. Hilsbos, MBA > Director - Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > > From jay.faulkner at verizonmedia.com Fri Apr 16 16:26:19 2021 From: jay.faulkner at verizonmedia.com (Jay Faulkner) Date: Fri, 16 Apr 2021 09:26:19 -0700 Subject: [ironic] Ironic Whiteboard v2 call for reviews Message-ID: Hi all, Iury and I spent some time this morning updating the Ironic whiteboard etherpad to include more immediately useful information to contributors. We placed this updated whiteboard at https://etherpad.opendev.org/p/IronicWhiteBoardv2 -- our approach was to prune any outdated/broken links or information, and focus on making the first part of the whiteboard an easy one-click place for folks to see easy ways to contribute. All the rest of the information was carried over and reformatted. Once there is consensus from the team about this being a positive change, we should either replace the existing IronicWhiteBoard with the contents of the v2 page, or just update links to point to the new one instead. What do you all think? Thanks, Jay Faulkner -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Fri Apr 16 16:57:41 2021 From: smooney at redhat.com (Sean Mooney) Date: Fri, 16 Apr 2021 17:57:41 +0100 Subject: [ops][nova][victoria] Migrate cross CPU? 
In-Reply-To: <0670B960225633449A24709C291A52524FBE6543@COM01.performair.local> References: <0670B960225633449A24709C291A52524FBE36A1@COM01.performair.local> <0670B960225633449A24709C291A52524FBE6543@COM01.performair.local> Message-ID: <5bed6419-6c85-b39f-1226-cc517fe911de@redhat.com> hum ok, the best way to debug this is to list the server events and get the request id for the migration. it may be req-ff109e53-74e0-40de-8ec7-29aff600b5f7 based on the logs you posted, but you should see more info in the api, conductor and compute logs for that request id. given the state has not changed i suspect it failed rather early. its possible that you are experiencing an issue with the rabbitmq service and rpc calls are being lost, but i would not expect to see logs related to this in the scheduler while the vm is still in the SHUTOFF status. can you do "openstack server event list 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3", then get the most recent resize event's request id and see if there are any other logs. regards, sean. (note: i think it will be listed as a resize not a migrate since internally migrate is implemented as resize but to the same flavour).
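ps: to be concrete, i mean something like this (sketch):

openstack server event list 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3
# take the req-... id of the newest 'resize' action, then:
openstack server event show 64229d87-4cbb-44d1-ba8a-5fe63c9c40f3 <request-id>
# and on the controller grep the nova logs for that id:
grep <request-id> /var/log/nova/nova-api.log /var/log/nova/nova-conductor.log

that should tell us how far the migration got before it stopped.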
On 16/04/2021 17:04, DHilsbos at performair.com wrote: > Sean; > > Thank you very much for your response. I wasn't aware of the state change to resize_verify, that's useful. > > Unfortunately, at present, the state change is not occurring. > > Here's a series of commands, with output: > > [snip -- the full 'openstack server show' output, migrate command and scheduler log lines are in the previous message in this thread] > > Both Cinder volume storage, and ephemeral storage are being handled by Ceph. > > Thank you, > > Dominic L. Hilsbos, MBA > Director – Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > [earlier quoted messages snipped]
From amy at demarco.com Fri Apr 16 17:02:18 2021 From: amy at demarco.com (Amy Marrich) Date: Fri, 16 Apr 2021 12:02:18 -0500 Subject: [all][elections][tc] Technical Committee April 2021 Special Election Results In-Reply-To: References: Message-ID: Thanks everyone! Amy (spotz) On Fri, Apr 16, 2021 at 9:36 AM Kanevsky, Arkady wrote: > Hurray to Amy. > > From: Kendall Nelson > Sent: Thursday, April 15, 2021 6:57 PM > To: OpenStack Discuss > Subject: [all][elections][tc] Technical Committee April 2021 Special Election Results > > Hello! > > Please join me in congratulating the 1 newly elected member of the Technical Committee (TC). > > Amy Marrich (spotz)! > > Full results: https://civs1.civs.us/cgi-bin/results.pl?id=E_69909177d200947c > > Election process details and results are also available here: https://governance.openstack.org/election/ > > Thank you to all of the candidates, having a good group of candidates helps engage the community in our democratic process. > > Thank you to all who voted and who encouraged others to vote. We need to ensure your voice is heard. > > Thank you for another great round! > > -Kendall Nelson (diablo_rojo) & the Election Officials > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Fri Apr 16 17:07:52 2021 From: amy at demarco.com (Amy Marrich) Date: Fri, 16 Apr 2021 12:07:52 -0500 Subject: [Diversity] D&I WG Office Hours at the PTG Message-ID: The Diversity and Inclusion WG has grabbed our usual PTG slot Monday at 16:00 UTC and we will be holding office hours to help any project with Inclusive Naming or other D&I related topics. If you'd like us to attend any of your sessions please let me know. If you've been interested in learning more about the WG please come join us! Thanks, Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Fri Apr 16 17:28:02 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 16 Apr 2021 10:28:02 -0700 Subject: [PTG][StoryBoard][infra][OpenDev] StoryBoard PTG Planning Message-ID: Hello! I know it's last minute, but we decided to book an hour on Tuesday to meet from 16-17 UTC in the Ocata room. If you have topics to bring to us, please add them to the etherpad[1].
I have generated the list of the current *open* and *unreleased* changes in stable/train for the follows-policy tagged repositories [2] (where there are such patches). These lists could help the teams who are planning to do a *final* release on Train before moving stable/train branches to Extended Maintenance. Feel free to edit and extend these lists to track your progress! * At the transition date the Release Team will tag the*latest* (Train) *releases* of repositories with *train-em* tag. * After the transition stable/train will be still open for bug fixes, but there won't be any official releases. NOTE: teams, please focus on wrapping up your libraries first if there is any concern about the changes, in order to avoid broken releases! Thanks, Előd [1] https://releases.openstack.org/ [2] https://etherpad.opendev.org/p/train-final-release-before-em From johnsomor at gmail.com Fri Apr 16 20:19:31 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 16 Apr 2021 13:19:31 -0700 Subject: FW: [Designate][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Arkady, I have put you down for 14:30 UTC. Michael On Thu, Apr 15, 2021 at 7:18 AM Kanevsky, Arkady wrote: > > Thanks Michael. Interop team will have a rep there. > If you can schedule us at 14:00 UTC or 14:30, or 14:45 that will be the best. I think 15 min will be enough. > Thanks, > Arkady > > -----Original Message----- > From: Michael Johnson > Sent: Monday, April 12, 2021 10:57 AM > To: Kanevsky, Arkady > Cc: OpenStack Discuss > Subject: Re: FW: [Designate][Interop] request for 15-30 min on Xena PTG for Interop > > > [EXTERNAL EMAIL] > > Hi Arkady, > > I have added Interop to the Designate topics list (https://urldefense.com/v3/__https://etherpad.opendev.org/p/xena-ptg-designate__;!!LpKI!yXIFUxciVfW5bKHaFIxjMmhoQrGASnWQVIz9UZY3oXExCpXgnM52TrpaajTFMP1HP3fc$ [etherpad[.]opendev[.]org]) and will schedule a slot this week when I put a rough agenda together. > > Thanks, > Michael > > On Sun, Apr 11, 2021 at 1:36 PM Kanevsky, Arkady wrote: > > > > Adding comminuty > > > > > > > > From: Kanevsky, Arkady > > Sent: Sunday, April 11, 2021 3:25 PM > > To: 'johnsomor at gmail.com' > > Subject: [Designate][Interop] request for 15-30 min on Xena PTG for > > Interop > > > > > > > > John, > > > > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min on PTG meeting to go over Interop testing and any changes for Dsignate tempest or tempest configuration in Wallaby cycle or changes planned for Xena. > > > > Once on agenda one of the Interop WG person will attend and lead the discussion. > > > > > > > > Thanks, > > > > > > > > > > > > Arkady Kanevsky, Ph.D. > > > > SP Chief Technologist & DE > > > > Dell Technologies office of CTO > > > > Dell Inc. One Dell Way, MS PS2-91 > > > > Round Rock, TX 78682, USA > > > > Phone: 512 7204955 > > > > From gmann at ghanshyammann.com Fri Apr 16 21:24:29 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 16 Apr 2021 16:24:29 -0500 Subject: [all][tc] What's happening in Technical Committee: summary 16th April, 21: Reading: 5 min Message-ID: <178dc9228ad.cf85ab74158590.1567674469339432611@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. What we completed this week: ========================= Project updates: ------------------- ** devstack-plugin-ceph is branched now[1]. Other updates: ------------------ ** TC one vacant seat election is completed now. 
We have a new TC member Amy Marrich (spotz). Also thanks to Feilong Wang (flwang) for participating and showing interest in TC[2]. TC liaisons assignments for Xena cycle --------------------------------------------- * This is to have two TC members assigned as liaisons for each project team. * I generated the auto assignments using the script on top of already assigned projects[3]. 2. TC Meetings: ============ * TC held this week meeting on Thursday; you can find the full meeting logs in the below link: - http://eavesdrop.openstack.org/meetings/tc/2021/tc.2021-04-15-15.00.log.html * We will skip next week's meeting due to PTG and have our next meeting on April 29th, Thursday 15:00 UTC[4]. 3. Activities In progress: ================== Open Reviews ----------------- * No open reviews this week[5]. This is good progress by TC. Gate performance and heavy job configs ------------------------------------------------ * dansmith fixed the devstack async mode related bash bug related to children[6] * Workarounds for making stackviz not to fail jobs are merged in all stable branches[7]. * Cinder failures are still happening and the Cinder team is in progress to fix those. PTG ----- TC is planning to meet in PTG for Thursday 2 hrs and Friday 4 hrs, details are in etherpad[8], feel free to add topic you would like to discuss with TC in PTG. 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[9]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15 UTC [10] 3. Office hours: The Technical Committee offers two office hours per week in #openstack-tc [11]: * Tuesday at 0100 UTC * Wednesday at 1500 UTC 4. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. [1] https://review.opendev.org/c/openstack/governance/+/786067 [2] http://lists.openstack.org/pipermail/openstack-discuss/2021-April/021869.html [3] https://governance.openstack.org/tc/reference/tc-liaisons.html [4] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [5] https://review.opendev.org/q/project:openstack/governance+status:open [6] https://review.opendev.org/c/openstack/devstack/+/786330 [7] https://review.opendev.org/q/Ifee04f28ecee52e74803f1623aba5cfe5ee5ec90 [8] https://etherpad.opendev.org/p/tc-xena-ptg [9] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [10] http://eavesdrop.openstack.org/#Technical_Committee_Meeting [11] http://eavesdrop.openstack.org/#Technical_Committee_Office_hours -gmann From Arkady.Kanevsky at dell.com Fri Apr 16 21:54:48 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 16 Apr 2021 21:54:48 +0000 Subject: FW: [Designate][Interop] request for 15-30 min on Xena PTG for Interop In-Reply-To: References: Message-ID: Thanks Michael. -----Original Message----- From: Michael Johnson Sent: Friday, April 16, 2021 3:20 PM To: Kanevsky, Arkady Cc: OpenStack Discuss Subject: Re: FW: [Designate][Interop] request for 15-30 min on Xena PTG for Interop [EXTERNAL EMAIL] Arkady, I have put you down for 14:30 UTC. Michael On Thu, Apr 15, 2021 at 7:18 AM Kanevsky, Arkady wrote: > > Thanks Michael. Interop team will have a rep there. > If you can schedule us at 14:00 UTC or 14:30, or 14:45 that will be the best. I think 15 min will be enough. 
> Thanks,
> Arkady
>
> -----Original Message-----
> From: Michael Johnson
> Sent: Monday, April 12, 2021 10:57 AM
> To: Kanevsky, Arkady
> Cc: OpenStack Discuss
> Subject: Re: FW: [Designate][Interop] request for 15-30 min on Xena PTG for Interop
>
> [EXTERNAL EMAIL]
>
> Hi Arkady,
>
> I have added Interop to the Designate topics list (https://etherpad.opendev.org/p/xena-ptg-designate) and will schedule a slot this week when I put a rough agenda together.
>
> Thanks,
> Michael
>
> On Sun, Apr 11, 2021 at 1:36 PM Kanevsky, Arkady wrote:
> >
> > Adding community
> >
> > From: Kanevsky, Arkady
> > Sent: Sunday, April 11, 2021 3:25 PM
> > To: 'johnsomor at gmail.com'
> > Subject: [Designate][Interop] request for 15-30 min on Xena PTG for Interop
> >
> > John,
> > As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min at the PTG meeting to go over Interop testing and any changes for Designate tempest or tempest configuration in the Wallaby cycle or changes planned for Xena.
> > Once on the agenda, one of the Interop WG members will attend and lead the discussion.
> >
> > Thanks,
> >
> > Arkady Kanevsky, Ph.D.
> > SP Chief Technologist & DE
> > Dell Technologies office of CTO
> > Dell Inc. One Dell Way, MS PS2-91
> > Round Rock, TX 78682, USA
> > Phone: 512 7204955

From Arkady.Kanevsky at dell.com Fri Apr 16 21:57:41 2021
From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady)
Date: Fri, 16 Apr 2021 21:57:41 +0000
Subject: [Swift][Interop] request for 15-30 min on Xena PTG for Interop
In-Reply-To: References: Message-ID:

Repeating the request. Where is the Swift Xena PTG etherpad?

From: Kanevsky, Arkady
Sent: Sunday, April 11, 2021 3:37 PM
To: tburke at nvidia.com
Cc: OpenStack Discuss
Subject: [Swift][Interop] request for 15-30 min on Xena PTG for Interop

Tim,
As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min at the PTG meeting to go over Interop testing and any changes for Swift tempest or tempest configuration in the Wallaby cycle or changes planned for Xena.
Once on the agenda, one of the Interop WG members will attend and lead the discussion.

Thanks,
Arkady

Arkady Kanevsky, Ph.D.
SP Chief Technologist & DE
Dell Technologies office of CTO
Dell Inc. One Dell Way, MS PS2-91
Round Rock, TX 78682, USA
Phone: 512 7204955

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Arkady.Kanevsky at dell.com Fri Apr 16 21:59:45 2021
From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady)
Date: Fri, 16 Apr 2021 21:59:45 +0000
Subject: [Keystone][Interop] request for 15-30 min on Xena PTG for Interop
In-Reply-To: References: Message-ID:

Repeating the request. Where is the Keystone etherpad for the Xena PTG?

From: Kanevsky, Arkady
Sent: Sunday, April 11, 2021 3:30 PM
To: knikolla at bu.edu
Cc: OpenStack Discuss
Subject: [Keystone][Interop] request for 15-30 min on Xena PTG for Interop

Kristi,
As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min at the PTG meeting to go over Interop testing and any changes for Keystone tempest or tempest configuration in the Wallaby cycle or changes planned for Xena.
Once on the agenda, one of the Interop WG members will attend and lead the discussion.

Thanks,

Arkady Kanevsky, Ph.D.
SP Chief Technologist & DE
Dell Technologies office of CTO
Dell Inc.
One Dell Way, MS PS2-91
Round Rock, TX 78682, USA
Phone: 512 7204955

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Arkady.Kanevsky at dell.com Fri Apr 16 22:01:29 2021
From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady)
Date: Fri, 16 Apr 2021 22:01:29 +0000
Subject: [Heat][Interop] request for 15-30 min on Xena PTG for Interop
In-Reply-To: References: Message-ID:

Rico,
Repeating the request.
Where is the Heat etherpad for the Xena PTG agenda?
Thanks,
Arkady

From: Kanevsky, Arkady
Sent: Sunday, April 11, 2021 3:28 PM
To: ricolin at ricolky.com
Cc: OpenStack Discuss
Subject: [Heat][Interop] request for 15-30 min on Xena PTG for Interop

Rico,
As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min at the PTG meeting to go over Interop testing and any changes for Heat tempest or tempest configuration in the Wallaby cycle or changes planned for Xena.
Once on the agenda, one of the Interop WG members will attend and lead the discussion.

Thanks,
Arkady Kanevsky, Ph.D.
SP Chief Technologist & DE
Dell Technologies office of CTO
Dell Inc. One Dell Way, MS PS2-91
Round Rock, TX 78682, USA
Phone: 512 7204955

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Arkady.Kanevsky at dell.com Fri Apr 16 22:12:36 2021
From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady)
Date: Fri, 16 Apr 2021 22:12:36 +0000
Subject: [Interop] No meeting Friday 4/23/2021 - PTG week.
Message-ID:

Arkady Kanevsky, Ph.D.
SP Chief Technologist & DE
Dell Technologies office of CTO
Dell Inc. One Dell Way, MS PS2-91
Round Rock, TX 78682, USA
Phone: 512 7204955

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From martialmichel at datamachines.io Fri Apr 16 22:13:04 2021
From: martialmichel at datamachines.io (Martial Michel)
Date: Fri, 16 Apr 2021 18:13:04 -0400
Subject: [Scientific] Scientific SIG meetings during the OpenStack PTG next week
Message-ID:

The Scientific SIG will have two meetings next week during the PTG. Details on those meetings are as follows:

Session 1 - Cactus room - April 21st - 14:00-15:00 UTC
Main session, topic discussion (note we only have one hour)

Session 2 - Cactus room - April 21st - 21:00-22:00 UTC
Lightning Talks: Bring a LT on something you've been doing or would like to present (10 minutes per talk, including questions; note we only have one hour, so strict timekeeping will have to be enforced)

As a reminder the Scientific SIG has a Slack. Please contact me directly (martialmichel at datamachines.io) if you want to join our slack or our meeting and need joining information.

Thank you and looking forward to seeing a few stackers next week

-- Martial

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From dmeng at uvic.ca Fri Apr 16 22:56:24 2021
From: dmeng at uvic.ca (dmeng)
Date: Fri, 16 Apr 2021 15:56:24 -0700
Subject: [sdk]: compute service create_server method, how to create multiple servers
Message-ID: <20bd2d0dd9ed5013919e036df2576cca@uvic.ca>

Hello there,

Hope this email finds you well. We are currently using the openstacksdk for developing our product, and have a question about the openstacksdk compute service create_server() method. We are wondering if the "max_count" and "min_count" are supported by openstacksdk for creating multiple servers at once. I tried both the max_count and the min_count, and they both only create one server for me, but I'd like to create multiple servers at once. The code I'm using is like the following:

conn = connection.Connection(
    session=sess,
    region_name=None,
    compute_api_version='2')
nova = conn.compute
nova.create_server(
    name='sdk-test-create',
    image_id=image_id,
    flavor_id=flavor_id,
    key_name=my_key_name,
    networks=[{"uuid": network_id}],
    security_groups=[{'name': security_group_name}],
    min_count=3,
)

The above code creates one server "sdk-test-create", but I'm assuming it should create three. Wondering if I missed anything, or if we have any other option to achieve this?

Thanks for your help and have a great day!
Catherine

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
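For reference, a minimal sketch of the multi-create pattern as one would expect it to behave (untested; it assumes the SDK forwards min_count/max_count through to Nova's os-multiple-create API, and it reuses the conn/image_id/flavor_id/etc. names from the snippet above):

    # create_server returns a single Server object even when Nova creates
    # several instances, so the others have to be looked up afterwards.
    server = nova.create_server(
        name='sdk-test-create',
        image_id=image_id,
        flavor_id=flavor_id,
        key_name=my_key_name,
        networks=[{"uuid": network_id}],
        security_groups=[{'name': security_group_name}],
        min_count=3,
        max_count=3,
    )
    # Nova typically suffixes multi-created instances (sdk-test-create-1,
    # -2, ...); the name filter matches substrings, so this lists them all:
    for s in nova.servers(name='sdk-test-create'):
        print(s.id, s.name, s.status)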
From pangliye at inspur.com Sat Apr 17 05:58:16 2021
From: pangliye at inspur.com (Liye Pang)
Date: Sat, 17 Apr 2021 05:58:16 +0000
Subject: [venus] Xena PTG schedule
Message-ID:

Hi:

I prepared an agenda for our PTG meeting, which mainly introduces the current progress of Venus; it is available in our etherpad [1]. The time slot is April 22nd @ 13:00 - 14:00 UTC[2]. Also, if you have any other topics to discuss, please let me know so I can include them in the agenda. Looking forward to your participation in the discussion.

[1] https://etherpad.opendev.org/p/apr2021-ptg-venus
[2] http://ptg.openstack.org/ptg.html

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3786 bytes
Desc: not available
URL:

From ricolin at ricolky.com Sat Apr 17 12:02:07 2021
From: ricolin at ricolky.com (Rico Lin)
Date: Sat, 17 Apr 2021 20:02:07 +0800
Subject: [Heat][Interop] request for 15-30 min on Xena PTG for Interop
In-Reply-To: References: Message-ID:

Hi there,

Our etherpad: https://etherpad.opendev.org/p/xena-ptg-heat
You can also find our scheduled time there.

On Sat, Apr 17, 2021 at 6:01 AM Kanevsky, Arkady wrote:
> Rico,
>
> Repeating the request.
>
> Where is the Heat etherpad for the Xena PTG agenda?
>
> Thanks,
> Arkady
>
> From: Kanevsky, Arkady
> Sent: Sunday, April 11, 2021 3:28 PM
> To: ricolin at ricolky.com
> Cc: OpenStack Discuss
> Subject: [Heat][Interop] request for 15-30 min on Xena PTG for Interop
>
> Rico,
> As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min at the PTG meeting to go over Interop testing and any changes for Heat tempest or tempest configuration in the Wallaby cycle or changes planned for Xena.
> Once on the agenda, one of the Interop WG members will attend and lead the discussion.
>
> Thanks,
> Arkady Kanevsky, Ph.D.
> SP Chief Technologist & DE
> Dell Technologies office of CTO
> Dell Inc. One Dell Way, MS PS2-91
> Round Rock, TX 78682, USA
> Phone: 512 7204955

--
Rico Lin
OIF Board director, OpenStack TC, Multi-arch SIG chair, Heat PTL, Senior Software Engineer at EasyStack

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Arkady.Kanevsky at dell.com Sat Apr 17 17:10:39 2021
From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady)
Date: Sat, 17 Apr 2021 17:10:39 +0000
Subject: [Heat][Interop] request for 15-30 min on Xena PTG for Interop
In-Reply-To: References: Message-ID:

Thanks Rico. I have added a topic for the interlock with the Interop WG. Time is 14:45-15:00 UTC.
Thanks,
Arkady

From: Rico Lin
Sent: Saturday, April 17, 2021 7:02 AM
To: Kanevsky, Arkady
Cc: OpenStack Discuss
Subject: Re: [Heat][Interop] request for 15-30 min on Xena PTG for Interop

[EXTERNAL EMAIL]

Hi there,
Our etherpad: https://etherpad.opendev.org/p/xena-ptg-heat
You can also find our scheduled time there.

On Sat, Apr 17, 2021 at 6:01 AM Kanevsky, Arkady wrote:

Rico,
Repeating the request.
Where is the Heat etherpad for the Xena PTG agenda?
Thanks,
Arkady

From: Kanevsky, Arkady
Sent: Sunday, April 11, 2021 3:28 PM
To: ricolin at ricolky.com
Cc: OpenStack Discuss
Subject: [Heat][Interop] request for 15-30 min on Xena PTG for Interop

Rico,
As chair of OpenStack Interop WG, Interop/Refstack would like 15-30 min at the PTG meeting to go over Interop testing and any changes for Heat tempest or tempest configuration in the Wallaby cycle or changes planned for Xena.
Once on the agenda, one of the Interop WG members will attend and lead the discussion.

Thanks,
Arkady Kanevsky, Ph.D.
SP Chief Technologist & DE
Dell Technologies office of CTO
Dell Inc. One Dell Way, MS PS2-91
Round Rock, TX 78682, USA
Phone: 512 7204955

--
Rico Lin
OIF Board director, OpenStack TC, Multi-arch SIG chair, Heat PTL, Senior Software Engineer at EasyStack

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From melwittt at gmail.com Sat Apr 17 20:54:07 2021
From: melwittt at gmail.com (melanie witt)
Date: Sat, 17 Apr 2021 13:54:07 -0700
Subject: [Placement] Weird issue in placement-api
In-Reply-To: References: Message-ID: <708dca1f-3ff0-f2f7-12b0-6d594663e545@gmail.com>

On 4/15/21 02:57, Taha Adel wrote:
> Hello,
>
> I currently have OpenStack manually deployed by following the official install documentation, but I have faced a weird situation. When I send an api request to the placement api service using the following command:
>
> curl -H "X-Auth-Token: $T" http://controller:8778
>
> I received a status code of "200", which indicates a successful operation. But, when I issue the following request:
>
> curl -H "X-Auth-Token: $T" http://controller:8778/resource_providers
>
> I received a status code of "503", and when I checked the logs of placement and keystone, they say that the authentication failed. For the same reason, nova-compute can't register itself as a resource provider.
>
> I'm sure that the authentication credentials for placement are set properly, but I don't know what the problem is.

I think what you're seeing is expected behavior; the root of the API doesn't require authentication [1]: "Does not perform verification of authentication tokens for root in the API." So you will get 200 at the root but can get 503 for all other paths if there's an auth issue.

Have you looked at the placement-api logs to see if there's additional info there? You can also try enabling log level DEBUG by setting [DEFAULT]debug = True in placement.conf.

HTH,
-melanie

[1] https://github.com/openstack/placement/blob/6f00ba5f685183539d0ebf62a4741f2f6930e051/placement/auth.py#L90-L94
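For example, a minimal sketch of that debug toggle plus a retry of the failing request (the config path and restart step are typical defaults and may differ in your deployment; placement commonly runs as a WSGI app under Apache/httpd):

    # /etc/placement/placement.conf
    [DEFAULT]
    debug = True

    # then restart the placement WSGI service and retry with verbose output:
    curl -v -H "X-Auth-Token: $T" http://controller:8778/resource_providers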
From tburke at nvidia.com Sun Apr 18 02:25:24 2021
From: tburke at nvidia.com (Timothy Burke)
Date: Sun, 18 Apr 2021 02:25:24 +0000
Subject: [Swift][Interop] request for 15-30 min on Xena PTG for Interop
In-Reply-To: References: Message-ID:

Sorry; been on vacation all week. Our record at http://ptg.openstack.org/etherpads.html should be accurate; Swift's etherpad is at https://etherpad.opendev.org/p/swift-ptg-xena. We'd be happy to talk about interop testing -- is there a particular time that would work best for you? Any sort of prep work that might be good for us to think about ahead of time?

Tim

________________________________
From: Kanevsky, Arkady
Sent: Fr