From dangtrinhnt at gmail.com Thu Oct 1 02:32:57 2020 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Thu, 1 Oct 2020 11:32:57 +0900 Subject: [searchlight][election][ptl] PTL non-candidacy Message-ID: Hi guys, I have been taking the role of Searchlight's PTL since Stein to hopefully making the project sustainable. Over the course of 4 development cycles, I had lots of joy and also learned a lot while doing maintenance & open source contributor recruiting. Unfortunately, I switched to a job that does not involve OpenStack, and that leaves me no time for contributions. That being said, I announce my non-candidacy for the Wallaby cycle. Though not being able to contribute to the project, I'll still be around and help you out whenever I can. Thanks, -- *Trinh Nguyen* *dangtrinh.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Thu Oct 1 07:03:25 2020 From: zigo at debian.org (Thomas Goirand) Date: Thu, 1 Oct 2020 09:03:25 +0200 Subject: [tc][all] Wallaby Cycle Community Goals In-Reply-To: <25c8cc54dc2fae53f16420e52bca9395eaa88c79.camel@redhat.com> References: <40b8e159-826c-cc07-d0f8-a9b21532d544@ham.ie> <145a7b7a-6022-08f0-e647-255dcca4fe6e@debian.org> <25c8cc54dc2fae53f16420e52bca9395eaa88c79.camel@redhat.com> Message-ID: On 10/1/20 12:13 AM, Sean Mooney wrote: > cant you just hit the root of the service? that is unauthenticated for microversion > version discovery so haproxy could simple use / for a http check if its just bing used > to test if the rest api is running. How will I make the difference, in my logs, between a client hitting / for microversion discovery, and haproxy doing an healthcheck? >> I believe "version": "2.79" is the microversion of the Nova API, which >> therefore, exposes what version of Nova (here: Train). Am I correct? > no you are not. > it does not expose the package infomation it tells you the make microversion the api support but > that is a different thing. we dont always bump the microversion in a release. > ussuri and victoria but share the same microversion > https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#maximum-in-ussuri-and-victoria > > the microverion also wont change on a stable branch no matter what bugs exist or have been patched. Right, but this still tells what version of OpenStack is installed. Maybe Nova hasn't bumped it, but one could check multiple services, and find out what version of OpenStack is there. This is still a problem, at least more than what you've described with the /healthcheck URL. >> believe we also must leave this, because clients must be able to >> discover the micro-version of the API, right? > yes without this no client can determin what api version is supported by a specific cloud. > this is intened to be a public endpoint with no auth for that reason. That part I don't understand. Couldn't this be an authenticated thing? > if the generic oslo ping rpc was added we coudl use that but i think dansmith had a simpler proposal for caching it > based on if we were able to connect during normal operation and jsut have the api check look at teh in memeory value. > i.e. if the last attempt to read form the db failed to connect we would set a global variable e.g. DB_ACCESSIBLE=FALSE > and then the next time it succeded we set it to True. 
> the health check would just read the global, so there should be
> little to no overhead vs what oslo does
>
> this would basically cache the last known state and the health check is just doing the equivalent of
> return DB_ACCESSIBLE and RPC_ACCESSIBLE

That's a good idea, but my patch has been sitting as rejected for the
last 5 months. It could be in Victoria already...

>> What is *not* useful as well, is delaying such a trivial patch for more
>> than 6 months, just in the hope that in a distant future, we may have
>> something better.
>
> but as you yourself pointed out almost every service has a / endpoint that is used for microversion discovery that is
> public, so not implementing /healthcheck in nova does not block you using / as the healthcheck url, and you can enable the
> oslo endpoint if you choose to by enabling the middleware in your local deployment.

As I wrote above: that's *not* a good idea.

>> Sure, take your time, get something implemented that does a nice
>> healthcheck with db access and rabbitmq connectivity checks. But that
>> should in no way get in the path of having a configuration which works
>> for everyone by default.
> there is nothing stopping install tools providing that experience by default today.

Indeed, and I'm doing it. However, it's very annoying to rebase such a
patch on every single release of OpenStack, just like with every other
patch that the Debian packages are carrying.

> at least as long as nova supports configurable middleware they can enable or even enable the /healthcheck endpoint
> by default without requiring a nova code change. i have looked at enough customer bugs to know that network partitions
> are common in real environments, where someone trying to use the /healthcheck endpoint to know if nova is healthy would
> be severely disappointed when it says it's healthy and they can't boot any vms because rabbitmq is not reachable.

We discussed this already, and we all agree that the name of the URL
was a bad choice. It could have been called /reverse-proxycheck
instead. But that's too late, and changing that name would break users,
unfortunately. If we want to rename it, then we must follow a cycle of
deprecation. However, I don't really care if some are disappointed
because they can't read the doc or the code, and just guess wrong
because of the URL name. We need this thing for HA setups, it is easy to
get installed, so why not do it? The URL name mistake cannot be a
blocker.

> because outside of haproxy failover a bad health check is arguably worse than no healthcheck.

I never wrote that I don't want a better health check. Just that the
one we have is already useful, and that it should be on by default.

> i'm not unsympathetic to your request, but with what oslo does by default we would basically have to document that this
> should not be used to monitor the health of the nova service

What it does is already documented. If you want to add more in the
documentation, please do so.

> we have already got several bug reports about the status of a vm not matching reality when connectivity to
> the cell is down. e.g. when we can't connect to the cell database, if the vm is stopped, say via a power off over ssh, then
> its state will not be reflected in a nova show.

This has nothing to do with what we're currently discussing, which is
having a URL to wire in haproxy checks.
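
(For illustration only, here is a minimal sketch of what wiring that URL into
haproxy usually looks like, assuming the oslo.middleware healthcheck app is
enabled in the service's paste pipeline; the backend name, addresses, ports
and timings below are made-up examples, not taken from this thread:

    # hypothetical haproxy backend checking the oslo /healthcheck URL
    backend nova_api
        balance roundrobin
        option httpchk GET /healthcheck
        http-check expect status 200
        server ctrl1 192.0.2.11:8774 check inter 2s fall 3 rise 2
        server ctrl2 192.0.2.12:8774 check inter 2s fall 3 rise 2

haproxy then only keeps a backend in rotation while the middleware answers
200, which is exactly the "is the API process reachable" semantic discussed
here, nothing more.)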
> if we were willing to add a big warning and clearly call out that this is just saying the api is accesable but not > necessarily functional then i would be more ok with what olso provides but it does not tell you anything about the > health of nova or if any other api request will actually work. https://review.opendev.org/755433 > i would suggest adding this to the nova ptg etherpad if you want to move this forward in nova in particular. I would suggest not discussing the mater too much, and actually doing something about it. :) It has been discussed already for a way too long. Cheers, Thomas Goirand (zigo) From ruslanas at lpic.lt Thu Oct 1 08:09:13 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Thu, 1 Oct 2020 11:09:13 +0300 Subject: [tripleo] Image build error with missing osops-tools-monitoring-oschecks in CentOS 8 Message-ID: Hi all, in file: /usr/share/tripleo-puppet-elements/overcloud-opstools/pkg-map should be added before "default", on line 9 following text, to support CentOS 8: "release": { "centos": { "8": { "oschecks_package": "" } } }, so it would not search for osops-tools-monitoring-oschecks not sure where to submit an update, sending here, someone who knows what and where, can do or I can submit it just name where. -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykarel at redhat.com Thu Oct 1 08:26:50 2020 From: ykarel at redhat.com (Yatin Karel) Date: Thu, 1 Oct 2020 13:56:50 +0530 Subject: [tripleo] Image build error with missing osops-tools-monitoring-oschecks in CentOS 8 In-Reply-To: References: Message-ID: Hi Ruslanas, I guess you are hitting it in Train as for Ussuri+ that element is removed[1]. It was not backported to Train as Train supports both CentOS7 and 8. You can propose a patch to the stable/train branch of openstack/tripleo-puppet-elements to handle CentOS8 case. [1] https://review.opendev.org/#/q/4b60b7cd6999591cb3d0f50ab3966688a566a02c On Thu, Oct 1, 2020 at 1:46 PM Ruslanas Gžibovskis wrote: > > Hi all, > > in file: /usr/share/tripleo-puppet-elements/overcloud-opstools/pkg-map > should be added before "default", on line 9 following text, to support CentOS 8: > > "release": { > "centos": { > "8": { > "oschecks_package": "" > } > } > }, > > so it would not search for osops-tools-monitoring-oschecks > > not sure where to submit an update, sending here, someone who knows what and where, can do or I can submit it just name where. > > -- > Ruslanas Gžibovskis > +370 6030 7030 Thanks and Regards Yatin Karel From hberaud at redhat.com Thu Oct 1 08:59:02 2020 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 1 Oct 2020 10:59:02 +0200 Subject: [oslo] Project leadership In-Reply-To: <20200930214825.v7hvjec2ejffay55@yuggoth.org> References: <328f380e-c16a-7fd4-a1fd-154b07ede01d@nemebean.com> <20200930214825.v7hvjec2ejffay55@yuggoth.org> Message-ID: Hello, First, thanks Ben for this email. I'm personally in favor of experimenting with the DPL on oslo during W. If nobody wants the release liaison role then I volunteer to keep this role during W. I also volunteer to become the meeting chair if nobody else wants that role, we need to ensure that we continue to run our weekly meetings, especially with a distributed governance model. I think that our core members can match well with the majority of listed roles and they can easily assume some of these who are not yet assigned. 
Also to continue with this topic we need to follow the related process [1], Ben do you plan to submit the related patch? [1] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html#process-for-opting-in-to-distributed-leadership Le mer. 30 sept. 2020 à 23:50, Jeremy Stanley a écrit : > On 2020-09-30 16:35:10 -0500 (-0500), Ben Nemec wrote: > [...] > > The general consensus from the people I've talked to seems to be a > > distributed model. To kick off that discussion, here's a list of roles > that > > I think should be filled in some form: > > > > * Release liaison > > * Security point-of-contact > > * TC liaison > > * Cross-project point-of-contact > > * PTG/Forum coordinator > > * Meeting chair > > * Community goal liaison (almost forgot this one since I haven't actually > > been doing it ;-). > > > > I've probably missed a few, but those are the the ones I came up with to > > start. > [...] > > The TC resolution on distributed project leadership also includes a > recommended list of liaison roles, though it's basically what you > outlined above: > > > https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html > > -- > Jeremy Stanley > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Oct 1 09:50:51 2020 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 1 Oct 2020 11:50:51 +0200 Subject: [oslo] Project leadership In-Reply-To: <328f380e-c16a-7fd4-a1fd-154b07ede01d@nemebean.com> References: <328f380e-c16a-7fd4-a1fd-154b07ede01d@nemebean.com> Message-ID: <2710df64-1e16-631f-640b-48b6d81769bc@openstack.org> Ben Nemec wrote: > The general consensus from the people I've talked to seems to be a > distributed model. To kick off that discussion, here's a list of roles > that I think should be filled in some form: > > * Release liaison > * Security point-of-contact > * TC liaison > * Cross-project point-of-contact > * PTG/Forum coordinator > * Meeting chair > * Community goal liaison (almost forgot this one since I haven't > actually been doing it ;-). Note that only three roles absolutely need to be filled, per the TC resolution[1]: - Release liaison - tact-sig liaison (historically named the “infra Liaison”) - Security point of contact The others are just recommended. 
[1] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html -- Thierry From ruslanas at lpic.lt Thu Oct 1 09:59:36 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Thu, 1 Oct 2020 12:59:36 +0300 Subject: [ironic][ussuri][centos8] fails to introspect: my fsm encountered an exception In-Reply-To: References: Message-ID: I am using default sources from docker.io/tripleou can be seen in [1] I still see the same errors [2], even the image was updated. I have rebuilt undercloud. and images are 18 hours old. if someone could help me to refresh ironic-inspector in other way, I could try to. but now if I exec -it into ironic_inspector container, I det ironic user and cannot login to root. If someone could waste some time from their life, and paste some links, how I could update it from the glorious master, I could help to test and will help later on ;) thank you for your time, for reading ;) [1] http://paste.openstack.org/show/87pn8i1QGJj2JQyPEJbl/ [2] http://paste.openstack.org/show/LX2h9qSvyJDw6VxwUXt5/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Thu Oct 1 10:07:39 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Thu, 1 Oct 2020 13:07:39 +0300 Subject: [tripleo] Image build error with missing osops-tools-monitoring-oschecks in CentOS 8 In-Reply-To: References: Message-ID: I am talking about ussuri, centos8, deployed from centos packages. it is in the package:openstack-tripleo-puppet-elements-12.3.1-1.el8.noarch I know, that this "osops-tools-monitoring-oschecks" was removed, but it still in pkg-map file ;) Sorry I did not specify exact version of OSP release. On Thu, 1 Oct 2020 at 11:27, Yatin Karel wrote: > Hi Ruslanas, > > I guess you are hitting it in Train as for Ussuri+ that element is > removed[1]. It was not backported to Train as Train supports both > CentOS7 and 8. You can propose a patch to the stable/train branch of > openstack/tripleo-puppet-elements to handle CentOS8 case. > > > [1] > https://review.opendev.org/#/q/4b60b7cd6999591cb3d0f50ab3966688a566a02c > > On Thu, Oct 1, 2020 at 1:46 PM Ruslanas Gžibovskis > wrote: > > > > Hi all, > > > > in file: /usr/share/tripleo-puppet-elements/overcloud-opstools/pkg-map > > should be added before "default", on line 9 following text, to support > CentOS 8: > > > > "release": { > > "centos": { > > "8": { > > "oschecks_package": "" > > } > > } > > }, > > > > so it would not search for osops-tools-monitoring-oschecks > > > > not sure where to submit an update, sending here, someone who knows what > and where, can do or I can submit it just name where. > > > > -- > > Ruslanas Gžibovskis > > +370 6030 7030 > > > Thanks and Regards > Yatin Karel > > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelweingartner at gmail.com Thu Oct 1 10:40:59 2020 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Thu, 1 Oct 2020 07:40:59 -0300 Subject: [cloudkitty][election][ptl] PTL non-candidacy In-Reply-To: References: Message-ID: Thank you for your response. Do I need to do something else? Such as filling out some form right now? Or is this e-mail enough? On Wed, Sep 30, 2020 at 10:25 AM Pierre Riteau wrote: > Hi Rafael, > > Thank you for volunteering! I'll be happy to help you. 
> > The window for submitting nominations for PTL has ended, so the > Technical Committee will have to consider your proposal, following > [1]. > > Best wishes, > Pierre Riteau (priteau) > > [1] > https://governance.openstack.org/tc/resolutions/20141128-elections-process-for-leaderless-programs.html > > On Wed, 30 Sep 2020 at 15:19, Rafael Weingärtner > wrote: > > > > Hello Pierre, > > I am not that familiar with the PTL duties (yet), but I would like to > volunteer to be the next CloudKitty PTL. > > > > On Tue, Sep 22, 2020 at 12:35 PM Pierre Riteau > wrote: > >> > >> Hello, > >> > >> Late in the Victoria cycle, I volunteered to help with the then > >> inactive CloudKitty project, which resulted in becoming its PTL. While > >> I plan to continue contributing to CloudKitty, I will have very > >> limited availability during the beginning of the Wallaby cycle. In > >> particular, I may not even be able to join the PTG. > >> > >> Thus it would be best if someone else ran for CloudKitty PTL this > >> cycle. If you are interested in nominating yourself but aren't sure > >> what is involved, don't hesitate to reach out to me by email or IRC. > >> > >> Thanks, > >> Pierre Riteau (priteau) > >> > > > > > > -- > > Rafael Weingärtner > -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykarel at redhat.com Thu Oct 1 11:07:52 2020 From: ykarel at redhat.com (Yatin Karel) Date: Thu, 1 Oct 2020 16:37:52 +0530 Subject: [tripleo] Image build error with missing osops-tools-monitoring-oschecks in CentOS 8 In-Reply-To: References: Message-ID: Hi, On Thu, Oct 1, 2020 at 3:37 PM Ruslanas Gžibovskis wrote: > > I am talking about ussuri, centos8, deployed from centos packages. > > it is in the package:openstack-tripleo-puppet-elements-12.3.1-1.el8.noarch > > I know, that this "osops-tools-monitoring-oschecks" was removed, but it still in pkg-map file ;) > Ok you are using centos ussuri repos, and those are missing the commit from stable/ussuri https://github.com/openstack/tripleo-puppet-elements/compare/12.3.1...stable/ussuri as last ussuri tag release was 10 weeks ago https://review.opendev.org/#/c/742478/. So a new release is needed to get the fixes for the issue you are seeing. @Wesley Hayutin can we have new releases for Ussuri. > Sorry I did not specify exact version of OSP release. > > On Thu, 1 Oct 2020 at 11:27, Yatin Karel wrote: >> >> Hi Ruslanas, >> >> I guess you are hitting it in Train as for Ussuri+ that element is >> removed[1]. It was not backported to Train as Train supports both >> CentOS7 and 8. You can propose a patch to the stable/train branch of >> openstack/tripleo-puppet-elements to handle CentOS8 case. >> >> >> [1] https://review.opendev.org/#/q/4b60b7cd6999591cb3d0f50ab3966688a566a02c >> >> On Thu, Oct 1, 2020 at 1:46 PM Ruslanas Gžibovskis wrote: >> > >> > Hi all, >> > >> > in file: /usr/share/tripleo-puppet-elements/overcloud-opstools/pkg-map >> > should be added before "default", on line 9 following text, to support CentOS 8: >> > >> > "release": { >> > "centos": { >> > "8": { >> > "oschecks_package": "" >> > } >> > } >> > }, >> > >> > so it would not search for osops-tools-monitoring-oschecks >> > >> > not sure where to submit an update, sending here, someone who knows what and where, can do or I can submit it just name where. 
>> > >> > -- >> > Ruslanas Gžibovskis >> > +370 6030 7030 >> >> >> Thanks and Regards >> Yatin Karel >> Thanks and Regards Yatin Karel > > > -- > Ruslanas Gžibovskis > +370 6030 7030 From balazs.gibizer at est.tech Thu Oct 1 12:22:16 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Thu, 01 Oct 2020 14:22:16 +0200 Subject: [nova][neutron] The nova-next job is broken on master Message-ID: <4PVIHQ.AA6E7XKJZL53@est.tech> Hi, The post_test_hook part of the nova-next job is broken[1] on master blocking the nova gate. Please hold your rechecks. We need a new neutronlib release containing the fix [2] to make the job pass. Until that I proposed a patch that temporary disable the failing test steps[3]. Cheers, gibi [1] https://bugs.launchpad.net/nova/+bug/1898035 [2] https://review.opendev.org/#/c/753230 [3] https://review.opendev.org/#/c/755498 From sean.mcginnis at gmx.com Thu Oct 1 13:29:14 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 1 Oct 2020 08:29:14 -0500 Subject: [searchlight][election][ptl] PTL non-candidacy In-Reply-To: References: Message-ID: > Hi guys, > > I have been taking the role of Searchlight's PTL since Stein to > hopefully making the project sustainable. Over the course of 4 > development cycles, I had lots of joy and also learned a lot while > doing maintenance & open source contributor recruiting. Unfortunately, > I switched to a job that does not involve OpenStack, and that leaves > me no time for contributions. That being said, I announce my > non-candidacy for the Wallaby cycle. > Though not being able to contribute to the project, I'll still be > around and help you out whenever I can. > > Thanks, > > -- > *Trinh Nguyen* > *dangtrinh.com * > Thanks for all your work on Searchlight Trinh! -------------- next part -------------- An HTML attachment was scrubbed... URL: From mtint.stfc at gmail.com Thu Oct 1 13:57:38 2020 From: mtint.stfc at gmail.com (Michael STFC) Date: Thu, 1 Oct 2020 14:57:38 +0100 Subject: core os password reset In-Reply-To: <2C209110-996B-4C48-9407-0B853DFEBC65@datalounges.com> References: <67a107feee8376e0093e13a112bbf9bba95b9145.camel@redhat.com> <2C209110-996B-4C48-9407-0B853DFEBC65@datalounges.com> Message-ID: <262C90FD-26C1-4033-9A30-E2FCAED8BF85@gmail.com> Thanks - its sorted and many thanks for the advise. > On 16 Sep 2020, at 15:49, Florian Rommel wrote: > > Thank you Sean, appreciate it. Learned something new !:) > > //Florian > >> On 16. Sep 2020, at 17.36, Sean Mooney wrote: >> >> On Wed, 2020-09-16 at 17:25 +0300, Florian Rommel wrote: >>> Just as a side question, is there a benefit of ignition over cloud-init? (Not trying to start a flame war.. genuinely >>> interested, and not trying to hijack the thread either) >> im not really sure. i think core os created ignition because of some limitation with cloud init >> >> they use it for a lot more then jsut basic first boot setup. its used for all system configtion in >> container linux so i imagin they hit edgecases and developed ignition to adress those. >> most cloud image like ubuntu or fedora i think dont ship with ignition support out of the box. 
>> i did not have anything in my history on this topic but i did fine this >> https://coreos.com/ignition/docs/latest/what-is-ignition.html >> coreos are generally pretty good at documenting there deision are thigns like this >> also https://coreos.com/ignition/docs/latest/rationale.html >> >> i really have not had much interaction with it however so in practis i dont know which is "better" in general. >>> >>> //florian >>> >>>>> On 16. Sep 2020, at 16.45, Sean Mooney wrote: >>>> >>>> On Wed, 2020-09-16 at 06:39 -0700, Michael STFC wrote: >>>>> Our openstack env automatically injects SSH keys) and already does that >>>>> with all other images I have downloaded to deployed e.g fedora cloud images >>>>> and ceros cloud image. >>>>> >>>>> However core os is different and I have tried to edit grub added >>>>> coreos.autologin=tty1 >>>>> but nothing. >>>>> >>>>> Also tried to do this via cloud-config >>>>> >>>>> #cloud-config >>>>> >>>>> coreos: >>>>> units: >>>>> - name: etcd.service >>>>> command: start >>>>> >>>>> users: >>>>> - name: core >>>>> passwd: coreos >>>>> ssh_authorized_keys: >>>>> - "ssh-rsa xxxxx" >>>>> >>>>> >>>>> And not luck - when vm boots it hangs. >>>> >>>> Coreos does not use cloud config by default it uses ignition. >>>> i belive you can still configure it with cloud init but you have to do it >>>> slightly differnet then normal. >>>> https://coreos.com/os/docs/latest/booting-on-openstack.html#container-linux-configs >>>> has the detail you need. basically you have to either pass an ignition script as the user >>>> data or Container Linux Config format. >>>> >>>> cloud init wont work. >>>> >>>> e.g. >>>> nova boot \ >>>> --user-data ./config.ign \ >>>> --image cdf3874c-c27f-4816-bc8c-046b240e0edd \ >>>> --key-name coreos \ >>>> --flavor m1.medium \ >>>> --min-count 3 \ >>>> --security-groups default,coreos >>>> >>>> were ./config.ign is an ignition file. >>>> >>>>> >>>>> On 16 Sep 2020 at 13:31:10, Florian Rommel wrote: >>>>> >>>>>> Hi Michael. >>>>>> So, if I remember coreOS correctly, its the same as all of the cloud based >>>>>> images. It uses SSH keys to authenticate. If you have a an SSH public key >>>>>> in there where you do no longer have the private key for, you can “easily” >>>>>> reset it by 2 ways. >>>>>> 1. If its volume based instance, delete the instance but not the volume. >>>>>> Create the instance again by adding your own ssh key into the boot process. >>>>>> This will ADD the ssh key, but not overwrite the existing one in the >>>>>> authorized_key file >>>>>> 2. If it is normal ephermal disk based instance, make a snapshot and >>>>>> create a new instance from the snapshot, adding your own ssh key into it. >>>>>> >>>>>> Either or, if they are ssh key authenticated (which they should be), there >>>>>> isn’t really an EASY way unless you want to have the volume directly. >>>>>> >>>>>> Best regards, >>>>>> //Florian >>>>>> >>>>>> On 16. Sep 2020, at 13.53, Michael STFC wrote: >>>>>> >>>>>> >>>>>> Hi >>>>>> >>>>>> >>>>>> New to openstack and wanting to know how to get boot core os and reset >>>>>> user core password. >>>>>> >>>>>> >>>>>> Please advise. 
>>>>>> >>>>>> >>>>>> Michael >>>>>> >>>>>> >>>>>> >>>>>> >>>> >>>> >>> >>> >> >> > > > From thierry at openstack.org Thu Oct 1 14:15:20 2020 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 1 Oct 2020 16:15:20 +0200 Subject: [cloudkitty][election][ptl] PTL non-candidacy In-Reply-To: References: Message-ID: <0f33ee83-18cc-5e30-8cc5-dc6a0dbcef4a@openstack.org> Rafael Weingärtner wrote: > Thank you for your response. > Do I need to do something else? Such as filling out some form right now? > Or is this e-mail enough? This email is enough. We'll likely ask you to +1 the governance change adding your name, just to be sure. But that's not posted yet. -- Thierry Carrez (ttx) From jeremyfreudberg at gmail.com Thu Oct 1 14:25:16 2020 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Thu, 1 Oct 2020 10:25:16 -0400 Subject: [sahara][stable] EOL Ocata and Pike Message-ID: Hello all, I am announcing the intention to EOL the stable/ocata and stable/pike branches for all branched Sahara deliverables. It has been discussed within the Sahara team a couple times, and it was most recently discussed today [*]. Any objections? Thanks, Jeremy [*] http://eavesdrop.openstack.org/meetings/sahara/2020/sahara.2020-10-01-14.01.log.html#l-26 From pwm2012 at gmail.com Thu Oct 1 14:27:04 2020 From: pwm2012 at gmail.com (pwm) Date: Thu, 1 Oct 2020 22:27:04 +0800 Subject: Ussuri Octavia load balancer on OVS In-Reply-To: References: Message-ID: ok, will try to test it on 3 nodes this weekend. I see you are using the Linux bridge, thanks for sharing your solution. On Tue, Sep 29, 2020 at 10:01 PM Satish Patel wrote: > Perfect, > > Now try same way on 3 nodes and tell me how it goes? > > Because I was having issue in my environment to make it work so I used > different method which is here on my blog > https://satishdotpatel.github.io//openstack-ansible-octavia/ > > Sent from my iPhone > > On Sep 29, 2020, at 8:27 AM, pwm wrote: > >  > Hi, > I'm testing on AIO before moving to a 3 nodes controller. Haven't tested > on 3 nodes controller yet but I do think it will get the same issue. > > On Tue, Sep 29, 2020 at 3:43 AM Satish Patel wrote: > >> Hi, >> >> I was dealing with the same issue a few weeks back so curious are you >> having this problem on AIO or 3 node controllers? 
>> >> ~S >> >> On Sun, Sep 27, 2020 at 10:27 AM pwm wrote: >> > >> > Hi, >> > I using the following setup for Octavia load balancer on OVS >> > Ansible openstack_user_config.yml >> > - network: >> > container_bridge: "br-lbaas" >> > container_type: "veth" >> > container_interface: "eth14" >> > host_bind_override: "eth14" >> > ip_from_q: "octavia" >> > type: "flat" >> > net_name: "octavia" >> > group_binds: >> > - neutron_openvswitch_agent >> > - octavia-worker >> > - octavia-housekeeping >> > - octavia-health-manager >> > >> > user_variables.yml >> > octavia_provider_network_name: "octavia" >> > octavia_provider_network_type: flat >> > octavia_neutron_management_network_name: lbaas-mgmt >> > >> > /etc/netplan/50-cloud-init.yaml >> > br-lbaas: >> > dhcp4: no >> > interfaces: [ bond10 ] >> > addresses: [] >> > parameters: >> > stp: false >> > forward-delay: 0 >> > bond10: >> > dhcp4: no >> > addresses: [] >> > interfaces: [ens16] >> > parameters: >> > mode: balance-tlb >> > >> > brctl show >> > bridge name bridge id STP enabled >> interfaces >> > br-lbaas 8000.d60e4e80f672 no >> 2ea34552_eth14 >> > >> bond10 >> > >> > However, I am getting the following error when creating the load balance >> > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect >> to instance. Retrying.: requests.exceptions.ConnectionError: >> HTTPSConnectionPool(host='172.29.233.47', port=9443 >> > >> > The Octavia api container unable to connect to the amphora instance. >> > Any missing configuration, cause I need to manually add in the eth14 >> interface to the br-lbaas bridge in order to fix the connection issue >> > brctl addif br-lbaas eth14 >> > >> > Thanks >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Thu Oct 1 15:03:35 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 1 Oct 2020 17:03:35 +0200 Subject: [kolla] No Kall today Message-ID: Hi, Folks! Sorry for the very late notice: today's Kolla Kall is cancelled due to priority of Victoria release reviews. Thank you for understanding. -yoctozepto From juliaashleykreger at gmail.com Thu Oct 1 15:17:52 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 1 Oct 2020 08:17:52 -0700 Subject: [ironic][ussuri][centos8] fails to introspect: my fsm encountered an exception In-Reply-To: References: Message-ID: So interestingly enough, I can't seem to access that docker repository to look at the images. Something seems off, but it doesn't look related to the change that was reverted a couple days ago. Looking at the additional logging you provided, it look something it occuring out of order. So a few questions: 1) Was the ironic-inspector container started before or after the ironic-api container? It needs to start after for the dnsmasq filter driver to sync and operate. 2) Can you manually trigger inspection without the extra tripleo tools? "openstack baremetal node inspect " I ask this specifically because I'm not seeing the post request in the logging your supplying, so I'm wondering if for some reason the wrong thing is happening in the tripleo tooling. When you manually inspect, you'll need to do "openstack baremetal node show to see if the node exits inspect state. From there I'd check the logs again to see if the same error occurred or if things were successful. 
-Julia On Thu, Oct 1, 2020 at 2:59 AM Ruslanas Gžibovskis wrote: > > I am using default sources from docker.io/tripleou can be seen in [1] > I still see the same errors [2], even the image was updated. I have rebuilt undercloud. > and images are 18 hours old. > > if someone could help me to refresh ironic-inspector in other way, I could try to. but now if I exec -it into ironic_inspector container, I det ironic user and cannot login to root. > > If someone could waste some time from their life, and paste some links, how I could update it from the glorious master, I could help to test and will help later on ;) > > thank you for your time, for reading ;) > > [1] http://paste.openstack.org/show/87pn8i1QGJj2JQyPEJbl/ > [2] http://paste.openstack.org/show/LX2h9qSvyJDw6VxwUXt5/ From amy at demarco.com Thu Oct 1 16:23:41 2020 From: amy at demarco.com (Amy Marrich) Date: Thu, 1 Oct 2020 11:23:41 -0500 Subject: [OpenStack-Ansible][OSA][Docs] Project missing from deployment page Message-ID: Hey all, Looks like OpenStack-Ansible has dropped from the deployment section again:( https://docs.openstack.org/ussuri/deploy/ Any suggestions on what we need to patch to get back on the page? Thanks, Amy -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Thu Oct 1 16:28:44 2020 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 1 Oct 2020 17:28:44 +0100 Subject: [OpenStack-Ansible][OSA][Docs] Project missing from deployment page In-Reply-To: References: Message-ID: On Thu, 1 Oct 2020 at 17:25, Amy Marrich wrote: > > Hey all, > > Looks like OpenStack-Ansible has dropped from the deployment section again:( > https://docs.openstack.org/ussuri/deploy/ > > Any suggestions on what we need to patch to get back on the page? It happens for every deployment project, for every release :( There must be a better way... > > Thanks, > > Amy From amy at demarco.com Thu Oct 1 16:31:31 2020 From: amy at demarco.com (Amy Marrich) Date: Thu, 1 Oct 2020 11:31:31 -0500 Subject: [OpenStack-Ansible][OSA][Docs] Project missing from deployment page In-Reply-To: References: Message-ID: Mark, Any hints how I can fix it? We used to have this issue but can't remember the fix.:( Thanks, Amy On Thu, Oct 1, 2020 at 11:28 AM Mark Goddard wrote: > On Thu, 1 Oct 2020 at 17:25, Amy Marrich wrote: > > > > Hey all, > > > > Looks like OpenStack-Ansible has dropped from the deployment section > again:( > > https://docs.openstack.org/ussuri/deploy/ > > > > Any suggestions on what we need to patch to get back on the page? > > It happens for every deployment project, for every release :( > > There must be a better way... > > > > > Thanks, > > > > Amy > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Thu Oct 1 16:36:58 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 1 Oct 2020 18:36:58 +0200 Subject: [OpenStack-Ansible][OSA][Docs] Project missing from deployment page In-Reply-To: References: Message-ID: Good catch, we (Kolla) are in the same situation each cycle. This time OSA has a slightly longer delay than us so we couldn't do the courtesy of uncommenting them as well. I fixed OSA's now in [1]. 
[1] https://review.opendev.org/755592 -yoctozepto On Thu, Oct 1, 2020 at 6:27 PM Amy Marrich wrote: > > Hey all, > > Looks like OpenStack-Ansible has dropped from the deployment section again:( > https://docs.openstack.org/ussuri/deploy/ > > Any suggestions on what we need to patch to get back on the page? > > Thanks, > > Amy From gr at ham.ie Thu Oct 1 16:37:18 2020 From: gr at ham.ie (Graham Hayes) Date: Thu, 1 Oct 2020 17:37:18 +0100 Subject: [tc][all] Wallaby Cycle Community Goals In-Reply-To: <40b8e159-826c-cc07-d0f8-a9b21532d544@ham.ie> References: <40b8e159-826c-cc07-d0f8-a9b21532d544@ham.ie> Message-ID: <7794e955-9de9-4cb5-d159-9aa773438e9f@ham.ie> So, we have had a single goal proposed that does not have push back, or limitations - the privsep migration. As this may be a challenging goal, I think doing this one on its own for the cycle is a good idea. Rodolfo has agreed to continue championing this goal, so I have proposed moving the goal from the proposed to selected folder[1] 1 - https://review.opendev.org/755590 Thanks all! - Graham On 21/09/2020 18:53, Graham Hayes wrote: > Hi All > > It is that time of year / release again - and we need to choose the > community goals for Wallaby. > > Myself and Nate looked over the list of goals [1][2][3], and we are > suggesting one of the following: > > >  - Finish moving legacy python-*client CLIs to python-openstackclient >  - Move from oslo.rootwrap to oslo.privsep >  - Implement the API reference guide changes >  - All API to provide a /healthcheck URL like Keystone (and others) > provide > > Some of these goals have champions signed up already, but we need to > make sure they are still available to do them. If you are interested in > helping drive any of the goals, please speak up! > > We need to select goals in time for the new release cycle - so please > reply if there is goals you think should be included in this list, or > not included. > > Next steps after this will be helping people write a proposed goal > and then the TC selecting the ones we will pursue during Wallaby. > > Additionally, we have traditionally selected 2 goals per cycle - > however with the people available to do the work across projects > Nate and I briefly discussed reducing that to one for this cycle. > > What does the community think about this? > > Thanks, > > Graham > > 1 - https://etherpad.opendev.org/p/community-goals > 2 - https://governance.openstack.org/tc/goals/proposed/index.html > 3 - https://etherpad.opendev.org/p/community-w-series-goals > 4 - > https://governance.openstack.org/tc/goals/index.html#goal-selection-schedule > > From ruslanas at lpic.lt Thu Oct 1 16:37:10 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Thu, 1 Oct 2020 19:37:10 +0300 Subject: [ironic][ussuri][centos8] fails to introspect: my fsm encountered an exception In-Reply-To: References: Message-ID: Hi Julia, 1) I think, podman ps sorts according to starting time. [1] So if we trust in it, so ironic is first one (in the bottom) and first which is still running (not configuration run). 2.1) ok, fails same place. baremetal node show CPU2 [2] 2.2) Now, logs look same too [3] 0) regarding image I have, I can podman save (a first option from man podman-save = podman save --quiet -o alpine.tar ironic-inspector:current-tripleo) P.S. 
baremetal is alias: alias baremetal="openstack baremetal" [1] http://paste.openstack.org/show/uejDzLWpPvMdLFAJTCam/ [2] http://paste.openstack.org/show/ryYv54g9XoWSKGdCOuqh/ [3] http://paste.openstack.org/show/syKp1MtkeOa1J5aglfNj/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Thu Oct 1 16:54:29 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Thu, 1 Oct 2020 19:54:29 +0300 Subject: [ironic][ussuri][centos8] fails to introspect: my fsm encountered an exception In-Reply-To: References: Message-ID: you can access it here [1] I have done xz -9 to it in addition ;) so takes around 110 MB instead of 670MB [1] https://proxy.qwq.lt/fun/centos-binary-ironic-inspector.current-tripleo.tar.xz On Thu, 1 Oct 2020 at 19:37, Ruslanas Gžibovskis wrote: > Hi Julia, > > 1) I think, podman ps sorts according to starting time. [1] > So if we trust in it, so ironic is first one (in the bottom) and first > which is still running (not configuration run). > > 2.1) ok, fails same place. baremetal node show CPU2 [2] > 2.2) Now, logs look same too [3] > > 0) regarding image I have, I can podman save (a first option from man > podman-save = podman save --quiet -o alpine.tar > ironic-inspector:current-tripleo) > > P.S. baremetal is alias: alias baremetal="openstack baremetal" > > [1] http://paste.openstack.org/show/uejDzLWpPvMdLFAJTCam/ > [2] http://paste.openstack.org/show/ryYv54g9XoWSKGdCOuqh/ > [3] http://paste.openstack.org/show/syKp1MtkeOa1J5aglfNj/ > > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Thu Oct 1 17:36:14 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 1 Oct 2020 10:36:14 -0700 Subject: [ironic][ussuri][centos8] fails to introspect: my fsm encountered an exception In-Reply-To: References: Message-ID: If memory serves me correctly, TripleO shares a folder outside the container for the configuration and logs are written out to the container console so the container itself is not exactly helpful. Interestingly the container contents you supplied is labeled ironic-inspector, but contains the ironic release from Ussuri. I think you're going to need someone with more context into how TripleO has assembled the container assets to provide more clarity than I can provide. My feeling is likely some sort of configuration issue for inspector, since the single inspection fails and the supplied log data shows the request coming in. On Thu, Oct 1, 2020 at 9:54 AM Ruslanas Gžibovskis wrote: > > you can access it here [1] > I have done xz -9 to it in addition ;) so takes around 110 MB instead of 670MB > > > [1] https://proxy.qwq.lt/fun/centos-binary-ironic-inspector.current-tripleo.tar.xz > > On Thu, 1 Oct 2020 at 19:37, Ruslanas Gžibovskis wrote: >> >> Hi Julia, >> >> 1) I think, podman ps sorts according to starting time. [1] >> So if we trust in it, so ironic is first one (in the bottom) and first which is still running (not configuration run). >> >> 2.1) ok, fails same place. baremetal node show CPU2 [2] >> 2.2) Now, logs look same too [3] >> >> 0) regarding image I have, I can podman save (a first option from man podman-save = podman save --quiet -o alpine.tar ironic-inspector:current-tripleo) >> >> P.S. 
baremetal is alias: alias baremetal="openstack baremetal" >> >> [1] http://paste.openstack.org/show/uejDzLWpPvMdLFAJTCam/ >> [2] http://paste.openstack.org/show/ryYv54g9XoWSKGdCOuqh/ >> [3] http://paste.openstack.org/show/syKp1MtkeOa1J5aglfNj/ >> > > > -- > Ruslanas Gžibovskis > +370 6030 7030 From ruslanas at lpic.lt Thu Oct 1 18:00:39 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Thu, 1 Oct 2020 21:00:39 +0300 Subject: [ironic][ussuri][centos8] fails to introspect: my fsm encountered an exception In-Reply-To: References: Message-ID: Replying in line, not my favourite way, so not sure if i do this correctly or not. I could try to make access to this undercloud host if you want. On Thu, 1 Oct 2020, 20:36 Julia Kreger, wrote: > If memory serves me correctly, TripleO shares a folder outside the > container for the configuration and logs are written out to the > container console so the container itself is not exactly helpful. > Would you like to see exact configs? Which ones? I can grep/cat it. Same with all log files. If you need i can provide them to you. Interestingly the container contents you supplied is labeled > ironic-inspector, but contains the ironic release from Ussuri. > Yes. I use ussuri release from centos8 repos, and all the scripts it provides. > > I think you're going to need someone with more context into how > TripleO has assembled the container assets to provide more clarity > than I can provide. My feeling is likely some sort of configuration > issue for inspector, since the single inspection fails and the > supplied log data shows the request coming in. > My earlier setup, which was deployed around 4 weeks ago, worked fine, and the one i have deployed last Friday, was not working. So something, if you have reverted it, might not been reverted in centos flows? Might it be right? > > On Thu, Oct 1, 2020 at 9:54 AM Ruslanas Gžibovskis > wrote: > > > > you can access it here [1] > > I have done xz -9 to it in addition ;) so takes around 110 MB instead of > 670MB > > > > > > [1] > https://proxy.qwq.lt/fun/centos-binary-ironic-inspector.current-tripleo.tar.xz > > > > On Thu, 1 Oct 2020 at 19:37, Ruslanas Gžibovskis > wrote: > >> > >> Hi Julia, > >> > >> 1) I think, podman ps sorts according to starting time. [1] > >> So if we trust in it, so ironic is first one (in the bottom) and first > which is still running (not configuration run). > >> > >> 2.1) ok, fails same place. baremetal node show CPU2 [2] > >> 2.2) Now, logs look same too [3] > >> > >> 0) regarding image I have, I can podman save (a first option from man > podman-save = podman save --quiet -o alpine.tar > ironic-inspector:current-tripleo) > >> > >> P.S. baremetal is alias: alias baremetal="openstack baremetal" > >> > >> [1] http://paste.openstack.org/show/uejDzLWpPvMdLFAJTCam/ > >> [2] http://paste.openstack.org/show/ryYv54g9XoWSKGdCOuqh/ > >> [3] http://paste.openstack.org/show/syKp1MtkeOa1J5aglfNj/ > >> > > > > > > -- > > Ruslanas Gžibovskis > > +370 6030 7030 > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rosmaita.fossdev at gmail.com Thu Oct 1 19:28:34 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 1 Oct 2020 15:28:34 -0400 Subject: [cinder][ops] new cinder stable releases address upgrade bug Message-ID: Greetings: The Cinder project has two new stable releases: - 16.2.0 (Ussuri) - 15.4.0 (Train) (we also have a new Stein, 14.3.0, but it's not relevant to this discussion) Among other things, these address Bug #1893107 [0], which could prevent a cinder database upgrade from Train to Ussuri when the database in question had been (a) unpurged and (b) originally upgraded from Stein to Train. (A purged Stein database upgraded to Train, or a database that was new in Train, would not be subject to this issue.) The 16.2.0 [1] and 15.4.0 [2] release notes outline some strategies for addressing this issue. We recommend that you read both before attempting an upgrade to Ussuri and decide which strategy works best for you and your users. (Of course, you always read all the release notes, but we figured it wouldn't hurt to bring this particular point to your attention.) Thank you for your attention to this matter! [0] https://bugs.launchpad.net/cinder/+bug/1893107 [1] https://docs.openstack.org/releasenotes/cinder/ussuri.html#relnotes-16-2-0-stable-ussuri [2] https://docs.openstack.org/releasenotes/cinder/train.html#relnotes-15-4-0-stable-train From pierre at stackhpc.com Thu Oct 1 20:14:48 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Thu, 1 Oct 2020 22:14:48 +0200 Subject: [cloudkitty] Joining the "CloudKitty Drivers" Launchpad team Message-ID: Hello, While CloudKitty has moved to Storyboard to track its bugs [1], there is still a lot of content under the Launchpad project [2], including years of bug reports, blueprints, etc. As PTL, I would like to be able to administer the CloudKitty Launchpad project. I have requested access to the "CloudKitty Drivers" Launchpad team [3], but haven't heard back. I see some people have been on the waiting list for years [4]. Since the "OpenStack Administrators" team [5] is the owner "CloudKitty Drivers", would one of its members be able to grant me access? Many thanks, Pierre Riteau (priteau) [1] https://storyboard.openstack.org/#!/project/890 [2] https://launchpad.net/cloudkitty [3] https://launchpad.net/~cloudkitty-drivers [4] https://launchpad.net/~cloudkitty-drivers/+members#proposed [5] https://launchpad.net/~openstack-admins From skaplons at redhat.com Thu Oct 1 19:59:01 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 01 Oct 2020 21:59:01 +0200 Subject: [neutron] Drivers meeting 02.10.2020 cancelled Message-ID: <5130532.SK0LdkjbMv@p1> Hi, Due to no agenda items for our tomorrow's drivers meeting, I'm cancelling it for this week. See You on the meeting next week. Have a great weekend :) -- Slawek Kaplonski Principal Software Engineer Red Hat From melwittt at gmail.com Thu Oct 1 20:42:39 2020 From: melwittt at gmail.com (melanie witt) Date: Thu, 1 Oct 2020 13:42:39 -0700 Subject: [keystone][policy] user read-only role not working In-Reply-To: References: <174c3a06897.bdfa9ca56831.6510718612076837121@zohocorp.com> Message-ID: <2b1b796a-9136-3066-0d1b-553a41065ca5@gmail.com> On 9/25/20 07:25, Ben Nemec wrote: > I don't believe that the reader role was respected by most projects in > Train. Moving every project to support it is still a work in progress. 
This is true and for nova, we have added support for the reader role beginning in the Ussuri release as part of this spec work: https://specs.openstack.org/openstack/nova-specs/specs/ussuri/implemented/policy-defaults-refresh.html Documentation: https://docs.openstack.org/nova/latest/configuration/policy-concepts.html To accomplish a read-only user in the Train release for nova, you can DIY to a limited extent by creating custom roles and adjusting your policy.json file [1][2] accordingly. There are separate policies for GET/POST/PUT/DELETE in many cases so if you were to create a role ReadWriteUser you could specify that for POST/PUT/DELETE APIs and create another role ReadOnlyUser and specify that for GET APIs. Hope this helps, -melanie [1] https://docs.openstack.org/nova/train/configuration/sample-policy.html [2] https://docs.openstack.org/security-guide/identity/policies.html > On 9/24/20 11:58 PM, its-openstack at zohocorp.com wrote: >> Dear Openstack, >> >> We have deployed openstack train branch. >> >> This mail is in regards to the default role in openstack. we are >> trying to create a read-only user i.e, the said user can only view in >> the web portal(horizon)/using cli commands. >> the user cannot create an instance or delete an instance , the same >> with any resource. >> >> we created a user in a project test with reader role, but in >> horizon/cli able to create and delete instance and similar to other >> access also >> if you so kindly help us fix this issue would be grateful. >> >> the commands used for creation >> >> >> >> $ openstack user create --domain default --password-prompt >> test-reader at test.com >> $ openstack role add --project test --user test-reader at test.com >> reader >> >> >> >> Thanks and Regards >> sysadmin >> >> >> >> >> > From sean.mcginnis at gmx.com Thu Oct 1 20:47:56 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 1 Oct 2020 15:47:56 -0500 Subject: [cloudkitty] Joining the "CloudKitty Drivers" Launchpad team In-Reply-To: References: Message-ID: > Hello, > > While CloudKitty has moved to Storyboard to track its bugs [1], there > is still a lot of content under the Launchpad project [2], including > years of bug reports, blueprints, etc. > > As PTL, I would like to be able to administer the CloudKitty Launchpad > project. I have requested access to the "CloudKitty Drivers" Launchpad > team [3], but haven't heard back. I see some people have been on the > waiting list for years [4]. > > Since the "OpenStack Administrators" team [5] is the owner "CloudKitty > Drivers", would one of its members be able to grant me access? > > Many thanks, > Pierre Riteau (priteau) I know nothing about it, but just curious - wasn't the Launchpad content imported in to Storyboard when making that switch? Sean From fungi at yuggoth.org Thu Oct 1 21:07:39 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 1 Oct 2020 21:07:39 +0000 Subject: [cloudkitty] Joining the "CloudKitty Drivers" Launchpad team In-Reply-To: References: Message-ID: <20201001210738.3bwubnqldabiqspo@yuggoth.org> On 2020-10-01 22:14:48 +0200 (+0200), Pierre Riteau wrote: > While CloudKitty has moved to Storyboard to track its bugs [1], there > is still a lot of content under the Launchpad project [2], including > years of bug reports, blueprints, etc. > > As PTL, I would like to be able to administer the CloudKitty Launchpad > project. I have requested access to the "CloudKitty Drivers" Launchpad > team [3], but haven't heard back. 
I see some people have been on the > waiting list for years [4]. > > Since the "OpenStack Administrators" team [5] is the owner "CloudKitty > Drivers", would one of its members be able to grant me access? Yes, I've taken care of this now, you should be a member (and administrator) of the drivers team as of a few minutes ago. Please go ahead and follow the post-migration steps documented here, since it appears nobody ever did: https://docs.openstack.org/infra/storyboard/migration.html#recently-migrated -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Thu Oct 1 21:09:32 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 1 Oct 2020 21:09:32 +0000 Subject: [cloudkitty] Joining the "CloudKitty Drivers" Launchpad team In-Reply-To: References: Message-ID: <20201001210931.shxeh6sjmwl4lvwq@yuggoth.org> On 2020-10-01 15:47:56 -0500 (-0500), Sean McGinnis wrote: [...] > I know nothing about it, but just curious - wasn't the Launchpad > content imported in to Storyboard when making that switch? It was, for example this bug: https://launchpad.net/bugs/1550177 https://storyboard.openstack.org/#!/story/1550177 Unfortunately the recommendations for locking bug reporting in LP were never followed, so people have been opening bugs in both places since the migration. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From sean.mcginnis at gmx.com Thu Oct 1 21:11:18 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 1 Oct 2020 16:11:18 -0500 Subject: [cloudkitty] Joining the "CloudKitty Drivers" Launchpad team In-Reply-To: <20201001210931.shxeh6sjmwl4lvwq@yuggoth.org> References: <20201001210931.shxeh6sjmwl4lvwq@yuggoth.org> Message-ID: <4570fab7-44b1-f754-4e6b-9fe556f84040@gmx.com> >> I know nothing about it, but just curious - wasn't the Launchpad >> content imported in to Storyboard when making that switch? > It was, for example this bug: > > https://launchpad.net/bugs/1550177 > https://storyboard.openstack.org/#!/story/1550177 > > Unfortunately the recommendations for locking bug reporting in LP > were never followed, so people have been opening bugs in both places > since the migration. Doh! From walsh277072 at gmail.com Fri Oct 2 05:19:24 2020 From: walsh277072 at gmail.com (WALSH CHANG) Date: Fri, 2 Oct 2020 05:19:24 +0000 Subject: [Gnocchi-api] Gnocchi-api can not be found Message-ID: To whom it may concern, I got the error message when I restart gnocchi-api as the official installation guide. Here is the error message I got root at c3macserver2:~# service gnocchi-api start Failed to start gnocchi-api.service: Unit gnocchi-api.service not found. root at c3macserver2:~# whereis gnocchi-api gnocchi-api: /usr/bin/gnocchi-api gnocchi-api is already the newest version (4.3.2-0ubuntu2~cloud0). Someone said the new version doesn't include the gnocchi-api, so I am trying to install the previous version. https://stackoverflow.com/questions/47520779/service-gnocchi-api-not-found Need the help to install the previous version of gnocchi-api. My openstack version : stein Ubuntu 18.0.4 Thanks. Kind regards, Walsh -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hberaud at redhat.com Fri Oct 2 08:31:45 2020 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 2 Oct 2020 10:31:45 +0200 Subject: =?UTF-8?Q?Re=3A_=5Belection=5D=5Bplt=5D_Herv=C3=A9_Beraud_candidacy_for_Rele?= =?UTF-8?Q?ase_Management_PTL?= In-Reply-To: <791383fc-d53c-5965-64f0-95674467a30d@redhat.com> References: <791383fc-d53c-5965-64f0-95674467a30d@redhat.com> Message-ID: Le ven. 2 oct. 2020 à 10:10, Daniel Bengtsson a écrit : > > > Le 25/09/2020 à 14:38, Herve Beraud a écrit : > > During this cycle the main efforts will be around the team building. > > Indeed, we now have a lot of great tools and automations but we need to > > recruit new members and mentor them to keep a strong and resilient core > > team. > > By recruiting more people we could easily spread the workload on more > > core members. > > I plan to dedicate myself on this topic during this cycle. > I'm really interested to be involved and join the release team. I have > started to do review and patches on the project. > > You can count on us to help you and to mentor you to learn how release processes work :) Do not hesitate to join us on IRC and to join our meetings. Also do not hesitate to add your nick to our ping list to ensure to be notified when our meetings starts: https://etherpad.opendev.org/p/victoria-relmgt-tracking -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From ops at clustspace.com Fri Oct 2 08:45:54 2020 From: ops at clustspace.com (ops at clustspace.com) Date: Fri, 02 Oct 2020 11:45:54 +0300 Subject: [usurri][linbit][drbd] Cinder configuration Message-ID: <786204356af6be3f5bc6fc44715a151a@clustspace.com> Hello, Does anyone have manuall how to configure drbd+linstor for cinder? Official documentation is very crap. We already tryed week for it and without any results From pierre at stackhpc.com Fri Oct 2 09:29:48 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Fri, 2 Oct 2020 11:29:48 +0200 Subject: [cloudkitty] Joining the "CloudKitty Drivers" Launchpad team In-Reply-To: <20201001210738.3bwubnqldabiqspo@yuggoth.org> References: <20201001210738.3bwubnqldabiqspo@yuggoth.org> Message-ID: On Thu, 1 Oct 2020 at 23:08, Jeremy Stanley wrote: > > On 2020-10-01 22:14:48 +0200 (+0200), Pierre Riteau wrote: > > While CloudKitty has moved to Storyboard to track its bugs [1], there > > is still a lot of content under the Launchpad project [2], including > > years of bug reports, blueprints, etc. > > > > As PTL, I would like to be able to administer the CloudKitty Launchpad > > project. I have requested access to the "CloudKitty Drivers" Launchpad > > team [3], but haven't heard back. 
I see some people have been on the > > waiting list for years [4]. > > > > Since the "OpenStack Administrators" team [5] is the owner "CloudKitty > > Drivers", would one of its members be able to grant me access? > > Yes, I've taken care of this now, you should be a member (and > administrator) of the drivers team as of a few minutes ago. > > Please go ahead and follow the post-migration steps documented here, > since it appears nobody ever did: > > https://docs.openstack.org/infra/storyboard/migration.html#recently-migrated Thank you Jeremy! What would be the process to import into Storyboard all Launchpad bugs that have been opened since the initial migration? Will the import script handle it fine or should we identify and tag all new bugs? From sean.mcginnis at gmx.com Fri Oct 2 11:04:06 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 2 Oct 2020 06:04:06 -0500 Subject: [release] Release countdown for week R-1 Oct 5-9 Message-ID: <20201002110406.GA247200@sm-workstation> Development Focus ----------------- We are on the final mile of the victoria development cycle! Remember that the victoria final release will include the latest release candidate (for cycle-with-rc deliverables) or the latest intermediary release (for cycle-with-intermediary deliverables) available. October 8 is the deadline for final victoria release candidates as well as any last cycle-with-intermediary deliverables. We will then enter a quiet period until we tag the final release on October 14. Teams should be prioritizing fixing release-critical bugs, before that deadline. Otherwise it's time to start planning the Wallaby cycle, including discussing Forum and PTG sessions content, in preparation of Open Infra Summit the week of Oct 19 and the PTG on the week of Oct 26. Actions ------- Watch for any translation patches coming through on the stable/victoria branch and merge them quickly. If you discover a release-critical issue, please make sure to fix it on the master branch first, then backport the bugfix to the stable/victoria branch before triggering a new release. Please drop by #openstack-release with any questions or concerns about the upcoming release! Upcoming Deadlines & Dates -------------------------- Final Victoria RC deadline: October 8 Final Victoria release: October 14 Open Infra Summit: October 19-23 Wallaby PTG: October 26-30 From katonalala at gmail.com Fri Oct 2 11:55:20 2020 From: katonalala at gmail.com (Lajos Katona) Date: Fri, 2 Oct 2020 13:55:20 +0200 Subject: [neutron][requirements] RFE requested for neutron-lib Message-ID: Hi, I would like to ask Requirements Freeze Exception for neutron-lib 2.6.1 for Neutron Victoria. neutron-lib 2.6.1 contains a bug fix for Neutron which solves a nova gate issue (on Victoria) coming from a new feature in neutron, see [1]. [1] https://bugs.launchpad.net/neutron/+bug/1882804 (RFE: allow replacing the QoS policy of bound port) Thanks in advance Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Fri Oct 2 12:02:32 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Fri, 02 Oct 2020 14:02:32 +0200 Subject: [neutron][requirements] RFE requested for neutron-lib In-Reply-To: References: Message-ID: <8GPKHQ.GOJ6W3345YH42@est.tech> On Fri, Oct 2, 2020 at 13:55, Lajos Katona wrote: > Hi, > > I would like to ask Requirements Freeze Exception for neutron-lib > 2.6.1 for Neutron Victoria. 
> neutron-lib 2.6.1 contains a bug fix for Neutron which solves a nova > gate issue (on Victoria) coming from a new feature in neutron, see > [1]. This bugfix is needed to avoid a regression in the $ nova-manage placement heal_allocation CLI Having neutron-lib 2.6.1 released and then the neutron requirements bumped would fix the nova-next job on master[2]. And prevent a similar break of the job on stable/victoria after [3] merges. Cheers, gibi > > [1] https://bugs.launchpad.net/neutron/+bug/1882804 (RFE: allow > replacing the QoS policy of bound port) [2] https://bugs.launchpad.net/nova/+bug/1898035 [3] https://review.opendev.org/#/c/755180/ > > Thanks in advance > Lajos Katona (lajoskatona) From rosmaita.fossdev at gmail.com Fri Oct 2 12:36:57 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 2 Oct 2020 08:36:57 -0400 Subject: [usurri][linbit][drbd] Cinder configuration In-Reply-To: <786204356af6be3f5bc6fc44715a151a@clustspace.com> References: <786204356af6be3f5bc6fc44715a151a@clustspace.com> Message-ID: On 10/2/20 4:45 AM, ops at clustspace.com wrote: > Hello, > > Does anyone have manuall how to configure drbd+linstor for cinder? > Official documentation is very crap. I suggest you contact Linstor directly. Their cinder driver was updated in Ussuri, but the developer who worked on that patch has since left the company, and their third-party CI system has not been responding to Cinder patches. It would be good to let them know that some people are in fact trying to use their driver. Unfortunately, "linstor days" is the same week as the OpenStack Wallaby PTG, but if you're not attending the PTG, maybe there's a session that would be helpful (I haven't checked): https://www.linbit.com/linstor-days-sds-virtual-event/ And if you are able to get some help, the Cinder project would welcome a patch to update the documentation. > We already tryed week for it and without any results Sorry to hear that, hopefully someone on this list can give you some guidance. cheers, brian From skaplons at redhat.com Fri Oct 2 12:57:07 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 2 Oct 2020 14:57:07 +0200 Subject: [neutron][requirements] RFE requested for neutron-lib In-Reply-To: <8GPKHQ.GOJ6W3345YH42@est.tech> References: <8GPKHQ.GOJ6W3345YH42@est.tech> Message-ID: <20201002125707.aus5lmxcclqqco6l@p1> Hi, >From the neutron PoV I think that it is ok to approve that to unblock nova's gate. But I think that requirements team will to vote on that :) On Fri, Oct 02, 2020 at 02:02:32PM +0200, Balázs Gibizer wrote: > > > On Fri, Oct 2, 2020 at 13:55, Lajos Katona wrote: > > Hi, > > > > I would like to ask Requirements Freeze Exception for neutron-lib 2.6.1 > > for Neutron Victoria. > > neutron-lib 2.6.1 contains a bug fix for Neutron which solves a nova > > gate issue (on Victoria) coming from a new feature in neutron, see [1]. > > This bugfix is needed to avoid a regression in the $ nova-manage placement > heal_allocation CLI > > Having neutron-lib 2.6.1 released and then the neutron requirements bumped > would fix the nova-next job on master[2]. And prevent a similar break of the > job on stable/victoria after [3] merges. 
> > Cheers, > gibi > > > > > > [1] https://bugs.launchpad.net/neutron/+bug/1882804 (RFE: allow > > replacing the QoS policy of bound port) > > [2] https://bugs.launchpad.net/nova/+bug/1898035 > [3] https://review.opendev.org/#/c/755180/ > > > > > Thanks in advance > > Lajos Katona (lajoskatona) > > > -- Slawek Kaplonski Principal Software Engineer Red Hat From mnaser at vexxhost.com Fri Oct 2 13:03:32 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 2 Oct 2020 09:03:32 -0400 Subject: [tc] weekly meeting Message-ID: Hi everyone, Here’s a summary of what happened in our TC monthly meeting last Thursday, October 1st. # ATTENDEES (LINES SAID) - mnaser (100) - gmann (24) - evrardjp (18) - mugsie (18) - belmoreira (12) - diablo_rojo (11) - njohnston (5) - knikolla (5) - ralonsoh (4) - jungleboyj (4) - fungi (4) - ttx (3) - ricolin_ (3) - openstack (3) - smcginnis (2) - cloudnull (1) # MEETING SUMMARY 1. Roll call (mnaser, 14:00:07) 2. Follow up on past action items (mnaser, 14:07:07) - http://eavesdrop.openstack.org/meetings/tc/2020/tc.2020-09-03-14.01.html (mnaser, 14:07:24) 3. OpenStack User-facing APIs and CLIs (belmoreira) (mnaser, 14:34:35) 4. W cycle goal selection start (mnaser, 14:41:19) 5. Completion of retirement cleanup (gmann) (mnaser, 14:47:51) - https://storyboard.openstack.org/#!/story/2007686 (ralonsoh, 14:47:52) 6. Audit and clean-up tags (gmann) (mnaser, 14:48:23) - https://review.opendev.org/#/c/749363/ (mnaser, 14:48:28) 7. PTG Planning (mnaser, 14:50:54) - https://etherpad.opendev.org/p/tc-wallaby-ptg (mnaser, 14:51:02) 8. Open discussion (mnaser, 14:53:20) 9. Meeting ended at 14:59:58 UTC # ACTION ITEMS - diablo_rojo schedule session with sig-arch and k8s steering committee - mugsie Drive Wallaby cycle community goals - belmoreira draft up title, abstract and moderator for OSC gaps session - mugsie reach out to ralonsoh wrt helping with privsep - gmann to start cleaning-up assert:supports-api-interoperability tag To read the full logs of the meeting, please refer to http://eavesdrop.openstack.org/meetings/tc/2020/tc.2020-10-01-14.00.log.html Thanks! Mohammed -- Mohammed Naser VEXXHOST, Inc. From sboyron at redhat.com Fri Oct 2 13:40:44 2020 From: sboyron at redhat.com (Sebastien Boyron) Date: Fri, 2 Oct 2020 15:40:44 +0200 Subject: [requirements][oslo] Explicit requirement to setuptools. Message-ID: Hey all, Almost all openstack projects are using pbr and setuptools. A great majority are directly importing setuptools in the code (setup.py) while not explicitly requiring it in the requirements. In these cases , setuptools is only installed thanks to pbr dependency. Example 1: Having a look on nova code : http://codesearch.openstack.org/?q=setuptools&i=nope&files=&repos=openstack/nova We can see that setuptools is importer in setup.py to requires pbr whereas neither in *requirements.txt nor in *constraints.txt Example 2: This is exactly the same for neutron : http://codesearch.openstack.org/?q=setuptools&i=nope&files=&repos=openstack/neutron I discovered this while making some cleaning on rpm-packaging spec files. Spec files should reflect the content of explicits requirements of the related project. Until now there is no issue on that, but to make it proper, setuptools has been removed from all the projects rpm dependencies and relies only on pbr rpm dependency except if there is an explicit requirement on it in the project. 
If tomorrow, unlikely, pbr evolves and no longer requires setuptools, many things can fail: - All explicits imports in python code should break. - RPM generation should break too since it won't be present as a BuildRequirement. - RPM installation of old versions will pull the latest pbr version which will not require anymore and can break the execution. - RPM build can be held by distribute if there is not setuptools buildRequired anymore. As the python PEP20 claims "Explicit is better than implicit." and it should be our mantra on Openstack, especially with this kind of nasty case. https://www.python.org/dev/peps/pep-0020/ I think we should explicitly require setuptools if, and only if, we need to make an explicit import on it. This will help to have the right requirements into the RPMs while still staying simple and logical; keeping project requirements and RPM requirements in phase. I am opening the discussion and pointing to this right now, but I think we should wait for the Wallaby release before doing anything on that point to insert this modification into the regular development cycle. On a release point of view all the changes related to this proposal will be released through the classic release process and they will be landed with other projects changes, in other words it will not require a range of specific releases for projects. *SEBASTIEN BOYRON* Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From pwm2012 at gmail.com Fri Oct 2 14:17:22 2020 From: pwm2012 at gmail.com (pwm) Date: Fri, 2 Oct 2020 22:17:22 +0800 Subject: Ussuri Octavia load balancer on OVS In-Reply-To: References: Message-ID: Tested on 3 nodes and 1 compute node, it is working fine by adding the eth14 to the br-lbaas bridge. Satish your solution will save 1 physical interface, will try your solution using VLAN later. Thanks On Thu, Oct 1, 2020 at 10:27 PM pwm wrote: > ok, will try to test it on 3 nodes this weekend. I see you are using the > Linux bridge, thanks for sharing your solution. > > > On Tue, Sep 29, 2020 at 10:01 PM Satish Patel > wrote: > >> Perfect, >> >> Now try same way on 3 nodes and tell me how it goes? >> >> Because I was having issue in my environment to make it work so I used >> different method which is here on my blog >> https://satishdotpatel.github.io//openstack-ansible-octavia/ >> >> Sent from my iPhone >> >> On Sep 29, 2020, at 8:27 AM, pwm wrote: >> >>  >> Hi, >> I'm testing on AIO before moving to a 3 nodes controller. Haven't tested >> on 3 nodes controller yet but I do think it will get the same issue. >> >> On Tue, Sep 29, 2020 at 3:43 AM Satish Patel >> wrote: >> >>> Hi, >>> >>> I was dealing with the same issue a few weeks back so curious are you >>> having this problem on AIO or 3 node controllers? 
>>> >>> ~S >>> >>> On Sun, Sep 27, 2020 at 10:27 AM pwm wrote: >>> > >>> > Hi, >>> > I using the following setup for Octavia load balancer on OVS >>> > Ansible openstack_user_config.yml >>> > - network: >>> > container_bridge: "br-lbaas" >>> > container_type: "veth" >>> > container_interface: "eth14" >>> > host_bind_override: "eth14" >>> > ip_from_q: "octavia" >>> > type: "flat" >>> > net_name: "octavia" >>> > group_binds: >>> > - neutron_openvswitch_agent >>> > - octavia-worker >>> > - octavia-housekeeping >>> > - octavia-health-manager >>> > >>> > user_variables.yml >>> > octavia_provider_network_name: "octavia" >>> > octavia_provider_network_type: flat >>> > octavia_neutron_management_network_name: lbaas-mgmt >>> > >>> > /etc/netplan/50-cloud-init.yaml >>> > br-lbaas: >>> > dhcp4: no >>> > interfaces: [ bond10 ] >>> > addresses: [] >>> > parameters: >>> > stp: false >>> > forward-delay: 0 >>> > bond10: >>> > dhcp4: no >>> > addresses: [] >>> > interfaces: [ens16] >>> > parameters: >>> > mode: balance-tlb >>> > >>> > brctl show >>> > bridge name bridge id STP enabled >>> interfaces >>> > br-lbaas 8000.d60e4e80f672 no >>> 2ea34552_eth14 >>> > >>> bond10 >>> > >>> > However, I am getting the following error when creating the load >>> balance >>> > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect >>> to instance. Retrying.: requests.exceptions.ConnectionError: >>> HTTPSConnectionPool(host='172.29.233.47', port=9443 >>> > >>> > The Octavia api container unable to connect to the amphora instance. >>> > Any missing configuration, cause I need to manually add in the eth14 >>> interface to the br-lbaas bridge in order to fix the connection issue >>> > brctl addif br-lbaas eth14 >>> > >>> > Thanks >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Fri Oct 2 14:56:38 2020 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 2 Oct 2020 09:56:38 -0500 Subject: [oslo] PTO next week Message-ID: I will be on PTO next week so I won't be around to run the meeting. This might be a good time to start implementing our DPL approach, if someone else wants to pick up the job of meeting coordinator. I believe Herve had mentioned some interest in that. Anyway, I won't be here and I'm stepping down as PTL so y'all can do what you want. ;-) -Ben From fungi at yuggoth.org Fri Oct 2 14:59:12 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 2 Oct 2020 14:59:12 +0000 Subject: [cloudkitty] Joining the "CloudKitty Drivers" Launchpad team In-Reply-To: References: <20201001210738.3bwubnqldabiqspo@yuggoth.org> Message-ID: <20201002145912.sbw4iv23fuujipvw@yuggoth.org> On 2020-10-02 11:29:48 +0200 (+0200), Pierre Riteau wrote: [...] > What would be the process to import into Storyboard all Launchpad bugs > that have been opened since the initial migration? Will the import > script handle it fine or should we identify and tag all new bugs? We can rerun the import and it will add any new bugs it finds, or new comments which were left on old bugs after the previous import. It will not, however, update task statuses. If a bug was imported previously but then closed in Launchpad and left active in StoryBoard, the import will only pick up the additional comments from Launchpad for that bug, but not automatically resolve the existing task in StoryBoard for you. This is probably the behavior you'd want anyway, I'm just pointing it out for clarity. 
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jean-francois.taltavull at elca.ch Fri Oct 2 15:08:11 2020 From: jean-francois.taltavull at elca.ch (Taltavull Jean-Francois) Date: Fri, 2 Oct 2020 15:08:11 +0000 Subject: Neutron DVR, service subnets and public IPs Message-ID: Hi All, Has someone already used successfully neutron service subnets, in a DVR deployment, in order to save internet public IP addresses ? Regards, Jean-Francois From hberaud at redhat.com Fri Oct 2 15:15:35 2020 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 2 Oct 2020 17:15:35 +0200 Subject: [oslo] PTO next week In-Reply-To: References: Message-ID: Thanks Ben for the heads up. If nobody else volunteers I'll manage this one. Enjoy your vacation ;) Le ven. 2 oct. 2020 à 17:07, Ben Nemec a écrit : > I will be on PTO next week so I won't be around to run the meeting. This > might be a good time to start implementing our DPL approach, if someone > else wants to pick up the job of meeting coordinator. I believe Herve > had mentioned some interest in that. > > Anyway, I won't be here and I'm stepping down as PTL so y'all can do > what you want. ;-) > > -Ben > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From mthode at mthode.org Fri Oct 2 15:16:52 2020 From: mthode at mthode.org (Matthew Thode) Date: Fri, 2 Oct 2020 10:16:52 -0500 Subject: [requirements][oslo] Explicit requirement to setuptools. In-Reply-To: References: Message-ID: <20201002151652.oyc637rm33pybqs6@mthode.org> On 20-10-02 15:40:44, Sebastien Boyron wrote: > Hey all, > > Almost all openstack projects are using pbr and setuptools. > > A great majority are directly importing setuptools in the code (setup.py) > while not explicitly requiring it in the requirements. > In these cases , setuptools is only installed thanks to pbr dependency. > > > Example 1: Having a look on nova code : > http://codesearch.openstack.org/?q=setuptools&i=nope&files=&repos=openstack/nova > > We can see that setuptools is importer in setup.py to requires pbr whereas > neither in *requirements.txt nor in *constraints.txt > > > Example 2: This is exactly the same for neutron : > http://codesearch.openstack.org/?q=setuptools&i=nope&files=&repos=openstack/neutron > > I discovered this while making some cleaning on rpm-packaging spec files. > Spec files should reflect the content of explicits requirements of the > related project. 
> Until now there is no issue on that, but to make it proper, setuptools has > been removed from all the projects rpm dependencies and > relies only on pbr rpm dependency except if there is an explicit > requirement on it in the project. If tomorrow, unlikely, > pbr evolves and no longer requires setuptools, many things can fail: > - All explicits imports in python code should break. > - RPM generation should break too since it won't be present as a > BuildRequirement. > - RPM installation of old versions will pull the latest pbr version which > will not require anymore and can break the execution. > - RPM build can be held by distribute if there is not setuptools > buildRequired anymore. > > As the python PEP20 claims "Explicit is better than implicit." and it > should be our mantra on Openstack, especially with this kind of nasty case. > https://www.python.org/dev/peps/pep-0020/ > I think we should explicitly require setuptools if, and only if, we need to > make an explicit import on it. > > This will help to have the right requirements into the RPMs while still > staying simple and logical; keeping project requirements and > RPM requirements in phase. > > I am opening the discussion and pointing to this right now, but I think we > should wait for the Wallaby release before doing anything on that point to > insert this modification > into the regular development cycle. On a release point of view all the > changes related to this proposal will be released through the classic > release process > and they will be landed with other projects changes, in other words it will > not require a range of specific releases for projects. > > *SEBASTIEN BOYRON* > Red Hat Yes, both should be included in requirements.txt if needed/used. I'll also add that if entry_points exist in setup.cfg you will need to have setuptools installed as well. setuptools/pbr is hard to version manage, (it's managed by the venv install done by devstack at the moment iirc). I'm not sure we have a way of specifying a requirement should be listed without also specifying a version, but it could be as easy as adding it to the minimal set (list) in the check code and having people's gate fail until they fix it :P From a gentoo perspective, we've been going through and filing bugs for similiar reasons recently (setuptools being a runtime dep as well as a build dep). So this hits outside of Red Hat as well. -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From pierre at stackhpc.com Fri Oct 2 15:21:27 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Fri, 2 Oct 2020 17:21:27 +0200 Subject: [cloudkitty] Joining the "CloudKitty Drivers" Launchpad team In-Reply-To: <20201002145912.sbw4iv23fuujipvw@yuggoth.org> References: <20201001210738.3bwubnqldabiqspo@yuggoth.org> <20201002145912.sbw4iv23fuujipvw@yuggoth.org> Message-ID: On Fri, 2 Oct 2020 at 17:12, Jeremy Stanley wrote: > > On 2020-10-02 11:29:48 +0200 (+0200), Pierre Riteau wrote: > [...] > > What would be the process to import into Storyboard all Launchpad bugs > > that have been opened since the initial migration? Will the import > > script handle it fine or should we identify and tag all new bugs? > > We can rerun the import and it will add any new bugs it finds, or > new comments which were left on old bugs after the previous import. > > It will not, however, update task statuses. 
If a bug was imported > previously but then closed in Launchpad and left active in > StoryBoard, the import will only pick up the additional comments > from Launchpad for that bug, but not automatically resolve the > existing task in StoryBoard for you. This is probably the behavior > you'd want anyway, I'm just pointing it out for clarity. Then could you rerun the import please? That would be great! After that I can lock down Launchpad bugs. From openstack at nemebean.com Fri Oct 2 15:36:09 2020 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 2 Oct 2020 10:36:09 -0500 Subject: [requirements][oslo] Explicit requirement to setuptools. In-Reply-To: <20201002151652.oyc637rm33pybqs6@mthode.org> References: <20201002151652.oyc637rm33pybqs6@mthode.org> Message-ID: <267f6f1b-0a57-4038-a5eb-1d57d5c1bdb3@nemebean.com> On 10/2/20 10:16 AM, Matthew Thode wrote: > On 20-10-02 15:40:44, Sebastien Boyron wrote: >> Hey all, >> >> Almost all openstack projects are using pbr and setuptools. >> >> A great majority are directly importing setuptools in the code (setup.py) >> while not explicitly requiring it in the requirements. >> In these cases , setuptools is only installed thanks to pbr dependency. >> >> >> Example 1: Having a look on nova code : >> http://codesearch.openstack.org/?q=setuptools&i=nope&files=&repos=openstack/nova >> >> We can see that setuptools is importer in setup.py to requires pbr whereas >> neither in *requirements.txt nor in *constraints.txt >> >> >> Example 2: This is exactly the same for neutron : >> http://codesearch.openstack.org/?q=setuptools&i=nope&files=&repos=openstack/neutron >> >> I discovered this while making some cleaning on rpm-packaging spec files. >> Spec files should reflect the content of explicits requirements of the >> related project. >> Until now there is no issue on that, but to make it proper, setuptools has >> been removed from all the projects rpm dependencies and >> relies only on pbr rpm dependency except if there is an explicit >> requirement on it in the project. If tomorrow, unlikely, >> pbr evolves and no longer requires setuptools, many things can fail: >> - All explicits imports in python code should break. >> - RPM generation should break too since it won't be present as a >> BuildRequirement. >> - RPM installation of old versions will pull the latest pbr version which >> will not require anymore and can break the execution. >> - RPM build can be held by distribute if there is not setuptools >> buildRequired anymore. >> >> As the python PEP20 claims "Explicit is better than implicit." and it >> should be our mantra on Openstack, especially with this kind of nasty case. >> https://www.python.org/dev/peps/pep-0020/ >> I think we should explicitly require setuptools if, and only if, we need to >> make an explicit import on it. >> >> This will help to have the right requirements into the RPMs while still >> staying simple and logical; keeping project requirements and >> RPM requirements in phase. >> >> I am opening the discussion and pointing to this right now, but I think we >> should wait for the Wallaby release before doing anything on that point to >> insert this modification >> into the regular development cycle. On a release point of view all the >> changes related to this proposal will be released through the classic >> release process >> and they will be landed with other projects changes, in other words it will >> not require a range of specific releases for projects. 
>> >> *SEBASTIEN BOYRON* >> Red Hat > > Yes, both should be included in requirements.txt if needed/used. I'll > also add that if entry_points exist in setup.cfg you will need to have > setuptools installed as well. > > setuptools/pbr is hard to version manage, (it's managed by the venv > install done by devstack at the moment iirc). I'm not sure we have a > way of specifying a requirement should be listed without also specifying > a version, but it could be as easy as adding it to the minimal set > (list) in the check code and having people's gate fail until they fix it :P > > From a gentoo perspective, we've been going through and filing bugs for > similiar reasons recently (setuptools being a runtime dep as well as a > build dep). So this hits outside of Red Hat as well. > This was the same conclusion we came to in the Oslo meeting last week. We mostly wanted to give people a chance to object before we patch bombed all of the non-compliant projects. :-) From hberaud at redhat.com Fri Oct 2 16:05:57 2020 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 2 Oct 2020 18:05:57 +0200 Subject: [requirements][oslo] Explicit requirement to setuptools. In-Reply-To: <267f6f1b-0a57-4038-a5eb-1d57d5c1bdb3@nemebean.com> References: <20201002151652.oyc637rm33pybqs6@mthode.org> <267f6f1b-0a57-4038-a5eb-1d57d5c1bdb3@nemebean.com> Message-ID: Agreed +1 I'm in favor of avoiding implicit imports, it could help a lot of people to move away from doubts! Le ven. 2 oct. 2020 à 17:39, Ben Nemec a écrit : > > > On 10/2/20 10:16 AM, Matthew Thode wrote: > > On 20-10-02 15:40:44, Sebastien Boyron wrote: > >> Hey all, > >> > >> Almost all openstack projects are using pbr and setuptools. > >> > >> A great majority are directly importing setuptools in the code > (setup.py) > >> while not explicitly requiring it in the requirements. > >> In these cases , setuptools is only installed thanks to pbr dependency. > >> > >> > >> Example 1: Having a look on nova code : > >> > http://codesearch.openstack.org/?q=setuptools&i=nope&files=&repos=openstack/nova > >> > >> We can see that setuptools is importer in setup.py to requires pbr > whereas > >> neither in *requirements.txt nor in *constraints.txt > >> > >> > >> Example 2: This is exactly the same for neutron : > >> > http://codesearch.openstack.org/?q=setuptools&i=nope&files=&repos=openstack/neutron > >> > >> I discovered this while making some cleaning on rpm-packaging spec > files. > >> Spec files should reflect the content of explicits requirements of the > >> related project. > >> Until now there is no issue on that, but to make it proper, setuptools > has > >> been removed from all the projects rpm dependencies and > >> relies only on pbr rpm dependency except if there is an explicit > >> requirement on it in the project. If tomorrow, unlikely, > >> pbr evolves and no longer requires setuptools, many things can fail: > >> - All explicits imports in python code should break. > >> - RPM generation should break too since it won't be present as a > >> BuildRequirement. > >> - RPM installation of old versions will pull the latest pbr version > which > >> will not require anymore and can break the execution. > >> - RPM build can be held by distribute if there is not setuptools > >> buildRequired anymore. > >> > >> As the python PEP20 claims "Explicit is better than implicit." and it > >> should be our mantra on Openstack, especially with this kind of nasty > case. 
> >> https://www.python.org/dev/peps/pep-0020/ > >> I think we should explicitly require setuptools if, and only if, we > need to > >> make an explicit import on it. > >> > >> This will help to have the right requirements into the RPMs while still > >> staying simple and logical; keeping project requirements and > >> RPM requirements in phase. > >> > >> I am opening the discussion and pointing to this right now, but I think > we > >> should wait for the Wallaby release before doing anything on that point > to > >> insert this modification > >> into the regular development cycle. On a release point of view all the > >> changes related to this proposal will be released through the classic > >> release process > >> and they will be landed with other projects changes, in other words it > will > >> not require a range of specific releases for projects. > >> > >> *SEBASTIEN BOYRON* > >> Red Hat > > > > Yes, both should be included in requirements.txt if needed/used. I'll > > also add that if entry_points exist in setup.cfg you will need to have > > setuptools installed as well. > > > > setuptools/pbr is hard to version manage, (it's managed by the venv > > install done by devstack at the moment iirc). I'm not sure we have a > > way of specifying a requirement should be listed without also specifying > > a version, but it could be as easy as adding it to the minimal set > > (list) in the check code and having people's gate fail until they fix it > :P > > > > From a gentoo perspective, we've been going through and filing bugs for > > similiar reasons recently (setuptools being a runtime dep as well as a > > build dep). So this hits outside of Red Hat as well. > > > > This was the same conclusion we came to in the Oslo meeting last week. > We mostly wanted to give people a chance to object before we patch > bombed all of the non-compliant projects. :-) > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Fri Oct 2 16:13:06 2020 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 2 Oct 2020 18:13:06 +0200 Subject: [requirements][oslo] Explicit requirement to setuptools. In-Reply-To: References: <20201002151652.oyc637rm33pybqs6@mthode.org> <267f6f1b-0a57-4038-a5eb-1d57d5c1bdb3@nemebean.com> Message-ID: To help to track this topic I propose to set the topic of the related patches to `setuptools-explicit` Le ven. 2 oct. 2020 à 18:05, Herve Beraud a écrit : > Agreed +1 > > I'm in favor of avoiding implicit imports, it could help a lot of people > to move away from doubts! > > Le ven. 2 oct. 
2020 à 17:39, Ben Nemec a écrit : > >> >> >> On 10/2/20 10:16 AM, Matthew Thode wrote: >> > On 20-10-02 15:40:44, Sebastien Boyron wrote: >> >> Hey all, >> >> >> >> Almost all openstack projects are using pbr and setuptools. >> >> >> >> A great majority are directly importing setuptools in the code >> (setup.py) >> >> while not explicitly requiring it in the requirements. >> >> In these cases , setuptools is only installed thanks to pbr dependency. >> >> >> >> >> >> Example 1: Having a look on nova code : >> >> >> http://codesearch.openstack.org/?q=setuptools&i=nope&files=&repos=openstack/nova >> >> >> >> We can see that setuptools is importer in setup.py to requires pbr >> whereas >> >> neither in *requirements.txt nor in *constraints.txt >> >> >> >> >> >> Example 2: This is exactly the same for neutron : >> >> >> http://codesearch.openstack.org/?q=setuptools&i=nope&files=&repos=openstack/neutron >> >> >> >> I discovered this while making some cleaning on rpm-packaging spec >> files. >> >> Spec files should reflect the content of explicits requirements of the >> >> related project. >> >> Until now there is no issue on that, but to make it proper, setuptools >> has >> >> been removed from all the projects rpm dependencies and >> >> relies only on pbr rpm dependency except if there is an explicit >> >> requirement on it in the project. If tomorrow, unlikely, >> >> pbr evolves and no longer requires setuptools, many things can fail: >> >> - All explicits imports in python code should break. >> >> - RPM generation should break too since it won't be present as a >> >> BuildRequirement. >> >> - RPM installation of old versions will pull the latest pbr version >> which >> >> will not require anymore and can break the execution. >> >> - RPM build can be held by distribute if there is not setuptools >> >> buildRequired anymore. >> >> >> >> As the python PEP20 claims "Explicit is better than implicit." and it >> >> should be our mantra on Openstack, especially with this kind of nasty >> case. >> >> https://www.python.org/dev/peps/pep-0020/ >> >> I think we should explicitly require setuptools if, and only if, we >> need to >> >> make an explicit import on it. >> >> >> >> This will help to have the right requirements into the RPMs while still >> >> staying simple and logical; keeping project requirements and >> >> RPM requirements in phase. >> >> >> >> I am opening the discussion and pointing to this right now, but I >> think we >> >> should wait for the Wallaby release before doing anything on that >> point to >> >> insert this modification >> >> into the regular development cycle. On a release point of view all the >> >> changes related to this proposal will be released through the classic >> >> release process >> >> and they will be landed with other projects changes, in other words it >> will >> >> not require a range of specific releases for projects. >> >> >> >> *SEBASTIEN BOYRON* >> >> Red Hat >> > >> > Yes, both should be included in requirements.txt if needed/used. I'll >> > also add that if entry_points exist in setup.cfg you will need to have >> > setuptools installed as well. >> > >> > setuptools/pbr is hard to version manage, (it's managed by the venv >> > install done by devstack at the moment iirc). 
I'm not sure we have a >> > way of specifying a requirement should be listed without also specifying >> > a version, but it could be as easy as adding it to the minimal set >> > (list) in the check code and having people's gate fail until they fix >> it :P >> > >> > From a gentoo perspective, we've been going through and filing bugs for >> > similiar reasons recently (setuptools being a runtime dep as well as a >> > build dep). So this hits outside of Red Hat as well. >> > >> >> This was the same conclusion we came to in the Oslo meeting last week. >> We mostly wanted to give people a chance to object before we patch >> bombed all of the non-compliant projects. :-) >> >> > > -- > Hervé Beraud > Senior Software Engineer > Red Hat - Openstack Oslo > irc: hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Fri Oct 2 16:17:06 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Fri, 2 Oct 2020 18:17:06 +0200 Subject: [cloudkitty] Core team cleanup Message-ID: Hello, In the cloudkitty-core team there is zhangguoqing, whose email address (zhang.guoqing at 99cloud.net) is bouncing with "551 5.1.1 recipient is not exist". I propose to remove them from the core team. zhangguoqing, if you read us and want to stay in the team, please contact me. Best wishes, Pierre Riteau (priteau) From rafaelweingartner at gmail.com Fri Oct 2 16:35:50 2020 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Fri, 2 Oct 2020 13:35:50 -0300 Subject: [cloudkitty] Core team cleanup In-Reply-To: References: Message-ID: I guess it is fine to do the cleanup. Maybe, we could wait 24/48 hours before doing so; just to give enough time for the person to respond to the e-mail (if they are still active in the community somehow). 
On Fri, Oct 2, 2020 at 1:18 PM Pierre Riteau wrote: > Hello, > > In the cloudkitty-core team there is zhangguoqing, whose email address > (zhang.guoqing at 99cloud.net) is bouncing with "551 5.1.1 recipient is > not exist". > I propose to remove them from the core team. > > zhangguoqing, if you read us and want to stay in the team, please contact > me. > > Best wishes, > Pierre Riteau (priteau) > > -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Fri Oct 2 16:37:37 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Fri, 2 Oct 2020 18:37:37 +0200 Subject: [cloudkitty] Core team cleanup In-Reply-To: References: Message-ID: I should have said that I will wait until the end of next week before doing the cleanup. On Fri, 2 Oct 2020 at 18:36, Rafael Weingärtner wrote: > > I guess it is fine to do the cleanup. Maybe, we could wait 24/48 hours before doing so; just to give enough time for the person to respond to the e-mail (if they are still active in the community somehow). > > On Fri, Oct 2, 2020 at 1:18 PM Pierre Riteau wrote: >> >> Hello, >> >> In the cloudkitty-core team there is zhangguoqing, whose email address >> (zhang.guoqing at 99cloud.net) is bouncing with "551 5.1.1 recipient is >> not exist". >> I propose to remove them from the core team. >> >> zhangguoqing, if you read us and want to stay in the team, please contact me. >> >> Best wishes, >> Pierre Riteau (priteau) >> > > > -- > Rafael Weingärtner From rafaelweingartner at gmail.com Fri Oct 2 16:38:53 2020 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Fri, 2 Oct 2020 13:38:53 -0300 Subject: [cloudkitty] Core team cleanup In-Reply-To: References: Message-ID: Sounds good to me. On Fri, Oct 2, 2020 at 1:38 PM Pierre Riteau wrote: > I should have said that I will wait until the end of next week before > doing the cleanup. > > On Fri, 2 Oct 2020 at 18:36, Rafael Weingärtner > wrote: > > > > I guess it is fine to do the cleanup. Maybe, we could wait 24/48 hours > before doing so; just to give enough time for the person to respond to the > e-mail (if they are still active in the community somehow). > > > > On Fri, Oct 2, 2020 at 1:18 PM Pierre Riteau > wrote: > >> > >> Hello, > >> > >> In the cloudkitty-core team there is zhangguoqing, whose email address > >> (zhang.guoqing at 99cloud.net) is bouncing with "551 5.1.1 recipient is > >> not exist". > >> I propose to remove them from the core team. > >> > >> zhangguoqing, if you read us and want to stay in the team, please > contact me. > >> > >> Best wishes, > >> Pierre Riteau (priteau) > >> > > > > > > -- > > Rafael Weingärtner > -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From dbengt at redhat.com Fri Oct 2 08:16:09 2020 From: dbengt at redhat.com (Daniel Bengtsson) Date: Fri, 2 Oct 2020 10:16:09 +0200 Subject: =?UTF-8?Q?Re=3a_=5belection=5d=5bplt=5d_Herv=c3=a9_Beraud_candidacy?= =?UTF-8?Q?_for_Release_Management_PTL?= In-Reply-To: References: Message-ID: <791383fc-d53c-5965-64f0-95674467a30d@redhat.com> Le 25/09/2020 à 14:38, Herve Beraud a écrit : > During this cycle the main efforts will be around the team building. > Indeed, we now have a lot of great tools and automations but we need to > recruit new members and mentor them to keep a strong and resilient core > team. > By recruiting more people we could easily spread the workload on more > core members. 
> I plan to dedicate myself on this topic during this cycle. I'm really interested to be involved and join the release team. I have started to do review and patches on the project. From dbengt at redhat.com Fri Oct 2 08:53:18 2020 From: dbengt at redhat.com (Daniel Bengtsson) Date: Fri, 2 Oct 2020 10:53:18 +0200 Subject: =?UTF-8?Q?Re=3a_=5belection=5d=5bplt=5d_Herv=c3=a9_Beraud_candidacy?= =?UTF-8?Q?_for_Release_Management_PTL?= In-Reply-To: References: <791383fc-d53c-5965-64f0-95674467a30d@redhat.com> Message-ID: <433995af-f3ca-1c71-53f0-1e7dc0c50f77@redhat.com> Le 02/10/2020 à 10:31, Herve Beraud a écrit : > You can count on us to help you and to mentor you to learn how release > processes work :) > Do not hesitate to join us on IRC and to join our meetings. Thanks a lot. I will do that. From fungi at yuggoth.org Fri Oct 2 18:10:09 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 2 Oct 2020 18:10:09 +0000 Subject: [cloudkitty] Joining the "CloudKitty Drivers" Launchpad team In-Reply-To: References: <20201001210738.3bwubnqldabiqspo@yuggoth.org> <20201002145912.sbw4iv23fuujipvw@yuggoth.org> Message-ID: <20201002181009.r6bigtuv5eppmnri@yuggoth.org> On 2020-10-02 17:21:27 +0200 (+0200), Pierre Riteau wrote: [...] > Then could you rerun the import please? That would be great! After > that I can lock down Launchpad bugs. I've completed an import update from the cloudkitty project on Launchpad to the openstack/cloudkitty project in StoryBoard. Please double-check that did what you expect and then feel free to proceed locking down bug reporting in LP. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From satish.txt at gmail.com Fri Oct 2 19:10:46 2020 From: satish.txt at gmail.com (Satish Patel) Date: Fri, 2 Oct 2020 15:10:46 -0400 Subject: Ussuri Octavia load balancer on OVS In-Reply-To: References: Message-ID: Nice, how did you add eth14 to br-lbaas bridge? Are you running OVS? On Fri, Oct 2, 2020 at 10:17 AM pwm wrote: > > Tested on 3 nodes and 1 compute node, it is working fine by adding the eth14 to the br-lbaas bridge. > Satish your solution will save 1 physical interface, will try your solution using VLAN later. > > Thanks > > > > > On Thu, Oct 1, 2020 at 10:27 PM pwm wrote: >> >> ok, will try to test it on 3 nodes this weekend. I see you are using the Linux bridge, thanks for sharing your solution. >> >> >> On Tue, Sep 29, 2020 at 10:01 PM Satish Patel wrote: >>> >>> Perfect, >>> >>> Now try same way on 3 nodes and tell me how it goes? >>> >>> Because I was having issue in my environment to make it work so I used different method which is here on my blog https://satishdotpatel.github.io//openstack-ansible-octavia/ >>> >>> Sent from my iPhone >>> >>> On Sep 29, 2020, at 8:27 AM, pwm wrote: >>> >>>  >>> Hi, >>> I'm testing on AIO before moving to a 3 nodes controller. Haven't tested on 3 nodes controller yet but I do think it will get the same issue. >>> >>> On Tue, Sep 29, 2020 at 3:43 AM Satish Patel wrote: >>>> >>>> Hi, >>>> >>>> I was dealing with the same issue a few weeks back so curious are you >>>> having this problem on AIO or 3 node controllers? 
>>>> >>>> ~S >>>> >>>> On Sun, Sep 27, 2020 at 10:27 AM pwm wrote: >>>> > >>>> > Hi, >>>> > I using the following setup for Octavia load balancer on OVS >>>> > Ansible openstack_user_config.yml >>>> > - network: >>>> > container_bridge: "br-lbaas" >>>> > container_type: "veth" >>>> > container_interface: "eth14" >>>> > host_bind_override: "eth14" >>>> > ip_from_q: "octavia" >>>> > type: "flat" >>>> > net_name: "octavia" >>>> > group_binds: >>>> > - neutron_openvswitch_agent >>>> > - octavia-worker >>>> > - octavia-housekeeping >>>> > - octavia-health-manager >>>> > >>>> > user_variables.yml >>>> > octavia_provider_network_name: "octavia" >>>> > octavia_provider_network_type: flat >>>> > octavia_neutron_management_network_name: lbaas-mgmt >>>> > >>>> > /etc/netplan/50-cloud-init.yaml >>>> > br-lbaas: >>>> > dhcp4: no >>>> > interfaces: [ bond10 ] >>>> > addresses: [] >>>> > parameters: >>>> > stp: false >>>> > forward-delay: 0 >>>> > bond10: >>>> > dhcp4: no >>>> > addresses: [] >>>> > interfaces: [ens16] >>>> > parameters: >>>> > mode: balance-tlb >>>> > >>>> > brctl show >>>> > bridge name bridge id STP enabled interfaces >>>> > br-lbaas 8000.d60e4e80f672 no 2ea34552_eth14 >>>> > bond10 >>>> > >>>> > However, I am getting the following error when creating the load balance >>>> > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.29.233.47', port=9443 >>>> > >>>> > The Octavia api container unable to connect to the amphora instance. >>>> > Any missing configuration, cause I need to manually add in the eth14 interface to the br-lbaas bridge in order to fix the connection issue >>>> > brctl addif br-lbaas eth14 >>>> > >>>> > Thanks From rfolco at redhat.com Fri Oct 2 19:34:11 2020 From: rfolco at redhat.com (Rafael Folco) Date: Fri, 2 Oct 2020 16:34:11 -0300 Subject: [tripleo] TripleO CI Summary: Unified Sprint 33 Message-ID: Greetings, The TripleO CI team has just completed **Unified Sprint 33** (Sept 11 thru Oct 01). The following is a summary of completed work during this sprint cycle: - Added more jobs to RHOS 16.2 pipelines. - Applied new promoter configuration engine changes to consolidate all promoters into the same code. - Started a PoC implementation of the new dependency pipeline: - https://hackmd.io/iA6AyTn8Rn2ximODx6FmKg - Started elastic-recheck containerization work: - https://hackmd.io/HQ5hyGAOSuG44Le2x6YzUw - Created upstream parent jobs a.k.a content-provider to build containers for multiple jobs: - https://hackmd.io/vRMVeZXZRgK5Vxqi6ENUDg#QampA - Ruck/Rover recorded notes [1]. The planned work for the next sprint puts on hold the work started in the previous sprint and focuses on the following: - Add content-provider jobs to all CI layouts and templates. - Start prep work for stable/victoria branching in RDO. - Continue investigations to fix broken upgrade jobs. The Ruck and Rover for this sprint are Ronelle Landy (rlandy) and Soniya Vyas (soniya29). Please direct questions or queries to them regarding CI status or issues in #tripleo, ideally to whomever has the ‘|ruck’ suffix on their nick. Ruck/rover notes to be tracked in hackmd [2]. Thanks, rfolco [1] https://hackmd.io/7Q0YO5JKS0agcf9qwoD4IQ [2] https://hackmd.io/1qxCqYzATfudl1cKvaQ8-w -- Folco -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mthode at mthode.org Fri Oct 2 20:18:54 2020 From: mthode at mthode.org (Matthew Thode) Date: Fri, 2 Oct 2020 15:18:54 -0500 Subject: [neutron][requirements] RFE requested for neutron-lib In-Reply-To: <20201002125707.aus5lmxcclqqco6l@p1> References: <8GPKHQ.GOJ6W3345YH42@est.tech> <20201002125707.aus5lmxcclqqco6l@p1> Message-ID: <20201002201854.fhycqqkzqv3ivfvk@mthode.org> On 20-10-02 14:57:07, Slawek Kaplonski wrote: > Hi, > > From the neutron PoV I think that it is ok to approve that to unblock nova's > gate. But I think that requirements team will to vote on that :) > > On Fri, Oct 02, 2020 at 02:02:32PM +0200, Balázs Gibizer wrote: > > > > > > On Fri, Oct 2, 2020 at 13:55, Lajos Katona wrote: > > > Hi, > > > > > > I would like to ask Requirements Freeze Exception for neutron-lib 2.6.1 > > > for Neutron Victoria. > > > neutron-lib 2.6.1 contains a bug fix for Neutron which solves a nova > > > gate issue (on Victoria) coming from a new feature in neutron, see [1]. > > > > This bugfix is needed to avoid a regression in the $ nova-manage placement > > heal_allocation CLI > > > > Having neutron-lib 2.6.1 released and then the neutron requirements bumped > > would fix the nova-next job on master[2]. And prevent a similar break of the > > job on stable/victoria after [3] merges. > > > > Cheers, > > gibi > > > > > > > > > > [1] https://bugs.launchpad.net/neutron/+bug/1882804 (RFE: allow > > > replacing the QoS policy of bound port) > > > > [2] https://bugs.launchpad.net/nova/+bug/1898035 > > [3] https://review.opendev.org/#/c/755180/ A diff would have been useful to provide :P (between 2.6.0 and the proposed 2.6.1. I found one though and approve this as reqs lead. -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From pwm2012 at gmail.com Sat Oct 3 07:56:09 2020 From: pwm2012 at gmail.com (pwm) Date: Sat, 3 Oct 2020 15:56:09 +0800 Subject: Ussuri Octavia load balancer on OVS In-Reply-To: References: Message-ID: On all controller and compute node, run this ip link set eth14 up brctl addif br-lbaas eth14 On Sat, Oct 3, 2020 at 3:10 AM Satish Patel wrote: > Nice, how did you add eth14 to br-lbaas bridge? Are you running OVS? > > On Fri, Oct 2, 2020 at 10:17 AM pwm wrote: > > > > Tested on 3 nodes and 1 compute node, it is working fine by adding the > eth14 to the br-lbaas bridge. > > Satish your solution will save 1 physical interface, will try your > solution using VLAN later. > > > > Thanks > > > > > > > > > > On Thu, Oct 1, 2020 at 10:27 PM pwm wrote: > >> > >> ok, will try to test it on 3 nodes this weekend. I see you are using > the Linux bridge, thanks for sharing your solution. > >> > >> > >> On Tue, Sep 29, 2020 at 10:01 PM Satish Patel > wrote: > >>> > >>> Perfect, > >>> > >>> Now try same way on 3 nodes and tell me how it goes? > >>> > >>> Because I was having issue in my environment to make it work so I used > different method which is here on my blog > https://satishdotpatel.github.io//openstack-ansible-octavia/ > >>> > >>> Sent from my iPhone > >>> > >>> On Sep 29, 2020, at 8:27 AM, pwm wrote: > >>> > >>>  > >>> Hi, > >>> I'm testing on AIO before moving to a 3 nodes controller. Haven't > tested on 3 nodes controller yet but I do think it will get the same issue. 
> >>> > >>> On Tue, Sep 29, 2020 at 3:43 AM Satish Patel > wrote: > >>>> > >>>> Hi, > >>>> > >>>> I was dealing with the same issue a few weeks back so curious are you > >>>> having this problem on AIO or 3 node controllers? > >>>> > >>>> ~S > >>>> > >>>> On Sun, Sep 27, 2020 at 10:27 AM pwm wrote: > >>>> > > >>>> > Hi, > >>>> > I using the following setup for Octavia load balancer on OVS > >>>> > Ansible openstack_user_config.yml > >>>> > - network: > >>>> > container_bridge: "br-lbaas" > >>>> > container_type: "veth" > >>>> > container_interface: "eth14" > >>>> > host_bind_override: "eth14" > >>>> > ip_from_q: "octavia" > >>>> > type: "flat" > >>>> > net_name: "octavia" > >>>> > group_binds: > >>>> > - neutron_openvswitch_agent > >>>> > - octavia-worker > >>>> > - octavia-housekeeping > >>>> > - octavia-health-manager > >>>> > > >>>> > user_variables.yml > >>>> > octavia_provider_network_name: "octavia" > >>>> > octavia_provider_network_type: flat > >>>> > octavia_neutron_management_network_name: lbaas-mgmt > >>>> > > >>>> > /etc/netplan/50-cloud-init.yaml > >>>> > br-lbaas: > >>>> > dhcp4: no > >>>> > interfaces: [ bond10 ] > >>>> > addresses: [] > >>>> > parameters: > >>>> > stp: false > >>>> > forward-delay: 0 > >>>> > bond10: > >>>> > dhcp4: no > >>>> > addresses: [] > >>>> > interfaces: [ens16] > >>>> > parameters: > >>>> > mode: balance-tlb > >>>> > > >>>> > brctl show > >>>> > bridge name bridge id STP enabled > interfaces > >>>> > br-lbaas 8000.d60e4e80f672 no > 2ea34552_eth14 > >>>> > > bond10 > >>>> > > >>>> > However, I am getting the following error when creating the load > balance > >>>> > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not > connect to instance. Retrying.: requests.exceptions.ConnectionError: > HTTPSConnectionPool(host='172.29.233.47', port=9443 > >>>> > > >>>> > The Octavia api container unable to connect to the amphora instance. > >>>> > Any missing configuration, cause I need to manually add in the > eth14 interface to the br-lbaas bridge in order to fix the connection issue > >>>> > brctl addif br-lbaas eth14 > >>>> > > >>>> > Thanks > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pwm2012 at gmail.com Sat Oct 3 07:56:55 2020 From: pwm2012 at gmail.com (pwm) Date: Sat, 3 Oct 2020 15:56:55 +0800 Subject: Ussuri Octavia load balancer on OVS In-Reply-To: References: Message-ID: I'm running on OVS On Sat, Oct 3, 2020 at 3:56 PM pwm wrote: > On all controller and compute node, run this > ip link set eth14 up > brctl addif br-lbaas eth14 > > On Sat, Oct 3, 2020 at 3:10 AM Satish Patel wrote: > >> Nice, how did you add eth14 to br-lbaas bridge? Are you running OVS? >> >> On Fri, Oct 2, 2020 at 10:17 AM pwm wrote: >> > >> > Tested on 3 nodes and 1 compute node, it is working fine by adding the >> eth14 to the br-lbaas bridge. >> > Satish your solution will save 1 physical interface, will try your >> solution using VLAN later. >> > >> > Thanks >> > >> > >> > >> > >> > On Thu, Oct 1, 2020 at 10:27 PM pwm wrote: >> >> >> >> ok, will try to test it on 3 nodes this weekend. I see you are using >> the Linux bridge, thanks for sharing your solution. >> >> >> >> >> >> On Tue, Sep 29, 2020 at 10:01 PM Satish Patel >> wrote: >> >>> >> >>> Perfect, >> >>> >> >>> Now try same way on 3 nodes and tell me how it goes? 
>> >>> >> >>> Because I was having issue in my environment to make it work so I >> used different method which is here on my blog >> https://satishdotpatel.github.io//openstack-ansible-octavia/ >> >>> >> >>> Sent from my iPhone >> >>> >> >>> On Sep 29, 2020, at 8:27 AM, pwm wrote: >> >>> >> >>>  >> >>> Hi, >> >>> I'm testing on AIO before moving to a 3 nodes controller. Haven't >> tested on 3 nodes controller yet but I do think it will get the same issue. >> >>> >> >>> On Tue, Sep 29, 2020 at 3:43 AM Satish Patel >> wrote: >> >>>> >> >>>> Hi, >> >>>> >> >>>> I was dealing with the same issue a few weeks back so curious are you >> >>>> having this problem on AIO or 3 node controllers? >> >>>> >> >>>> ~S >> >>>> >> >>>> On Sun, Sep 27, 2020 at 10:27 AM pwm wrote: >> >>>> > >> >>>> > Hi, >> >>>> > I using the following setup for Octavia load balancer on OVS >> >>>> > Ansible openstack_user_config.yml >> >>>> > - network: >> >>>> > container_bridge: "br-lbaas" >> >>>> > container_type: "veth" >> >>>> > container_interface: "eth14" >> >>>> > host_bind_override: "eth14" >> >>>> > ip_from_q: "octavia" >> >>>> > type: "flat" >> >>>> > net_name: "octavia" >> >>>> > group_binds: >> >>>> > - neutron_openvswitch_agent >> >>>> > - octavia-worker >> >>>> > - octavia-housekeeping >> >>>> > - octavia-health-manager >> >>>> > >> >>>> > user_variables.yml >> >>>> > octavia_provider_network_name: "octavia" >> >>>> > octavia_provider_network_type: flat >> >>>> > octavia_neutron_management_network_name: lbaas-mgmt >> >>>> > >> >>>> > /etc/netplan/50-cloud-init.yaml >> >>>> > br-lbaas: >> >>>> > dhcp4: no >> >>>> > interfaces: [ bond10 ] >> >>>> > addresses: [] >> >>>> > parameters: >> >>>> > stp: false >> >>>> > forward-delay: 0 >> >>>> > bond10: >> >>>> > dhcp4: no >> >>>> > addresses: [] >> >>>> > interfaces: [ens16] >> >>>> > parameters: >> >>>> > mode: balance-tlb >> >>>> > >> >>>> > brctl show >> >>>> > bridge name bridge id STP enabled >> interfaces >> >>>> > br-lbaas 8000.d60e4e80f672 no >> 2ea34552_eth14 >> >>>> > >> bond10 >> >>>> > >> >>>> > However, I am getting the following error when creating the load >> balance >> >>>> > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not >> connect to instance. Retrying.: requests.exceptions.ConnectionError: >> HTTPSConnectionPool(host='172.29.233.47', port=9443 >> >>>> > >> >>>> > The Octavia api container unable to connect to the amphora >> instance. >> >>>> > Any missing configuration, cause I need to manually add in the >> eth14 interface to the br-lbaas bridge in order to fix the connection issue >> >>>> > brctl addif br-lbaas eth14 >> >>>> > >> >>>> > Thanks >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Sat Oct 3 16:56:42 2020 From: amotoki at gmail.com (Akihiro Motoki) Date: Sun, 4 Oct 2020 01:56:42 +0900 Subject: [all][dev] Python.h not found after openstack-tox-py36 is switch to CentOS8 Message-ID: Hi, openstack-tox-py36 is now run on CentOS-8 after commit 31c4a7a18e2bd43d2893563b992c683c95baed6f was merged. I noticed Python.h is not installed so we cannot install python modules which require compilation. An example is found at [1]. Do individual projects explicitly install python-dev or will we handle it in the common zuul job? Before proposing a fix, I would like to hear our direction. 
[1] https://zuul.opendev.org/t/openstack/build/696978c5614c41bd88b5b2a1d6c2174a/log/job-output.txt#968 Thanks, Akihiro Motoki (irc: amotoki) From fungi at yuggoth.org Sat Oct 3 17:04:15 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 3 Oct 2020 17:04:15 +0000 Subject: [all][dev] Python.h not found after openstack-tox-py36 is switch to CentOS8 In-Reply-To: References: Message-ID: <20201003170414.ar55fmuqenakx6ue@yuggoth.org> On 2020-10-04 01:56:42 +0900 (+0900), Akihiro Motoki wrote: > openstack-tox-py36 is now run on CentOS-8 after commit > 31c4a7a18e2bd43d2893563b992c683c95baed6f was merged. > I noticed Python.h is not installed so we cannot install python > modules which require compilation. An example is found at [1]. > Do individual projects explicitly install python-dev or will we handle > it in the common zuul job? > Before proposing a fix, I would like to hear our direction. > > [1] https://zuul.opendev.org/t/openstack/build/696978c5614c41bd88b5b2a1d6c2174a/log/job-output.txt#968 We've also observed that the common mysqladmin invocations in tools/test_setup.sh files in many projects are failing due to differences in how MySQL/MariaDB are started or configured on CentOS vs Ubuntu. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From juliaashleykreger at gmail.com Sat Oct 3 17:38:00 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Sat, 3 Oct 2020 10:38:00 -0700 Subject: [all][dev] Python.h not found after openstack-tox-py36 is switch to CentOS8 In-Reply-To: <20201003170414.ar55fmuqenakx6ue@yuggoth.org> References: <20201003170414.ar55fmuqenakx6ue@yuggoth.org> Message-ID: I've been hacking on a change to handle mysql[1] for ironic which at least seems to get us past the initial failure with mysql, but we also carry postgresql support in our script and that seems like it is going to need more work. Currently fighting trying to update the pg_hba.conf file. [1]: https://review.opendev.org/755905 On Sat, Oct 3, 2020 at 10:06 AM Jeremy Stanley wrote: > > On 2020-10-04 01:56:42 +0900 (+0900), Akihiro Motoki wrote: > > openstack-tox-py36 is now run on CentOS-8 after commit > > 31c4a7a18e2bd43d2893563b992c683c95baed6f was merged. > > I noticed Python.h is not installed so we cannot install python > > modules which require compilation. An example is found at [1]. > > Do individual projects explicitly install python-dev or will we handle > > it in the common zuul job? > > Before proposing a fix, I would like to hear our direction. > > > > [1] https://zuul.opendev.org/t/openstack/build/696978c5614c41bd88b5b2a1d6c2174a/log/job-output.txt#968 > > We've also observed that the common mysqladmin invocations in > tools/test_setup.sh files in many projects are failing due to > differences in how MySQL/MariaDB are started or configured on CentOS > vs Ubuntu. > -- > Jeremy Stanley From xavpaice at gmail.com Sun Oct 4 06:54:46 2020 From: xavpaice at gmail.com (Xav Paice) Date: Sun, 4 Oct 2020 19:54:46 +1300 Subject: [charms] Zaza bundle tests Message-ID: I was writing a patch recently and in order to test it, I needed to make changes to the test bundles. I ended up making the same change across several files, and missing one (thanks to the reviewer for noticing that!). 
Some of the other projects I'm involved with use symlinks to a base bundle with overlays: ./tests$ ls -l bundles/ total 20 -rw-rw---- 2 xav xav 5046 Sep 29 14:56 base.yaml lrwxrwxrwx 1 xav xav 9 Sep 29 10:01 bionic.yaml -> base.yaml lrwxrwxrwx 1 xav xav 9 Sep 29 10:01 focal.yaml -> base.yaml drwxrwx--x 2 xav xav 4096 Sep 29 10:01 overlays lrwxrwxrwx 1 xav xav 9 Sep 29 10:01 xenial.yaml -> base.yaml ./tests$ ls bundles/overlays/ bionic.yaml.j2 focal.yaml.j2 local-charm-overlay.yaml.j2 xenial.yaml.j2 This means that I can edit base.yaml just once, and if a change is specific to any of the particular bundles there's a place for that in the individual overlays. When we have bundles for each release going back to Mitaka, this could be quite an effort saver. I gather there's been some discussion around this already, and would be keen to see if there's a reason folks might avoid that pattern? -------------- next part -------------- An HTML attachment was scrubbed... URL: From frickler at offenerstapel.de Sun Oct 4 07:29:19 2020 From: frickler at offenerstapel.de (Jens Harbott) Date: Sun, 4 Oct 2020 09:29:19 +0200 Subject: [all][dev] Python.h not found after openstack-tox-py36 is switch to CentOS8 In-Reply-To: <20201003170414.ar55fmuqenakx6ue@yuggoth.org> References: <20201003170414.ar55fmuqenakx6ue@yuggoth.org> Message-ID: <4fd07df7-78bc-d367-51e5-00ba32d5fc43@offenerstapel.de> On 03.10.20 19:04, Jeremy Stanley wrote: > On 2020-10-04 01:56:42 +0900 (+0900), Akihiro Motoki wrote: >> openstack-tox-py36 is now run on CentOS-8 after commit >> 31c4a7a18e2bd43d2893563b992c683c95baed6f was merged. >> I noticed Python.h is not installed so we cannot install python >> modules which require compilation. An example is found at [1]. >> Do individual projects explicitly install python-dev or will we handle >> it in the common zuul job? >> Before proposing a fix, I would like to hear our direction. >> >> [1] https://zuul.opendev.org/t/openstack/build/696978c5614c41bd88b5b2a1d6c2174a/log/job-output.txt#968 > > We've also observed that the common mysqladmin invocations in > tools/test_setup.sh files in many projects are failing due to > differences in how MySQL/MariaDB are started or configured on CentOS > vs Ubuntu. I think this change should have been announced beforehand and given projects enough time to prepare for the switch. Or maybe even the TC recommendations for testing environments should follow what is happening in the real world and acknowledge that Bionic is indeed still a platform that is being used for testing. I've thus proposed a revert of the change at [0] in order to unblock the gate for those affected (including Nova and Neutron), allowing to reevaluate the necessity and usefulness of this change. [0] https://review.opendev.org/755954 From iurygregory at gmail.com Sun Oct 4 08:33:39 2020 From: iurygregory at gmail.com (Iury Gregory) Date: Sun, 4 Oct 2020 10:33:39 +0200 Subject: [all][dev] Python.h not found after openstack-tox-py36 is switch to CentOS8 In-Reply-To: <4fd07df7-78bc-d367-51e5-00ba32d5fc43@offenerstapel.de> References: <20201003170414.ar55fmuqenakx6ue@yuggoth.org> <4fd07df7-78bc-d367-51e5-00ba32d5fc43@offenerstapel.de> Message-ID: Em dom., 4 de out. de 2020 às 09:31, Jens Harbott escreveu: > On 03.10.20 19:04, Jeremy Stanley wrote: > > On 2020-10-04 01:56:42 +0900 (+0900), Akihiro Motoki wrote: > >> openstack-tox-py36 is now run on CentOS-8 after commit > >> 31c4a7a18e2bd43d2893563b992c683c95baed6f was merged. 
> >> I noticed Python.h is not installed so we cannot install python > >> modules which require compilation. An example is found at [1]. > >> Do individual projects explicitly install python-dev or will we handle > >> it in the common zuul job? > >> Before proposing a fix, I would like to hear our direction. > >> > >> [1] > https://zuul.opendev.org/t/openstack/build/696978c5614c41bd88b5b2a1d6c2174a/log/job-output.txt#968 > > > > We've also observed that the common mysqladmin invocations in > > tools/test_setup.sh files in many projects are failing due to > > differences in how MySQL/MariaDB are started or configured on CentOS > > vs Ubuntu. > > I think this change should have been announced beforehand and given > projects enough time to prepare for the switch. Or maybe even the TC > recommendations for testing environments should follow what is happening > in the real world and acknowledge that Bionic is indeed still a platform > that is being used for testing. I've thus proposed a revert of the > change at [0] in order to unblock the gate for those affected (including > Nova and Neutron), allowing to reevaluate the necessity and usefulness > of this change. > > [0] https://review.opendev.org/755954 > > ++, I agree with you Jens. This type of change deserves an email and testing by each project. (maybe add a non-voting job with centos8 so it won't block the gate of the projects) -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Sun Oct 4 13:11:46 2020 From: satish.txt at gmail.com (Satish Patel) Date: Sun, 4 Oct 2020 09:11:46 -0400 Subject: Ussuri Octavia load balancer on OVS In-Reply-To: References: Message-ID: <0F1332BF-3D42-4D54-B9DC-19783447329C@gmail.com> Nice!! Sent from my iPhone > On Oct 3, 2020, at 3:56 AM, pwm wrote: > >  > On all controller and compute node, run this > ip link set eth14 up > brctl addif br-lbaas eth14 > > On Sat, Oct 3, 2020 at 3:10 AM Satish Patel wrote: >> Nice, how did you add eth14 to br-lbaas bridge? Are you running OVS? >> >> On Fri, Oct 2, 2020 at 10:17 AM pwm wrote: >> > >> > Tested on 3 nodes and 1 compute node, it is working fine by adding the eth14 to the br-lbaas bridge. >> > Satish your solution will save 1 physical interface, will try your solution using VLAN later. >> > >> > Thanks >> > >> > >> > >> > >> > On Thu, Oct 1, 2020 at 10:27 PM pwm wrote: >> >> >> >> ok, will try to test it on 3 nodes this weekend. I see you are using the Linux bridge, thanks for sharing your solution. >> >> >> >> >> >> On Tue, Sep 29, 2020 at 10:01 PM Satish Patel wrote: >> >>> >> >>> Perfect, >> >>> >> >>> Now try same way on 3 nodes and tell me how it goes? >> >>> >> >>> Because I was having issue in my environment to make it work so I used different method which is here on my blog https://satishdotpatel.github.io//openstack-ansible-octavia/ >> >>> >> >>> Sent from my iPhone >> >>> >> >>> On Sep 29, 2020, at 8:27 AM, pwm wrote: >> >>> >> >>>  >> >>> Hi, >> >>> I'm testing on AIO before moving to a 3 nodes controller. Haven't tested on 3 nodes controller yet but I do think it will get the same issue. 
>> >>> >> >>> On Tue, Sep 29, 2020 at 3:43 AM Satish Patel wrote: >> >>>> >> >>>> Hi, >> >>>> >> >>>> I was dealing with the same issue a few weeks back so curious are you >> >>>> having this problem on AIO or 3 node controllers? >> >>>> >> >>>> ~S >> >>>> >> >>>> On Sun, Sep 27, 2020 at 10:27 AM pwm wrote: >> >>>> > >> >>>> > Hi, >> >>>> > I using the following setup for Octavia load balancer on OVS >> >>>> > Ansible openstack_user_config.yml >> >>>> > - network: >> >>>> > container_bridge: "br-lbaas" >> >>>> > container_type: "veth" >> >>>> > container_interface: "eth14" >> >>>> > host_bind_override: "eth14" >> >>>> > ip_from_q: "octavia" >> >>>> > type: "flat" >> >>>> > net_name: "octavia" >> >>>> > group_binds: >> >>>> > - neutron_openvswitch_agent >> >>>> > - octavia-worker >> >>>> > - octavia-housekeeping >> >>>> > - octavia-health-manager >> >>>> > >> >>>> > user_variables.yml >> >>>> > octavia_provider_network_name: "octavia" >> >>>> > octavia_provider_network_type: flat >> >>>> > octavia_neutron_management_network_name: lbaas-mgmt >> >>>> > >> >>>> > /etc/netplan/50-cloud-init.yaml >> >>>> > br-lbaas: >> >>>> > dhcp4: no >> >>>> > interfaces: [ bond10 ] >> >>>> > addresses: [] >> >>>> > parameters: >> >>>> > stp: false >> >>>> > forward-delay: 0 >> >>>> > bond10: >> >>>> > dhcp4: no >> >>>> > addresses: [] >> >>>> > interfaces: [ens16] >> >>>> > parameters: >> >>>> > mode: balance-tlb >> >>>> > >> >>>> > brctl show >> >>>> > bridge name bridge id STP enabled interfaces >> >>>> > br-lbaas 8000.d60e4e80f672 no 2ea34552_eth14 >> >>>> > bond10 >> >>>> > >> >>>> > However, I am getting the following error when creating the load balance >> >>>> > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.29.233.47', port=9443 >> >>>> > >> >>>> > The Octavia api container unable to connect to the amphora instance. >> >>>> > Any missing configuration, cause I need to manually add in the eth14 interface to the br-lbaas bridge in order to fix the connection issue >> >>>> > brctl addif br-lbaas eth14 >> >>>> > >> >>>> > Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sun Oct 4 13:42:19 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 4 Oct 2020 13:42:19 +0000 Subject: [all][dev] Python.h not found after openstack-tox-py36 is switch to CentOS8 In-Reply-To: <4fd07df7-78bc-d367-51e5-00ba32d5fc43@offenerstapel.de> References: <20201003170414.ar55fmuqenakx6ue@yuggoth.org> <4fd07df7-78bc-d367-51e5-00ba32d5fc43@offenerstapel.de> Message-ID: <20201004134219.mjmy5htew5wleyyc@yuggoth.org> On 2020-10-04 09:29:19 +0200 (+0200), Jens Harbott wrote: [...] > maybe even the TC recommendations for testing environments should > follow what is happening in the real world and acknowledge that > Bionic is indeed still a platform that is being used for testing. [...] The TC recommending testing on old releases instead of new ones would basically mean we just stop being compatible with new distro versions. I'll grant the change was late and should have been pushed in at the start of the Victoria cycle instead of the end of it. In future any test platform changes should just happen at the start of the cycle and let the project developers cope with being unable to merge any new changes until they get their repos working on the PTI platforms for that cycle. 
Every time we try to provide an opportunity for smooth platform transitions, we always wind up at the end of the cycle (or even after the release) still trying to get jobs switched over to them. We really need to stop seeing this as a nicety and recognize it as the necessity it really is. If the TC says we're going to test on Python 3.6 because that's the version in CentOS 8, then we should be testing Python 3.6 on CentOS 8 and not some other entirely different distro which just happens to have an old Python version which is similar. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From katonalala at gmail.com Sun Oct 4 14:56:33 2020 From: katonalala at gmail.com (Lajos Katona) Date: Sun, 4 Oct 2020 16:56:33 +0200 Subject: [neutron] Bug deputy report for week of September 28th Message-ID: Hi, I was Neutron bug deputy the week before, below is a short summary of the week's reported bugs. - Critical Bugs - https://bugs.launchpad.net/neutron/+bug/1897713 (networking-ovn-tempest-dsvm-ovs-release job fails on stable/train) *Fixed* - https://bugs.launchpad.net/neutron/+bug/1897898 (Scenario test test_multiple_ports_portrange_remote is unstable ) *Unassigned* - https://bugs.launchpad.net/neutron/+bug/1898211 (OVN scenario jobs are failing on Ubuntu Focal due to ebtables-nft used by default ) *In Progress* - High Bugs - https://bugs.launchpad.net/neutron/+bug/1897796 (Scenario test test_subport_connectivity failing from time to time) *Assigned, in Progress* - https://bugs.launchpad.net/neutron/+bug/1897637 (explicity_egress_direct prevents learning of local MACs and causes flooding of ingress packets ) *In Progress*, perhaps duplicate of https://bugs.launchpad.net/neutron/+bug/1884708 - Medium Bugs - https://bugs.launchpad.net/neutron/+bug/1897921 ([OVN] "test_agent_show" failing, chassis not found ) *Fixed* - https://bugs.launchpad.net/neutron/+bug/1897928 (TestOvnDbNotifyHandler test cases failing due to missing attribute "_RowEventHandler__watched_events" ) *Fixed on Master* - Low Bugs - https://bugs.launchpad.net/neutron/+bug/1897546 (binding_levels property is missing from openstack port show command) - Not fully confirmed Bugs - https://bugs.launchpad.net/neutron/+bug/1897580 (SG rules aren't properly applied if CIDR of the tenant network is also matches the host network CIDR), I check it next week again. Iwas a much relaxed week in numbers. One critical bug is unassigned: https://bugs.launchpad.net/neutron/+bug/1897898 Regards Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... URL: From anlin.kong at gmail.com Mon Oct 5 00:07:10 2020 From: anlin.kong at gmail.com (Lingxian Kong) Date: Mon, 5 Oct 2020 13:07:10 +1300 Subject: [Trove] Project udpate Message-ID: Hi there, As the official Victoria release is approaching and it has been a long time silence for Trove in the upstream, I think it's good time for me as the Trove PTL for the last 3 dev cycles to have a project update. The things that will be described below have not been achieved in one single dev cycle, but are some significant changes since the 'dark time' of Trove project in the past. Tips hat to those who have made contributions to Trove project ever before. 
## Service tenant configuration Service tenant configuration was added in Stein release, before that, it's impossible to deploy Trove in the public cloud (even not for some private cloud due to security concerns) because the user may have access to the guest instance which contains sensitive data in the config files, the users can also perform operations towards either storage or networking resources which may bring much management overhead and make it easy to break the database functionality. With service tenant configuration (which is currently the default setting in devstack), almost all the cloud resources(except the Swift objects for backup data) created for a Trove instance are only visible to the Trove service user. As Trove users, they can only see a Trove instance, but know nothing about the Nova VM, Cinder volume, Neutron management network, and security groups under the hood. The only way to operate Trove instances is to interact with Trove API. ## Message queue security concerns To do database operations, trove controller services communicate with trove-guestagent service inside the instance via message queue service (i.e. RabbitMQ in most environments). In the meantime, trove-guestagent periodically sends status update information to trove-conductor through the same messaging system. In the current design, the RabbitMQ username and password need to be configured in the trove-guestagent config file, which brought significant security concern for the cloud deployers in the past. If the guest instance is compromised, then guest credentials are compromised, which means the messaging system is compromised. As part of the solution, a security enhancement was introduced in the Ocata release, using encryption keys to protect the messages between the control plane and the guest instances. First, the rabbitmq credential should only have access to trove services. Second, even with the rabbitmq credential and the message encryption key of the particular instance, the communication from the guest agent and trove controller services are restricted in the context of that particular instance, other instances are not affected as the malicious user doesn't know their message encryption keys. Additionally, since Ussuri, trove is running in service tenant model in devstack by default which is also the recommended deployment configuration. Most of the cloud resources(except the Swift objects for backup data) created for a trove instance should only be visible to the trove service user, which also could decrease the attack surface. ## Datastore images Before Victoria, Trove provided a bunch of diskimage-builder elements for building different datastore images. As contributors were leaving, most of the elements just became unmaintained except for MySQL and MariaDB. To solve the problem, database containerization was introduced in Victoria dev cycle, so that the database service is running in a docker container inside the guest instance, trove guest agent is pulling container image for a particular datastore when initializing guest instance. Trove is not maintaining those container images. That means, since Victoria, the cloud provider only needs to maintain one single datastore image which only contains common code that is datastore independent. However, for backward compatibility, the cloud provider still needs to create different datastores but using the same Glance image ID. Additionally, using database container also makes it much easier for database operations and management. 
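As a rough sketch of what that means on the guest (these are not Trove's exact commands; the image tag, paths and password handling below are only illustrative), the guest agent essentially does the equivalent of:

  # pull the container image configured for the chosen datastore
  docker pull mariadb:10.4
  # run the database in a container, keeping data and config on the guest's volume
  docker run -d --name database --restart unless-stopped \
    --network host \
    -v /var/lib/mysql:/var/lib/mysql \
    -v /etc/mysql:/etc/mysql \
    -e MYSQL_ROOT_PASSWORD=<secret managed by the guest agent> \
    mariadb:10.4

Because everything datastore-specific lives in the container image, the guest image itself can stay datastore independent.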
To upgrade from the older version to Victoria onwards, the Trove user has to create backups before upgrading, then create instances from the backup, so downtime is expected. ## Supported datastores Trove used to support several datastores such as MySQL, MariaDB, PostgreSQL, MongoDB, CouchDB, etc. Most of them became unmaintained because of a lack of maintainers in the community. Currently, only MySQL and MariaDB drivers are fully supported and tested in the upstream. PostgreSQL driver was refactored in Victoria dev cycle and is in experimental status again. Adding extra datastores should be quite easy by implementing the interfaces between trove task manager and guest agent. Again, no need to maintain separate datastore images thanks to the container technology. ## Instance backup At the same time as we were moving to use container for database services, we also moved the backup and restore functions out of trove guest agent code because the backup function is usually using some 3rd party software which we don't want to pre-install inside the datastore image. As a result, we are using container as well for database backup and restore. For more information about the backup container image, see https://lingxiankong.github.io/2020-04-14-database-backup-docker-image.html. ## Others There are many other improvements not mentioned above added to Trove since Train, e.g. * Access configuration for the instance. * The swift backend customization for backup. * Online volume resize support. * XFS disk format for database data volume. * API documentation improvement. * etc. By the way, Catalyst Cloud has already deployed Trove (in Alpha) in our public cloud in New Zealand, we are getting feedback from customers. I believe there are other deployers already have Trove in their production but running an old version because of previous upstream situation in the past. If you are one of them and interested in upgrading to the latest, please either reply to this email or send personal email to me, I would be very happy to provide any help or guidance. For those who are still in evaluation phase, you are also welcome to reach out for any questions. I'm always in the position to help in #openstack-trove IRC channel. --- Lingxian Kong Senior Software Engineer Catalyst Cloud www.catalystcloud.nz -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.macnaughton at canonical.com Mon Oct 5 06:02:17 2020 From: chris.macnaughton at canonical.com (Chris MacNaughton) Date: Mon, 5 Oct 2020 08:02:17 +0200 Subject: [charms] Zaza bundle tests In-Reply-To: References: Message-ID: <88f989d3-0496-d696-13a1-4272e971945b@canonical.com> On 04-10-2020 08:54, Xav Paice wrote: > I was writing a patch recently and in order to test it, I needed to > make changes to the test bundles.  I ended up making the same change > across several files, and missing one (thanks to the reviewer for > noticing that!). > > Some of the other projects I'm involved with use symlinks to a base > bundle with overlays: -- snip -- > This means that I can edit base.yaml just once, and if a change is > specific to any of the particular bundles there's a place for that in > the individual overlays.  When we have bundles for each release going > back to Mitaka, this could be quite an effort saver. I'd be quite interested in seeing where this could go, as there is a lot of duplication in the charms' test code that could probably be dramatically reduced by taking this approach! 
Could you propose a change to one of the repos as an example that we could functionally validate, as well as confirming the assumption that the only differences between the bundles is the series, and openstack-origin/source configs? Chris MacNaughton -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_0x74BAF13D12E6A841.asc Type: application/pgp-keys Size: 10616 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From xavpaice at gmail.com Mon Oct 5 06:37:56 2020 From: xavpaice at gmail.com (Xav Paice) Date: Mon, 5 Oct 2020 19:37:56 +1300 Subject: [charms] Zaza bundle tests In-Reply-To: <88f989d3-0496-d696-13a1-4272e971945b@canonical.com> References: <88f989d3-0496-d696-13a1-4272e971945b@canonical.com> Message-ID: absolutely - could do designate, which I'm working on now anyway, and has a large number of bundles. You're right, an illustration and something to check against would be helpful. On Mon, 5 Oct 2020 at 19:04, Chris MacNaughton < chris.macnaughton at canonical.com> wrote: > On 04-10-2020 08:54, Xav Paice wrote: > > I was writing a patch recently and in order to test it, I needed to > > make changes to the test bundles. I ended up making the same change > > across several files, and missing one (thanks to the reviewer for > > noticing that!). > > > > Some of the other projects I'm involved with use symlinks to a base > > bundle with overlays: > -- snip -- > > This means that I can edit base.yaml just once, and if a change is > > specific to any of the particular bundles there's a place for that in > > the individual overlays. When we have bundles for each release going > > back to Mitaka, this could be quite an effort saver. > > I'd be quite interested in seeing where this could go, as there is a lot > of duplication in the charms' test code that could probably be > dramatically reduced by taking this approach! Could you propose a change > to one of the repos as an example that we could functionally validate, > as well as confirming the assumption that the only differences between > the bundles is the series, and openstack-origin/source configs? > > Chris MacNaughton > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyppe at gmail.com Mon Oct 5 07:48:41 2020 From: tonyppe at gmail.com (Tony Pearce) Date: Mon, 5 Oct 2020 15:48:41 +0800 Subject: [Octavia][kolla-ansible][kayobe] - network configuration knowledge gathering Message-ID: Hi all, Openstack version is Train Deployed via Kayobe I am trying to deploy octavia lbaas but hitting some blockers with regards to how this should be set up. I think the current issue is the lack of neutron bridge for the octavia network and I cannot locate how to achieve this from the documentation. I have this setup at the moment which I've added another layer 2 network provisioned to the controller and compute node, for running octavia lbaas: [Controller node]------------octavia network-----------[Compute node] However as there's no bridge, the octavia instance cannot connect to it. 
The exact error from the logs: 2020-10-05 14:37:34.070 6 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Mapping physical network physnet3 to bridge broct 2020-10-05 14:37:34.070 6 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Bridge broct for physical network physnet3 does not Bridge "broct" does exist but it's not a neutron bridge: [root at juc-kocon1-prd kolla]# brctl show bridge name bridge id STP enabled interfaces brext 8000.001a4a16019a no eth5 p-brext-phy broct 8000.001a4a160173 no eth6 docker0 8000.0242f5ed2aac no [root at juc-kocon1-prd kolla]# I've been through the docs a few times but I am unable to locate this info. Most likely the information is there but I am unsure what I need to look for, hence missing it. Would any of you be able to help shed light on this or point me to the documentation? Thank you Tony Pearce -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Mon Oct 5 08:19:49 2020 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 5 Oct 2020 09:19:49 +0100 Subject: [Octavia][kolla-ansible][kayobe] - network configuration knowledge gathering In-Reply-To: References: Message-ID: Following up in IRC: http://eavesdrop.openstack.org/irclogs/%23openstack-kolla/%23openstack-kolla.2020-10-05.log.html#t2020-10-05T06:44:47 On Mon, 5 Oct 2020 at 08:50, Tony Pearce wrote: > > Hi all, > > Openstack version is Train > Deployed via Kayobe > > I am trying to deploy octavia lbaas but hitting some blockers with regards to how this should be set up. I think the current issue is the lack of neutron bridge for the octavia network and I cannot locate how to achieve this from the documentation. > > I have this setup at the moment which I've added another layer 2 network provisioned to the controller and compute node, for running octavia lbaas: > > [Controller node]------------octavia network-----------[Compute node] > > However as there's no bridge, the octavia instance cannot connect to it. The exact error from the logs: > > 2020-10-05 14:37:34.070 6 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Mapping physical network physnet3 to bridge broct > 2020-10-05 14:37:34.070 6 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Bridge broct for physical network physnet3 does not > > Bridge "broct" does exist but it's not a neutron bridge: > > [root at juc-kocon1-prd kolla]# brctl show > bridge name bridge id STP enabled interfaces > brext 8000.001a4a16019a no eth5 > p-brext-phy > broct 8000.001a4a160173 no eth6 > docker0 8000.0242f5ed2aac no > [root at juc-kocon1-prd kolla]# > > > I've been through the docs a few times but I am unable to locate this info. Most likely the information is there but I am unsure what I need to look for, hence missing it. > > Would any of you be able to help shed light on this or point me to the documentation? > > Thank you > > Tony Pearce > From alex.kavanagh at canonical.com Mon Oct 5 08:39:38 2020 From: alex.kavanagh at canonical.com (Alex Kavanagh) Date: Mon, 5 Oct 2020 09:39:38 +0100 Subject: [charms] Zaza bundle tests In-Reply-To: References: Message-ID: Hi I think I remember that a conscious decision was made to avoid using symlinks for the bundles due to the hell that openstack-mojo-specs descended into? Liam may want to wade in on this? Cheers Alex. On Sun, Oct 4, 2020 at 7:57 AM Xav Paice wrote: > I was writing a patch recently and in order to test it, I needed to make > changes to the test bundles. 
I ended up making the same change across > several files, and missing one (thanks to the reviewer for noticing that!). > > Some of the other projects I'm involved with use symlinks to a base bundle > with overlays: > > ./tests$ ls -l bundles/ > total 20 > -rw-rw---- 2 xav xav 5046 Sep 29 14:56 base.yaml > lrwxrwxrwx 1 xav xav 9 Sep 29 10:01 bionic.yaml -> base.yaml > lrwxrwxrwx 1 xav xav 9 Sep 29 10:01 focal.yaml -> base.yaml > drwxrwx--x 2 xav xav 4096 Sep 29 10:01 overlays > lrwxrwxrwx 1 xav xav 9 Sep 29 10:01 xenial.yaml -> base.yaml > ./tests$ ls bundles/overlays/ > bionic.yaml.j2 focal.yaml.j2 local-charm-overlay.yaml.j2 xenial.yaml.j2 > > This means that I can edit base.yaml just once, and if a change is > specific to any of the particular bundles there's a place for that in the > individual overlays. When we have bundles for each release going back to > Mitaka, this could be quite an effort saver. > > I gather there's been some discussion around this already, and would be > keen to see if there's a reason folks might avoid that pattern? > -- Alex Kavanagh - Software Engineer OpenStack Engineering - Data Centre Development - Canonical Ltd -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Oct 5 08:57:59 2020 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 5 Oct 2020 10:57:59 +0200 Subject: [largescale-sig] Next meeting: October 7, 16utc Message-ID: <9b198ee4-049e-4dcd-1ce4-d92b02fb7abe@openstack.org> Hi everyone, Our next meeting will be a EU-US-friendly meeting, this Wednesday, October 7 at 16 UTC[1] in the #openstack-meeting-3 channel on IRC: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20201007T16 Feel free to add topics to our agenda at: https://etherpad.openstack.org/p/large-scale-sig-meeting A reminder of the TODOs we had from last meeting, in case you have time to make progress on them: - ttx to draft a plan to tackle "meaningful monitoring" as a new SIG workstream - all to describe briefly how you solved metrics/billing in your deployment in https://etherpad.openstack.org/p/large-scale-sig-documentation - masahito to push latest patches to oslo.metrics Talk to you all later, -- Thierry Carrez From sean.mcginnis at gmx.com Mon Oct 5 10:19:26 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 5 Oct 2020 05:19:26 -0500 Subject: [ptl][release] Wrapping up Victoria Message-ID: <20201005101926.GA421629@sm-workstation> Hey everyone, We are getting close to being done with the Victoria development cycle. This is one more reminder that there are just a few days left to finish up any critical work. This Thursday, October 8, is the deadline for any new release candidate releases for any cycle-based services. After Thursday, no new releases should occur for Victoria development until the final coordinated release next Wednesday. After this Thursday, the release team will be proposing a patch to re-tag the last RC release of each deliverable to be the final version for Victoria. We encourage PTLs to +1 this patch to capture that metadata with the release. This is not the time to -1 the patch and ask for more time to push anything final in. After this Thursday, we should basically be done with Victoria development. Only if there is a very, very critical issue discovered will we allow any updates after Thursday and before the final release next Wednesday. 
Any other bugfixes and important updates will need to wait until after next week's coordinated release to be part of a stable release off of the stable/victoria branch. Please let us know if you have any questions as we wrap things up. --- Sean McGinnis and the entire Release Management Team From chris.macnaughton at canonical.com Mon Oct 5 11:03:44 2020 From: chris.macnaughton at canonical.com (Chris MacNaughton) Date: Mon, 5 Oct 2020 13:03:44 +0200 Subject: [charms] Zaza bundle tests In-Reply-To: References: Message-ID: > I think I remember that a conscious decision was made to avoid using > symlinks for the bundles due to the hell that openstack-mojo-specs > descended into?  Liam may want to wade in on this? > Broadly, this is one of the reasons I proposed submitting a review of a single charm to be a practical example of what this would look like, and how complex it would be. It should also help us, as a project, identify if the repetition is enough that symlinks, or refactoring the library, would be worthwhile. Chris -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_0x74BAF13D12E6A841.asc Type: application/pgp-keys Size: 10616 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From aurelien.lourot at canonical.com Mon Oct 5 11:45:30 2020 From: aurelien.lourot at canonical.com (Aurelien Lourot) Date: Mon, 5 Oct 2020 13:45:30 +0200 Subject: [charms] Zaza bundle tests In-Reply-To: References: Message-ID: Wasn't one of the reasons for this that we're striving to have stand-alone / complete bundles? So that we could "just give" the one-file bundle to someone and it would "just work". Anyway I'm also in favour of any solution leading to less repetition, i.e. what was suggested by Xav. Aurelien -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Mon Oct 5 13:17:47 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Mon, 5 Oct 2020 15:17:47 +0200 Subject: [cloudkitty] Joining the "CloudKitty Drivers" Launchpad team In-Reply-To: <20201002181009.r6bigtuv5eppmnri@yuggoth.org> References: <20201001210738.3bwubnqldabiqspo@yuggoth.org> <20201002145912.sbw4iv23fuujipvw@yuggoth.org> <20201002181009.r6bigtuv5eppmnri@yuggoth.org> Message-ID: Thanks a lot Jeremy, it looks great! I've now locked down access to Launchpad bugs. On Fri, 2 Oct 2020 at 20:18, Jeremy Stanley wrote: > > On 2020-10-02 17:21:27 +0200 (+0200), Pierre Riteau wrote: > [...] > > Then could you rerun the import please? That would be great! After > > that I can lock down Launchpad bugs. > > I've completed an import update from the cloudkitty project on > Launchpad to the openstack/cloudkitty project in StoryBoard. Please > double-check that did what you expect and then feel free to proceed > locking down bug reporting in LP. > -- > Jeremy Stanley From whayutin at redhat.com Mon Oct 5 15:16:08 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 5 Oct 2020 09:16:08 -0600 Subject: [tripleo][ci] RFC, supported releases of TripleO Message-ID: Greetings, I just wanted to give folks the opportunity to speak up before we make any changes. Please review https://releases.openstack.org/ As we are branching master -> victoria it looks like we can definitely remove stable/rocky from both upstream and RDO software factory. Any comments about rocky? 
Stein is end of maintenance on 2020 November 11th. Any thoughts on dropping stein from upstream and rdo software factory? Thanks! Please be specific if you have comments as to which release you are referring to. -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Mon Oct 5 15:40:02 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 05 Oct 2020 17:40:02 +0200 Subject: [placement][nova][cinder][neutron][blazar][tc][zun] Placement governance switch(back) In-Reply-To: <048DHQ.GTG67G0IXWTI1@est.tech> References: <3063992.oiGErgHkdL@whitebase.usersys.redhat.com> <048DHQ.GTG67G0IXWTI1@est.tech> Message-ID: On Mon, Sep 28, 2020 at 13:04, Balázs Gibizer wrote: > > > On Thu, Sep 24, 2020 at 18:12, Luigi Toscano > wrote: >> On Thursday, 24 September 2020 17:23:36 CEST Stephen Finucane wrote: >> >>> Assuming no one steps forward for the Placement PTL role, it would >>> appear to me that we have two options. Either we look at >>> transitioning >>> Placement to a PTL-less project, or we move it back under nova >>> governance. To be honest, given how important placement is to nova >>> and >>> other projects now, I'm uncomfortable with the idea of not having a >>> point person who is ultimately responsible for things like cutting >>> a >>> release (yes, delegation is encouraged but someone needs to herd >>> the >>> cats). At the same time, I do realize that placement is used by >>> more >>> that nova now so nova cores and what's left of the separate >>> placement >>> core team shouldn't be the only ones making this decision. >>> >>> So, assuming the worst happens and placement is left without a PTL >>> for >>> Victoria, what do we want to do? >> >> I mentioned this on IRC, but just for completeness, there is another >> option: >> have the Nova candidate PTL (I assume there is just one) also apply >> for >> Placement PTL, and handle the 2 realms in a personal union. > > As far as I know I'm the only nova PTL candidate so basically you > asking me to take the Placement PTL role as well. This is a valid > option. Still, first, I would like to give a chance to the DPL > concept in Placement in a way yoctozepto suggested. Bump. Do we 2-3 developers interested in running the Placement project in distributed project leadership[1] mode in Wallaby? [1] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html > > Cheers, > gibi > >> >> Ciao >> -- >> Luigi >> >> >> >> >> > > > From david.ames at canonical.com Mon Oct 5 16:01:37 2020 From: david.ames at canonical.com (David Ames) Date: Mon, 5 Oct 2020 09:01:37 -0700 Subject: [charms] Zaza bundle tests In-Reply-To: References: Message-ID: FWIW, I have used sym links in a couple of charms [0] [1]. This seems like a perfectly rational thing to do. I suspect it could be leveraged even further as @Xav Paice suggests. @Alex Kavanagh, I think this is a separate issue from mojo, as this primarily pertains to Zaza tests. Also, the build process removes the sym-links and creates separate files for us in a built charm. [0] https://github.com/openstack/charm-mysql-innodb-cluster/tree/master/src/tests/bundles [1] https://github.com/openstack-charmers/charm-ceph-benchmarking/tree/master/tests/bundles -- David Ames On Mon, Oct 5, 2020 at 4:07 AM Chris MacNaughton wrote: > > > > I think I remember that a conscious decision was made to avoid using > > symlinks for the bundles due to the hell that openstack-mojo-specs > > descended into? 
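Concretely, the layout Xav describes costs very little to set up, e.g. (charm and option names below are placeholders only):

cd tests/bundles
ln -s base.yaml bionic.yaml
ln -s base.yaml focal.yaml
mkdir -p overlays

with overlays/focal.yaml.j2 carrying only the per-series differences (series, openstack-origin/source), roughly:

applications:
  some-charm:
    options:
      openstack-origin: cloud:focal-victoria

so a change to the test topology is made once in base.yaml and every series bundle picks it up.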
Liam may want to wade in on this? > > > Broadly, this is one of the reasons I proposed submitting a review of a > single charm to be a practical example of what this would look like, and > how complex it would be. It should also help us, as a project, identify > if the repetition is enough that symlinks, or refactoring the library, > would be worthwhile. > > Chris > From gmann at ghanshyammann.com Mon Oct 5 18:43:55 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 05 Oct 2020 13:43:55 -0500 Subject: [tc][all][interop] Removed the 'tc:approved-release' tag in favor of projects.yaml Message-ID: <174fa13cb56.11ed396ab64141.3568140989479929443@ghanshyammann.com> Hello Everyone, TC has merged the recent change to remove the 'tc:approved-release' tag[1]. This tag was used as the superset of projects used by the OpenStack Foundation when creating commercial trademark programs. Basically, this tag was used to know if the project under OpenStack governance is mature and follows the defined release model so that it can be considered as a possible candidate for trademark programs. This situation was in the early stage of OpenStack when we used to have incubated vs integrated project status but now everything in projects.yaml[2] file is considered as mature, following release, active. We do not need any changes in the bylaw as all projects listed in projects.yaml file is applicable as bylaw term "OpenStack Technical Committee Approved Release" which is made clear in the resolution also[3]. Interop group can refer to the OpenStack governed projects list for possible candidates for the trademark program. [1] https://review.opendev.org/#/c/749363/ [2] https://governance.openstack.org/tc/reference/projects/ [3] https://governance.openstack.org/tc/resolutions/20200920-tc-approved-release.html -gmann From mnaser at vexxhost.com Mon Oct 5 19:06:35 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 5 Oct 2020 15:06:35 -0400 Subject: [tc] weekly update Message-ID: Hi everyone, Here's an update for what happened in the OpenStack TC this week. You can get more information by checking for changes in openstack/governance repository. # Patches ## Open Reviews - Select Prvisep as the Wallaby Goal https://review.opendev.org/755590 - Add assert:supports-standalone https://review.opendev.org/722399 - Add Ironic charms to OpenStack charms https://review.opendev.org/754099 - Clarify impact on releases for SIGs https://review.opendev.org/752699 - Add election schedule exceptions in charter https://review.opendev.org/751941 ## Project Updates - Retire devstack-plugin-pika project https://review.opendev.org/748730 - Retire openstack/os-loganalyze https://review.opendev.org/753834 ## General Changes - Define TC-approved release in a resolution https://review.opendev.org/752256 - Reorder repos alphabetically https://review.opendev.org/754097 - Remove tc:approved-release tag https://review.opendev.org/749363 - Migrate rpm-packaging to a SIG https://review.opendev.org/752661 # Other Reminders - PTG Brainstorming: https://etherpad.opendev.org/p/tc-wallaby-ptg - PTG Registration: https://october2020ptg.eventbrite.com Thanks for reading! Mohammed & Kendall -- Mohammed Naser VEXXHOST, Inc. From ops at clustspace.com Mon Oct 5 20:48:19 2020 From: ops at clustspace.com (ops at clustspace.com) Date: Mon, 05 Oct 2020 23:48:19 +0300 Subject: qemu ussuri can't launch instance Message-ID: Hello, Does anyone know where to find the problem? 
Searched few days in google :) 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [req-f16a5b6c-3475-4322-a664-45f3bd9c041a cb4c181d25a14dda958e40792968f97e 5242bcab73714ab192cca8d909b72e9a - default default] [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] Instance failed to spawn: libvirt.libvirtError: internal error: cannot load AppArmor$ 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] Traceback (most recent call last): 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2614, in _build_resources 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] yield resources 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2378, in _build_and_run_instance 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] accel_info=accel_info) 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 3580, in spawn 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] power_on=power_on) 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 6438, in _create_domain_and_network 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] destroy_disks_on_failure) 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__ 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] self.force_reraise() 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] six.reraise(self.type_, self.value, self.tb) 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] raise value 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 6410, in _create_domain_and_network 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] post_xml_callback=post_xml_callback) 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 6351, in _create_domain 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] guest.launch(pause=pause) 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File 
"/usr/lib/python3/dist-packages/nova/virt/libvirt/guest.py", line 142, in launch 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] self._encoded_xml, errors='ignore') 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__ 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] self.force_reraise() 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] six.reraise(self.type_, self.value, self.tb) 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] raise value 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File "/usr/lib/python3/dist-packages/nova/virt/libvirt/guest.py", line 137, in launch 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] return self._domain.createWithFlags(flags) 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 193, in doit 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] result = proxy_call(self._autowrap, f, *args, **kwargs) 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 151, in proxy_call 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] rv = execute(f, *args, **kwargs) 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 132, in execute 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] six.reraise(c, e, tb) 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] raise value 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 86, in tworker 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] rv = meth(*args, **kwargs) 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File "/usr/lib/python3/dist-packages/libvirt.py", line 1265, in createWithFlags 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self) 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 
63bb4205-fcbb-4f2b-9ade-e055e7ff966f] libvirt.libvirtError: internal error: cannot load AppArmor profile 'libvirt-63bb4205-fcbb-4f2b-9ade-e055e7ff966f' 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] From smooney at redhat.com Mon Oct 5 21:07:36 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 05 Oct 2020 22:07:36 +0100 Subject: qemu ussuri can't launch instance In-Reply-To: References: Message-ID: <575dfdf3711f91b926da76266e4a8b9eebf73b1b.camel@redhat.com> On Mon, 2020-10-05 at 23:48 +0300, ops at clustspace.com wrote: > Hello, > > Does anyone know where to find the problem? Searched few days in google > :) this looks like the trace this looks like a packaging issue ibvirt.libvirtError: internal error: cannot load AppArmor profile specificly it looks like there is a conflict between your installation and apparmor. this does not looks to intially be a nova/openstack or libvirt/qemu issue outside of a possible misinstallation or misconfiguration of those services to be in conflict with the apparmor profile. the dmesg or audit log may provie insite into exactuly whic calls are being blocked and how to adress that. > > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager > [req-f16a5b6c-3475-4322-a664-45f3bd9c041a > cb4c181d25a14dda958e40792968f97e 5242bcab73714ab192cca8d909b72e9a - > default default] [instance: 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] > Instance failed to spawn: libvirt.libvirtError: internal error: cannot > load AppArmor$ > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] Traceback (most recent call last): > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File > "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2614, in > _build_resources > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] yield resources > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File > "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2378, in > _build_and_run_instance > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] accel_info=accel_info) > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File > "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 3580, > in spawn > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] power_on=power_on) > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File > "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 6438, > in _create_domain_and_network > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] destroy_disks_on_failure) > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in > __exit__ > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] self.force_reraise() > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in > 
force_reraise > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] six.reraise(self.type_, > self.value, self.tb) > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File > "/usr/lib/python3/dist-packages/six.py", line 703, in reraise > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] raise value > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File > "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 6410, > in _create_domain_and_network > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] > post_xml_callback=post_xml_callback) > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File > "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 6351, > in _create_domain > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] guest.launch(pause=pause) > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File > "/usr/lib/python3/dist-packages/nova/virt/libvirt/guest.py", line 142, > in launch > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] self._encoded_xml, > errors='ignore') > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in > __exit__ > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] self.force_reraise() > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in > force_reraise > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] six.reraise(self.type_, > self.value, self.tb) > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File > "/usr/lib/python3/dist-packages/six.py", line 703, in reraise > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] raise value > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File > "/usr/lib/python3/dist-packages/nova/virt/libvirt/guest.py", line 137, > in launch > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] return > self._domain.createWithFlags(flags) > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File > "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 193, in doit > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] result = > proxy_call(self._autowrap, f, *args, **kwargs) > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File > "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 151, in > proxy_call > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] rv = execute(f, *args, > 
**kwargs) > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File > "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 132, in execute > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] six.reraise(c, e, tb) > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File > "/usr/lib/python3/dist-packages/six.py", line 703, in reraise > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] raise value > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File > "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 86, in tworker > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] rv = meth(*args, **kwargs) > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] File > "/usr/lib/python3/dist-packages/libvirt.py", line 1265, in > createWithFlags > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] if ret == -1: raise > libvirtError ('virDomainCreateWithFlags() failed', dom=self) > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] libvirt.libvirtError: internal > error: cannot load AppArmor profile > 'libvirt-63bb4205-fcbb-4f2b-9ade-e055e7ff966f' > 2020-10-05 23:11:04.054 1585 ERROR nova.compute.manager [instance: > 63bb4205-fcbb-4f2b-9ade-e055e7ff966f] > From gabriel.gamero at pucp.edu.pe Mon Oct 5 22:12:20 2020 From: gabriel.gamero at pucp.edu.pe (GABRIEL OMAR GAMERO MONTENEGRO) Date: Mon, 5 Oct 2020 17:12:20 -0500 Subject: [neutron] Security groups with SR-IOV as a second ML2 mechanism driver Message-ID: Dear all, I'm planning to use the SR-IOV Networking L2 Agent with another L2 Agent as Open vSwitch or Linux Bridge (a configuration with multiple ML2 mechanism drivers). Does anybody know if I can use the Open vSwitch or Linux Bridge L2 agents with security group feature (implemented with iptables firewall driver or Native Open vSwitch firewall driver)? Or am I restricted to apply no security to my instances because SR-IOV L2 agent is being used as a second mechanism driver in the same OpenStack deployment? Thanks in advance, Gabriel Gamero From skaplons at redhat.com Tue Oct 6 06:25:36 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 6 Oct 2020 08:25:36 +0200 Subject: [neutron] Security groups with SR-IOV as a second ML2 mechanism driver In-Reply-To: References: Message-ID: <20201006062536.vz2bhhqvsalxo7b5@p1> Hi, On Mon, Oct 05, 2020 at 05:12:20PM -0500, GABRIEL OMAR GAMERO MONTENEGRO wrote: > Dear all, > > I'm planning to use the SR-IOV Networking L2 Agent > with another L2 Agent as Open vSwitch or Linux Bridge > (a configuration with multiple ML2 mechanism drivers). > > Does anybody know if I can use the Open vSwitch or > Linux Bridge L2 agents with security group feature (implemented > with iptables firewall driver or Native Open vSwitch firewall driver)? > Or am I restricted to apply no security to my instances because > SR-IOV L2 agent is being used as a second mechanism driver > in the same OpenStack deployment? Yes, it should works fine if You will use SG for ports which are bound by Linuxbridge or Openvswitch mech drivers. 
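For illustration, a minimal sketch of that kind of mixed setup, assuming the usual neutron server and per-agent config files; device and physnet names are only examples:

# ml2_conf.ini (neutron-server)
[ml2]
mechanism_drivers = openvswitch,sriovnicswitch

# sriov_agent.ini - ports bound by the SR-IOV agent (vnic_type=direct)
# do not get security group filtering
[sriov_nic]
physical_device_mappings = physnet2:ens2f0

# openvswitch_agent.ini - security groups are enforced for OVS-bound ports
[securitygroup]
firewall_driver = openvswitch
# or: firewall_driver = iptables_hybrid

In other words, security groups keep working for the "normal" ports handled by Open vSwitch or Linux Bridge, while the SR-IOV direct ports themselves stay unfiltered.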
> > Thanks in advance, > Gabriel Gamero > -- Slawek Kaplonski Principal Software Engineer Red Hat From gergely.csatari at nokia.com Tue Oct 6 06:53:32 2020 From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - FI/Espoo)) Date: Tue, 6 Oct 2020 06:53:32 +0000 Subject: [nova]: Separation of VM size and extra specs in flavors Message-ID: Hi, During some discussions about CNTT, where several flavor sizes and extra specs are specified we realized, that the combination of all flavor sizes and the different extra specs results in a very large number of flavors. One idea was to separate the size and the extra specs on the creation of flavors and just define some kind of rules do define how the different sizes and extra specs can be combined. The other idea was just to leave things as they are. Are there any opinions on which idea is better? Thanks, Gerg0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Tue Oct 6 07:06:36 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Tue, 6 Oct 2020 10:06:36 +0300 Subject: [tripleo][undercloud] use local container images in insecure repo Message-ID: Hi all, I have been trying to use containers from local container image repo which is insecure, but it is always trying to use TLS version, and I do not have https there. even if I would have, I would not have CERT signed, so still it is insecure. It is always trying to access over WWW:443. my registries.conf [1] and I am able to fetch image from the registry [1] and my container image prepare file contains updated repos, I have even added insecure: true any tips? I am following [2] and [3] [1] http://paste.openstack.org/show/cYQM2k77bIh14Zzr5Kjn/ [2] https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/container_image_prepare.html [3] https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html/transitioning_to_containerized_services/installing-an-undercloud-with-containers -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Tue Oct 6 07:41:05 2020 From: tkajinam at redhat.com (Takashi Kajinami) Date: Tue, 6 Oct 2020 16:41:05 +0900 Subject: [tripleo][ci] RFC, supported releases of TripleO In-Reply-To: References: Message-ID: Hi Wes, Let me ask several questions especially about upstream maintenance. - Will we retire stable/ocata and stable/pike, too ? It seems that these 2 branches are still open. Since we have really seen activity about these 2 branches recently. I think it is the time to retire these 2 stable branches as well. - You didn't mention stable/queens but do you intend to keep it open ? If yes, how do we backport any fixes to queens after retiring R and S ? I suppose that we'll backport a change from master to V/U/T and then Q, with skipping R/S, but is it right ? Thank you, Takashi On Tue, Oct 6, 2020 at 12:18 AM Wesley Hayutin wrote: > Greetings, > > I just wanted to give folks the opportunity to speak up before we make any > changes. > Please review https://releases.openstack.org/ > > As we are branching master -> victoria it looks like we can > definitely remove stable/rocky from both upstream and RDO software > factory. Any comments about rocky? > > Stein is end of maintenance on 2020 November 11th. Any thoughts on > dropping stein from upstream and rdo software factory? > > Thanks! > Please be specific if you have comments as to which release you are > referring to. 
> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Tue Oct 6 07:57:35 2020 From: zigo at debian.org (Thomas Goirand) Date: Tue, 6 Oct 2020 09:57:35 +0200 Subject: qemu ussuri can't launch instance In-Reply-To: <575dfdf3711f91b926da76266e4a8b9eebf73b1b.camel@redhat.com> References: <575dfdf3711f91b926da76266e4a8b9eebf73b1b.camel@redhat.com> Message-ID: <5ef28e3c-0828-6914-357c-6f2081eef18b@debian.org> On 10/5/20 11:07 PM, Sean Mooney wrote: > On Mon, 2020-10-05 at 23:48 +0300, ops at clustspace.com wrote: >> Hello, >> >> Does anyone know where to find the problem? Searched few days in google >> :) > this looks like the trace this looks like a packaging issue > ibvirt.libvirtError: internal error: cannot load AppArmor profile > > specificly it looks like there is a conflict between your installation and apparmor. > this does not looks to intially be a nova/openstack or libvirt/qemu issue outside of > a possible misinstallation or misconfiguration of those services to be in conflict with > the apparmor profile. > > the dmesg or audit log may provie insite into exactuly whic calls are being blocked and how to adress that. With OpenStack, /etc/libvirt/qemu.conf must be configured with: security_driver = "apparmor" for Buster, and the same line commented-out for Stertch. I'm not sure what distro we're talking about here... I hope this helps, Cheers, Thomas Goirand (zigo) From ralonsoh at redhat.com Tue Oct 6 08:11:01 2020 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Tue, 6 Oct 2020 10:11:01 +0200 Subject: [Neutron] Cancelled QoS team meeting October 6 Message-ID: Hello: Due to the lack of agenda, the Neutron QoS meeting will be cancelled today. Next meeting will be held on October 20. Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Tue Oct 6 08:15:57 2020 From: marios at redhat.com (Marios Andreou) Date: Tue, 6 Oct 2020 11:15:57 +0300 Subject: [tripleo][ci] RFC, supported releases of TripleO In-Reply-To: References: Message-ID: On Tue, Oct 6, 2020 at 10:43 AM Takashi Kajinami wrote: > Hi Wes, > > Let me ask several questions especially about upstream maintenance. > > - Will we retire stable/ocata and stable/pike, too ? It seems that these 2 > branches are still open. > Since we have really seen activity about these 2 branches recently. I > think it is the time to retire these 2 stable branches as well. > you're right - while we no longer have any upstream ci running for pike/ocata - I just checked and it looks like we didn't tag the ocata/pike branches as eol - they are still active at [1][2] for example so yeah we should probably do that for these too. [1] https://github.com/openstack/tripleo-heat-templates/branches [2] https://github.com/openstack/python-tripleoclient/branches > > - You didn't mention stable/queens but do you intend to keep it open ? If > yes, how do we backport any fixes to queens after retiring R and S ? > I suppose that we'll backport a change from master to V/U/T and then Q, > with skipping R/S, but is it right ? > main reason for keeping queens is because it is part of the fast forward upgrade (ffu) ... i.e. newton->queens for ffu 1 and then queens->train for ffu 2. 
So indeed the backports will go as you described - to train then queens hope it helps clarify somewhat > > Thank you, > Takashi > > On Tue, Oct 6, 2020 at 12:18 AM Wesley Hayutin > wrote: > >> Greetings, >> >> I just wanted to give folks the opportunity to speak up before we make >> any changes. >> Please review https://releases.openstack.org/ >> >> As we are branching master -> victoria it looks like we can >> definitely remove stable/rocky from both upstream and RDO software >> factory. Any comments about rocky? >> >> Stein is end of maintenance on 2020 November 11th. Any thoughts on >> dropping stein from upstream and rdo software factory? >> >> Thanks! >> Please be specific if you have comments as to which release you are >> referring to. >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Tue Oct 6 08:22:40 2020 From: marios at redhat.com (Marios Andreou) Date: Tue, 6 Oct 2020 11:22:40 +0300 Subject: [tripleo][ussuri][release] bugfix stable/ussuri release for tripleo 'cycle-with-intermediary' Message-ID: o/ tripleo On request I made a bugfix stable/ussuri release for the tripleo 'cycle-with-intermediary' repos. If you are interested see [1] for the latest hashes. If you need a release for a different branch reach out to me. thanks, marios [1] https://review.opendev.org/#/c/755718/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From munnaeebd at gmail.com Tue Oct 6 08:35:15 2020 From: munnaeebd at gmail.com (Md. Hejbul Tawhid MUNNA) Date: Tue, 6 Oct 2020 14:35:15 +0600 Subject: Modify devstack after basic deployment Message-ID: Hi, I have deployed a basic openstack using devstack. now I want to install additional component like designate or magnum. What is the way to do that? Regards, Munna -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Tue Oct 6 11:10:34 2020 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 6 Oct 2020 13:10:34 +0200 Subject: [nova]: Separation of VM size and extra specs in flavors In-Reply-To: References: Message-ID: On Tue, Oct 6, 2020 at 9:04 AM Csatari, Gergely (Nokia - FI/Espoo) < gergely.csatari at nokia.com> wrote: > Hi, > > > > During some discussions about CNTT , where > several flavor sizes and extra specs are specified we realized, that the > combination of all flavor sizes and the different extra specs results in a > very large number of flavors. > > > > One idea was to separate the size and the extra specs on the creation of > flavors and just define some kind of rules do define how the different > sizes and extra specs can be combined. The other idea was just to leave > things as they are. > > > > Are there any opinions on which idea is better? > > > To be honest, you're opening a can of worms. We tried to discuss about it a couple of times, the last ones I remember were during the Ussuri PTG in Shanghai where we tried to find alternatives for not changing the os-flavors API. See http://lists.openstack.org/pipermail/openstack-discuss/2019-November/010643.html for the context which was leading to https://review.opendev.org/#/c/663563 eventually ending to be abandoned as the consensus wasn't reached. -Sylvain Thanks, > > Gerg0 > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From smooney at redhat.com  Tue Oct  6 11:11:32 2020
From: smooney at redhat.com (Sean Mooney)
Date: Tue, 06 Oct 2020 12:11:32 +0100
Subject: Modify devstack after basic deployment
In-Reply-To: 
References: 
Message-ID: <2adaabe36530dcd8ec36a6fbb78a8214559d4f47.camel@redhat.com>

On Tue, 2020-10-06 at 14:35 +0600, Md. Hejbul Tawhid MUNNA wrote:
> Hi,
> 
> I have deployed a basic openstack using devstack. now I want to install
> additional component like designate or magnum. What is the way to do that?
You basically have to run the unstack.sh script to tear down the devstack deployment, then enable the additional services and their dependencies in your local.conf, and then redeploy with the stack.sh script (a rough local.conf sketch illustrating this is included after the next message).
Devstack is for development use, so the intended usage model is to tear it down and redeploy it regularly, in some cases multiple times a day. There are ways to make minor changes without tearing it down and redeploying, but to add a new service that is basically the only way to do it without writing a custom script to run the relevant phases in the correct order to install the extra service manually.
If you just run stack.sh again without unstacking, it will corrupt the databases of some services, as the devstack modules are not intended to be idempotent, so they will end up wiping the service entries and other data when they are run again.
> 
> Regards,
> Munna

From C-Albert.Braden at charter.com  Tue Oct  6 12:25:57 2020
From: C-Albert.Braden at charter.com (Braden, Albert)
Date: Tue, 6 Oct 2020 12:25:57 +0000
Subject: [kolla] kolla_docker ansible module
Message-ID: <6fdbf52023354c50b2fbb48aab515f17@NCEMEXGP009.CORP.CHARTERCOM.com>

I opened bug #1897948[1] the other day and today I was trying to figure out what needs to be done to fix it. In the mariadb backup container I see the offending line "--history=$(date +%d-%m-%Y)" in /usr/local/bin/kolla_mariadb_backup.sh and I had assumed that it was coming from https://github.com/openstack/kolla/blob/master/docker/mariadb/mariadb/backup.sh and that the obvious solution is to replace "$(date +%d-%m-%Y)" with "$HISTORY_NAME" where HISTORY_NAME=`ls -t $BACKUP_DIR/mysqlbackup*|head -1|cut -d- -f2-4`, but when I look at the playbook I see that backup.sh appears to be part of a docker image. Is the docker image pulling /usr/local/bin/kolla_mariadb_backup.sh from https://github.com/openstack/kolla/blob/master/docker/mariadb/mariadb/backup.sh ?

On my kolla-ansible build server I see /opt/openstack/share/kolla-ansible/ansible/roles/mariadb/tasks/backup.yml[2] which appears to be an ansible playbook calling the module kolla_docker, but I can't find anything about the kolla_docker module on the googles nor on the ansible site.

Where can I find the documentation for the ansible module kolla_docker?

[1] https://bugs.launchpad.net/kolla-ansible/+bug/1897948
[2] http://www.hastebin.net/bimufefosy.yaml

E-MAIL CONFIDENTIALITY NOTICE:
The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited.
-------------- next part --------------
An HTML attachment was scrubbed...
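To make the devstack answer above concrete, a rough sketch of the local.conf change and restack flow, assuming designate and magnum are enabled through their devstack plugins (plugin locations from memory, so double-check each project's devstack documentation):

# local.conf
[[local|localrc]]
enable_plugin designate https://opendev.org/openstack/designate
enable_plugin magnum https://opendev.org/openstack/magnum

# then, from the devstack directory, tear down and redeploy
./unstack.sh
./clean.sh   # optional, clears more leftover state
./stack.sh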
URL: From radoslaw.piliszek at gmail.com Tue Oct 6 12:46:44 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 6 Oct 2020 14:46:44 +0200 Subject: [kolla] kolla_docker ansible module In-Reply-To: <6fdbf52023354c50b2fbb48aab515f17@NCEMEXGP009.CORP.CHARTERCOM.com> References: <6fdbf52023354c50b2fbb48aab515f17@NCEMEXGP009.CORP.CHARTERCOM.com> Message-ID: kolla_docker module is shipped with kolla-ansible - it gets installed with its package. There is only code documentation for that module, embedded right in it. https://opendev.org/openstack/kolla-ansible/src/commit/5e638b757bdda9fbddf0fe0be5d76caa3419af74/ansible/library/kolla_docker.py -yoctozepto On Tue, Oct 6, 2020 at 2:29 PM Braden, Albert wrote: > > I opened bug #1897948[1] the other day and today I was trying to figure out what needs to be done to fix it. In the mariadb backup container I see the offending line "--history=$(date +%d-%m-%Y)" in /usr/local/bin/kolla_mariadb_backup.sh and I had assumed that it was coming from https://github.com/openstack/kolla/blob/master/docker/mariadb/mariadb/backup.sh and the obvious solution is to replace "$(date +%d-%m-%Y)” with “$HISTORY_NAME" where HISTORY_NAME=`ls -t $BACKUP_DIR/mysqlbackup*|head -1|cut -d- -f2-4` but when I look at the playbook I see that backup.sh appears to be part of a docker image. Is the docker image pulling /usr/local/bin/kolla_mariadb_backup.sh from https://github.com/openstack/kolla/blob/master/docker/mariadb/mariadb/backup.sh ? > > > > On my kolla-ansible build server I see /opt/openstack/share/kolla-ansible/ansible/roles/mariadb/tasks/backup.yml[2] which appears to be an ansible playbook calling module kolla_docker, but I can’t find anything about the kolla_docker module on the googles nor on the ansible site. > > > > Where can I find the documentation for ansible module kolla_docker? > > > > [1] https://bugs.launchpad.net/kolla-ansible/+bug/1897948 > > [2] http://www.hastebin.net/bimufefosy.yaml > > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. From fungi at yuggoth.org Tue Oct 6 12:52:41 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 6 Oct 2020 12:52:41 +0000 Subject: [tripleo][ci] RFC, supported releases of TripleO In-Reply-To: References: Message-ID: <20201006125241.t5xjp7qiffkod2ne@yuggoth.org> On 2020-10-06 11:15:57 +0300 (+0300), Marios Andreou wrote: [...] > main reason for keeping queens is because it is part of the fast > forward upgrade (ffu) ... i.e. newton->queens for ffu 1 and then > queens->train for ffu 2. So indeed the backports will go as you > described - to train then queens [...] Previous discussions highlighted the need for all stable release branches newer than a particular branch to have at least the same level of support or higher. This expectation was encoded into the Stable Branches chapter of the Project Team Guide: https://docs.openstack.org/project-team-guide/stable-branches.html#processes Further, fast forward upgrades aren't designed the way you're suggesting. 
You're expected to install each version of the software, so to go from Newton to Queens you need to install Ocata and Pike along the way to do the necessary upgrade steps, and then between Queens and Train you need to upgrade through Rocky and Stein. What's unique about FFU is simply that you don't need to start any services from the intermediate branch deployments, so the upgrades are performed "offline" so to speak. Granted, no TripleO deliverables have the stable:follows-policy tag, so as long as you're only referring to which branches of TripleO repositories are switching to EOL you can probably do whatever you want (assuming the TC doesn't object). If TripleO's upgrade orchestration in stable/train is able to perform the Rocky and Stein intermediate deployments, it would presumably work. The service projects don't have the same luxury however, and so can only EOL a particular stable branch if all older branches are already EOL. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From aschultz at redhat.com Tue Oct 6 13:08:17 2020 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 6 Oct 2020 07:08:17 -0600 Subject: [tripleo][undercloud] use local container images in insecure repo In-Reply-To: References: Message-ID: On Tue, Oct 6, 2020 at 1:15 AM Ruslanas Gžibovskis wrote: > > Hi all, > > I have been trying to use containers from local container image repo which is insecure, but it is always trying to use TLS version, and I do not have https there. even if I would have, I would not have CERT signed, so still it is insecure. It is always trying to access over WWW:443. > > my registries.conf [1] and I am able to fetch image from the registry [1] and my container image prepare file contains updated repos, I have even added insecure: true > > any tips? I am following [2] and [3] > Use DockerInsecureRegistryAddress to configure the list of insecure registries. You can include this in the container image prepare file. If you are using push_destination: true, be sure to add the undercloud in there by default. We have logic to magically add this if DockerInsecureRegistryAddress is not configured and push_destination: true is set. It'll configure the local ip and an undercloud ctlplane host name as well. Unfortunately docker/podman always attempt https first and fallback to http if not available (this can get weird). If the host is not in the insecure list, it won't fall back to http. > [1] http://paste.openstack.org/show/cYQM2k77bIh14Zzr5Kjn/ > [2] https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/container_image_prepare.html > [3] https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html/transitioning_to_containerized_services/installing-an-undercloud-with-containers > > > > -- > Ruslanas Gžibovskis > +370 6030 7030 From akamyshnikova at mirantis.com Mon Oct 5 06:58:43 2020 From: akamyshnikova at mirantis.com (Anna Taraday) Date: Mon, 5 Oct 2020 10:58:43 +0400 Subject: [Octavia] Please help with amphorav2 provider populate db command In-Reply-To: <1344937278.133047189.1601373873142.JavaMail.zimbra@desy.de> References: <1344937278.133047189.1601373873142.JavaMail.zimbra@desy.de> Message-ID: Hello, Error in your trace shows "Access denied for user 'octavia' Please check that you followed all steps from setup guide and grant access for user. 
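For illustration, the grants the prerequisites describe look roughly like this (host and password are placeholders to adapt):

CREATE DATABASE octavia_persistence;
GRANT ALL PRIVILEGES ON octavia_persistence.* TO 'octavia'@'localhost' IDENTIFIED BY 'OCTAVIA_DBPASS';
GRANT ALL PRIVILEGES ON octavia_persistence.* TO 'octavia'@'%' IDENTIFIED BY 'OCTAVIA_DBPASS';

Note that in the error above, 'octavia'@'octavia04.desy.de' is the client host as seen by MariaDB, so the grant has to match the host the controller connects from (or use '%').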
[1] - https://docs.openstack.org/octavia/latest/install/install-amphorav2.html#prerequisites On Tue, Sep 29, 2020 at 8:12 PM Bujack, Stefan wrote: > Hello, > > I think I need a little help again with the configuration of the amphora > v2 provider. I get an error when I try to populate the database. It seems > that the name of the localhost is used for the DB host and not what I > configured in octavia.conf as DB host > > > root at octavia04:~# octavia-db-manage --config-file > /etc/octavia/octavia.conf upgrade_persistence > 2020-09-29 11:45:01.911 818313 WARNING > taskflow.persistence.backends.impl_sqlalchemy [-] Engine connection > (validate) failed due to '(pymysql.err.OperationalError) (1045, "Access > denied for user 'octavia'@'octavia04.desy.de' (using password: YES)") > (Background on this error at: http://sqlalche.me/e/e3q8)' > 2020-09-29 11:45:01.912 818313 CRITICAL octavia-db-manage [-] Unhandled > error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) > (1045, "Access denied for user 'octavia'@'octavia04.desy.de' (using > password: YES)") > (Background on this error at: http://sqlalche.me/e/e3q8) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage Traceback (most > recent call last): > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2276, in > _wrap_pool_connect > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return fn() > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 303, in > unique_connection > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return > _ConnectionFairy._checkout(self) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 760, in > _checkout > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage fairy = > _ConnectionRecord.checkout(pool) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 492, in > checkout > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage rec = > pool._do_get() > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/pool/impl.py", line 139, in > _do_get > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage > self._dec_overflow() > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/util/langhelpers.py", line 68, > in __exit__ > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage > compat.reraise(exc_type, exc_value, exc_tb) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 153, in > reraise > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise value > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/pool/impl.py", line 136, in > _do_get > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return > self._create_connection() > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 308, in > _create_connection > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return > _ConnectionRecord(self) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 437, in > __init__ > 2020-09-29 11:45:01.912 818313 ERROR 
octavia-db-manage > self.__connect(first_connect_check=True) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 639, in > __connect > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage connection = > pool._invoke_creator(self) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/engine/strategies.py", line 114, > in connect > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return > dialect.connect(*cargs, **cparams) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 482, in > connect > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return > self.dbapi.connect(*cargs, **cparams) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/pymysql/__init__.py", line 94, in Connect > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return > Connection(*args, **kwargs) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/pymysql/connections.py", line 325, in > __init__ > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage self.connect() > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/pymysql/connections.py", line 599, in > connect > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage > self._request_authentication() > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/pymysql/connections.py", line 861, in > _request_authentication > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage auth_packet = > self._read_packet() > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/pymysql/connections.py", line 684, in > _read_packet > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage > packet.check_error() > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/pymysql/protocol.py", line 220, in > check_error > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage > err.raise_mysql_exception(self._data) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/pymysql/err.py", line 109, in > raise_mysql_exception > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise > errorclass(errno, errval) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage > pymysql.err.OperationalError: (1045, "Access denied for user 'octavia'@' > octavia04.desy.de' (using password: YES)") > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage The above exception > was the direct cause of the following exception: > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage Traceback (most > recent call last): > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/local/bin/octavia-db-manage", line 8, in > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage sys.exit(main()) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/local/lib/python3.8/dist-packages/octavia/db/migration/cli.py", line > 156, in main > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage > CONF.command.func(config, CONF.command.name) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/local/lib/python3.8/dist-packages/octavia/db/migration/cli.py", 
line > 98, in do_persistence_upgrade > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage > persistence.initialize() > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/local/lib/python3.8/dist-packages/octavia/controller/worker/v2/taskflow_jobboard_driver.py", > line 50, in initialize > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage with > contextlib.closing(backend.get_connection()) as connection: > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/taskflow/persistence/backends/impl_sqlalchemy.py", > line 335, in get_connection > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage > conn.validate(max_retries=self._max_retries) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/taskflow/persistence/backends/impl_sqlalchemy.py", > line 394, in validate > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage > _try_connect(self._engine) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/tenacity/__init__.py", line 311, in > wrapped_f > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return > self.call(f, *args, **kw) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/tenacity/__init__.py", line 391, in call > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage do = > self.iter(retry_state=retry_state) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/tenacity/__init__.py", line 338, in iter > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return > fut.result() > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3.8/concurrent/futures/_base.py", line 432, in result > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return > self.__get_result() > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise > self._exception > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/tenacity/__init__.py", line 394, in call > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage result = > fn(*args, **kwargs) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/taskflow/persistence/backends/impl_sqlalchemy.py", > line 391, in _try_connect > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage with > contextlib.closing(engine.connect()): > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2209, in > connect > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return > self._connection_cls(self, **kwargs) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 103, in > __init__ > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage else > engine.raw_connection() > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2306, in > raw_connection > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return > self._wrap_pool_connect( > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2279, in > _wrap_pool_connect > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage > 
Connection._handle_dbapi_exception_noconnection( > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1547, in > _handle_dbapi_exception_noconnection > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage > util.raise_from_cause(sqlalchemy_exception, exc_info) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 398, in > raise_from_cause > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage > reraise(type(exception), exception, tb=exc_tb, cause=cause) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 152, in > reraise > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise > value.with_traceback(tb) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2276, in > _wrap_pool_connect > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return fn() > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 303, in > unique_connection > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return > _ConnectionFairy._checkout(self) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 760, in > _checkout > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage fairy = > _ConnectionRecord.checkout(pool) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 492, in > checkout > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage rec = > pool._do_get() > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/pool/impl.py", line 139, in > _do_get > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage > self._dec_overflow() > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/util/langhelpers.py", line 68, > in __exit__ > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage > compat.reraise(exc_type, exc_value, exc_tb) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 153, in > reraise > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise value > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/pool/impl.py", line 136, in > _do_get > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return > self._create_connection() > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 308, in > _create_connection > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return > _ConnectionRecord(self) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 437, in > __init__ > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage > self.__connect(first_connect_check=True) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 639, in > __connect > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage connection = > pool._invoke_creator(self) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > 
"/usr/lib/python3/dist-packages/sqlalchemy/engine/strategies.py", line 114, > in connect > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return > dialect.connect(*cargs, **cparams) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 482, in > connect > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return > self.dbapi.connect(*cargs, **cparams) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/pymysql/__init__.py", line 94, in Connect > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return > Connection(*args, **kwargs) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/pymysql/connections.py", line 325, in > __init__ > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage self.connect() > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/pymysql/connections.py", line 599, in > connect > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage > self._request_authentication() > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/pymysql/connections.py", line 861, in > _request_authentication > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage auth_packet = > self._read_packet() > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/pymysql/connections.py", line 684, in > _read_packet > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage > packet.check_error() > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/pymysql/protocol.py", line 220, in > check_error > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage > err.raise_mysql_exception(self._data) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File > "/usr/lib/python3/dist-packages/pymysql/err.py", line 109, in > raise_mysql_exception > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise > errorclass(errno, errval) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage > sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1045, > "Access denied for user 'octavia'@'octavia04.desy.de' (using password: > YES)") > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage (Background on this > error at: http://sqlalche.me/e/e3q8) > 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage > > > > root at octavia04:~# cat /etc/octavia/octavia.conf > [DEFAULT] > transport_url = rabbit://openstack:password at rabbit-intern.desy.de > use_journal = True > [api_settings] > bind_host = 0.0.0.0 > bind_port = 9876 > [certificates] > cert_generator = local_cert_generator > ca_certificate = /etc/octavia/certs/server_ca.cert.pem > ca_private_key = /etc/octavia/certs/server_ca.key.pem > ca_private_key_passphrase = passphrase > [controller_worker] > amp_image_owner_id = f89517ee676f4618bd55849477442aca > amp_image_tag = amphora > amp_ssh_key_name = octaviakey > amp_secgroup_list = 2236e82c-13fe-42e3-9fcf-bea43917f231 > amp_boot_network_list = 9f7fefc4-f262-4d8d-9465-240f94a7e87b > amp_flavor_id = 200 > network_driver = allowed_address_pairs_driver > compute_driver = compute_nova_driver > amphora_driver = amphora_haproxy_rest_driver > client_ca = /etc/octavia/certs/client_ca.cert.pem > [database] > connection = mysql+pymysql://octavia:password at maria-intern.desy.de/octavia > [haproxy_amphora] > client_cert = /etc/octavia/certs/client.cert-and-key.pem > 
server_ca = /etc/octavia/certs/server_ca.cert.pem > [health_manager] > bind_port = 5555 > bind_ip = 172.16.0.2 > controller_ip_port_list = 172.16.0.2:5555 > [keystone_authtoken] > www_authenticate_uri = https://keystone-intern.desy.de:5000/v3 > auth_url = https://keystone-intern.desy.de:5000/v3 > memcached_servers = nova-intern.desy.de:11211 > auth_type = password > project_domain_name = default > user_domain_name = default > project_name = service > username = octavia > password = password > service_token_roles_required = True > [oslo_messaging] > topic = octavia_prov > [service_auth] > auth_url = https://keystone-intern.desy.de:5000/v3 > memcached_servers = nova-intern.desy.de:11211 > auth_type = password > project_domain_name = Default > user_domain_name = Default > project_name = service > username = octavia > password = password > [task_flow] > persistence_connection = mysql+pymysql:// > octavia:woxSGH45cdZL1Sa4 at maria-intern.desy.de/octavia_persistence > jobboard_backend_driver = 'redis_taskflow_driver' > jobboard_backend_hosts = 10.254.28.113 > jobboard_backend_port = 6379 > jobboard_backend_password = password > jobboard_backend_namespace = 'octavia_jobboard' > > > > root at octavia04:~# octavia-db-manage current > 2020-09-29 12:02:23.159 819432 INFO alembic.runtime.migration [-] Context > impl MySQLImpl. > 2020-09-29 12:02:23.160 819432 INFO alembic.runtime.migration [-] Will > assume non-transactional DDL. > fbd705961c3a (head) > > > > We have an Openstack Ussuri deployment on Ubuntu 20.04. > > > Thanks in advance, > > Stefan Bujack > > -- Regards, Ann Taraday Mirantis, Inc -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan.bujack at desy.de Mon Oct 5 07:34:59 2020 From: stefan.bujack at desy.de (Bujack, Stefan) Date: Mon, 5 Oct 2020 09:34:59 +0200 (CEST) Subject: [Octavia] Please help with amphorav2 provider populate db command In-Reply-To: References: <1344937278.133047189.1601373873142.JavaMail.zimbra@desy.de> Message-ID: <2068098159.169923596.1601883299129.JavaMail.zimbra@desy.de> Hello, thank you for your answer. The access for user octavia on host octavia04 is denied because my database host is maria-intern.desy.de and there is no DB service on octavia04. But why is the script trying to populate the DB on localhost and not my DB host as I configured in the /etc/octavia/octavia.conf? "Access denied for user 'octavia'@' [ http://octavia04.desy.de/ | octavia04.desy.de ] ' persistence_connection = mysql+pymysql:// [ http://octavia:woxSGH45cdZL1Sa4 at maria-intern.desy.de/octavia_persistence | octavia:woxSGH45cdZL1Sa4 at maria-intern.desy.de/octavia_persistence ] Greets Stefan Bujack From: "Anna Taraday" To: "Stefan Bujack" Cc: "openstack-discuss" Sent: Monday, 5 October, 2020 08:58:43 Subject: Re: [Octavia] Please help with amphorav2 provider populate db command Hello, Error in your trace shows "Access denied for user ' octavia ' Please check that you followed all steps from setup guide and grant access for user. [1] - [ https://docs.openstack.org/octavia/latest/install/install-amphorav2.html#prerequisites | https://docs.openstack.org/octavia/latest/install/install-amphorav2.html#prerequisites ] On Tue, Sep 29, 2020 at 8:12 PM Bujack, Stefan < [ mailto:stefan.bujack at desy.de | stefan.bujack at desy.de ] > wrote: Hello, I think I need a little help again with the configuration of the amphora v2 provider. I get an error when I try to populate the database. 
It seems that the name of the localhost is used for the DB host and not what I configured in octavia.conf as DB host root at octavia04:~# octavia-db-manage --config-file /etc/octavia/octavia.conf upgrade_persistence 2020-09-29 11:45:01.911 818313 WARNING taskflow.persistence.backends.impl_sqlalchemy [-] Engine connection (validate) failed due to '(pymysql.err.OperationalError) (1045, "Access denied for user 'octavia'@' [ http://octavia04.desy.de/ | octavia04.desy.de ] ' (using password: YES)") (Background on this error at: [ http://sqlalche.me/e/e3q8 | http://sqlalche.me/e/e3q8 ] )' 2020-09-29 11:45:01.912 818313 CRITICAL octavia-db-manage [-] Unhandled error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1045, "Access denied for user 'octavia'@' [ http://octavia04.desy.de/ | octavia04.desy.de ] ' (using password: YES)") (Background on this error at: [ http://sqlalche.me/e/e3q8 | http://sqlalche.me/e/e3q8 ] ) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage Traceback (most recent call last): 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2276, in _wrap_pool_connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return fn() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 303, in unique_connection 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return _ConnectionFairy._checkout(self) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 760, in _checkout 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage fairy = _ConnectionRecord.checkout(pool) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 492, in checkout 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage rec = pool._do_get() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/impl.py", line 139, in _do_get 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage self._dec_overflow() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__ 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage compat.reraise(exc_type, exc_value, exc_tb) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 153, in reraise 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise value 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/impl.py", line 136, in _do_get 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return self._create_connection() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 308, in _create_connection 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return _ConnectionRecord(self) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 437, in __init__ 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage self.__connect(first_connect_check=True) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 639, in __connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage connection = pool._invoke_creator(self) 
2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/strategies.py", line 114, in connect
    return dialect.connect(*cargs, **cparams)
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 482, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/usr/lib/python3/dist-packages/pymysql/__init__.py", line 94, in Connect
    return Connection(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 325, in __init__
    self.connect()
  File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 599, in connect
    self._request_authentication()
  File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 861, in _request_authentication
    auth_packet = self._read_packet()
  File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 684, in _read_packet
    packet.check_error()
  File "/usr/lib/python3/dist-packages/pymysql/protocol.py", line 220, in check_error
    err.raise_mysql_exception(self._data)
  File "/usr/lib/python3/dist-packages/pymysql/err.py", line 109, in raise_mysql_exception
    raise errorclass(errno, errval)
pymysql.err.OperationalError: (1045, "Access denied for user 'octavia'@'octavia04.desy.de' (using password: YES)")

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/bin/octavia-db-manage", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.8/dist-packages/octavia/db/migration/cli.py", line 156, in main
    CONF.command.func(config, CONF.command.name)
  File "/usr/local/lib/python3.8/dist-packages/octavia/db/migration/cli.py", line 98, in do_persistence_upgrade
    persistence.initialize()
  File "/usr/local/lib/python3.8/dist-packages/octavia/controller/worker/v2/taskflow_jobboard_driver.py", line 50, in initialize
    with contextlib.closing(backend.get_connection()) as connection:
  File "/usr/lib/python3/dist-packages/taskflow/persistence/backends/impl_sqlalchemy.py", line 335, in get_connection
    conn.validate(max_retries=self._max_retries)
  File "/usr/lib/python3/dist-packages/taskflow/persistence/backends/impl_sqlalchemy.py", line 394, in validate
    _try_connect(self._engine)
  File "/usr/lib/python3/dist-packages/tenacity/__init__.py", line 311, in wrapped_f
    return self.call(f, *args, **kw)
  File "/usr/lib/python3/dist-packages/tenacity/__init__.py", line 391, in call
    do = self.iter(retry_state=retry_state)
  File "/usr/lib/python3/dist-packages/tenacity/__init__.py", line 338, in iter
    return fut.result()
  File "/usr/lib/python3.8/concurrent/futures/_base.py", line 432, in result
    return self.__get_result()
  File "/usr/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result
    raise self._exception
  File "/usr/lib/python3/dist-packages/tenacity/__init__.py", line 394, in call
    result = fn(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/taskflow/persistence/backends/impl_sqlalchemy.py", line 391, in _try_connect
    with contextlib.closing(engine.connect()):
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2209, in connect
    return self._connection_cls(self, **kwargs)
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 103, in __init__
    else engine.raw_connection()
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2306, in raw_connection
    return self._wrap_pool_connect(
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2279, in _wrap_pool_connect
    Connection._handle_dbapi_exception_noconnection(
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1547, in _handle_dbapi_exception_noconnection
    util.raise_from_cause(sqlalchemy_exception, exc_info)
  File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 398, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 152, in reraise
    raise value.with_traceback(tb)
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2276, in _wrap_pool_connect
    return fn()
  File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 303, in unique_connection
    return _ConnectionFairy._checkout(self)
  File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 760, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 492, in checkout
    rec = pool._do_get()
  File "/usr/lib/python3/dist-packages/sqlalchemy/pool/impl.py", line 139, in _do_get
    self._dec_overflow()
  File "/usr/lib/python3/dist-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 153, in reraise
    raise value
  File "/usr/lib/python3/dist-packages/sqlalchemy/pool/impl.py", line 136, in _do_get
    return self._create_connection()
  File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 308, in _create_connection
    return _ConnectionRecord(self)
  File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 437, in __init__
    self.__connect(first_connect_check=True)
  File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 639, in __connect
    connection = pool._invoke_creator(self)
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/strategies.py", line 114, in connect
    return dialect.connect(*cargs, **cparams)
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 482, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/usr/lib/python3/dist-packages/pymysql/__init__.py", line 94, in Connect
    return Connection(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 325, in __init__
    self.connect()
  File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 599, in connect
    self._request_authentication()
  File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 861, in _request_authentication
    auth_packet = self._read_packet()
  File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 684, in _read_packet
    packet.check_error()
  File "/usr/lib/python3/dist-packages/pymysql/protocol.py", line 220, in check_error
    err.raise_mysql_exception(self._data)
  File "/usr/lib/python3/dist-packages/pymysql/err.py", line 109, in raise_mysql_exception
    raise errorclass(errno, errval)
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1045, "Access denied for user 'octavia'@'octavia04.desy.de' (using password: YES)")
(Background on this error at: http://sqlalche.me/e/e3q8)

root at octavia04:~# cat /etc/octavia/octavia.conf
[DEFAULT]
transport_url = rabbit://openstack:password at rabbit-intern.desy.de
use_journal = True
[api_settings]
bind_host = 0.0.0.0
bind_port = 9876
[certificates]
cert_generator = local_cert_generator
ca_certificate = /etc/octavia/certs/server_ca.cert.pem
ca_private_key = /etc/octavia/certs/server_ca.key.pem
ca_private_key_passphrase = passphrase
[controller_worker]
amp_image_owner_id = f89517ee676f4618bd55849477442aca
amp_image_tag = amphora
amp_ssh_key_name = octaviakey
amp_secgroup_list = 2236e82c-13fe-42e3-9fcf-bea43917f231
amp_boot_network_list = 9f7fefc4-f262-4d8d-9465-240f94a7e87b
amp_flavor_id = 200
network_driver = allowed_address_pairs_driver
compute_driver = compute_nova_driver
amphora_driver = amphora_haproxy_rest_driver
client_ca = /etc/octavia/certs/client_ca.cert.pem
[database]
connection = mysql+pymysql://octavia:password at maria-intern.desy.de/octavia
[haproxy_amphora]
client_cert = /etc/octavia/certs/client.cert-and-key.pem
server_ca = /etc/octavia/certs/server_ca.cert.pem
[health_manager]
bind_port = 5555
bind_ip = 172.16.0.2
controller_ip_port_list = 172.16.0.2:5555
[keystone_authtoken]
www_authenticate_uri = https://keystone-intern.desy.de:5000/v3
auth_url = https://keystone-intern.desy.de:5000/v3
memcached_servers = nova-intern.desy.de:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = octavia
password = password
service_token_roles_required = True
[oslo_messaging]
topic = octavia_prov
[service_auth]
auth_url = https://keystone-intern.desy.de:5000/v3
memcached_servers = nova-intern.desy.de:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = octavia
password = password
[task_flow]
persistence_connection = mysql+pymysql://octavia:woxSGH45cdZL1Sa4 at maria-intern.desy.de/octavia_persistence
jobboard_backend_driver = 'redis_taskflow_driver'
jobboard_backend_hosts = 10.254.28.113
jobboard_backend_port = 6379
jobboard_backend_password = password
jobboard_backend_namespace = 'octavia_jobboard'

root at octavia04:~# octavia-db-manage current
2020-09-29 12:02:23.159 819432 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2020-09-29 12:02:23.160 819432 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
fbd705961c3a (head)

We have an Openstack Ussuri deployment on Ubuntu 20.04.

Thanks in advance,

Stefan Bujack

--
Regards,
Ann Taraday
Mirantis, Inc
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
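For reference, a MySQL/MariaDB error 1045 like the one above usually means either that the password in the [task_flow]/persistence_connection URL does not match, or that no grant exists for that user connecting from that host. A minimal sketch of how this could be checked (the host, user and database names are taken from the octavia.conf above; the grant statement itself is only an assumption about the intended setup, not something from this thread):

  # try the same login that octavia-db-manage is attempting
  mysql -h maria-intern.desy.de -u octavia -p octavia_persistence

  -- and, on the MariaDB server, make sure the database and a matching grant exist
  CREATE DATABASE IF NOT EXISTS octavia_persistence;
  GRANT ALL PRIVILEGES ON octavia_persistence.* TO 'octavia'@'%' IDENTIFIED BY '<the password used in persistence_connection>';
  FLUSH PRIVILEGES;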
From dbengt at redhat.com Mon Oct 5 08:37:13 2020
From: dbengt at redhat.com (Daniel Bengtsson)
Date: Mon, 5 Oct 2020 10:37:13 +0200
Subject: [requirements][oslo] Explicit requirement to setuptools.
In-Reply-To:
References:
Message-ID: <52e15126-b5fc-1a4d-b21e-de7f7a37abf3@redhat.com>

On 02/10/2020 at 15:40, Sebastien Boyron wrote:
> I am opening the discussion and pointing to this right now, but I think
> we should wait for the Wallaby release before doing anything on that
> point to insert this modification into the regular development cycle.
> On a release point of view all the changes related to this proposal
> will be released through the classic release process and they will be
> landed with other projects changes, in other words it will not require
> a range of specific releases for projects.

It's a good idea; I agree that explicit is better than implicit. I'm
interested in helping with this subject.

From moshele at nvidia.com Mon Oct 5 23:33:12 2020
From: moshele at nvidia.com (Moshe Levi)
Date: Mon, 5 Oct 2020 23:33:12 +0000
Subject: [neutron] Security groups with SR-IOV as a second ML2 mechanism driver
In-Reply-To:
References:
Message-ID:

The firewall driver is a per-agent config option, so it is fine to have the
SR-IOV agent firewall set to noop while the OVS agent uses the ovs/hybrid
firewall driver.

> -----Original Message-----
> From: GABRIEL OMAR GAMERO MONTENEGRO
> Sent: Tuesday, October 6, 2020 1:12 AM
> To: openstack-discuss at lists.openstack.org
> Subject: [neutron] Security groups with SR-IOV as a second ML2 mechanism
> driver
>
> External email: Use caution opening links or attachments
>
> Dear all,
>
> I'm planning to use the SR-IOV Networking L2 Agent with another L2 Agent as
> Open vSwitch or Linux Bridge (a configuration with multiple ML2 mechanism
> drivers).
>
> Does anybody know if I can use the Open vSwitch or Linux Bridge L2 agents
> with security group feature (implemented with iptables firewall driver or
> Native Open vSwitch firewall driver)?
> Or am I restricted to apply no security to my instances because SR-IOV L2
> agent is being used as a second mechanism driver in the same OpenStack
> deployment?
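As an illustration of the per-agent firewall_driver setting mentioned above, a minimal sketch of the split could look like the following (the file paths are the usual defaults and the values are assumptions for illustration, not taken from this thread):

  # /etc/neutron/plugins/ml2/sriov_agent.ini  (SR-IOV agent: no firewalling)
  [securitygroup]
  firewall_driver = noop

  # /etc/neutron/plugins/ml2/openvswitch_agent.ini  (OVS agent keeps security groups)
  [securitygroup]
  firewall_driver = openvswitch   # or iptables_hybrid

Each L2 agent reads only its own configuration file, so security groups stay enforced on OVS-managed ports while SR-IOV ports are left unfiltered.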
> > Thanks in advance, > Gabriel Gamero From marios at redhat.com Tue Oct 6 13:14:22 2020 From: marios at redhat.com (Marios Andreou) Date: Tue, 6 Oct 2020 16:14:22 +0300 Subject: [tripleo][ci] RFC, supported releases of TripleO In-Reply-To: <20201006125241.t5xjp7qiffkod2ne@yuggoth.org> References: <20201006125241.t5xjp7qiffkod2ne@yuggoth.org> Message-ID: On Tue, Oct 6, 2020 at 3:54 PM Jeremy Stanley wrote: > On 2020-10-06 11:15:57 +0300 (+0300), Marios Andreou wrote: > [...] > > main reason for keeping queens is because it is part of the fast > > forward upgrade (ffu) ... i.e. newton->queens for ffu 1 and then > > queens->train for ffu 2. So indeed the backports will go as you > > described - to train then queens > [...] > > Previous discussions highlighted the need for all stable release > branches newer than a particular branch to have at least the same > level of support or higher. This expectation was encoded into the > Stable Branches chapter of the Project Team Guide: > > > https://docs.openstack.org/project-team-guide/stable-branches.html#processes > > Further, fast forward upgrades aren't designed the way you're > suggesting. You're expected to install each version of the software, > so to go from Newton to Queens you need to install Ocata and Pike > along the way to do the necessary upgrade steps, and then between > Queens and Train you need to upgrade through Rocky and Stein. What's > unique about FFU is simply that you don't need to start any > services from the intermediate branch deployments, so the upgrades > are performed "offline" so to speak. > yes you are correct - I am at least a little familiar with ffu I used to be part of the tripleo upgrades squad around the time of queens->train "ffu 1". So indeed, you may need to merge branch specific upgrades tasks or fixes into the intermediate branches and likely this is why we have not tagged older branches (like ocata and pike) as EOL I think there are two things in this thread. The first from weshay original mail, which doesn't clarify what 'removing stable/rocky' means - but I believe it means 'removing the ci jobs for stable/rocky' ... I mentioned EOL and that branches are not tagged as such - to which you replied. I now understand both that we should not do that, and likely *why* ocata/pike aren't marked eol like some older branches. Granted, no TripleO deliverables have the stable:follows-policy tag, > so as long as you're only referring to which branches of TripleO > repositories are switching to EOL you can probably do whatever you want (assuming the TC doesn't object). If TripleO's upgrade > orchestration in stable/train is able to perform the Rocky and Stein > intermediate deployments, it would presumably work. The service > projects don't have the same luxury however, and so can only EOL a particular stable branch if all older branches are already EOL. > thanks for the pointers and clarification. I apologise for the confusion - as I wrote above, I was mistaken about marking branches eol. We will not be doing this. This thread is about removing the upstream ci/rdo periodic jobs for those branches - rocky and stein to be specific thanks, marios > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marios at redhat.com Tue Oct 6 13:23:17 2020 From: marios at redhat.com (Marios Andreou) Date: Tue, 6 Oct 2020 16:23:17 +0300 Subject: [tripleo][ci] RFC, supported releases of TripleO In-Reply-To: References: <20201006125241.t5xjp7qiffkod2ne@yuggoth.org> Message-ID: On Tue, Oct 6, 2020 at 4:14 PM Marios Andreou wrote: > > > On Tue, Oct 6, 2020 at 3:54 PM Jeremy Stanley wrote: > >> On 2020-10-06 11:15:57 +0300 (+0300), Marios Andreou wrote: >> [...] >> > main reason for keeping queens is because it is part of the fast >> > forward upgrade (ffu) ... i.e. newton->queens for ffu 1 and then >> > queens->train for ffu 2. So indeed the backports will go as you >> > described - to train then queens >> [...] >> >> Previous discussions highlighted the need for all stable release >> branches newer than a particular branch to have at least the same >> level of support or higher. This expectation was encoded into the >> Stable Branches chapter of the Project Team Guide: >> >> >> https://docs.openstack.org/project-team-guide/stable-branches.html#processes >> >> Further, fast forward upgrades aren't designed the way you're >> suggesting. You're expected to install each version of the software, >> so to go from Newton to Queens you need to install Ocata and Pike >> along the way to do the necessary upgrade steps, and then between >> Queens and Train you need to upgrade through Rocky and Stein. What's >> unique about FFU is simply that you don't need to start any >> services from the intermediate branch deployments, so the upgrades >> are performed "offline" so to speak. >> > > yes you are correct - I am at least a little familiar with ffu I used to > be part of the tripleo upgrades squad around the time of queens->train "ffu > 1". > nit... ^^^ newton to queens for ffu1 .... queens to train is ffu2 So indeed, you may need to merge branch specific upgrades tasks or fixes > into the intermediate branches and likely this is why we have not tagged > older branches (like ocata and pike) as EOL > > I think there are two things in this thread. The first from weshay > original mail, which doesn't clarify what 'removing stable/rocky' means - > but I believe it means 'removing the ci jobs for stable/rocky' ... > > I mentioned EOL and that branches are not tagged as such - to which you > replied. I now understand both that we should not do that, and likely *why* > ocata/pike aren't marked eol like some older branches. > > Granted, no TripleO deliverables have the stable:follows-policy tag, >> so as long as you're only referring to which branches of TripleO >> repositories are switching to EOL you can probably do whatever you > > want (assuming the TC doesn't object). If TripleO's upgrade >> orchestration in stable/train is able to perform the Rocky and Stein >> intermediate deployments, it would presumably work. The service >> projects don't have the same luxury however, and so can only EOL a > > particular stable branch if all older branches are already EOL. >> > > thanks for the pointers and clarification. I apologise for the confusion - > as I wrote above, I was mistaken about marking branches eol. We will not be > doing this. > > This thread is about removing the upstream ci/rdo periodic jobs for those > branches - rocky and stein to be specific > > so... thinking about it some more just now... i think we may need to keep some minimal ci for both rocky and stein.. in the same way we keep a smaller subset of our 'normal' jobs for queens. The reason is what fungi reminded me in his reply above... 
we may need to merge fixes into both rocky and stein - for example undercloud upgrade tasks or any other kind of upgrade task that will be used as part of ffu? Unless all the ffu upgrade logic lives in the target (train in the ffu2 q->t) branch - but I doubt that for example the upgrade is at least upgraded sequentially i.e. execute the n upgrade tasks, then n+1 and so on to the target. I will point the upgrades squad at this to comment and confirm about the q-->t ffu2 thanks > thanks, marios > > > >> -- >> Jeremy Stanley >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.kavanagh at canonical.com Tue Oct 6 13:36:23 2020 From: alex.kavanagh at canonical.com (Alex Kavanagh) Date: Tue, 6 Oct 2020 14:36:23 +0100 Subject: [charms] Zaza bundle tests In-Reply-To: References: Message-ID: Hi So I'm not massively against the idea, but I would like to present some potential disadvantages for consideration: I have to admit to not being keen to using symlinks for the functional test yaml files. My main objection is maintenance as new openstack and ubuntu releases occur and bundles are added and removed from the charm. At present, without symlinks (apart from in the overlays), the bundle for an ubuntu-openstack version is a plain file. To remove a version, it is just deleted. If there are symlinks then the 'base.yaml' version represents the one that the charm starts with (say bionic-queens). And then bionic-rocky is a symlink (perhaps with an overlay) and bionic-stein is another symlink, etc. However, at some point in the future bionic-queens will eventually be removed. base.yaml is the 'bionic-queens'. So what is done with base.yaml? Do we make it 'focal-ussuri' and change all the overlays? Leave it as is? Have a new base for each Ubuntu LTS and work from that? Whilst the current system isn't DRY, it does make it simple to see what's in a particular test bundle for a variation. Having said all of the above, it is a bit of a pain to manage all the separate files as well, especially when there are changes across multiple versions of the tests. Thanks Alex. On Mon, Oct 5, 2020 at 5:02 PM David Ames wrote: > FWIW, I have used sym links in a couple of charms [0] [1]. This seems > like a perfectly rational thing to do. I suspect it could be leveraged > even further as @Xav Paice suggests. > > @Alex Kavanagh, I think this is a separate issue from mojo, as this > primarily pertains to Zaza tests. Also, the build process removes the > sym-links and creates separate files for us in a built charm. > > [0] > https://github.com/openstack/charm-mysql-innodb-cluster/tree/master/src/tests/bundles > [1] > https://github.com/openstack-charmers/charm-ceph-benchmarking/tree/master/tests/bundles > > -- > David Ames > > On Mon, Oct 5, 2020 at 4:07 AM Chris MacNaughton > wrote: > > > > > > > I think I remember that a conscious decision was made to avoid using > > > symlinks for the bundles due to the hell that openstack-mojo-specs > > > descended into? Liam may want to wade in on this? > > > > > Broadly, this is one of the reasons I proposed submitting a review of a > > single charm to be a practical example of what this would look like, and > > how complex it would be. It should also help us, as a project, identify > > if the repetition is enough that symlinks, or refactoring the library, > > would be worthwhile. 
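To make the layout being discussed above concrete, a minimal sketch of a symlinked tests/bundles tree could look like this (the file names are purely illustrative and based on the examples mentioned in the thread, not on any particular charm):

  tests/bundles/
      base.yaml                      # full bundle for the oldest supported combo
      bionic-queens.yaml -> base.yaml
      bionic-rocky.yaml  -> base.yaml
      bionic-stein.yaml  -> base.yaml
      overlays/
          bionic-rocky.yaml.j2       # only the openstack-origin/source deltas
          bionic-stein.yaml.j2

Removing a combination is then just deleting its symlink and overlay, while base.yaml only changes when the common topology does.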
> > > > Chris > > > -- Alex Kavanagh - Software Engineer OpenStack Engineering - Data Centre Development - Canonical Ltd -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Tue Oct 6 14:20:05 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Tue, 6 Oct 2020 17:20:05 +0300 Subject: [tripleo][undercloud] use local container images in insecure repo In-Reply-To: References: Message-ID: Hi, in which place I should add DockerInsecureRegistryAddress ? In which Level? I have added in 2 levels parameter_defaults: ContainerImagePrepare: - DockerInsecureRegistryAddress: harbor.vgtu.lt set: ceph_alertmanager_image: alertmanager ceph_alertmanager_namespace: harbor.vgtu.lt/prom ceph_alertmanager_tag: v0.16.2 ceph_grafana_image: grafana ceph_grafana_namespace: harbor.vgtu.lt/grafana ceph_grafana_tag: 5.4.3 ceph_image: daemon ceph_namespace: harbor.vgtu.lt/ceph ceph_node_exporter_image: node-exporter ceph_node_exporter_namespace: harbor.vgtu.lt/prom ceph_node_exporter_tag: v0.17.0 ceph_prometheus_image: prometheus ceph_prometheus_namespace: harbor.vgtu.lt/prom ceph_prometheus_tag: v2.7.2 ceph_tag: v4.0.12-stable-4.0-nautilus-centos-7-x86_64 default_tag: true name_prefix: centos-binary- name_suffix: '' namespace: harbor.vgtu.lt/testukas insecure: true DockerInsecureRegistryAddress: harbor.vgtu.lt neutron_driver: ovn rhel_containers: false tag: current-tripleo tag_from_label: rdo_version And I have launched tcpdump with filter: host harbor.vgtu.lt and port 80 and I do not receive any. Also it is in undercloud.conf insecure list (first and last one, twice :) and it is in registries.conf in /etc/containers On Tue, 6 Oct 2020 at 16:09, Alex Schultz wrote: > On Tue, Oct 6, 2020 at 1:15 AM Ruslanas Gžibovskis > wrote: > > > > Hi all, > > > > I have been trying to use containers from local container image repo > which is insecure, but it is always trying to use TLS version, and I do not > have https there. even if I would have, I would not have CERT signed, so > still it is insecure. It is always trying to access over WWW:443. > > > > my registries.conf [1] and I am able to fetch image from the registry > [1] and my container image prepare file contains updated repos, I have even > added insecure: true > > > > any tips? I am following [2] and [3] > > > > Use DockerInsecureRegistryAddress to configure the list of insecure > registries. You can include this in the container image prepare file. > If you are using push_destination: true, be sure to add the undercloud > in there by default. We have logic to magically add this if > DockerInsecureRegistryAddress is not configured and push_destination: > true is set. It'll configure the local ip and an undercloud ctlplane > host name as well. > > Unfortunately docker/podman always attempt https first and fallback to > http if not available (this can get weird). If the host is not in the > insecure list, it won't fall back to http. > > > [1] http://paste.openstack.org/show/cYQM2k77bIh14Zzr5Kjn/ > > [2] > https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/container_image_prepare.html > > [3] > https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html/transitioning_to_containerized_services/installing-an-undercloud-with-containers > > > > > > > > -- > > Ruslanas Gžibovskis > > +370 6030 7030 > > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aschultz at redhat.com Tue Oct 6 14:21:54 2020 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 6 Oct 2020 08:21:54 -0600 Subject: [tripleo][undercloud] use local container images in insecure repo In-Reply-To: References: Message-ID: It's a top level var and expects a list. So under parameter_defaults. paramter_defaults: DockerInsecureRegistryAddress: - harbor.vgtu.lt ContainerImagePrepare: - set: .... On Tue, Oct 6, 2020 at 8:20 AM Ruslanas Gžibovskis wrote: > > Hi, in which place I should add DockerInsecureRegistryAddress ? > In which Level? I have added in 2 levels > parameter_defaults: > ContainerImagePrepare: > - DockerInsecureRegistryAddress: harbor.vgtu.lt > set: > ceph_alertmanager_image: alertmanager > ceph_alertmanager_namespace: harbor.vgtu.lt/prom > ceph_alertmanager_tag: v0.16.2 > ceph_grafana_image: grafana > ceph_grafana_namespace: harbor.vgtu.lt/grafana > ceph_grafana_tag: 5.4.3 > ceph_image: daemon > ceph_namespace: harbor.vgtu.lt/ceph > ceph_node_exporter_image: node-exporter > ceph_node_exporter_namespace: harbor.vgtu.lt/prom > ceph_node_exporter_tag: v0.17.0 > ceph_prometheus_image: prometheus > ceph_prometheus_namespace: harbor.vgtu.lt/prom > ceph_prometheus_tag: v2.7.2 > ceph_tag: v4.0.12-stable-4.0-nautilus-centos-7-x86_64 > default_tag: true > name_prefix: centos-binary- > name_suffix: '' > namespace: harbor.vgtu.lt/testukas > insecure: true > DockerInsecureRegistryAddress: harbor.vgtu.lt > neutron_driver: ovn > rhel_containers: false > tag: current-tripleo > tag_from_label: rdo_version > > And I have launched tcpdump with filter: host harbor.vgtu.lt and port 80 and I do not receive any. > Also it is in undercloud.conf insecure list (first and last one, twice :) and it is in registries.conf in /etc/containers > > > > On Tue, 6 Oct 2020 at 16:09, Alex Schultz wrote: >> >> On Tue, Oct 6, 2020 at 1:15 AM Ruslanas Gžibovskis wrote: >> > >> > Hi all, >> > >> > I have been trying to use containers from local container image repo which is insecure, but it is always trying to use TLS version, and I do not have https there. even if I would have, I would not have CERT signed, so still it is insecure. It is always trying to access over WWW:443. >> > >> > my registries.conf [1] and I am able to fetch image from the registry [1] and my container image prepare file contains updated repos, I have even added insecure: true >> > >> > any tips? I am following [2] and [3] >> > >> >> Use DockerInsecureRegistryAddress to configure the list of insecure >> registries. You can include this in the container image prepare file. >> If you are using push_destination: true, be sure to add the undercloud >> in there by default. We have logic to magically add this if >> DockerInsecureRegistryAddress is not configured and push_destination: >> true is set. It'll configure the local ip and an undercloud ctlplane >> host name as well. >> >> Unfortunately docker/podman always attempt https first and fallback to >> http if not available (this can get weird). If the host is not in the >> insecure list, it won't fall back to http. 
>> >> > [1] http://paste.openstack.org/show/cYQM2k77bIh14Zzr5Kjn/ >> > [2] https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/container_image_prepare.html >> > [3] https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html/transitioning_to_containerized_services/installing-an-undercloud-with-containers >> > >> > >> > >> > -- >> > Ruslanas Gžibovskis >> > +370 6030 7030 >> > > > -- > Ruslanas Gžibovskis > +370 6030 7030 From mnaser at vexxhost.com Tue Oct 6 14:43:15 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 6 Oct 2020 10:43:15 -0400 Subject: [Trove] Project udpate In-Reply-To: References: Message-ID: Lingxian, This is all awesome work. I'm happy to see this progress. I will try and play with Trove when we have sometime. Thank you for your awesome progress. Regards Mohammed On Sun, Oct 4, 2020 at 8:11 PM Lingxian Kong wrote: > > Hi there, > > As the official Victoria release is approaching and it has been a long time > silence for Trove in the upstream, I think it's good time for me as the Trove > PTL for the last 3 dev cycles to have a project update. The things that will be > described below have not been achieved in one single dev cycle, but are some > significant changes since the 'dark time' of Trove project in the past. Tips > hat to those who have made contributions to Trove project ever before. > > ## Service tenant configuration > > Service tenant configuration was added in Stein release, before that, it's > impossible to deploy Trove in the public cloud (even not for some private cloud > due to security concerns) because the user may have access to the guest > instance which contains sensitive data in the config files, the users can also > perform operations towards either storage or networking resources which may > bring much management overhead and make it easy to break the database > functionality. > > With service tenant configuration (which is currently the default setting in > devstack), almost all the cloud resources(except the Swift objects for backup > data) created for a Trove instance are only visible to the Trove service user. > As Trove users, they can only see a Trove instance, but know nothing about the > Nova VM, Cinder volume, Neutron management network, and security groups under > the hood. The only way to operate Trove instances is to interact with Trove API. > > ## Message queue security concerns > > To do database operations, trove controller services communicate with > trove-guestagent service inside the instance via message queue service (i.e. > RabbitMQ in most environments). In the meantime, trove-guestagent periodically > sends status update information to trove-conductor through the same messaging > system. > > In the current design, the RabbitMQ username and password need to be configured > in the trove-guestagent config file, which brought significant security concern > for the cloud deployers in the past. If the guest instance is compromised, then > guest credentials are compromised, which means the messaging system is > compromised. > > As part of the solution, a security enhancement was introduced in the Ocata > release, using encryption keys to protect the messages between the control > plane and the guest instances. First, the rabbitmq credential should only have > access to trove services. 
Second, even with the rabbitmq credential and the > message encryption key of the particular instance, the communication from the > guest agent and trove controller services are restricted in the context of that > particular instance, other instances are not affected as the malicious user > doesn't know their message encryption keys. > > Additionally, since Ussuri, trove is running in service tenant model in > devstack by default which is also the recommended deployment configuration. > Most of the cloud resources(except the Swift objects for backup data) created > for a trove instance should only be visible to the trove service user, which > also could decrease the attack surface. > > ## Datastore images > > Before Victoria, Trove provided a bunch of diskimage-builder elements for > building different datastore images. As contributors were leaving, most of the > elements just became unmaintained except for MySQL and MariaDB. To solve the > problem, database containerization was introduced in Victoria dev cycle, so > that the database service is running in a docker container inside the guest > instance, trove guest agent is pulling container image for a particular > datastore when initializing guest instance. Trove is not maintaining those > container images. > > That means, since Victoria, the cloud provider only needs to maintain one > single datastore image which only contains common code that is datastore > independent. However, for backward compatibility, the cloud provider still > needs to create different datastores but using the same Glance image ID. > > Additionally, using database container also makes it much easier for database > operations and management. > > To upgrade from the older version to Victoria onwards, the Trove user has to > create backups before upgrading, then create instances from the backup, so > downtime is expected. > > ## Supported datastores > > Trove used to support several datastores such as MySQL, MariaDB, PostgreSQL, > MongoDB, CouchDB, etc. Most of them became unmaintained because of a lack of > maintainers in the community. > > Currently, only MySQL and MariaDB drivers are fully supported and tested in the > upstream. PostgreSQL driver was refactored in Victoria dev cycle and is in > experimental status again. > > Adding extra datastores should be quite easy by implementing the interfaces > between trove task manager and guest agent. Again, no need to maintain separate > datastore images thanks to the container technology. > > ## Instance backup > > At the same time as we were moving to use container for database services, we > also moved the backup and restore functions out of trove guest agent code > because the backup function is usually using some 3rd party software which we > don't want to pre-install inside the datastore image. As a result, we are using > container as well for database backup and restore. > > For more information about the backup container image, see > https://lingxiankong.github.io/2020-04-14-database-backup-docker-image.html. > > ## Others > > There are many other improvements not mentioned above added to Trove since Train, e.g. > > * Access configuration for the instance. > * The swift backend customization for backup. > * Online volume resize support. > * XFS disk format for database data volume. > * API documentation improvement. > * etc. > > By the way, Catalyst Cloud has already deployed Trove (in Alpha) in our public > cloud in New Zealand, we are getting feedback from customers. 
I believe there > are other deployers already have Trove in their production but running an old > version because of previous upstream situation in the past. If you are one of > them and interested in upgrading to the latest, please either reply to this > email or send personal email to me, I would be very happy to provide any help > or guidance. For those who are still in evaluation phase, you are also welcome > to reach out for any questions. I'm always in the position to help in > #openstack-trove IRC channel. > > --- > Lingxian Kong > Senior Software Engineer > Catalyst Cloud > www.catalystcloud.nz -- Mohammed Naser VEXXHOST, Inc. From ruslanas at lpic.lt Tue Oct 6 14:43:22 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Tue, 6 Oct 2020 17:43:22 +0300 Subject: [tripleo][undercloud] use local container images in insecure repo In-Reply-To: References: Message-ID: Or maybe I can specify, some exact version of ironic containers, that were working: docker.io/tripleou/centos-binary-ironic-inspector at sha256:ad5d58c4cce48ed0c660a0be7fed69f53202a781e75b1037dcee96147e9b8c4b for installation to grab? And trying your suggestion. Also generating self signed Cert and will be adding it to undercloud host to trust list, or it should be also added to undercloud.conf with env files also? On Tue, 6 Oct 2020 at 17:22, Alex Schultz wrote: > It's a top level var and expects a list. So under parameter_defaults. > > paramter_defaults: > DockerInsecureRegistryAddress: > - harbor.vgtu.lt > ContainerImagePrepare: > - set: > .... > > On Tue, Oct 6, 2020 at 8:20 AM Ruslanas Gžibovskis > wrote: > > > > Hi, in which place I should add DockerInsecureRegistryAddress ? > > In which Level? I have added in 2 levels > > parameter_defaults: > > ContainerImagePrepare: > > - DockerInsecureRegistryAddress: harbor.vgtu.lt > > set: > > ceph_alertmanager_image: alertmanager > > ceph_alertmanager_namespace: harbor.vgtu.lt/prom > > ceph_alertmanager_tag: v0.16.2 > > ceph_grafana_image: grafana > > ceph_grafana_namespace: harbor.vgtu.lt/grafana > > ceph_grafana_tag: 5.4.3 > > ceph_image: daemon > > ceph_namespace: harbor.vgtu.lt/ceph > > ceph_node_exporter_image: node-exporter > > ceph_node_exporter_namespace: harbor.vgtu.lt/prom > > ceph_node_exporter_tag: v0.17.0 > > ceph_prometheus_image: prometheus > > ceph_prometheus_namespace: harbor.vgtu.lt/prom > > ceph_prometheus_tag: v2.7.2 > > ceph_tag: v4.0.12-stable-4.0-nautilus-centos-7-x86_64 > > default_tag: true > > name_prefix: centos-binary- > > name_suffix: '' > > namespace: harbor.vgtu.lt/testukas > > insecure: true > > DockerInsecureRegistryAddress: harbor.vgtu.lt > > neutron_driver: ovn > > rhel_containers: false > > tag: current-tripleo > > tag_from_label: rdo_version > > > > And I have launched tcpdump with filter: host harbor.vgtu.lt and port > 80 and I do not receive any. > > Also it is in undercloud.conf insecure list (first and last one, twice > :) and it is in registries.conf in /etc/containers > > > > > > > > On Tue, 6 Oct 2020 at 16:09, Alex Schultz wrote: > >> > >> On Tue, Oct 6, 2020 at 1:15 AM Ruslanas Gžibovskis > wrote: > >> > > >> > Hi all, > >> > > >> > I have been trying to use containers from local container image repo > which is insecure, but it is always trying to use TLS version, and I do not > have https there. even if I would have, I would not have CERT signed, so > still it is insecure. It is always trying to access over WWW:443. 
> >> > > >> > my registries.conf [1] and I am able to fetch image from the registry > [1] and my container image prepare file contains updated repos, I have even > added insecure: true > >> > > >> > any tips? I am following [2] and [3] > >> > > >> > >> Use DockerInsecureRegistryAddress to configure the list of insecure > >> registries. You can include this in the container image prepare file. > >> If you are using push_destination: true, be sure to add the undercloud > >> in there by default. We have logic to magically add this if > >> DockerInsecureRegistryAddress is not configured and push_destination: > >> true is set. It'll configure the local ip and an undercloud ctlplane > >> host name as well. > >> > >> Unfortunately docker/podman always attempt https first and fallback to > >> http if not available (this can get weird). If the host is not in the > >> insecure list, it won't fall back to http. > >> > >> > [1] http://paste.openstack.org/show/cYQM2k77bIh14Zzr5Kjn/ > >> > [2] > https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/container_image_prepare.html > >> > [3] > https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html/transitioning_to_containerized_services/installing-an-undercloud-with-containers > >> > > >> > > >> > > >> > -- > >> > Ruslanas Gžibovskis > >> > +370 6030 7030 > >> > > > > > > -- > > Ruslanas Gžibovskis > > +370 6030 7030 > > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Tue Oct 6 14:57:29 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Tue, 6 Oct 2020 17:57:29 +0300 Subject: [ironic][ussuri][centos8] fails to introspect: my fsm encountered an exception In-Reply-To: References: Message-ID: I am curious, could I somehow use my last known working version? It was: docker.io/tripleou/centos-binary-ironic-inspector at sha256:ad5d58c4cce48ed0c660a0be7fed69f53202a781e75b1037dcee96147e9b8c4b On Thu, 1 Oct 2020 at 21:00, Ruslanas Gžibovskis wrote: > Replying in line, not my favourite way, so not sure if i do this correctly > or not. > I could try to make access to this undercloud host if you want. > > On Thu, 1 Oct 2020, 20:36 Julia Kreger, > wrote: > >> If memory serves me correctly, TripleO shares a folder outside the >> container for the configuration and logs are written out to the >> container console so the container itself is not exactly helpful. >> > > Would you like to see exact configs? Which ones? I can grep/cat it. Same > with all log files. If you need i can provide them to you. > > Interestingly the container contents you supplied is labeled >> ironic-inspector, but contains the ironic release from Ussuri. >> > > Yes. I use ussuri release from centos8 repos, and all the scripts it > provides. > >> >> I think you're going to need someone with more context into how >> TripleO has assembled the container assets to provide more clarity >> than I can provide. My feeling is likely some sort of configuration >> issue for inspector, since the single inspection fails and the >> supplied log data shows the request coming in. >> > > My earlier setup, which was deployed around 4 weeks ago, worked fine, and > the one i have deployed last Friday, was not working. So something, if you > have reverted it, might not been reverted in centos flows? Might it be > right? 
> >> >> On Thu, Oct 1, 2020 at 9:54 AM Ruslanas Gžibovskis >> wrote: >> > >> > you can access it here [1] >> > I have done xz -9 to it in addition ;) so takes around 110 MB instead >> of 670MB >> > >> > >> > [1] >> https://proxy.qwq.lt/fun/centos-binary-ironic-inspector.current-tripleo.tar.xz >> > >> > On Thu, 1 Oct 2020 at 19:37, Ruslanas Gžibovskis >> wrote: >> >> >> >> Hi Julia, >> >> >> >> 1) I think, podman ps sorts according to starting time. [1] >> >> So if we trust in it, so ironic is first one (in the bottom) and first >> which is still running (not configuration run). >> >> >> >> 2.1) ok, fails same place. baremetal node show CPU2 [2] >> >> 2.2) Now, logs look same too [3] >> >> >> >> 0) regarding image I have, I can podman save (a first option from man >> podman-save = podman save --quiet -o alpine.tar >> ironic-inspector:current-tripleo) >> >> >> >> P.S. baremetal is alias: alias baremetal="openstack baremetal" >> >> >> >> [1] http://paste.openstack.org/show/uejDzLWpPvMdLFAJTCam/ >> >> [2] http://paste.openstack.org/show/ryYv54g9XoWSKGdCOuqh/ >> >> [3] http://paste.openstack.org/show/syKp1MtkeOa1J5aglfNj/ >> >> >> > >> > >> > -- >> > Ruslanas Gžibovskis >> > +370 6030 7030 >> > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mthode at mthode.org Tue Oct 6 16:43:18 2020 From: mthode at mthode.org (Matthew Thode) Date: Tue, 6 Oct 2020 11:43:18 -0500 Subject: [requirements][all] requirements is unfrozen, cycle trailing beware Message-ID: <20201006164318.uez5xydb4nwhu2cs@mthode.org> Hi all, The requirements project just branched stable/victoria meaning that we are also now unfrozen and open for business. This also means that non-branched (cycle trailing) projects will need to watch out for constraints updates meant for wallaby (now master) could impact your development work on victoria. -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jp.methot at planethoster.info Tue Oct 6 18:35:16 2020 From: jp.methot at planethoster.info (=?utf-8?Q?Jean-Philippe_M=C3=A9thot?=) Date: Tue, 6 Oct 2020 14:35:16 -0400 Subject: [nova][ops] The need for healing instance info cache to base itself on neutron for its port list Message-ID: Hi, This is related to bug https://bugs.launchpad.net/nova/+bug/1751923 . I don’t see if this was fixed in more recent versions as we are running Rocky, but according to the different code reviews linked to the bug report, this was never committed into Openstack master. I apologize in advance if this was already fixed elsewhere (it’s marked as fixed in Stein, but the reviews say the code was never committed?). Essentially, we’re running into a production issue where sometimes, after being shutdown for a while, our VMs ports just straight up disappear from Nova. Obviously, since this is production, we have to scramble to link back the port to the VM to bring the VM back up. As a result, we have not identified yet the exact source of our issue. However, we do have tested Mohammed Naser’s patch linked to this issue and it has at the very least offered us a band-aid since the VMs appear to be keeping their ports now. Would it be possible to review and commit this patch or Matt Riedeman’s patch to master and backport it? Couldn’t it just have a configuration option to enable it? 
While I’m not convinced it can fix the root cause of our problem, it could at least contribute to the stability of our and other people’s Openstack clusters.

Jean-Philippe Méthot
Senior Openstack system administrator
Administrateur système Openstack sénior
PlanetHoster inc.
4414-4416 Louis B Mayer
Laval, QC, H7P 0G1, Canada
TEL : +1.514.802.1644 - Poste : 2644
FAX : +1.514.612.0678
CA/US : 1.855.774.4678
FR : 01 76 60 41 43
UK : 0808 189 0423

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From smooney at redhat.com Tue Oct 6 19:13:59 2020
From: smooney at redhat.com (Sean Mooney)
Date: Tue, 06 Oct 2020 20:13:59 +0100
Subject: [nova][ops] The need for healing instance info cache to base itself on neutron for its port list
In-Reply-To:
References:
Message-ID:

On Tue, 2020-10-06 at 14:35 -0400, Jean-Philippe Méthot wrote:
> Hi,
>
> This is related to bug https://bugs.launchpad.net/nova/+bug/1751923 . I
> don’t see if this was fixed in more recent versions as we are running Rocky, but according to the different code
> reviews linked to the bug report, this was never committed into Openstack master. I apologize in advance if this was
> already fixed elsewhere (it’s marked as fixed in Stein, but the reviews say the code was never committed?).

This was committed in https://review.opendev.org/#/c/591607/ and was first released in Stein. It was not backported
upstream because https://review.opendev.org/#/c/614167/20 has a bug, but we backported just
https://review.opendev.org/#/c/591607/ downstream in Red Hat OSP all the way back to Newton and it works fine. So for
Red Hat OSP at least this is fixed, but we did not backport the online DB migration in
https://review.opendev.org/#/c/614167/20 which tries to populate the virtual interface table, just the force refresh.

> Essentially, we’re running into a production issue where sometimes, after being shutdown for a while, our VMs ports
> just straight up disappear from Nova. Obviously, since this is production, we have to scramble to link back the port
> to the VM to bring the VM back up. As a result, we have not identified yet the exact source of our issue. However, we
> do have tested Mohammed Naser’s patch linked to this issue and it has at the very least offered us a band-aid since
> the VMs appear to be keeping their ports now.
>
> Would it be possible to review and commit this patch or Matt Riedeman’s patch to master and backport it?

We did not backport it due to the DB migration bug, but it is fixed from Stein on upstream. Given we have not had
issues backporting https://review.opendev.org/#/c/591607/ without https://review.opendev.org/#/c/614167/20 downstream,
I think it would be reasonable to do the same upstream.

> Couldn’t it just have a configuration option to enable it? While I’m not convinced it can fix the root cause of our
> problem, it could at least contribute to the stability of our and other people’s Openstack cluster.

This is a subtle thing: it's not really a nova bug. It's an issue where invalid data is returned by neutron and that
corrupts the nova database. The force refresh will heal nova if and only if the neutron issue that caused the problem
in the first place is resolved. If the neutron issue is not fixed, then the force refresh will continue to force-update
the nova networking info cache with incomplete data.

So if you never have a neutron issue that returns invalid data, you will never need this patch. If you do, for example
because you broke the neutron policy file, then this backport will fix the nova database only once the policy issue is
corrected. We have had several large customers with neutron issues, either from misconfiguring the policy file or from
a third-party SDN controller that maintained port information in an external DB separate from neutron. In the case of
the policy file customer, this self-healing worked once they corrected the issue. In the case of the SDN controller
customer, it did not until the SDN vendor fixed the controller's DB; once it returned correct data again, the periodic
task healed nova.

> Jean-Philippe Méthot
> Senior Openstack system administrator
> Administrateur système Openstack sénior
> PlanetHoster inc.
> 4414-4416 Louis B Mayer
> Laval, QC, H7P 0G1, Canada
> TEL : +1.514.802.1644 - Poste : 2644
> FAX : +1.514.612.0678
> CA/US : 1.855.774.4678
> FR : 01 76 60 41 43
> UK : 0808 189 0423
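For context, the periodic healing task referred to in this thread runs from nova-compute and its cadence is controlled by a single option; a minimal sketch of the relevant nova.conf setting (the value shown is simply the upstream default):

  [DEFAULT]
  # nova-compute walks its instances and refreshes the network info cache
  # for one instance per interval; lowering this heals faster but puts more
  # load on neutron-server, and a value <= 0 disables the periodic task
  heal_instance_info_cache_interval = 60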
From jp.methot at planethoster.info Tue Oct 6 20:15:15 2020
From: jp.methot at planethoster.info (Jean-Philippe Méthot)
Date: Tue, 6 Oct 2020 16:15:15 -0400
Subject: [nova][ops] The need for healing instance info cache to base itself on neutron for its port list
In-Reply-To:
References:
Message-ID: <49BF501E-D62E-4671-9039-C674FB37A487@planethoster.info>

> we did not backport it due to the db migration bug but its fixed form stein on upstream.
> given we have not had issue backporting https://review.opendev.org/#/c/591607/ without
> https://review.opendev.org/#/c/614167/20 downstream i think it would be resonable to do upstream.

If it could be backported to Rocky and maybe even Queens, for those who still run Queens, I’m sure it would be strongly
appreciated (at least we would, since we wouldn’t have to patch manually when we update packages).

>> Couldn’t it just have a configuration option to enable it? While I’m not convinced it can fix the root cause of our
>> problem, it could at least contribute to the stability of our and other people’s Openstack cluster.
> so this is a subtel thing. its not really a nova bug. its an issue where invalid data is returned by neuton and that
> currupts the nova database. The force refesh will heal nova if and only if the neutron issue that casue the issue in the
> first place is resovled. if the neutron issue is not fix then the force refresh will contiune to force update the nova
> networking info cache with incomplete data.
>
> so if you never have a netuon issue that returns invalid data then you will never need this patch
> if you do for say because you broke the neutron policy file then this backprot will fix the nova database only
> once the policy issue is corrected. we have had several large customer that have had issue with neutron due to
> misconfiging the polify file or due to a third part sdn contol who maintianed port information in an external db
> seperate form neutron.
Doesn’t that imply that neutron has consistently returned correct data in our setup in particular? So our issue could be elsewhere? I could be wrong and it’s not a hill I’m willing to die on, I’m just pointing out my own observations. Jean-Philippe Méthot Senior Openstack system administrator Administrateur système Openstack sénior PlanetHoster inc. 4414-4416 Louis B Mayer Laval, QC, H7P 0G1, Canada TEL : +1.514.802.1644 - Poste : 2644 FAX : +1.514.612.0678 CA/US : 1.855.774.4678 FR : 01 76 60 41 43 UK : 0808 189 0423 -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.morin at gmail.com Tue Oct 6 21:04:58 2020 From: arnaud.morin at gmail.com (Arnaud Morin) Date: Tue, 6 Oct 2020 21:04:58 +0000 Subject: [nova][ops] The need for healing instance info cache to base itself on neutron for its port list In-Reply-To: <49BF501E-D62E-4671-9039-C674FB37A487@planethoster.info> References: <49BF501E-D62E-4671-9039-C674FB37A487@planethoster.info> Message-ID: <20201006210458.GR8890@sync> Hello, We also backported this patch up to newton and it works fine most of the time. The thing is that, the heal operation is healing instances one by one, the default interval between heal is 60 seconds. So based on number of instances you have on host, you may have to wait a long time before the instance is really healed. You can of course reduce this interval between heal, but then it would load your neutron server. If you have a lot of computes it can be an issue. We choose another way in my company by implementing this: https://review.opendev.org/#/c/702394/ which is not perfect as commented by Sean and others, but with this, you have a quick and easy way to refresh one instance info cache using: nova refresh-network Cheers, -- Arnaud Morin On 06.10.20 - 16:15, Jean-Philippe Méthot wrote: > > > > we did not backport it due to the db migration bug but its fixed form stein on upstream. > > given we have not had issue backporting https://review.opendev.org/#/c/591607/ without > > https://review.opendev.org/#/c/614167/20 downstream i think it would be resonable to do upstream. > > If it could be backported to Rocky and maybe even Queens, for those who still run Queens, I’m sure it would be strongly > appreciated (at least we would since we wouldn’t have to patch manually when we update packages) > >> Couldn’t it just have a configuration option to enable it? While I’m not convinced it can fix the root cause of our > >> problem, it could at least contribute to the stability of our and other people’s Openstack cluster. > > so this is a subtel thing. its not really a nova bug. its an issue where invalid data is returned by neuton and that > > currupts the nova database. The force refesh will heal nova if and only if the neutron issue that casue the issue in the > > first place is resovled. if the neutron issue is not fix then the force refresh will contiune to force update the nova > > networking info cache with incomplete data. > > > > so if you never have a netuon issue that returns invalid data then you will never need this patch > > if you do for say because you broke the neutron policy file then this backprot will fix the nova database only > > once the policy issue is corrected. we have had several large customer that have had issue with neutron due to > > misconfiging the polify file or due to a third part sdn contol who maintianed port information in an external db > > seperate form neutron. 
in the case of the policy file customer this self healing worked once they corrected the issue. > > in the case of the sdn contoler customer it did not until the sdn vendor fix the sdn contols db. once it returned > > correct data again the periodic task healed nova. > > That’s interesting because we run a very basic neutron + openvswitch setup with default policies. Additionally, > we have tested the nova patch I mentioned earlier for a long while and it seemed to at least prevent the instances > from losing their port. Doesn’t that imply that neutron has consistently returned correct data in our setup in particular? > So our issue could be elsewhere? I could be wrong and it’s not a hill I’m willing to die on, I’m just pointing out my own > observations. > > Jean-Philippe Méthot > Senior Openstack system administrator > Administrateur système Openstack sénior > PlanetHoster inc. > 4414-4416 Louis B Mayer > Laval, QC, H7P 0G1, Canada > TEL : +1.514.802.1644 - Poste : 2644 > FAX : +1.514.612.0678 > CA/US : 1.855.774.4678 > FR : 01 76 60 41 43 > UK : 0808 189 0423 > From johnsomor at gmail.com Tue Oct 6 21:46:59 2020 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 6 Oct 2020 14:46:59 -0700 Subject: [designate] PTG planning Message-ID: Hello Designate community! I have set up a time slot during the Wallaby PTG to discuss all things Designate on October 29th, 13:00-15:00 UTC. I have also created an etherpad for PTG planning: https://etherpad.opendev.org/p/wallaby-ptg-designate Please add your topics to the list and I will try to set a rough agenda. Michael From johnsomor at gmail.com Tue Oct 6 21:50:09 2020 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 6 Oct 2020 14:50:09 -0700 Subject: [octavia] PTG planning Message-ID: Hello Octavia community! I have set up a time slot during the Wallaby PTG to discuss all things Octavia on October 27th and 28th, 13:00-17:00 UTC. There is an Octavia etherpad for PTG planning: https://etherpad.opendev.org/p/wallaby-ptg-octavia Please add your topics to the list and I will try to set a rough agenda. Michael From fungi at yuggoth.org Tue Oct 6 23:58:50 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 6 Oct 2020 23:58:50 +0000 Subject: [all][elections][ptl][tc] Conbined PTL/TC Voting Kickoff Message-ID: <20201006235850.dngjnx27sbugd2yt@yuggoth.org> Polls for PTL and TC elections are now open and will remain open for you to cast your vote until Oct 13, 2020 23:45 UTC. We are selecting 4 TC members, and are having PTL elections for Telemetry. Please rank all candidates in your order of preference. You are eligible to vote in the TC election if you are a Foundation individual member[0] that also has committed to any official project team's deliverable repositories[1] over the Sep 27, 2019 00:00 UTC - Sep 29, 2020 00:00 UTC timeframe (Ussuri to Victoria) or if you are in the list of extra-atcs[2] for any official project team. You are eligible to vote in a PTL election if you are a Foundation individual member[0] and had a commit in one of that team's deliverable repositories[1] over the Sep 27, 2019 00:00 UTC - Sep 29, 2020 00:00 UTC timeframe (Ussuri to Victoria) or if you are in that team's list of extra-atcs[2]. If you are eligible to vote in an election, you should find your email with a link to the Condorcet page to cast your vote in the inbox of your Gerrit preferred email[3]. 
What to do if you don't see the email and have a commit in at least one of the projects having an election: * check the trash or spam folders of your Gerrit preferred email address, in case it went into trash or spam * wait a bit and check again, in case your email server is a bit slow * find the sha of at least one commit from the project's deliverable repos[0] and email the election officials[4]. If we can confirm that you are entitled to vote, we will add you to the voters list for the appropriate election. Our democratic process is important to the health of OpenStack, please exercise your right to vote! Candidate statements/platforms can be found linked to Candidate names on this page: https://governance.openstack.org/election/ Happy voting, [0] https://www.openstack.org/community/members/ [1] The list of the repositories eligible for electoral status: https://opendev.org/openstack/governance/raw/tag/0.8.0/reference/projects.yaml [2] Look for the extra-atcs element in [1] [3] Sign into review.openstack.org: Go to Settings > Contact Information. Look at the email listed as your preferred email. That is where the ballot has been sent. [4] https://governance.openstack.org/election/#election-officials -- Jeremy Stanley on behalf of the OpenStack Technical Election Officials -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From xavpaice at gmail.com Wed Oct 7 06:42:21 2020 From: xavpaice at gmail.com (Xav Paice) Date: Wed, 7 Oct 2020 19:42:21 +1300 Subject: [charms] Zaza bundle tests In-Reply-To: <88f989d3-0496-d696-13a1-4272e971945b@canonical.com> References: <88f989d3-0496-d696-13a1-4272e971945b@canonical.com> Message-ID: On Mon, 5 Oct 2020 at 19:04, Chris MacNaughton < chris.macnaughton at canonical.com> wrote: > On 04-10-2020 08:54, Xav Paice wrote: > > I was writing a patch recently and in order to test it, I needed to > > make changes to the test bundles. I ended up making the same change > > across several files, and missing one (thanks to the reviewer for > > noticing that!). > > > > Some of the other projects I'm involved with use symlinks to a base > > bundle with overlays: > -- snip -- > > This means that I can edit base.yaml just once, and if a change is > > specific to any of the particular bundles there's a place for that in > > the individual overlays. When we have bundles for each release going > > back to Mitaka, this could be quite an effort saver. > > I'd be quite interested in seeing where this could go, as there is a lot > of duplication in the charms' test code that could probably be > dramatically reduced by taking this approach! Could you propose a change > to one of the repos as an example that we could functionally validate, > as well as confirming the assumption that the only differences between > the bundles is the series, and openstack-origin/source configs? > > Here's a simple example, where there's two base bundles and a pile of overlays - https://review.opendev.org/#/c/756399/ > Chris MacNaughton > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From xavpaice at gmail.com Wed Oct 7 06:45:22 2020 From: xavpaice at gmail.com (Xav Paice) Date: Wed, 7 Oct 2020 19:45:22 +1300 Subject: [charms] Zaza bundle tests In-Reply-To: References: Message-ID: On Wed, 7 Oct 2020 at 02:37, Alex Kavanagh wrote: > Hi > > So I'm not massively against the idea, but I would like to present some > potential disadvantages for consideration: > > I have to admit to not being keen to using symlinks for the functional > test yaml files. My main objection is maintenance as new openstack and > ubuntu releases occur and bundles are added and removed from the charm. > > At present, without symlinks (apart from in the overlays), the bundle for > an ubuntu-openstack version is a plain file. To remove a version, it is > just deleted. If there are symlinks then the 'base.yaml' version > represents the one that the charm starts with (say bionic-queens). And > then bionic-rocky is a symlink (perhaps with an overlay) and bionic-stein > is another symlink, etc. However, at some point in the future > bionic-queens will eventually be removed. base.yaml is the > 'bionic-queens'. So what is done with base.yaml? Do we make it > 'focal-ussuri' and change all the overlays? Leave it as is? Have a new > base for each Ubuntu LTS and work from that? > > Take a quick look at https://review.opendev.org/#/c/756399/ for an example of what that might look like - to remove a bundle, e.g. the Trusty bundle, we could just rm trusty-mitaka.yaml (which we would do anyway) and rm overlays/trusty-mitaka.yaml.j2. Job done. If there's a bundle with no more symlinks pointing at it, that is something which could easily be missed though, and that's really up to folks that work with these charms on a daily basis (like yourself) to decide if that's an issue or not. Whilst the current system isn't DRY, it does make it simple to see what's > in a particular test bundle for a variation. > > Having said all of the above, it is a bit of a pain to manage all the > separate files as well, especially when there are changes across multiple > versions of the tests. > > Thanks > Alex. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyppe at gmail.com Wed Oct 7 09:20:10 2020 From: tonyppe at gmail.com (Tony Pearce) Date: Wed, 7 Oct 2020 17:20:10 +0800 Subject: [Octavia][kolla-ansible][kayobe] - network configuration knowledge gathering In-Reply-To: References: Message-ID: Hi Mark et al, thank you for your help the other day. I'm still a bit stuck with this one and I am trying to test the octavia network by deploying a regular openstack instance onto it (CentOS7) which is failing. In fact, my other and currently "working" external network is also failing to deploy instances directly onto this neetwork also. So I am wondering if there's some other step which I am missing here. Completely forgetting about the octavia network, I'm curious to understand why deploying instances to an external network has always failed for me. I have a network like this: Real network switch VLAN 20 ----> openstack external network "br-ex" ( 192.168.20.0/24)----openstack router----- openstack vxlan local network ( 172.16.1.0/24) I can successfully deploy instances onto 172.16.1.0/24 but always fail when attempting to deploy to 192.168.20.0/24. The octavia network is almost a mirror of the above except that the controller also has an IP address / ip interface onto the same. 
But forgetting about this, would you happen to have any ideas or pointers that I could check that could help me with regards to why I am unable to deploy an instance to 192.168.20.0/24 network? There is a DHCP agent on this network. When I try and deploy an instance using Horizon, the dashboard shows that the instance has an ip on this network for a brief moment, but then it disappears and soon after, fails with an error that it cannot plug into it. The understanding / expectation I have is that the instance will run on the compute node and tunnel the network back to the network node where it will be presented onto 192.168.20.0/24. Does the compute node also need an ip interface within this network to work? I ask this because the octavia network did indeed have this but it was too failing with the same error. Any pointers appreciated so I can try and keep my hair. Thank you :) Tony Pearce On Mon, 5 Oct 2020 at 16:20, Mark Goddard wrote: > Following up in IRC: > > http://eavesdrop.openstack.org/irclogs/%23openstack-kolla/%23openstack-kolla.2020-10-05.log.html#t2020-10-05T06:44:47 > > On Mon, 5 Oct 2020 at 08:50, Tony Pearce wrote: > > > > Hi all, > > > > Openstack version is Train > > Deployed via Kayobe > > > > I am trying to deploy octavia lbaas but hitting some blockers with > regards to how this should be set up. I think the current issue is the lack > of neutron bridge for the octavia network and I cannot locate how to > achieve this from the documentation. > > > > I have this setup at the moment which I've added another layer 2 network > provisioned to the controller and compute node, for running octavia lbaas: > > > > [Controller node]------------octavia network-----------[Compute node] > > > > However as there's no bridge, the octavia instance cannot connect to it. > The exact error from the logs: > > > > 2020-10-05 14:37:34.070 6 INFO > neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Mapping > physical network physnet3 to bridge broct > > 2020-10-05 14:37:34.070 6 ERROR > neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Bridge > broct for physical network physnet3 does not > > > > Bridge "broct" does exist but it's not a neutron bridge: > > > > [root at juc-kocon1-prd kolla]# brctl show > > bridge name bridge id STP enabled interfaces > > brext 8000.001a4a16019a no eth5 > > p-brext-phy > > broct 8000.001a4a160173 no eth6 > > docker0 8000.0242f5ed2aac no > > [root at juc-kocon1-prd kolla]# > > > > > > I've been through the docs a few times but I am unable to locate this > info. Most likely the information is there but I am unsure what I need to > look for, hence missing it. > > > > Would any of you be able to help shed light on this or point me to the > documentation? > > > > Thank you > > > > Tony Pearce > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Wed Oct 7 10:13:33 2020 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 7 Oct 2020 11:13:33 +0100 Subject: [Octavia][kolla-ansible][kayobe] - network configuration knowledge gathering In-Reply-To: References: Message-ID: On Wed, 7 Oct 2020 at 10:20, Tony Pearce wrote: > > Hi Mark et al, thank you for your help the other day. I'm still a bit stuck with this one and I am trying to test the octavia network by deploying a regular openstack instance onto it (CentOS7) which is failing. In fact, my other and currently "working" external network is also failing to deploy instances directly onto this neetwork also. 
So I am wondering if there's some other step which I am missing here. Completely forgetting about the octavia network, I'm curious to understand why deploying instances to an external network has always failed for me. I have a network like this: > > Real network switch VLAN 20 ----> openstack external network "br-ex" (192.168.20.0/24)----openstack router----- openstack vxlan local network (172.16.1.0/24) > > I can successfully deploy instances onto 172.16.1.0/24 but always fail when attempting to deploy to 192.168.20.0/24. > > The octavia network is almost a mirror of the above except that the controller also has an IP address / ip interface onto the same. But forgetting about this, would you happen to have any ideas or pointers that I could check that could help me with regards to why I am unable to deploy an instance to 192.168.20.0/24 network? There is a DHCP agent on this network. When I try and deploy an instance using Horizon, the dashboard shows that the instance has an ip on this network for a brief moment, but then it disappears and soon after, fails with an error that it cannot plug into it. The understanding / expectation I have is that the instance will run on the compute node and tunnel the network back to the network node where it will be presented onto 192.168.20.0/24. Does the compute node also need an ip interface within this network to work? I ask this because the octavia network did indeed have this but it was too failing with the same error. > > Any pointers appreciated so I can try and keep my hair. Thank you :) For instances to be attached to provider networks (VLAN or flat), you need to set kolla_enable_neutron_provider_networks to true in kolla.yml. The compute hosts will need to be connected to the physical network in the same way as controllers, i.e. they will have an interface on the networks in the external_net_names list. To apply the change, you'll need to run host configure, then service deploy for openvswitch and neutron. Mark > > Tony Pearce > > > > On Mon, 5 Oct 2020 at 16:20, Mark Goddard wrote: >> >> Following up in IRC: >> http://eavesdrop.openstack.org/irclogs/%23openstack-kolla/%23openstack-kolla.2020-10-05.log.html#t2020-10-05T06:44:47 >> >> On Mon, 5 Oct 2020 at 08:50, Tony Pearce wrote: >> > >> > Hi all, >> > >> > Openstack version is Train >> > Deployed via Kayobe >> > >> > I am trying to deploy octavia lbaas but hitting some blockers with regards to how this should be set up. I think the current issue is the lack of neutron bridge for the octavia network and I cannot locate how to achieve this from the documentation. >> > >> > I have this setup at the moment which I've added another layer 2 network provisioned to the controller and compute node, for running octavia lbaas: >> > >> > [Controller node]------------octavia network-----------[Compute node] >> > >> > However as there's no bridge, the octavia instance cannot connect to it. 
The exact error from the logs: >> > >> > 2020-10-05 14:37:34.070 6 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Mapping physical network physnet3 to bridge broct >> > 2020-10-05 14:37:34.070 6 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Bridge broct for physical network physnet3 does not >> > >> > Bridge "broct" does exist but it's not a neutron bridge: >> > >> > [root at juc-kocon1-prd kolla]# brctl show >> > bridge name bridge id STP enabled interfaces >> > brext 8000.001a4a16019a no eth5 >> > p-brext-phy >> > broct 8000.001a4a160173 no eth6 >> > docker0 8000.0242f5ed2aac no >> > [root at juc-kocon1-prd kolla]# >> > >> > >> > I've been through the docs a few times but I am unable to locate this info. Most likely the information is there but I am unsure what I need to look for, hence missing it. >> > >> > Would any of you be able to help shed light on this or point me to the documentation? >> > >> > Thank you >> > >> > Tony Pearce >> > From C-Albert.Braden at charter.com Wed Oct 7 12:10:21 2020 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Wed, 7 Oct 2020 12:10:21 +0000 Subject: [kolla] Restarting RMQ Message-ID: When I learned OpenStack at eBay we ran RMQ on dedicated VMs. My new employer runs kolla and everything is in containers. When I was running RMQ on VMs, it would lock up and we would have to restart it on all 3 VMs. If that didn't work, we had a "cold start" procedure where we would stop all 3, delete the contents of /var/lib/rabbitmq/mnesia/ and then run some commands to set the correct config and permissions before starting. What is the correct way to restart RMQ in kolla? Should I log into the containers and restart services there, or use rabbitmqctl, or just stop and start the containers? Is stop/starting the containers the equivalent of the "cold start" procedure? I apologize for the nonsense below. So far I have not been able to stop it from being attached to my external emails. I'm working on it. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyppe at gmail.com Wed Oct 7 12:14:55 2020 From: tonyppe at gmail.com (Tony Pearce) Date: Wed, 7 Oct 2020 20:14:55 +0800 Subject: [Octavia][kolla-ansible][kayobe] - network configuration knowledge gathering In-Reply-To: References: Message-ID: Thank you Mark for taking the time to reply to me and provide me with this information. It's been a big help. I'll check this tomorrow. Thanks again. Have a great day and stay safe. Regards, Tony Pearce On Wed, 7 Oct 2020 at 18:13, Mark Goddard wrote: > On Wed, 7 Oct 2020 at 10:20, Tony Pearce wrote: > > > > Hi Mark et al, thank you for your help the other day. I'm still a bit > stuck with this one and I am trying to test the octavia network by > deploying a regular openstack instance onto it (CentOS7) which is failing. 
> In fact, my other and currently "working" external network is also failing > to deploy instances directly onto this neetwork also. So I am wondering if > there's some other step which I am missing here. Completely forgetting > about the octavia network, I'm curious to understand why deploying > instances to an external network has always failed for me. I have a network > like this: > > > > Real network switch VLAN 20 ----> openstack external network "br-ex" ( > 192.168.20.0/24)----openstack router----- openstack vxlan local network ( > 172.16.1.0/24) > > > > I can successfully deploy instances onto 172.16.1.0/24 but always fail > when attempting to deploy to 192.168.20.0/24. > > > > The octavia network is almost a mirror of the above except that the > controller also has an IP address / ip interface onto the same. But > forgetting about this, would you happen to have any ideas or pointers that > I could check that could help me with regards to why I am unable to deploy > an instance to 192.168.20.0/24 network? There is a DHCP agent on this > network. When I try and deploy an instance using Horizon, the dashboard > shows that the instance has an ip on this network for a brief moment, but > then it disappears and soon after, fails with an error that it cannot plug > into it. The understanding / expectation I have is that the instance will > run on the compute node and tunnel the network back to the network node > where it will be presented onto 192.168.20.0/24. Does the compute node > also need an ip interface within this network to work? I ask this because > the octavia network did indeed have this but it was too failing with the > same error. > > > > Any pointers appreciated so I can try and keep my hair. Thank you :) > > For instances to be attached to provider networks (VLAN or flat), you > need to set kolla_enable_neutron_provider_networks to true in > kolla.yml. The compute hosts will need to be connected to the physical > network in the same way as controllers, i.e. they will have an > interface on the networks in the external_net_names list. To apply the > change, you'll need to run host configure, then service deploy for > openvswitch and neutron. > Mark > > > > > Tony Pearce > > > > > > > > On Mon, 5 Oct 2020 at 16:20, Mark Goddard wrote: > >> > >> Following up in IRC: > >> > http://eavesdrop.openstack.org/irclogs/%23openstack-kolla/%23openstack-kolla.2020-10-05.log.html#t2020-10-05T06:44:47 > >> > >> On Mon, 5 Oct 2020 at 08:50, Tony Pearce wrote: > >> > > >> > Hi all, > >> > > >> > Openstack version is Train > >> > Deployed via Kayobe > >> > > >> > I am trying to deploy octavia lbaas but hitting some blockers with > regards to how this should be set up. I think the current issue is the lack > of neutron bridge for the octavia network and I cannot locate how to > achieve this from the documentation. > >> > > >> > I have this setup at the moment which I've added another layer 2 > network provisioned to the controller and compute node, for running octavia > lbaas: > >> > > >> > [Controller node]------------octavia network-----------[Compute node] > >> > > >> > However as there's no bridge, the octavia instance cannot connect to > it. 
The exact error from the logs: > >> > > >> > 2020-10-05 14:37:34.070 6 INFO > neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Mapping > physical network physnet3 to bridge broct > >> > 2020-10-05 14:37:34.070 6 ERROR > neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Bridge > broct for physical network physnet3 does not > >> > > >> > Bridge "broct" does exist but it's not a neutron bridge: > >> > > >> > [root at juc-kocon1-prd kolla]# brctl show > >> > bridge name bridge id STP enabled interfaces > >> > brext 8000.001a4a16019a no eth5 > >> > p-brext-phy > >> > broct 8000.001a4a160173 no eth6 > >> > docker0 8000.0242f5ed2aac no > >> > [root at juc-kocon1-prd kolla]# > >> > > >> > > >> > I've been through the docs a few times but I am unable to locate this > info. Most likely the information is there but I am unsure what I need to > look for, hence missing it. > >> > > >> > Would any of you be able to help shed light on this or point me to > the documentation? > >> > > >> > Thank you > >> > > >> > Tony Pearce > >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Wed Oct 7 12:46:59 2020 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 7 Oct 2020 07:46:59 -0500 Subject: [oslo] Proposing Lance Bragstad as oslo.cache core In-Reply-To: References: Message-ID: Hey all, I look forward to helping out where I can and working with this group more closely. Thank you for the vote of confidence! Lance On Mon, Sep 14, 2020 at 11:18 AM Ben Nemec wrote: > This is now done. Welcome to the oslo.cache team, Lance! > > On 8/13/20 10:06 AM, Moises Guimaraes de Medeiros wrote: > > Hello everybody, > > > > It is my pleasure to propose Lance Bragstad (lbragstad) as a new member > > of the oslo.core core team. > > > > Lance has been a big contributor to the project and is known as a > > walking version of the Keystone documentation, which happens to be one > > of the biggest consumers of oslo.cache. > > > > Obviously we think he'd make a good addition to the core team. If there > > are no objections, I'll make that happen in a week. > > > > Thanks. > > > > -- > > > > Moisés Guimarães > > > > Software Engineer > > > > Red Hat > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Wed Oct 7 13:00:29 2020 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 7 Oct 2020 14:00:29 +0100 Subject: [kolla] Kolla klub meeting Message-ID: Hi, Tomorrow (Thursday) we will have a Kolla Klub meeting. As usual, the meeting will be at 15:00 UTC. Let's discuss the following topics: * Virtual PTG * Open discussion on networking Look forward to seeing you there. https://docs.google.com/document/d/1EwQs2GXF-EvJZamEx9vQAOSDB5tCjsDCJyHQN5_4_Sw/edit# Thanks, Mark From nischay.mundas at sooktha.com Wed Oct 7 12:40:06 2020 From: nischay.mundas at sooktha.com (Nischay Mundas) Date: Wed, 7 Oct 2020 18:10:06 +0530 Subject: Nova-Docker package missing in the git repository Message-ID: Hi, I am trying to deploy docker with OpenStack Nova on CentOs7 VM. For this reason, I am referring to the link - https://wiki.openstack.org/wiki/Docker. I have come across a point where the nova-docker package is not present in the mentioned git repository i.e https://opendev.org/x/nova-docker. So, can you please help me with this? -- Thanks & Regards, Nischay Mundas Sooktha Consulting Pvt. 
Ltd., Bangalore Web: www.sooktha.com Email: nischay.mundas at sooktha.com Mob: +91 8496861949 -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliver.wenz at dhbw-mannheim.de Wed Oct 7 13:05:34 2020 From: oliver.wenz at dhbw-mannheim.de (Oliver Wenz) Date: Wed, 7 Oct 2020 15:05:34 +0200 Subject: [Ussuri] [openstack-ansible] [cinder] Can't attach volumes to instances Message-ID: <4b1d09a9-0f75-0f8e-1b14-d54b0b088cc3@dhbw-mannheim.de> Hi, I've deployed OpenStack successfully using openstack-ansible. I use cinder with LVM backend and can create volumes. However, when I attach them to an instance, they stay detached (though there's no Error Message) both using CLI and the Dashboard. Looking for a solution I read that the cinder logs might contain relevant information but in Ussuri they don't seem to be present under /var/log/cinder... Here's the part of my openstack_user_config.yml regarding Cinder: ``` storage_hosts: lvm-storage1: ip: 192.168.110.202 container_vars: cinder_backends: lvm: volume_backend_name: LVM_iSCSI volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver volume_group: cinder-volumes iscsi_ip_address: 10.0.3.202 limit_container_types: cinder_volume ``` I've created cinder-volumes with vgcreate before the installation and all cinder services are up: # openstack volume service list +------------------+--------------------------------------+------+---------+-------+----------------------------+ | Binary | Host | Zone | Status | State | Updated At | +------------------+--------------------------------------+------+---------+-------+----------------------------+ | cinder-backup | bc1bl10 | nova | enabled | up | 2020-10-07T11:24:10.000000 | | cinder-volume | bc1bl10 at lvm | nova | enabled | up | 2020-10-07T11:24:05.000000 | | cinder-scheduler | infra1-cinder-api-container-1dacc920 | nova | enabled | up | 2020-10-07T11:24:05.000000 | +------------------+--------------------------------------+------+---------+-------+----------------------------+ Thanks in advance! Kind regards, Oliver From fungi at yuggoth.org Wed Oct 7 13:12:22 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 7 Oct 2020 13:12:22 +0000 Subject: [nova][magnum][zun] Nova-Docker package missing in the git repository In-Reply-To: References: Message-ID: <20201007131222.lozlt5x74f3sttft@yuggoth.org> On 2020-10-07 18:10:06 +0530 (+0530), Nischay Mundas wrote: > I am trying to deploy docker with OpenStack Nova on CentOs7 VM. > For this reason, I am referring to the link - > https://wiki.openstack.org/wiki/Docker. That article describes an unmaintained Nova hypervisor backend which likely no longer works given it was last updated in early 2016. I have added a warning at the top of the page just now, referring readers to Zun as an actively maintained alternative solution for managing individual containers in OpenStack. > I have come across a point where the nova-docker package is not > present in the mentioned git repository i.e > https://opendev.org/x/nova-docker. If you really need the old source code, it can be accessed from the commit previous to its retirement, like so: https://opendev.org/x/nova-docker/src/commit/034a4842fc1ebba5912e02cff8cd197ae81eb0c3/ > So, can you please help me with this? 
Chances are you're going to have a better experience if you try Zun (for managing individual containers) or Magnum (for managing Kubernetes pods of containers): https://docs.openstack.org/zun/ https://docs.openstack.org/magnum/ -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mark at stackhpc.com Wed Oct 7 13:13:38 2020 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 7 Oct 2020 14:13:38 +0100 Subject: [kolla] Restarting RMQ In-Reply-To: References: Message-ID: On Wed, 7 Oct 2020 at 13:11, Braden, Albert wrote: > > When I learned OpenStack at eBay we ran RMQ on dedicated VMs. My new employer runs kolla and everything is in containers. When I was running RMQ on VMs, it would lock up and we would have to restart it on all 3 VMs. If that didn't work, we had a "cold start" procedure where we would stop all 3, delete the contents of /var/lib/rabbitmq/mnesia/ and then run some commands to set the correct config and permissions before starting. > > > > What is the correct way to restart RMQ in kolla? Should I log into the containers and restart services there, or use rabbitmqctl, or just stop and start the containers? Is stop/starting the containers the equivalent of the "cold start" procedure? Hi Albert. You shouldn't ever need to exec into containers to restart services - restart the containers. Kolla Ansible has some orchestration in place to avoid restarting all nodes at once. However, the deploy command won't restart containers unless something has changed. For a cold start, you would need to stop the containers (you could use kolla-ansible stop --tags rabbitmq), then run a deploy again. Note that state in Kolla is stored in Docker volumes, which get bind mounted into containers. Mark > > > > I apologize for the nonsense below. So far I have not been able to stop it from being attached to my external emails. I'm working on it. > > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. From mark at stackhpc.com Wed Oct 7 13:16:44 2020 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 7 Oct 2020 14:16:44 +0100 Subject: [kolla] kolla_docker ansible module In-Reply-To: <6fdbf52023354c50b2fbb48aab515f17@NCEMEXGP009.CORP.CHARTERCOM.com> References: <6fdbf52023354c50b2fbb48aab515f17@NCEMEXGP009.CORP.CHARTERCOM.com> Message-ID: On Tue, 6 Oct 2020 at 13:27, Braden, Albert wrote: > > I opened bug #1897948[1] the other day and today I was trying to figure out what needs to be done to fix it. 
In the mariadb backup container I see the offending line "--history=$(date +%d-%m-%Y)" in /usr/local/bin/kolla_mariadb_backup.sh and I had assumed that it was coming from https://github.com/openstack/kolla/blob/master/docker/mariadb/mariadb/backup.sh and the obvious solution is to replace "$(date +%d-%m-%Y)” with “$HISTORY_NAME" where HISTORY_NAME=`ls -t $BACKUP_DIR/mysqlbackup*|head -1|cut -d- -f2-4` but when I look at the playbook I see that backup.sh appears to be part of a docker image. Is the docker image pulling /usr/local/bin/kolla_mariadb_backup.sh from https://github.com/openstack/kolla/blob/master/docker/mariadb/mariadb/backup.sh ? > Yes - see https://opendev.org/openstack/kolla/src/commit/524a821577d568a7a19983a7b6adc36e78fb9e4d/docker/mariadb/mariadb-server/Dockerfile.j2#L51 > > > On my kolla-ansible build server I see /opt/openstack/share/kolla-ansible/ansible/roles/mariadb/tasks/backup.yml[2] which appears to be an ansible playbook calling module kolla_docker, but I can’t find anything about the kolla_docker module on the googles nor on the ansible site. > > > > Where can I find the documentation for ansible module kolla_docker? > > > > [1] https://bugs.launchpad.net/kolla-ansible/+bug/1897948 > > [2] http://www.hastebin.net/bimufefosy.yaml > > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. From i at liuyulong.me Wed Oct 7 13:32:06 2020 From: i at liuyulong.me (=?utf-8?B?TElVIFl1bG9uZw==?=) Date: Wed, 7 Oct 2020 21:32:06 +0800 Subject: [Neutron] No L3 meeting today 2020-10-07 In-Reply-To: References: Message-ID: Hi, Because I'm on the China National Day vacation, so there is no chair for the L3 meeting this week. The next L3 meeting will be on October 21, 2020 before the PTG week. So see you guys then. Regards, LIU Yulong -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongbin034 at gmail.com Wed Oct 7 14:07:34 2020 From: hongbin034 at gmail.com (Hongbin Lu) Date: Wed, 7 Oct 2020 10:07:34 -0400 Subject: [nova][magnum][zun] Nova-Docker package missing in the git repository In-Reply-To: <20201007131222.lozlt5x74f3sttft@yuggoth.org> References: <20201007131222.lozlt5x74f3sttft@yuggoth.org> Message-ID: On Wed, Oct 7, 2020 at 9:19 AM Jeremy Stanley wrote: > On 2020-10-07 18:10:06 +0530 (+0530), Nischay Mundas wrote: > > I am trying to deploy docker with OpenStack Nova on CentOs7 VM. > > For this reason, I am referring to the link - > > https://wiki.openstack.org/wiki/Docker. > > That article describes an unmaintained Nova hypervisor backend which > likely no longer works given it was last updated in early 2016. I > have added a warning at the top of the page just now, referring > readers to Zun as an actively maintained alternative solution for > managing individual containers in OpenStack. > > > I have come across a point where the nova-docker package is not > > present in the mentioned git repository i.e > > https://opendev.org/x/nova-docker. 
> > If you really need the old source code, it can be accessed from the > commit previous to its retirement, like so: > > > https://opendev.org/x/nova-docker/src/commit/034a4842fc1ebba5912e02cff8cd197ae81eb0c3/ > > > So, can you please help me with this? > > Chances are you're going to have a better experience if you try Zun > (for managing individual containers) or Magnum (for managing > Kubernetes pods of containers): > To be exactly accurate, Magnum is for managing Kubernetes clusters (pods are managed by Kubernetes, not Magnum). The description of Zun is correct. > > https://docs.openstack.org/zun/ > https://docs.openstack.org/magnum/ > > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From missile0407 at gmail.com Wed Oct 7 14:14:26 2020 From: missile0407 at gmail.com (Eddie Yen) Date: Wed, 7 Oct 2020 22:14:26 +0800 Subject: [kolla] Restarting RMQ In-Reply-To: References: Message-ID: Hi Albert. In my case, I usually restart the RMQ container directly when RMQ has an issue. BTW, in environments that use only 2 ethernet interfaces (1 for Neutron external & another for everything else), both at 1Gb/s, or where disk I/O is not very powerful, I often hit RMQ split-brain. That sometimes forces me to restart the whole RMQ cluster. I'm still investigating this issue, since there have been no hardware or network changes. The temporary workaround is to increase net.ticktime in the RMQ configuration. The issue still exists, but at least it happens less often. On Wed, 7 Oct 2020 at 21:19, Mark Goddard wrote: > On Wed, 7 Oct 2020 at 13:11, Braden, Albert > wrote: > > > > When I learned OpenStack at eBay we ran RMQ on dedicated VMs. My new > employer runs kolla and everything is in containers. When I was running RMQ > on VMs, it would lock up and we would have to restart it on all 3 VMs. If > that didn't work, we had a "cold start" procedure where we would stop all > 3, delete the contents of /var/lib/rabbitmq/mnesia/ and then run some > commands to set the correct config and permissions before starting. > > > > > > > > What is the correct way to restart RMQ in kolla? Should I log into the > containers and restart services there, or use rabbitmqctl, or just stop and > start the containers? Is stop/starting the containers the equivalent of the > "cold start" procedure? > > Hi Albert. You shouldn't ever need to exec into containers to restart > services - restart the containers. Kolla Ansible has some > orchestration in place to avoid restarting all nodes at once. However, > the deploy command won't restart containers unless something has > changed. For a cold start, you would need to stop the containers (you > could use kolla-ansible stop --tags rabbitmq), then run a deploy > again. Note that state in Kolla is stored in Docker volumes, which get > bind mounted into containers. > Mark > > > > > > > > > I apologize for the nonsense below. So far I have not been able to stop > it from being attached to my external emails. I'm working on it. > > > > > > > > The contents of this e-mail message and > > any attachments are intended solely for the > > addressee(s) and may contain confidential > > and/or legally privileged information. If you > > are not the intended recipient of this message > > or if this message has been addressed to you > > in error, please immediately alert the sender > > by reply e-mail and then delete this message > > and any attachments.
If you are not the > > intended recipient, you are notified that > > any use, dissemination, distribution, copying, > > or storage of this message or any attachment > > is strictly prohibited. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Wed Oct 7 14:24:30 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 7 Oct 2020 07:24:30 -0700 Subject: [tripleo][ussuri][centos8] fails to introspect: my fsm encountered an exception In-Reply-To: References: Message-ID: You're really in the territory of TripleO at this point. As such I'm replying with an altered subject to get their attention. On Tue, Oct 6, 2020 at 7:57 AM Ruslanas Gžibovskis wrote: > > I am curious, could I somehow use my last known working version? > It was: docker.io/tripleou/centos-binary-ironic-inspector at sha256:ad5d58c4cce48ed0c660a0be7fed69f53202a781e75b1037dcee96147e9b8c4b > > > On Thu, 1 Oct 2020 at 21:00, Ruslanas Gžibovskis wrote: >> >> Replying in line, not my favourite way, so not sure if i do this correctly or not. >> I could try to make access to this undercloud host if you want. >> >> On Thu, 1 Oct 2020, 20:36 Julia Kreger, wrote: >>> >>> If memory serves me correctly, TripleO shares a folder outside the >>> container for the configuration and logs are written out to the >>> container console so the container itself is not exactly helpful. >> >> >> Would you like to see exact configs? Which ones? I can grep/cat it. Same with all log files. If you need i can provide them to you. >> >>> Interestingly the container contents you supplied is labeled >>> ironic-inspector, but contains the ironic release from Ussuri. >> >> >> Yes. I use ussuri release from centos8 repos, and all the scripts it provides. >>> >>> >>> I think you're going to need someone with more context into how >>> TripleO has assembled the container assets to provide more clarity >>> than I can provide. My feeling is likely some sort of configuration >>> issue for inspector, since the single inspection fails and the >>> supplied log data shows the request coming in. >> >> >> My earlier setup, which was deployed around 4 weeks ago, worked fine, and the one i have deployed last Friday, was not working. So something, if you have reverted it, might not been reverted in centos flows? Might it be right? >>> >>> >>> On Thu, Oct 1, 2020 at 9:54 AM Ruslanas Gžibovskis wrote: >>> > >>> > you can access it here [1] >>> > I have done xz -9 to it in addition ;) so takes around 110 MB instead of 670MB >>> > >>> > >>> > [1] https://proxy.qwq.lt/fun/centos-binary-ironic-inspector.current-tripleo.tar.xz >>> > >>> > On Thu, 1 Oct 2020 at 19:37, Ruslanas Gžibovskis wrote: >>> >> >>> >> Hi Julia, >>> >> >>> >> 1) I think, podman ps sorts according to starting time. [1] >>> >> So if we trust in it, so ironic is first one (in the bottom) and first which is still running (not configuration run). >>> >> >>> >> 2.1) ok, fails same place. baremetal node show CPU2 [2] >>> >> 2.2) Now, logs look same too [3] >>> >> >>> >> 0) regarding image I have, I can podman save (a first option from man podman-save = podman save --quiet -o alpine.tar ironic-inspector:current-tripleo) >>> >> >>> >> P.S. 
baremetal is alias: alias baremetal="openstack baremetal" >>> >> >>> >> [1] http://paste.openstack.org/show/uejDzLWpPvMdLFAJTCam/ >>> >> [2] http://paste.openstack.org/show/ryYv54g9XoWSKGdCOuqh/ >>> >> [3] http://paste.openstack.org/show/syKp1MtkeOa1J5aglfNj/ >>> >> >>> > >>> > >>> > -- >>> > Ruslanas Gžibovskis >>> > +370 6030 7030 > > > > -- > Ruslanas Gžibovskis > +370 6030 7030 From elfosardo at gmail.com Wed Oct 7 15:09:56 2020 From: elfosardo at gmail.com (Riccardo Pittau) Date: Wed, 7 Oct 2020 17:09:56 +0200 Subject: [diskimage-builder][ironic-python-agent-builder][ci][focal][ironic] ipa-builder CI jobs can't migrate to ubuntu focal nodeset Message-ID: Hello fellow openstackers! At the moment it's not possible to migrate the ironic-python-agent-builder src jobs from bionic to focal nodeset because of diskimage-builder limitations. We're stuck with ubuntu bionic and we're pinning those jobs to the bionic nodeset for the time being: https://review.opendev.org/756291 One of the community goals for victoria is to move the base nodeset of the CI jobs from ubuntu bionic to focal. In general, doing this for most of the ironic projects has not been trivial, but still doable, and it has been accomplished almost entirely. The biggest challenge comes from the src jobs in ironic-python-agent-builder where, for some of them, we build ironic-python-agent ramdisks using rpm-based distributions (mainly centos) with diskimage-builder on ubuntu bionic. This is possible using utilities (e.g. yumdownloader) included in packages still present in the ubuntu repositories, such as yum-utils and rpm. Starting from Ubuntu focal, the yum-utils package has been removed from the repositories because of lack of support of Python 2.x and there's no plan to provide such support, at least to my knowledge. The alternative provided by dnf is not usable as there's also no plan to compile and provide a package of dnf for deb-based distributions. For the reasons mentioned above, currently the ironic project team can't complete the migration of the CI jobs from bionic to focal and there's no ETA on when this can be accomplished. Considering all the things in the preamble, two possibilities are available: - change the mechanics in diskimage-builder; this process would completely change the way DIB builds rpm-based distros; this approach delegates the work almost entirely to the DIB team. - instead of migrating to focal, migrate to centos-8 nodeset; that would mean having devstack+ironic working on centos-8, which poses an interesting challenge and would consume no little resources from the ironic team. Opinions and advice are very welcome! Thanks, Riccardo -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Wed Oct 7 15:17:00 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 7 Oct 2020 17:17:00 +0200 Subject: [cyborg] Edge sync up at the PTG Message-ID: <00FB488C-AF95-4B17-B878-D6FFEE44A1F6@gmail.com> Hi Cyborg Team, I’m reaching out to check if you are available during the upcoming PTG to continue discussions we were having in June to see where Cyborg evolved since then and how we can continue collaborating. Like last time, the OSF Edge Computing Group is meeting on the first three days and we have a slot reserved to sync up with OpenStack projects such as Cyborg on Wednesday (October 28) at 1300 UTC - 1400 UTC. Would the team be available to join at that time? 
Our planning etherpad is here: https://etherpad.opendev.org/p/ecg-vptg-october-2020 Thanks, Ildikó From ildiko.vancsa at gmail.com Wed Oct 7 15:27:35 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 7 Oct 2020 17:27:35 +0200 Subject: [cinder][manila][swift] Edge discussions at the upcoming PTG Message-ID: <7BE99174-AC9B-4889-A86A-CC5A647C6353@gmail.com> Hi, We’ve started to have discussions in the area of object storage needs and solutions for edge use cases at the last PTG in June. I’m reaching out with the intention to continue this chat at the upcoming PTG in a few weeks. The OSF Edge Computing Group is meeting during the first three days of the PTG like last time. We are planning to have edge reference architecture models and testing type of discussions in the first two days (October 26-27) and have a cross-project and cross-community day on Wednesday (October 28). We would like to have a dedicated section for storage either on Monday or Tuesday. I think it might also be time to revisit other storage options as well if there’s interest. What do people think? For reference: * Our planning etherpad is here: https://etherpad.opendev.org/p/ecg-vptg-october-2020 * Notes from the previous PTG is here: https://etherpad.opendev.org/p/ecg_virtual_ptg_planning_june_2020 Thanks, Ildikó From mark at stackhpc.com Wed Oct 7 18:13:57 2020 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 7 Oct 2020 19:13:57 +0100 Subject: [diskimage-builder][ironic-python-agent-builder][ci][focal][ironic] ipa-builder CI jobs can't migrate to ubuntu focal nodeset In-Reply-To: References: Message-ID: On Wed, 7 Oct 2020 at 16:11, Riccardo Pittau wrote: > > Hello fellow openstackers! > > At the moment it's not possible to migrate the ironic-python-agent-builder src jobs from bionic to focal nodeset because of diskimage-builder limitations. > We're stuck with ubuntu bionic and we're pinning those jobs to the bionic nodeset for the time being: > https://review.opendev.org/756291 > > One of the community goals for victoria is to move the base nodeset of the CI jobs from ubuntu bionic to focal. > In general, doing this for most of the ironic projects has not been trivial, but still doable, and it has been accomplished almost entirely. > The biggest challenge comes from the src jobs in ironic-python-agent-builder where, for some of them, we build ironic-python-agent ramdisks using rpm-based distributions (mainly centos) with diskimage-builder on ubuntu bionic. > This is possible using utilities (e.g. yumdownloader) included in packages still present in the ubuntu repositories, such as yum-utils and rpm. > Starting from Ubuntu focal, the yum-utils package has been removed from the repositories because of lack of support of Python 2.x and there's no plan to provide such support, at least to my knowledge. > The alternative provided by dnf is not usable as there's also no plan to compile and provide a package of dnf for deb-based distributions. > For the reasons mentioned above, currently the ironic project team can't complete the migration of the CI jobs from bionic to focal and there's no ETA on when this can be accomplished. > > Considering all the things in the preamble, two possibilities are available: > - change the mechanics in diskimage-builder; this process would completely change the way DIB builds rpm-based distros; this approach delegates the work almost entirely to the DIB team. 
> - instead of migrating to focal, migrate to centos-8 nodeset; that would mean having devstack+ironic working on centos-8, which poses an interesting challenge and would consume no little resources from the ironic team. > > Opinions and advice are very welcome! My first reaction would be to consider the user impact beyond the box checking of fulfilling a goal. Does this imply Ubuntu users can no longer build IPA images from Focal? That would be a shame. In terms of fulfilling the goal, perhaps running DIB in a CentOS container would help? This of course adds complexity, potential problems, and doesn't really test what users would do. > > Thanks, > > Riccardo > > From rosmaita.fossdev at gmail.com Wed Oct 7 19:10:16 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 7 Oct 2020 15:10:16 -0400 Subject: [cinder] type-checking hackfest next week Message-ID: At today's Cinder meeting, we decided to celebrate the Victoria release happening next Wednesday by cancelling the Cinder meeting we'd be having on that day, and instead using the time for a hackfest to review the patches Eric has posted for using mypy for type checking in the cinder code, and to add type annotations elsewhere in the code. So we'll be meeting from 1300-1600 UTC on Wednesday 14 October in the opendev Jitsi: https://meetpad.opendev.org/cinder-type-checking-hackfest If you want an advance look, you can start here: - https://review.opendev.org/#/c/733620/ - https://review.opendev.org/#/c/733621/ For general information: - https://docs.python.org/3/library/typing.html - https://www.python.org/dev/peps/pep-3107/ - http://mypy-lang.org/index.html See you on Wednesday! brian From rosmaita.fossdev at gmail.com Wed Oct 7 20:14:12 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 7 Oct 2020 16:14:12 -0400 Subject: [cinder] wallaby PTG planning + happy hour Message-ID: <9358f1d6-1d1a-172e-bffa-ba7f9613ab4e@gmail.com> Here's your weekly reminder to add topics to the cinder PTG planning etherpad: https://etherpad.opendev.org/p/wallaby-ptg-cinder-planning (and don't forget to register for the PTG) At today's Cinder meeting, we decided to hold a happy hour on the first day Cinder is meeting during the PTG so that everyone can get to know each other informally a bit before we get down to serious business for the rest of the week. So please plan to be happy for this hour: 15:00-16:00 UTC on Tuesday 27 October 2020 You don't have to work on Cinder or even be attending the Cinder sessions of the PTG to attend the happy hour; but we do request that you be happy if you attend. Unfortunately, due to the virtual nature of the PTG, the only beverages available will also be virtual (but you can have as much as you want!). Or you can bring your own actual beverages (in which case, you can also have as much as you want!). cheers, brian From juliaashleykreger at gmail.com Wed Oct 7 20:18:03 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 7 Oct 2020 13:18:03 -0700 Subject: [diskimage-builder][ironic-python-agent-builder][ci][focal][ironic] ipa-builder CI jobs can't migrate to ubuntu focal nodeset In-Reply-To: References: Message-ID: On Wed, Oct 7, 2020 at 11:16 AM Mark Goddard wrote: > > On Wed, 7 Oct 2020 at 16:11, Riccardo Pittau wrote: > > > > Hello fellow openstackers! > > > > At the moment it's not possible to migrate the ironic-python-agent-builder src jobs from bionic to focal nodeset because of diskimage-builder limitations. 
> > We're stuck with ubuntu bionic and we're pinning those jobs to the bionic nodeset for the time being: > > https://review.opendev.org/756291 > > > > One of the community goals for victoria is to move the base nodeset of the CI jobs from ubuntu bionic to focal. > > In general, doing this for most of the ironic projects has not been trivial, but still doable, and it has been accomplished almost entirely. > > The biggest challenge comes from the src jobs in ironic-python-agent-builder where, for some of them, we build ironic-python-agent ramdisks using rpm-based distributions (mainly centos) with diskimage-builder on ubuntu bionic. > > This is possible using utilities (e.g. yumdownloader) included in packages still present in the ubuntu repositories, such as yum-utils and rpm. > > Starting from Ubuntu focal, the yum-utils package has been removed from the repositories because of lack of support of Python 2.x and there's no plan to provide such support, at least to my knowledge. > > The alternative provided by dnf is not usable as there's also no plan to compile and provide a package of dnf for deb-based distributions. > > For the reasons mentioned above, currently the ironic project team can't complete the migration of the CI jobs from bionic to focal and there's no ETA on when this can be accomplished. > > > > Considering all the things in the preamble, two possibilities are available: > > - change the mechanics in diskimage-builder; this process would completely change the way DIB builds rpm-based distros; this approach delegates the work almost entirely to the DIB team. > > - instead of migrating to focal, migrate to centos-8 nodeset; that would mean having devstack+ironic working on centos-8, which poses an interesting challenge and would consume no little resources from the ironic team. > > > > Opinions and advice are very welcome! > > My first reaction would be to consider the user impact beyond the box > checking of fulfilling a goal. Does this imply Ubuntu users can no > longer build IPA images from Focal? That would be a shame. The goals are really meant to drive the community together as a group. And if box checking is not appropriate, then it is not appropriate. I think the key is in the meaning of the goal which is to drive everyone forward together. I do concur end user impact is the key item to focus on. I don't think this is helped due to pre-existing constraints. I seem to remember that you couldn't build ubuntu on centos previously, so this sort of issue does not surprise me. At least, not without having some extra packages present that one could install and it might work with some hope. My guess is that we're effectively entering a situation where if I want to build a centos/rhel/fedora IPA image, I need to run the ipa builder command on one of those machine types, and if I want the same for debian/ubuntu, I need to run the build on one of those operating systems. Is that situation horrible for users, not really because they are and likely should keep the distribution the same for familiarity and compatibility. The thing we likely need to do is do an ubuntu IPA image test in CI but not save the artifact. Or debian! > > In terms of fulfilling the goal, perhaps running DIB in a CentOS > container would help? This of course adds complexity, potential > problems, and doesn't really test what users would do. 
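As a rough illustration of what those "extra packages" look like in practice (the exact package names below are an assumption on my part, based on what the rpm-based DIB elements have needed on bionic - I have not re-verified them):

  # a deb-based build host producing an rpm-based ramdisk needs the rpm
  # bootstrap tooling; this still works on bionic, but yum-utils has been
  # dropped from the focal archive, which is the problem in this thread
  sudo apt-get install -y rpm yum-utils

  # the reverse direction needs debootstrap on the rpm-based build host,
  # typically pulled in from EPEL on CentOS
  sudo dnf install -y epel-release && sudo dnf install -y debootstrap

Whichever direction you cross, the build depends on the "foreign" distro's packaging tools being packaged for the build host.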
> > > > > Thanks, > > > > Riccardo > > > > > From mark at stackhpc.com Wed Oct 7 20:51:30 2020 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 7 Oct 2020 21:51:30 +0100 Subject: [diskimage-builder][ironic-python-agent-builder][ci][focal][ironic] ipa-builder CI jobs can't migrate to ubuntu focal nodeset In-Reply-To: References: Message-ID: On Wed, 7 Oct 2020, 21:18 Julia Kreger, wrote: > On Wed, Oct 7, 2020 at 11:16 AM Mark Goddard wrote: > > > > On Wed, 7 Oct 2020 at 16:11, Riccardo Pittau > wrote: > > > > > > Hello fellow openstackers! > > > > > > At the moment it's not possible to migrate the > ironic-python-agent-builder src jobs from bionic to focal nodeset because > of diskimage-builder limitations. > > > We're stuck with ubuntu bionic and we're pinning those jobs to the > bionic nodeset for the time being: > > > https://review.opendev.org/756291 > > > > > > One of the community goals for victoria is to move the base nodeset of > the CI jobs from ubuntu bionic to focal. > > > In general, doing this for most of the ironic projects has not been > trivial, but still doable, and it has been accomplished almost entirely. > > > The biggest challenge comes from the src jobs in > ironic-python-agent-builder where, for some of them, we build > ironic-python-agent ramdisks using rpm-based distributions (mainly centos) > with diskimage-builder on ubuntu bionic. > > > This is possible using utilities (e.g. yumdownloader) included in > packages still present in the ubuntu repositories, such as yum-utils and > rpm. > > > Starting from Ubuntu focal, the yum-utils package has been removed > from the repositories because of lack of support of Python 2.x and there's > no plan to provide such support, at least to my knowledge. > > > The alternative provided by dnf is not usable as there's also no plan > to compile and provide a package of dnf for deb-based distributions. > > > For the reasons mentioned above, currently the ironic project team > can't complete the migration of the CI jobs from bionic to focal and > there's no ETA on when this can be accomplished. > > > > > > Considering all the things in the preamble, two possibilities are > available: > > > - change the mechanics in diskimage-builder; this process would > completely change the way DIB builds rpm-based distros; this approach > delegates the work almost entirely to the DIB team. > > > - instead of migrating to focal, migrate to centos-8 nodeset; that > would mean having devstack+ironic working on centos-8, which poses an > interesting challenge and would consume no little resources from the ironic > team. > > > > > > Opinions and advice are very welcome! > > > > My first reaction would be to consider the user impact beyond the box > > checking of fulfilling a goal. Does this imply Ubuntu users can no > > longer build IPA images from Focal? That would be a shame. > > The goals are really meant to drive the community together as a group. > And if box checking is not appropriate, then it is not appropriate. I > think the key is in the meaning of the goal which is to drive everyone > forward together. I do concur end user impact is the key item to focus > on. I don't think this is helped due to pre-existing constraints. I > seem to remember that you couldn't build ubuntu on centos previously, > so this sort of issue does not surprise me. At least, not without > having some extra packages present that one could install and it might > work with some hope. 
> I didn't mean to imply that the goal is not a worthwhile endeavour, only that user impact should come first. > > My guess is that we're effectively entering a situation where if I > want to build a centos/rhel/fedora IPA image, I need to run the ipa > builder command on one of those machine types, and if I want the same > for debian/ubuntu, I need to run the build on one of those operating > systems. Is that situation horrible for users, not really because they > are and likely should keep the distribution the same for familiarity > and compatibility. The thing we likely need to do is do an ubuntu IPA > image test in CI but not save the artifact. Or debian! > This does raise the question of how to test a centos based IPA image before it is published though. > > > > > In terms of fulfilling the goal, perhaps running DIB in a CentOS > > container would help? This of course adds complexity, potential > > problems, and doesn't really test what users would do. > > > > > > > > Thanks, > > > > > > Riccardo > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Wed Oct 7 20:55:19 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 07 Oct 2020 13:55:19 -0700 Subject: =?UTF-8?Q?Re:_[diskimage-builder][ironic-python-agent-builder][ci][focal?= =?UTF-8?Q?][ironic]_ipa-builder_CI_jobs_can't_migrate_to_ubuntu_focal_n?= =?UTF-8?Q?odeset?= In-Reply-To: References: Message-ID: <86886aec-d0b9-4f09-8efa-7cc2ed36a0eb@www.fastmail.com> On Wed, Oct 7, 2020, at 1:18 PM, Julia Kreger wrote: > On Wed, Oct 7, 2020 at 11:16 AM Mark Goddard wrote: > > > > On Wed, 7 Oct 2020 at 16:11, Riccardo Pittau wrote: > > > > > > Hello fellow openstackers! > > > > > > At the moment it's not possible to migrate the ironic-python-agent-builder src jobs from bionic to focal nodeset because of diskimage-builder limitations. > > > We're stuck with ubuntu bionic and we're pinning those jobs to the bionic nodeset for the time being: > > > https://review.opendev.org/756291 > > > > > > One of the community goals for victoria is to move the base nodeset of the CI jobs from ubuntu bionic to focal. > > > In general, doing this for most of the ironic projects has not been trivial, but still doable, and it has been accomplished almost entirely. > > > The biggest challenge comes from the src jobs in ironic-python-agent-builder where, for some of them, we build ironic-python-agent ramdisks using rpm-based distributions (mainly centos) with diskimage-builder on ubuntu bionic. > > > This is possible using utilities (e.g. yumdownloader) included in packages still present in the ubuntu repositories, such as yum-utils and rpm. > > > Starting from Ubuntu focal, the yum-utils package has been removed from the repositories because of lack of support of Python 2.x and there's no plan to provide such support, at least to my knowledge. > > > The alternative provided by dnf is not usable as there's also no plan to compile and provide a package of dnf for deb-based distributions. > > > For the reasons mentioned above, currently the ironic project team can't complete the migration of the CI jobs from bionic to focal and there's no ETA on when this can be accomplished. > > > > > > Considering all the things in the preamble, two possibilities are available: > > > - change the mechanics in diskimage-builder; this process would completely change the way DIB builds rpm-based distros; this approach delegates the work almost entirely to the DIB team. 
> > > - instead of migrating to focal, migrate to centos-8 nodeset; that would mean having devstack+ironic working on centos-8, which poses an interesting challenge and would consume no little resources from the ironic team. > > > > > > Opinions and advice are very welcome! > > > > My first reaction would be to consider the user impact beyond the box > > checking of fulfilling a goal. Does this imply Ubuntu users can no > > longer build IPA images from Focal? That would be a shame. > > The goals are really meant to drive the community together as a group. > And if box checking is not appropriate, then it is not appropriate. I > think the key is in the meaning of the goal which is to drive everyone > forward together. I do concur end user impact is the key item to focus > on. I don't think this is helped due to pre-existing constraints. I > seem to remember that you couldn't build ubuntu on centos previously, > so this sort of issue does not surprise me. At least, not without > having some extra packages present that one could install and it might > work with some hope. > > My guess is that we're effectively entering a situation where if I > want to build a centos/rhel/fedora IPA image, I need to run the ipa > builder command on one of those machine types, and if I want the same > for debian/ubuntu, I need to run the build on one of those operating > systems. Is that situation horrible for users, not really because they > are and likely should keep the distribution the same for familiarity > and compatibility. The thing we likely need to do is do an ubuntu IPA > image test in CI but not save the artifact. Or debian! > There has actually been some thought on this problem recently. Those tools are all used to bootstrap a chroot with the necessary components to then build the rest of the image. Fortunately there are other approaches that can be taken in DIB to get to that point. Some of the elements start with a preexisting cloud image for example (ubuntu does this where ubuntu-minimal starts with debootstrap). More recently the thought has been that we should probably think about bootstrapping with docker images because basically all the distros publish such a thing. That element exists and is called "docker" [0] but may need more testing [1]. This is attractive because it means we should be able to run on just about any distro as long as the DIB host's kernel doesn't conflict with the userspace in the docker container used to bootstrap image building. I think the work here has largely stalled out, but I'm sure help would be welcome if Ironic and others think this is a useful approach to take. I'm not in a great spot to pick this up myself, but can probably help out if someone else is driving it (reviews, testing assistance, etc) [0] https://opendev.org/openstack/diskimage-builder/src/branch/master/diskimage_builder/elements/docker [1] https://review.opendev.org/#/c/700041/ From juliaashleykreger at gmail.com Wed Oct 7 21:00:17 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 7 Oct 2020 14:00:17 -0700 Subject: [diskimage-builder][ironic-python-agent-builder][ci][focal][ironic] ipa-builder CI jobs can't migrate to ubuntu focal nodeset In-Reply-To: References: Message-ID: On Wed, Oct 7, 2020 at 1:51 PM Mark Goddard wrote: > > > > On Wed, 7 Oct 2020, 21:18 Julia Kreger, wrote: >> >> On Wed, Oct 7, 2020 at 11:16 AM Mark Goddard wrote: >> > >> > On Wed, 7 Oct 2020 at 16:11, Riccardo Pittau wrote: >> > > >> > > Hello fellow openstackers! 
>> > > >> > > At the moment it's not possible to migrate the ironic-python-agent-builder src jobs from bionic to focal nodeset because of diskimage-builder limitations. >> > > We're stuck with ubuntu bionic and we're pinning those jobs to the bionic nodeset for the time being: >> > > https://review.opendev.org/756291 >> > > >> > > One of the community goals for victoria is to move the base nodeset of the CI jobs from ubuntu bionic to focal. >> > > In general, doing this for most of the ironic projects has not been trivial, but still doable, and it has been accomplished almost entirely. >> > > The biggest challenge comes from the src jobs in ironic-python-agent-builder where, for some of them, we build ironic-python-agent ramdisks using rpm-based distributions (mainly centos) with diskimage-builder on ubuntu bionic. >> > > This is possible using utilities (e.g. yumdownloader) included in packages still present in the ubuntu repositories, such as yum-utils and rpm. >> > > Starting from Ubuntu focal, the yum-utils package has been removed from the repositories because of lack of support of Python 2.x and there's no plan to provide such support, at least to my knowledge. >> > > The alternative provided by dnf is not usable as there's also no plan to compile and provide a package of dnf for deb-based distributions. >> > > For the reasons mentioned above, currently the ironic project team can't complete the migration of the CI jobs from bionic to focal and there's no ETA on when this can be accomplished. >> > > >> > > Considering all the things in the preamble, two possibilities are available: >> > > - change the mechanics in diskimage-builder; this process would completely change the way DIB builds rpm-based distros; this approach delegates the work almost entirely to the DIB team. >> > > - instead of migrating to focal, migrate to centos-8 nodeset; that would mean having devstack+ironic working on centos-8, which poses an interesting challenge and would consume no little resources from the ironic team. >> > > >> > > Opinions and advice are very welcome! >> > >> > My first reaction would be to consider the user impact beyond the box >> > checking of fulfilling a goal. Does this imply Ubuntu users can no >> > longer build IPA images from Focal? That would be a shame. >> >> The goals are really meant to drive the community together as a group. >> And if box checking is not appropriate, then it is not appropriate. I >> think the key is in the meaning of the goal which is to drive everyone >> forward together. I do concur end user impact is the key item to focus >> on. I don't think this is helped due to pre-existing constraints. I >> seem to remember that you couldn't build ubuntu on centos previously, >> so this sort of issue does not surprise me. At least, not without >> having some extra packages present that one could install and it might >> work with some hope. > > I didn't mean to imply that the goal is not a worthwhile endeavour, only that user impact should come first. >> I never thought you were trying to imply that for a second! But your raising a great point and I guess I'm also kind of thinking maybe the goal doesn't always fit even though it is worthwhile. >> >> My guess is that we're effectively entering a situation where if I >> want to build a centos/rhel/fedora IPA image, I need to run the ipa >> builder command on one of those machine types, and if I want the same >> for debian/ubuntu, I need to run the build on one of those operating >> systems. 
Is that situation horrible for users, not really because they >> are and likely should keep the distribution the same for familiarity >> and compatibility. The thing we likely need to do is do an ubuntu IPA >> image test in CI but not save the artifact. Or debian! > > This does raise the question of how to test a centos based IPA image before it is published though. The thought that comes to mind is that we could hybridize and run some jobs on centos some on ubuntu. At least that is my thought. I see Clark has sent a reply, so I'm off to read that! :) >> >> >> > >> > In terms of fulfilling the goal, perhaps running DIB in a CentOS >> > container would help? This of course adds complexity, potential >> > problems, and doesn't really test what users would do. >> > >> > > >> > > Thanks, >> > > >> > > Riccardo >> > > >> > > >> > From kennelson11 at gmail.com Wed Oct 7 20:39:25 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 7 Oct 2020 13:39:25 -0700 Subject: vPTG Oct 2020 Registration & Schedule Message-ID: Hey everyone, The October 2020 Project Teams Gathering is right around the corner! The official schedule has now been posted on the PTG website [1], the PTGbot has been updated[2], and we have also attached it to this email. Friendly reminder, if you have not already registered, please do so [3]. It is important that we get everyone to register for the event as this is how we will contact you about tooling information/passwords and other event details. Please let us know if you have any questions. Cheers, The Kendalls (diablo_rojo & wendallkaters) [1] PTG Website www.openstack.org/ptg [2] PTGbot: http://ptg.openstack.org/ptg.html [3] PTG Registration: https://october2020ptg.eventbrite.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PTG2-Oct26-30-2020_Schedule (1).pdf Type: application/pdf Size: 706133 bytes Desc: not available URL: From iwienand at redhat.com Thu Oct 8 04:18:35 2020 From: iwienand at redhat.com (Ian Wienand) Date: Thu, 8 Oct 2020 15:18:35 +1100 Subject: [diskimage-builder][ironic-python-agent-builder][ci][focal][ironic] ipa-builder CI jobs can't migrate to ubuntu focal nodeset In-Reply-To: References: Message-ID: <20201008041835.GA1011725@fedora19.localdomain> On Wed, Oct 07, 2020 at 05:09:56PM +0200, Riccardo Pittau wrote: > This is possible using utilities (e.g. yumdownloader) included in packages > still present in the ubuntu repositories, such as yum-utils and rpm. > Starting from Ubuntu focal, the yum-utils package has been removed from the > repositories because of lack of support of Python 2.x and there's no plan > to provide such support, at least to my knowledge. Yes, this is a problem for the "-minimal" elements that build an non-native chroot environment. Similar issues have occured with Suse and the zypper package manager not being available on the build host. The options I can see: - use the native build-host; i.e. build on centos as you described - the non-minimal, i.e. "centos" and "suse", for example, images might work under the current circumstances. They use the upsream ISO to create the initial chroot. These are generally bigger, and we've had stability issues in the past with the upstream images changing suddenly in various ways that were a maintenance headache. - use a container for dib. DIB doesn't have a specific container, but is part of the nodepool-builder container [1]. 
This is ultimately based on Debian buster [2] which has enough support to build everything ... for now. As noted this doesn't really solve the problem indefinitely, but certainly buys some time if you run dib out of that container (we could, of course, make a separate dib container; but it would be basically the same just without nodepool in it). This is what OpenDev production is using now, and all the CI is ultimately based on this container environment. - As clarkb has mentioned, probably the most promising alternative is to use the upstream container images as the basis for the initial chroot environments. jeblair has done most of this work with [3]. I'm fiddling with it to merge to master and see what's up ... I feel like maybe there were bootloader issues, although the basic extraction was working. This will allow the effort put into existing elements to not be lost. If I had to pick; I'd probably say that using the nodepool-builder container is the best path. That has the most momentum behind it because it's used for the OpenDev image builds. As we work on the container-image base elements, this work will be deployed into the container (meaning the container is less reliant on the underlying version of Debian) and you can switch to them as appropriate. -i [1] https://hub.docker.com/r/zuul/nodepool-builder [2] https://opendev.org/opendev/system-config/src/branch/master/docker/python-base/Dockerfile#L17 [3] https://review.opendev.org/#/c/700083/ From mark at stackhpc.com Thu Oct 8 08:03:53 2020 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 8 Oct 2020 09:03:53 +0100 Subject: [diskimage-builder][ironic-python-agent-builder][ci][focal][ironic] ipa-builder CI jobs can't migrate to ubuntu focal nodeset In-Reply-To: References: Message-ID: On Wed, 7 Oct 2020 at 22:00, Julia Kreger wrote: > > On Wed, Oct 7, 2020 at 1:51 PM Mark Goddard wrote: > > > > > > > > On Wed, 7 Oct 2020, 21:18 Julia Kreger, wrote: > >> > >> On Wed, Oct 7, 2020 at 11:16 AM Mark Goddard wrote: > >> > > >> > On Wed, 7 Oct 2020 at 16:11, Riccardo Pittau wrote: > >> > > > >> > > Hello fellow openstackers! > >> > > > >> > > At the moment it's not possible to migrate the ironic-python-agent-builder src jobs from bionic to focal nodeset because of diskimage-builder limitations. > >> > > We're stuck with ubuntu bionic and we're pinning those jobs to the bionic nodeset for the time being: > >> > > https://review.opendev.org/756291 > >> > > > >> > > One of the community goals for victoria is to move the base nodeset of the CI jobs from ubuntu bionic to focal. > >> > > In general, doing this for most of the ironic projects has not been trivial, but still doable, and it has been accomplished almost entirely. > >> > > The biggest challenge comes from the src jobs in ironic-python-agent-builder where, for some of them, we build ironic-python-agent ramdisks using rpm-based distributions (mainly centos) with diskimage-builder on ubuntu bionic. > >> > > This is possible using utilities (e.g. yumdownloader) included in packages still present in the ubuntu repositories, such as yum-utils and rpm. > >> > > Starting from Ubuntu focal, the yum-utils package has been removed from the repositories because of lack of support of Python 2.x and there's no plan to provide such support, at least to my knowledge. > >> > > The alternative provided by dnf is not usable as there's also no plan to compile and provide a package of dnf for deb-based distributions. 
> >> > > For the reasons mentioned above, currently the ironic project team can't complete the migration of the CI jobs from bionic to focal and there's no ETA on when this can be accomplished. > >> > > > >> > > Considering all the things in the preamble, two possibilities are available: > >> > > - change the mechanics in diskimage-builder; this process would completely change the way DIB builds rpm-based distros; this approach delegates the work almost entirely to the DIB team. > >> > > - instead of migrating to focal, migrate to centos-8 nodeset; that would mean having devstack+ironic working on centos-8, which poses an interesting challenge and would consume no little resources from the ironic team. > >> > > > >> > > Opinions and advice are very welcome! > >> > > >> > My first reaction would be to consider the user impact beyond the box > >> > checking of fulfilling a goal. Does this imply Ubuntu users can no > >> > longer build IPA images from Focal? That would be a shame. > >> > >> The goals are really meant to drive the community together as a group. > >> And if box checking is not appropriate, then it is not appropriate. I > >> think the key is in the meaning of the goal which is to drive everyone > >> forward together. I do concur end user impact is the key item to focus > >> on. I don't think this is helped due to pre-existing constraints. I > >> seem to remember that you couldn't build ubuntu on centos previously, > >> so this sort of issue does not surprise me. At least, not without > >> having some extra packages present that one could install and it might > >> work with some hope. > > > > I didn't mean to imply that the goal is not a worthwhile endeavour, only that user impact should come first. > >> > > I never thought you were trying to imply that for a second! But your > raising a great point and I guess I'm also kind of thinking maybe the > goal doesn't always fit even though it is worthwhile. I was just clarifying - it wasn't the best choice of wording on my part originally. > > >> > >> My guess is that we're effectively entering a situation where if I > >> want to build a centos/rhel/fedora IPA image, I need to run the ipa > >> builder command on one of those machine types, and if I want the same > >> for debian/ubuntu, I need to run the build on one of those operating > >> systems. Is that situation horrible for users, not really because they > >> are and likely should keep the distribution the same for familiarity > >> and compatibility. The thing we likely need to do is do an ubuntu IPA > >> image test in CI but not save the artifact. Or debian! > > > > This does raise the question of how to test a centos based IPA image before it is published though. > > The thought that comes to mind is that we could hybridize and run some > jobs on centos some on ubuntu. At least that is my thought. I see > Clark has sent a reply, so I'm off to read that! :) > > >> > >> > >> > > >> > In terms of fulfilling the goal, perhaps running DIB in a CentOS > >> > container would help? This of course adds complexity, potential > >> > problems, and doesn't really test what users would do. 
> >> > > >> > > > >> > > Thanks, > >> > > > >> > > Riccardo > >> > > > >> > > > >> > From mark at stackhpc.com Thu Oct 8 08:08:15 2020 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 8 Oct 2020 09:08:15 +0100 Subject: [diskimage-builder][ironic-python-agent-builder][ci][focal][ironic] ipa-builder CI jobs can't migrate to ubuntu focal nodeset In-Reply-To: <20201008041835.GA1011725@fedora19.localdomain> References: <20201008041835.GA1011725@fedora19.localdomain> Message-ID: On Thu, 8 Oct 2020 at 05:20, Ian Wienand wrote: > > On Wed, Oct 07, 2020 at 05:09:56PM +0200, Riccardo Pittau wrote: > > This is possible using utilities (e.g. yumdownloader) included in packages > > still present in the ubuntu repositories, such as yum-utils and rpm. > > Starting from Ubuntu focal, the yum-utils package has been removed from the > > repositories because of lack of support of Python 2.x and there's no plan > > to provide such support, at least to my knowledge. > > Yes, this is a problem for the "-minimal" elements that build an > non-native chroot environment. Similar issues have occured with Suse > and the zypper package manager not being available on the build host. > > The options I can see: > > - use the native build-host; i.e. build on centos as you described > > - the non-minimal, i.e. "centos" and "suse", for example, images might > work under the current circumstances. They use the upsream ISO to > create the initial chroot. These are generally bigger, and we've > had stability issues in the past with the upstream images changing > suddenly in various ways that were a maintenance headache. > > - use a container for dib. DIB doesn't have a specific container, but > is part of the nodepool-builder container [1]. This is ultimately > based on Debian buster [2] which has enough support to build > everything ... for now. As noted this doesn't really solve the > problem indefinitely, but certainly buys some time if you run dib > out of that container (we could, of course, make a separate dib > container; but it would be basically the same just without nodepool > in it). This is what OpenDev production is using now, and all the > CI is ultimately based on this container environment. If this could be wrapped up in a DIB-like command, this seems the most flexible to me. > > - As clarkb has mentioned, probably the most promising alternative is > to use the upstream container images as the basis for the initial > chroot environments. jeblair has done most of this work with [3]. > I'm fiddling with it to merge to master and see what's up ... I feel > like maybe there were bootloader issues, although the basic > extraction was working. This will allow the effort put into > existing elements to not be lost. Initial reaction is that this would suffer from the same problems as using a cloud image as the base, but worse. Container images are seen as disposable, and who knows what measures might have been taken to reduce their size and disable/remove the init system? > > If I had to pick; I'd probably say that using the nodepool-builder > container is the best path. That has the most momentum behind it > because it's used for the OpenDev image builds. As we work on the > container-image base elements, this work will be deployed into the > container (meaning the container is less reliant on the underlying > version of Debian) and you can switch to them as appropriate. 
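To make "wrapped up in a DIB-like command" concrete, I was picturing nothing more than a thin wrapper over that image - untested, and it assumes disk-image-create is on the image's PATH and that --privileged covers the chroot/loop needs (an --entrypoint override or extra mounts may well be required):

cat > dib-run <<'EOF'
#!/bin/sh
# run disk-image-create from the published nodepool-builder image,
# with the current working directory mounted for elements and output
exec docker run --rm --privileged -v "$PWD:/work" -w /work \
    zuul/nodepool-builder disk-image-create "$@"
EOF
chmod +x dib-run
./dib-run -o test-image centos-minimal

For the IPA case the ironic-python-agent-builder elements would also need to be visible inside the container, e.g. via ELEMENTS_PATH.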
> > -i > > [1] https://hub.docker.com/r/zuul/nodepool-builder > [2] https://opendev.org/opendev/system-config/src/branch/master/docker/python-base/Dockerfile#L17 > [3] https://review.opendev.org/#/c/700083/ > > From ltoscano at redhat.com Thu Oct 8 08:17:19 2020 From: ltoscano at redhat.com (Luigi Toscano) Date: Thu, 08 Oct 2020 10:17:19 +0200 Subject: [diskimage-builder][ironic-python-agent-builder][ci][focal][ironic] ipa-builder CI jobs can't migrate to ubuntu focal nodeset In-Reply-To: <20201008041835.GA1011725@fedora19.localdomain> References: <20201008041835.GA1011725@fedora19.localdomain> Message-ID: <2889623.ktpJ11cQ8Q@whitebase.usersys.redhat.com> On Thursday, 8 October 2020 06:18:35 CEST Ian Wienand wrote: > On Wed, Oct 07, 2020 at 05:09:56PM +0200, Riccardo Pittau wrote: > > This is possible using utilities (e.g. yumdownloader) included in packages > > still present in the ubuntu repositories, such as yum-utils and rpm. > > Starting from Ubuntu focal, the yum-utils package has been removed from > > the > > repositories because of lack of support of Python 2.x and there's no plan > > to provide such support, at least to my knowledge. > > Yes, this is a problem for the "-minimal" elements that build an > non-native chroot environment. Similar issues have occured with Suse > and the zypper package manager not being available on the build host. > > The options I can see: > > - use the native build-host; i.e. build on centos as you described > > - the non-minimal, i.e. "centos" and "suse", for example, images might > work under the current circumstances. They use the upsream ISO to > create the initial chroot. These are generally bigger, and we've > had stability issues in the past with the upstream images changing > suddenly in various ways that were a maintenance headache. > > - use a container for dib. DIB doesn't have a specific container, but > is part of the nodepool-builder container [1]. This is ultimately > based on Debian buster [2] which has enough support to build > everything ... for now. As noted this doesn't really solve the > problem indefinitely, but certainly buys some time if you run dib > out of that container (we could, of course, make a separate dib > container; but it would be basically the same just without nodepool > in it). This is what OpenDev production is using now, and all the > CI is ultimately based on this container environment. > > - As clarkb has mentioned, probably the most promising alternative is > to use the upstream container images as the basis for the initial > chroot environments. jeblair has done most of this work with [3]. > I'm fiddling with it to merge to master and see what's up ... I feel > like maybe there were bootloader issues, although the basic > extraction was working. This will allow the effort put into > existing elements to not be lost. > > If I had to pick; I'd probably say that using the nodepool-builder > container is the best path. That has the most momentum behind it > because it's used for the OpenDev image builds. As we work on the > container-image base elements, this work will be deployed into the > container (meaning the container is less reliant on the underlying > version of Debian) and you can switch to them as appropriate. I have to mention at this point, at risk of reharshing old debates, that an alternative in various scenarios (maybe not all) is the usage of libguestfs and its tools which modifies an existing base image. 
https://libguestfs.org/ We switched to it in Sahara for most of the guest images and that saved some headaches when building from a different host. https://docs.openstack.org/sahara/latest/user/building-guest-images.html I'd like to mention that libguestfs has been carrying a virt-dib tool for a while, but it has been tested only back to a certain version of dib: https://libguestfs.org/virt-dib.1.html -- Luigi From thierry at openstack.org Thu Oct 8 08:47:50 2020 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 8 Oct 2020 10:47:50 +0200 Subject: [largescale-sig] Next meeting: October 7, 16utc In-Reply-To: <9b198ee4-049e-4dcd-1ce4-d92b02fb7abe@openstack.org> References: <9b198ee4-049e-4dcd-1ce4-d92b02fb7abe@openstack.org> Message-ID: Once again the US+EU meeting time was lightly attended (only 3 people). We discussed the three workstreams as well as the details of out upcoming Forum/PTG sessions. Meeting logs at: http://eavesdrop.openstack.org/meetings/large_scale_sig/2020/large_scale_sig.2020-10-07-16.01.html TODOs: - genekuo/masahito to push latest patches to oslo.metrics Our next meetings will be videomeetings during the PTG, Wednesday Oct 28 7UTC-8UTC and 16UTC-17UTC. Register here: openstack.org/ptg Our regular IRC meetings will be back Nov 10, 8utc. -- Thierry Carrez (ttx) From sebastian.luna.valero at gmail.com Thu Oct 8 08:48:00 2020 From: sebastian.luna.valero at gmail.com (Sebastian Luna Valero) Date: Thu, 8 Oct 2020 10:48:00 +0200 Subject: [neutron][security groups] Drop egress traffic to specific subnets Message-ID: Hi, I am looking at the docs in here: https://wiki.openstack.org/wiki/Neutron/SecurityGroups and I find: > For egress traffic: Only traffic matched with security group rules are allowed. So we currently have the default security group rule allowing all traffic to everywhere. We would like to prevent egress traffic from our VMs into a couple of internally reachable subnets in our deployment. Is there a way to achieve this in OpenStack? Many thanks, Sebastian -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyppe at gmail.com Thu Oct 8 10:00:48 2020 From: tonyppe at gmail.com (Tony Pearce) Date: Thu, 8 Oct 2020 18:00:48 +0800 Subject: [Octavia][kolla-ansible][kayobe] - network configuration knowledge gathering In-Reply-To: References: Message-ID: Hi Mark et al, I've added this network back to the compute node but unable to launch instances on this flat network. Both the controller and compute node [1] have IP interfaces on this network and bridge. After trying to launch the instance I've searched through the neutron logs and this log shows the fail but it's unclear to me why it is failing [2] . Nova says "binding failed" and to check neutron logs where it states "binding:vif_type=binding_failed". The network is created successfully via Horizon as "flat" after deploying this configuration to the host and openstack as "physnet3" and it is mapped to "broct-ovs" in ml2. I have Designate installed. Previously this skewed me because Designate the cause of another issue I had some time back. I also have just 2 compute nodes in separate Nova Availability Zones. Would you mind taking a look if you have time at the log and output below to see if I a) have something wrong with the setup which is causing the problem? or b) does it look like Designate could be causing the problem, either? For sanity sake I make re-deploy and remove Designate and re-test. 
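For reference, this is roughly what I understand needs to line up on the compute node for the flat network to bind - container and file names are as I understand kolla-ansible lays them out, and the physnet/bridge names are from my own setup, so please correct me if I have this wrong:

# kolla.yml: kolla_enable_neutron_provider_networks: true
#   (followed by "kayobe overcloud host configure" and a service deploy of openvswitch/neutron)
# the OVS agent on the compute must map the physnet to a bridge that exists there:
grep bridge_mappings /etc/kolla/neutron-openvswitch-agent/openvswitch_agent.ini
#   expecting something like: bridge_mappings = physnet3:broct-ovs
# and that bridge must have the physical NIC plugged into it:
docker exec openvswitch_vswitchd ovs-vsctl list-ports broct-ovs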
[1] http://paste.openstack.org/show/798830/ [2] http://paste.openstack.org/show/798831/ Many thanks for any help and guidance. Tony Pearce On Wed, 7 Oct 2020 at 20:14, Tony Pearce wrote: > Thank you Mark for taking the time to reply to me and provide me with this > information. It's been a big help. I'll check this tomorrow. > > Thanks again. Have a great day and stay safe. > > Regards, > Tony Pearce > > > > On Wed, 7 Oct 2020 at 18:13, Mark Goddard wrote: > >> On Wed, 7 Oct 2020 at 10:20, Tony Pearce wrote: >> > >> > Hi Mark et al, thank you for your help the other day. I'm still a bit >> stuck with this one and I am trying to test the octavia network by >> deploying a regular openstack instance onto it (CentOS7) which is failing. >> In fact, my other and currently "working" external network is also failing >> to deploy instances directly onto this neetwork also. So I am wondering if >> there's some other step which I am missing here. Completely forgetting >> about the octavia network, I'm curious to understand why deploying >> instances to an external network has always failed for me. I have a network >> like this: >> > >> > Real network switch VLAN 20 ----> openstack external network "br-ex" ( >> 192.168.20.0/24)----openstack router----- openstack vxlan local network ( >> 172.16.1.0/24) >> > >> > I can successfully deploy instances onto 172.16.1.0/24 but always fail >> when attempting to deploy to 192.168.20.0/24. >> > >> > The octavia network is almost a mirror of the above except that the >> controller also has an IP address / ip interface onto the same. But >> forgetting about this, would you happen to have any ideas or pointers that >> I could check that could help me with regards to why I am unable to deploy >> an instance to 192.168.20.0/24 network? There is a DHCP agent on this >> network. When I try and deploy an instance using Horizon, the dashboard >> shows that the instance has an ip on this network for a brief moment, but >> then it disappears and soon after, fails with an error that it cannot plug >> into it. The understanding / expectation I have is that the instance will >> run on the compute node and tunnel the network back to the network node >> where it will be presented onto 192.168.20.0/24. Does the compute node >> also need an ip interface within this network to work? I ask this because >> the octavia network did indeed have this but it was too failing with the >> same error. >> > >> > Any pointers appreciated so I can try and keep my hair. Thank you :) >> >> For instances to be attached to provider networks (VLAN or flat), you >> need to set kolla_enable_neutron_provider_networks to true in >> kolla.yml. The compute hosts will need to be connected to the physical >> network in the same way as controllers, i.e. they will have an >> interface on the networks in the external_net_names list. To apply the >> change, you'll need to run host configure, then service deploy for >> openvswitch and neutron. >> Mark >> >> > >> > Tony Pearce >> > >> > >> > >> > On Mon, 5 Oct 2020 at 16:20, Mark Goddard wrote: >> >> >> >> Following up in IRC: >> >> >> http://eavesdrop.openstack.org/irclogs/%23openstack-kolla/%23openstack-kolla.2020-10-05.log.html#t2020-10-05T06:44:47 >> >> >> >> On Mon, 5 Oct 2020 at 08:50, Tony Pearce wrote: >> >> > >> >> > Hi all, >> >> > >> >> > Openstack version is Train >> >> > Deployed via Kayobe >> >> > >> >> > I am trying to deploy octavia lbaas but hitting some blockers with >> regards to how this should be set up. 
I think the current issue is the lack >> of neutron bridge for the octavia network and I cannot locate how to >> achieve this from the documentation. >> >> > >> >> > I have this setup at the moment which I've added another layer 2 >> network provisioned to the controller and compute node, for running octavia >> lbaas: >> >> > >> >> > [Controller node]------------octavia network-----------[Compute node] >> >> > >> >> > However as there's no bridge, the octavia instance cannot connect to >> it. The exact error from the logs: >> >> > >> >> > 2020-10-05 14:37:34.070 6 INFO >> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Mapping >> physical network physnet3 to bridge broct >> >> > 2020-10-05 14:37:34.070 6 ERROR >> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Bridge >> broct for physical network physnet3 does not >> >> > >> >> > Bridge "broct" does exist but it's not a neutron bridge: >> >> > >> >> > [root at juc-kocon1-prd kolla]# brctl show >> >> > bridge name bridge id STP enabled interfaces >> >> > brext 8000.001a4a16019a no eth5 >> >> > p-brext-phy >> >> > broct 8000.001a4a160173 no eth6 >> >> > docker0 8000.0242f5ed2aac no >> >> > [root at juc-kocon1-prd kolla]# >> >> > >> >> > >> >> > I've been through the docs a few times but I am unable to locate >> this info. Most likely the information is there but I am unsure what I need >> to look for, hence missing it. >> >> > >> >> > Would any of you be able to help shed light on this or point me to >> the documentation? >> >> > >> >> > Thank you >> >> > >> >> > Tony Pearce >> >> > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Thu Oct 8 11:58:58 2020 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Thu, 08 Oct 2020 14:58:58 +0300 Subject: [Ussuri] [openstack-ansible] [cinder] Can't attach volumes to instances In-Reply-To: <4b1d09a9-0f75-0f8e-1b14-d54b0b088cc3@dhbw-mannheim.de> References: <4b1d09a9-0f75-0f8e-1b14-d54b0b088cc3@dhbw-mannheim.de> Message-ID: <53281602158056@mail.yandex.ru> An HTML attachment was scrubbed... URL: From emilien at redhat.com Thu Oct 8 12:32:32 2020 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 8 Oct 2020 08:32:32 -0400 Subject: [tripleo] deprecating Mistral service Message-ID: Hi folks, In our long term goal to simplify TripleO and deprecate the services that aren't used by our community anymore, I propose that we deprecate Mistral services. Mistral was used on the Undercloud in the previous cycles but not anymore. While the service could be deployed on the Overcloud, we haven't seen any of our users doing it. If that would be the case, please let us know as soon as possible. Removing it from TripleO will help us with maintenance (container images, THT/puppet integration, CI, etc). Maybe we could deprecate it in Victoria and remove it in Wallaby? Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From dev.faz at gmail.com Thu Oct 8 07:42:01 2020 From: dev.faz at gmail.com (Fabian Zimmermann) Date: Thu, 8 Oct 2020 09:42:01 +0200 Subject: [Octavia] Please help with amphorav2 provider populate db command In-Reply-To: <2068098159.169923596.1601883299129.JavaMail.zimbra@desy.de> References: <1344937278.133047189.1601373873142.JavaMail.zimbra@desy.de> <2068098159.169923596.1601883299129.JavaMail.zimbra@desy.de> Message-ID: Hi, the the server is just telling you: Hey, the user octavia connecting FROM xxx is not allowed. 
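In MariaDB terms that means no grant matches the user 'octavia' coming from that particular host. A sketch of the grants the amphorav2 prerequisites expect (the host pattern and password are placeholders - tighten '%' to your controller's hostname if you prefer; the database name is the one from your persistence_connection):

mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS octavia_persistence;"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON octavia_persistence.* TO 'octavia'@'%' IDENTIFIED BY '<password>';"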
So check your privilege config on the db server (esp the host-field) Fabian Bujack, Stefan schrieb am Di., 6. Okt. 2020, 15:23: > Hello, > > thank you for your answer. The access for user octavia on host octavia04 > is denied because my database host is maria-intern.desy.de and there is > no DB service on octavia04. But why is the script trying to populate the DB > on localhost and not my DB host as I configured in the > /etc/octavia/octavia.conf? > > "Access denied for user 'octavia'@'octavia04.desy.de' > > persistence_connection = mysql+pymysql:// > octavia:woxSGH45cdZL1Sa4 at maria-intern.desy.de/octavia_persistence > > Greets Stefan Bujack > > > ------------------------------ > *From: *"Anna Taraday" > *To: *"Stefan Bujack" > *Cc: *"openstack-discuss" > *Sent: *Monday, 5 October, 2020 08:58:43 > *Subject: *Re: [Octavia] Please help with amphorav2 provider populate db > command > > Hello, > Error in your trace shows "Access denied for user 'octavia' > Please check that you followed all steps from setup guide and grant access > for user. > [1] - > https://docs.openstack.org/octavia/latest/install/install-amphorav2.html#prerequisites > > On Tue, Sep 29, 2020 at 8:12 PM Bujack, Stefan > wrote: > >> Hello, >> >> I think I need a little help again with the configuration of the amphora >> v2 provider. I get an error when I try to populate the database. It seems >> that the name of the localhost is used for the DB host and not what I >> configured in octavia.conf as DB host >> >> >> root at octavia04:~# octavia-db-manage --config-file >> /etc/octavia/octavia.conf upgrade_persistence >> 2020-09-29 11:45:01.911 818313 WARNING >> taskflow.persistence.backends.impl_sqlalchemy [-] Engine connection >> (validate) failed due to '(pymysql.err.OperationalError) (1045, "Access >> denied for user 'octavia'@'octavia04.desy.de' (using password: YES)") >> (Background on this error at: http://sqlalche.me/e/e3q8)' >> 2020-09-29 11:45:01.912 818313 CRITICAL octavia-db-manage [-] Unhandled >> error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) >> (1045, "Access denied for user 'octavia'@'octavia04.desy.de' (using >> password: YES)") >> (Background on this error at: http://sqlalche.me/e/e3q8) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage Traceback (most >> recent call last): >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2276, in >> _wrap_pool_connect >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return fn() >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 303, in >> unique_connection >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return >> _ConnectionFairy._checkout(self) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 760, in >> _checkout >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage fairy = >> _ConnectionRecord.checkout(pool) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 492, in >> checkout >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage rec = >> pool._do_get() >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/pool/impl.py", line 139, in >> _do_get >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage >> self._dec_overflow() >> 2020-09-29 11:45:01.912 
818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/util/langhelpers.py", line 68, >> in __exit__ >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage >> compat.reraise(exc_type, exc_value, exc_tb) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 153, in >> reraise >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise value >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/pool/impl.py", line 136, in >> _do_get >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return >> self._create_connection() >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 308, in >> _create_connection >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return >> _ConnectionRecord(self) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 437, in >> __init__ >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage >> self.__connect(first_connect_check=True) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 639, in >> __connect >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage connection = >> pool._invoke_creator(self) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/engine/strategies.py", line 114, >> in connect >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return >> dialect.connect(*cargs, **cparams) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 482, in >> connect >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return >> self.dbapi.connect(*cargs, **cparams) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/pymysql/__init__.py", line 94, in Connect >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return >> Connection(*args, **kwargs) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/pymysql/connections.py", line 325, in >> __init__ >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage self.connect() >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/pymysql/connections.py", line 599, in >> connect >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage >> self._request_authentication() >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/pymysql/connections.py", line 861, in >> _request_authentication >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage auth_packet = >> self._read_packet() >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/pymysql/connections.py", line 684, in >> _read_packet >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage >> packet.check_error() >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/pymysql/protocol.py", line 220, in >> check_error >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage >> err.raise_mysql_exception(self._data) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/pymysql/err.py", line 109, in >> raise_mysql_exception 
>> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise >> errorclass(errno, errval) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage >> pymysql.err.OperationalError: (1045, "Access denied for user 'octavia'@' >> octavia04.desy.de' (using password: YES)") >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage The above >> exception was the direct cause of the following exception: >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage Traceback (most >> recent call last): >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/local/bin/octavia-db-manage", line 8, in >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage >> sys.exit(main()) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/local/lib/python3.8/dist-packages/octavia/db/migration/cli.py", line >> 156, in main >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage >> CONF.command.func(config, CONF.command.name) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/local/lib/python3.8/dist-packages/octavia/db/migration/cli.py", line >> 98, in do_persistence_upgrade >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage >> persistence.initialize() >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/local/lib/python3.8/dist-packages/octavia/controller/worker/v2/taskflow_jobboard_driver.py", >> line 50, in initialize >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage with >> contextlib.closing(backend.get_connection()) as connection: >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/taskflow/persistence/backends/impl_sqlalchemy.py", >> line 335, in get_connection >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage >> conn.validate(max_retries=self._max_retries) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/taskflow/persistence/backends/impl_sqlalchemy.py", >> line 394, in validate >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage >> _try_connect(self._engine) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/tenacity/__init__.py", line 311, in >> wrapped_f >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return >> self.call(f, *args, **kw) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/tenacity/__init__.py", line 391, in call >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage do = >> self.iter(retry_state=retry_state) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/tenacity/__init__.py", line 338, in iter >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return >> fut.result() >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3.8/concurrent/futures/_base.py", line 432, in result >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return >> self.__get_result() >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise >> self._exception >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/tenacity/__init__.py", line 394, in call >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage result = >> 
fn(*args, **kwargs) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/taskflow/persistence/backends/impl_sqlalchemy.py", >> line 391, in _try_connect >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage with >> contextlib.closing(engine.connect()): >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2209, in >> connect >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return >> self._connection_cls(self, **kwargs) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 103, in >> __init__ >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage else >> engine.raw_connection() >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2306, in >> raw_connection >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return >> self._wrap_pool_connect( >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2279, in >> _wrap_pool_connect >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage >> Connection._handle_dbapi_exception_noconnection( >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1547, in >> _handle_dbapi_exception_noconnection >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage >> util.raise_from_cause(sqlalchemy_exception, exc_info) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 398, in >> raise_from_cause >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage >> reraise(type(exception), exception, tb=exc_tb, cause=cause) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 152, in >> reraise >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise >> value.with_traceback(tb) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2276, in >> _wrap_pool_connect >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return fn() >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 303, in >> unique_connection >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return >> _ConnectionFairy._checkout(self) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 760, in >> _checkout >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage fairy = >> _ConnectionRecord.checkout(pool) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 492, in >> checkout >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage rec = >> pool._do_get() >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/pool/impl.py", line 139, in >> _do_get >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage >> self._dec_overflow() >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/util/langhelpers.py", line 68, >> in __exit__ >> 2020-09-29 11:45:01.912 818313 ERROR 
octavia-db-manage >> compat.reraise(exc_type, exc_value, exc_tb) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 153, in >> reraise >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise value >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/pool/impl.py", line 136, in >> _do_get >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return >> self._create_connection() >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 308, in >> _create_connection >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return >> _ConnectionRecord(self) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 437, in >> __init__ >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage >> self.__connect(first_connect_check=True) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 639, in >> __connect >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage connection = >> pool._invoke_creator(self) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/engine/strategies.py", line 114, >> in connect >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return >> dialect.connect(*cargs, **cparams) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 482, in >> connect >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return >> self.dbapi.connect(*cargs, **cparams) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/pymysql/__init__.py", line 94, in Connect >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return >> Connection(*args, **kwargs) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/pymysql/connections.py", line 325, in >> __init__ >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage self.connect() >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/pymysql/connections.py", line 599, in >> connect >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage >> self._request_authentication() >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/pymysql/connections.py", line 861, in >> _request_authentication >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage auth_packet = >> self._read_packet() >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/pymysql/connections.py", line 684, in >> _read_packet >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage >> packet.check_error() >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/pymysql/protocol.py", line 220, in >> check_error >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage >> err.raise_mysql_exception(self._data) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File >> "/usr/lib/python3/dist-packages/pymysql/err.py", line 109, in >> raise_mysql_exception >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise >> errorclass(errno, errval) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage >> 
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1045, >> "Access denied for user 'octavia'@'octavia04.desy.de' (using password: >> YES)") >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage (Background on >> this error at: http://sqlalche.me/e/e3q8) >> 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage >> >> >> >> root at octavia04:~# cat /etc/octavia/octavia.conf >> [DEFAULT] >> transport_url = rabbit://openstack:password at rabbit-intern.desy.de >> use_journal = True >> [api_settings] >> bind_host = 0.0.0.0 >> bind_port = 9876 >> [certificates] >> cert_generator = local_cert_generator >> ca_certificate = /etc/octavia/certs/server_ca.cert.pem >> ca_private_key = /etc/octavia/certs/server_ca.key.pem >> ca_private_key_passphrase = passphrase >> [controller_worker] >> amp_image_owner_id = f89517ee676f4618bd55849477442aca >> amp_image_tag = amphora >> amp_ssh_key_name = octaviakey >> amp_secgroup_list = 2236e82c-13fe-42e3-9fcf-bea43917f231 >> amp_boot_network_list = 9f7fefc4-f262-4d8d-9465-240f94a7e87b >> amp_flavor_id = 200 >> network_driver = allowed_address_pairs_driver >> compute_driver = compute_nova_driver >> amphora_driver = amphora_haproxy_rest_driver >> client_ca = /etc/octavia/certs/client_ca.cert.pem >> [database] >> connection = mysql+pymysql:// >> octavia:password at maria-intern.desy.de/octavia >> [haproxy_amphora] >> client_cert = /etc/octavia/certs/client.cert-and-key.pem >> server_ca = /etc/octavia/certs/server_ca.cert.pem >> [health_manager] >> bind_port = 5555 >> bind_ip = 172.16.0.2 >> controller_ip_port_list = 172.16.0.2:5555 >> [keystone_authtoken] >> www_authenticate_uri = https://keystone-intern.desy.de:5000/v3 >> auth_url = https://keystone-intern.desy.de:5000/v3 >> memcached_servers = nova-intern.desy.de:11211 >> auth_type = password >> project_domain_name = default >> user_domain_name = default >> project_name = service >> username = octavia >> password = password >> service_token_roles_required = True >> [oslo_messaging] >> topic = octavia_prov >> [service_auth] >> auth_url = https://keystone-intern.desy.de:5000/v3 >> memcached_servers = nova-intern.desy.de:11211 >> auth_type = password >> project_domain_name = Default >> user_domain_name = Default >> project_name = service >> username = octavia >> password = password >> [task_flow] >> persistence_connection = mysql+pymysql:// >> octavia:woxSGH45cdZL1Sa4 at maria-intern.desy.de/octavia_persistence >> jobboard_backend_driver = 'redis_taskflow_driver' >> jobboard_backend_hosts = 10.254.28.113 >> jobboard_backend_port = 6379 >> jobboard_backend_password = password >> jobboard_backend_namespace = 'octavia_jobboard' >> >> >> >> root at octavia04:~# octavia-db-manage current >> 2020-09-29 12:02:23.159 819432 INFO alembic.runtime.migration [-] Context >> impl MySQLImpl. >> 2020-09-29 12:02:23.160 819432 INFO alembic.runtime.migration [-] Will >> assume non-transactional DDL. >> fbd705961c3a (head) >> >> >> >> We have an Openstack Ussuri deployment on Ubuntu 20.04. >> >> >> Thanks in advance, >> >> Stefan Bujack >> >> > > -- > Regards, > Ann Taraday > Mirantis, Inc > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefan.bujack at desy.de Thu Oct 8 08:07:42 2020 From: stefan.bujack at desy.de (Bujack, Stefan) Date: Thu, 8 Oct 2020 10:07:42 +0200 (CEST) Subject: [Octavia] Please help with amphorav2 provider populate db command In-Reply-To: References: <1344937278.133047189.1601373873142.JavaMail.zimbra@desy.de> <2068098159.169923596.1601883299129.JavaMail.zimbra@desy.de> Message-ID: <1758157783.189241919.1602144462150.JavaMail.zimbra@desy.de> Hy, thank you for your answer. I got the right direction where to looknow . I messed up my DB privileges. Now it works. Thank you Greets Stefan Bujack From: "Fabian Zimmermann" To: "Stefan Bujack" Cc: "Anna Taraday" , "openstack-discuss" Sent: Thursday, 8 October, 2020 09:42:01 Subject: Re: [Octavia] Please help with amphorav2 provider populate db command Hi, the the server is just telling you: Hey, the user octavia connecting FROM xxx is not allowed. So check your privilege config on the db server (esp the host-field) Fabian Bujack, Stefan < [ mailto:stefan.bujack at desy.de | stefan.bujack at desy.de ] > schrieb am Di., 6. Okt. 2020, 15:23: Hello, thank you for your answer. The access for user octavia on host octavia04 is denied because my database host is [ http://maria-intern.desy.de/ | maria-intern.desy.de ] and there is no DB service on octavia04. But why is the script trying to populate the DB on localhost and not my DB host as I configured in the /etc/octavia/octavia.conf? "Access denied for user 'octavia'@' [ http://octavia04.desy.de/ | octavia04.desy.de ] ' persistence_connection = mysql+pymysql:// [ http://octavia:woxSGH45cdZL1Sa4 at maria-intern.desy.de/octavia_persistence | octavia:woxSGH45cdZL1Sa4 at maria-intern.desy.de/octavia_persistence ] Greets Stefan Bujack From: "Anna Taraday" < [ mailto:akamyshnikova at mirantis.com | akamyshnikova at mirantis.com ] > To: "Stefan Bujack" < [ mailto:stefan.bujack at desy.de | stefan.bujack at desy.de ] > Cc: "openstack-discuss" < [ mailto:openstack-discuss at lists.openstack.org | openstack-discuss at lists.openstack.org ] > Sent: Monday, 5 October, 2020 08:58:43 Subject: Re: [Octavia] Please help with amphorav2 provider populate db command Hello, Error in your trace shows "Access denied for user 'octavia' Please check that you followed all steps from setup guide and grant access for user. [1] - [ https://docs.openstack.org/octavia/latest/install/install-amphorav2.html#prerequisites | https://docs.openstack.org/octavia/latest/install/install-amphorav2.html#prerequisites ] On Tue, Sep 29, 2020 at 8:12 PM Bujack, Stefan < [ mailto:stefan.bujack at desy.de | stefan.bujack at desy.de ] > wrote: BQ_BEGIN Hello, I think I need a little help again with the configuration of the amphora v2 provider. I get an error when I try to populate the database. 
It seems that the name of the localhost is used for the DB host and not what I configured in octavia.conf as DB host root at octavia04:~# octavia-db-manage --config-file /etc/octavia/octavia.conf upgrade_persistence 2020-09-29 11:45:01.911 818313 WARNING taskflow.persistence.backends.impl_sqlalchemy [-] Engine connection (validate) failed due to '(pymysql.err.OperationalError) (1045, "Access denied for user 'octavia'@' [ http://octavia04.desy.de/ | octavia04.desy.de ] ' (using password: YES)") (Background on this error at: [ http://sqlalche.me/e/e3q8 | http://sqlalche.me/e/e3q8 ] )' 2020-09-29 11:45:01.912 818313 CRITICAL octavia-db-manage [-] Unhandled error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1045, "Access denied for user 'octavia'@' [ http://octavia04.desy.de/ | octavia04.desy.de ] ' (using password: YES)") (Background on this error at: [ http://sqlalche.me/e/e3q8 | http://sqlalche.me/e/e3q8 ] ) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage Traceback (most recent call last): 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2276, in _wrap_pool_connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return fn() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 303, in unique_connection 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return _ConnectionFairy._checkout(self) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 760, in _checkout 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage fairy = _ConnectionRecord.checkout(pool) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 492, in checkout 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage rec = pool._do_get() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/impl.py", line 139, in _do_get 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage self._dec_overflow() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__ 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage compat.reraise(exc_type, exc_value, exc_tb) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 153, in reraise 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise value 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/impl.py", line 136, in _do_get 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return self._create_connection() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 308, in _create_connection 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return _ConnectionRecord(self) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 437, in __init__ 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage self.__connect(first_connect_check=True) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 639, in __connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage connection = pool._invoke_creator(self) 
2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/engine/strategies.py", line 114, in connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return dialect.connect(*cargs, **cparams) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 482, in connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return self.dbapi.connect(*cargs, **cparams) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/__init__.py", line 94, in Connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return Connection(*args, **kwargs) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 325, in __init__ 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage self.connect() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 599, in connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage self._request_authentication() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 861, in _request_authentication 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage auth_packet = self._read_packet() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 684, in _read_packet 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage packet.check_error() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/protocol.py", line 220, in check_error 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage err.raise_mysql_exception(self._data) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/err.py", line 109, in raise_mysql_exception 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise errorclass(errno, errval) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage pymysql.err.OperationalError: (1045, "Access denied for user 'octavia'@' [ http://octavia04.desy.de/ | octavia04.desy.de ] ' (using password: YES)") 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage The above exception was the direct cause of the following exception: 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage Traceback (most recent call last): 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/local/bin/octavia-db-manage", line 8, in 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage sys.exit(main()) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/local/lib/python3.8/dist-packages/octavia/db/migration/cli.py", line 156, in main 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage CONF.command.func(config, [ http://conf.command.name/ | CONF.command.name ] ) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/local/lib/python3.8/dist-packages/octavia/db/migration/cli.py", line 98, in do_persistence_upgrade 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage persistence.initialize() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/local/lib/python3.8/dist-packages/octavia/controller/worker/v2/taskflow_jobboard_driver.py", line 50, in initialize 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage with 
contextlib.closing(backend.get_connection()) as connection: 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/taskflow/persistence/backends/impl_sqlalchemy.py", line 335, in get_connection 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage conn.validate(max_retries=self._max_retries) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/taskflow/persistence/backends/impl_sqlalchemy.py", line 394, in validate 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage _try_connect(self._engine) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/tenacity/__init__.py", line 311, in wrapped_f 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return self.call(f, *args, **kw) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/tenacity/__init__.py", line 391, in call 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage do = self.iter(retry_state=retry_state) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/tenacity/__init__.py", line 338, in iter 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return fut.result() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3.8/concurrent/futures/_base.py", line 432, in result 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return self.__get_result() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise self._exception 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/tenacity/__init__.py", line 394, in call 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage result = fn(*args, **kwargs) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/taskflow/persistence/backends/impl_sqlalchemy.py", line 391, in _try_connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage with contextlib.closing(engine.connect()): 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2209, in connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return self._connection_cls(self, **kwargs) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 103, in __init__ 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage else engine.raw_connection() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2306, in raw_connection 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return self._wrap_pool_connect( 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2279, in _wrap_pool_connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage Connection._handle_dbapi_exception_noconnection( 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1547, in _handle_dbapi_exception_noconnection 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage util.raise_from_cause(sqlalchemy_exception, exc_info) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 398, in raise_from_cause 2020-09-29 
11:45:01.912 818313 ERROR octavia-db-manage reraise(type(exception), exception, tb=exc_tb, cause=cause) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 152, in reraise 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise value.with_traceback(tb) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2276, in _wrap_pool_connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return fn() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 303, in unique_connection 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return _ConnectionFairy._checkout(self) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 760, in _checkout 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage fairy = _ConnectionRecord.checkout(pool) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 492, in checkout 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage rec = pool._do_get() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/impl.py", line 139, in _do_get 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage self._dec_overflow() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__ 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage compat.reraise(exc_type, exc_value, exc_tb) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 153, in reraise 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise value 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/impl.py", line 136, in _do_get 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return self._create_connection() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 308, in _create_connection 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return _ConnectionRecord(self) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 437, in __init__ 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage self.__connect(first_connect_check=True) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 639, in __connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage connection = pool._invoke_creator(self) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/engine/strategies.py", line 114, in connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return dialect.connect(*cargs, **cparams) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 482, in connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return self.dbapi.connect(*cargs, **cparams) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/__init__.py", line 94, in Connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return Connection(*args, 
**kwargs) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 325, in __init__ 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage self.connect() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 599, in connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage self._request_authentication() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 861, in _request_authentication 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage auth_packet = self._read_packet() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 684, in _read_packet 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage packet.check_error() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/protocol.py", line 220, in check_error 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage err.raise_mysql_exception(self._data) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/err.py", line 109, in raise_mysql_exception 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise errorclass(errno, errval) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1045, "Access denied for user 'octavia'@' [ http://octavia04.desy.de/ | octavia04.desy.de ] ' (using password: YES)") 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage (Background on this error at: [ http://sqlalche.me/e/e3q8 | http://sqlalche.me/e/e3q8 ] ) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage root at octavia04:~# cat /etc/octavia/octavia.conf [DEFAULT] transport_url = rabbit:// [ mailto:openstack%3Apassword at rabbit-intern.desy.de | openstack:password at rabbit-intern.desy.de ] use_journal = True [api_settings] bind_host = 0.0.0.0 bind_port = 9876 [certificates] cert_generator = local_cert_generator ca_certificate = /etc/octavia/certs/server_ca.cert.pem ca_private_key = /etc/octavia/certs/server_ca.key.pem ca_private_key_passphrase = passphrase [controller_worker] amp_image_owner_id = f89517ee676f4618bd55849477442aca amp_image_tag = amphora amp_ssh_key_name = octaviakey amp_secgroup_list = 2236e82c-13fe-42e3-9fcf-bea43917f231 amp_boot_network_list = 9f7fefc4-f262-4d8d-9465-240f94a7e87b amp_flavor_id = 200 network_driver = allowed_address_pairs_driver compute_driver = compute_nova_driver amphora_driver = amphora_haproxy_rest_driver client_ca = /etc/octavia/certs/client_ca.cert.pem [database] connection = mysql+pymysql:// [ http://octavia:password at maria-intern.desy.de/octavia | octavia:password at maria-intern.desy.de/octavia ] [haproxy_amphora] client_cert = /etc/octavia/certs/client.cert-and-key.pem server_ca = /etc/octavia/certs/server_ca.cert.pem [health_manager] bind_port = 5555 bind_ip = 172.16.0.2 controller_ip_port_list = [ http://172.16.0.2:5555/ | 172.16.0.2:5555 ] [keystone_authtoken] www_authenticate_uri = [ https://keystone-intern.desy.de:5000/v3 | https://keystone-intern.desy.de:5000/v3 ] auth_url = [ https://keystone-intern.desy.de:5000/v3 | https://keystone-intern.desy.de:5000/v3 ] memcached_servers = [ http://nova-intern.desy.de:11211/ | nova-intern.desy.de:11211 ] auth_type = password project_domain_name = default user_domain_name = default project_name = service username = octavia 
password = password service_token_roles_required = True [oslo_messaging] topic = octavia_prov [service_auth] auth_url = [ https://keystone-intern.desy.de:5000/v3 | https://keystone-intern.desy.de:5000/v3 ] memcached_servers = [ http://nova-intern.desy.de:11211/ | nova-intern.desy.de:11211 ] auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = octavia password = password [task_flow] persistence_connection = mysql+pymysql:// [ http://octavia:woxSGH45cdZL1Sa4 at maria-intern.desy.de/octavia_persistence | octavia:woxSGH45cdZL1Sa4 at maria-intern.desy.de/octavia_persistence ] jobboard_backend_driver = 'redis_taskflow_driver' jobboard_backend_hosts = 10.254.28.113 jobboard_backend_port = 6379 jobboard_backend_password = password jobboard_backend_namespace = 'octavia_jobboard' root at octavia04:~# octavia-db-manage current 2020-09-29 12:02:23.159 819432 INFO alembic.runtime.migration [-] Context impl MySQLImpl. 2020-09-29 12:02:23.160 819432 INFO alembic.runtime.migration [-] Will assume non-transactional DDL. fbd705961c3a (head) We have an Openstack Ussuri deployment on Ubuntu 20.04. Thanks in advance, Stefan Bujack -- Regards, Ann Taraday Mirantis, Inc BQ_END -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Duncan at ncirl.ie Thu Oct 8 09:42:44 2020 From: Robert.Duncan at ncirl.ie (Robert Duncan) Date: Thu, 8 Oct 2020 09:42:44 +0000 Subject: help with Openstack magnum in train Message-ID: Hi, I have openstack train deployed by kolla-ansible and am trying to deploy a k8s cluster on Fedora Atomic 27 with magnum it seems there is no podman binary in the Atomic 27 image specifically, Fedora-Atomic-27-20180419.0.x86_64.qcow2 I have set the label use_podman=false however, the template seems to ignore that which results in this error in the heat agent log: WARNING Attempt 12: Trying to install kubectl. Sleeping 5s * i=12 * '[' 12 -gt 60 ']' * echo 'WARNING Attempt 12: Trying to install kubectl. Sleeping 5s' * sleep 5s * ssh -F /srv/magnum/.ssh/config root at localhost '/usr/bin/podman run --entrypoint /bin/bash --name install-kubectl --net host --privileged --rm --user root --volume /srv/magnum/bin:/host/srv/magnum/bin k8s.gcr.io/hyperkube:v1.15.7 -c '''cp /usr/local/bin/kubectl /host/srv/magnum/bin/kubectl'''' bash: /usr/bin/podman: No such file or directory [fedora at test-quznqfqfa5ld-master-0 ~]$ which podman /usr/bin/which: no podman in (/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/fedora/.local/bin:/home/fedora/bin) I have tried a later version of atomic but the dependencies are closely coupled and it seems I must use version 27 - what am I missing? I'm following along with the Train documentation https://docs.openstack.org/magnum/train/install/launch-instance.html and was able to deploy a docker swarm cluster. thanks, Rob. ________________________________ The information contained and transmitted in this e-mail is confidential information, and is intended only for the named recipient to which it is addressed. The content of this e-mail may not have been sent with the authority of National College of Ireland. Any views or opinions presented are solely those of the author and do not necessarily represent those of National College of Ireland. 
If the reader of this message is not the named recipient or a person responsible for delivering it to the named recipient, you are notified that the review, dissemination, distribution, transmission, printing or copying, forwarding, or any other use of this message or any part of it, including any attachments, is strictly prohibited. If you have received this communication in error, please delete the e-mail and destroy all record of this communication. Thank you for your assistance. ________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgolovat at redhat.com Thu Oct 8 13:13:31 2020 From: sgolovat at redhat.com (Sergii Golovatiuk) Date: Thu, 8 Oct 2020 15:13:31 +0200 Subject: [tripleo] deprecating Mistral service In-Reply-To: References: Message-ID: Hi, Deprecating in Victoria and removing in Wallaby sounds reasonable. чт, 8 окт. 2020 г. в 14:35, Emilien Macchi : > Hi folks, > > In our long term goal to simplify TripleO and deprecate the services that > aren't used by our community anymore, I propose that we deprecate Mistral > services. > Mistral was used on the Undercloud in the previous cycles but not anymore. > While the service could be deployed on the Overcloud, we haven't seen any > of our users doing it. If that would be the case, please let us know as > soon as possible. > Removing it from TripleO will help us with maintenance (container images, > THT/puppet integration, CI, etc). > Maybe we could deprecate it in Victoria and remove it in Wallaby? > > Thanks, > -- > Emilien Macchi > -- Sergii Golovatiuk Senior Software Developer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Thu Oct 8 13:35:22 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 8 Oct 2020 15:35:22 +0200 Subject: [diskimage-builder][ironic-python-agent-builder][ci][focal][ironic] ipa-builder CI jobs can't migrate to ubuntu focal nodeset In-Reply-To: <2889623.ktpJ11cQ8Q@whitebase.usersys.redhat.com> References: <20201008041835.GA1011725@fedora19.localdomain> <2889623.ktpJ11cQ8Q@whitebase.usersys.redhat.com> Message-ID: On Thu, Oct 8, 2020 at 10:19 AM Luigi Toscano wrote: > On Thursday, 8 October 2020 06:18:35 CEST Ian Wienand wrote: > > On Wed, Oct 07, 2020 at 05:09:56PM +0200, Riccardo Pittau wrote: > > > This is possible using utilities (e.g. yumdownloader) included in > packages > > > still present in the ubuntu repositories, such as yum-utils and rpm. > > > Starting from Ubuntu focal, the yum-utils package has been removed from > > > the > > > repositories because of lack of support of Python 2.x and there's no > plan > > > to provide such support, at least to my knowledge. > > > > Yes, this is a problem for the "-minimal" elements that build an > > non-native chroot environment. Similar issues have occured with Suse > > and the zypper package manager not being available on the build host. > > > > The options I can see: > > > > - use the native build-host; i.e. build on centos as you described > > > > - the non-minimal, i.e. "centos" and "suse", for example, images might > > work under the current circumstances. They use the upsream ISO to > > create the initial chroot. These are generally bigger, and we've > > had stability issues in the past with the upstream images changing > > suddenly in various ways that were a maintenance headache. > > > > - use a container for dib. DIB doesn't have a specific container, but > > is part of the nodepool-builder container [1]. 
This is ultimately > > based on Debian buster [2] which has enough support to build > > everything ... for now. As noted this doesn't really solve the > > problem indefinitely, but certainly buys some time if you run dib > > out of that container (we could, of course, make a separate dib > > container; but it would be basically the same just without nodepool > > in it). This is what OpenDev production is using now, and all the > > CI is ultimately based on this container environment. > > > > - As clarkb has mentioned, probably the most promising alternative is > > to use the upstream container images as the basis for the initial > > chroot environments. jeblair has done most of this work with [3]. > > I'm fiddling with it to merge to master and see what's up ... I feel > > like maybe there were bootloader issues, although the basic > > extraction was working. This will allow the effort put into > > existing elements to not be lost. > > > > If I had to pick; I'd probably say that using the nodepool-builder > > container is the best path. That has the most momentum behind it > > because it's used for the OpenDev image builds. As we work on the > > container-image base elements, this work will be deployed into the > > container (meaning the container is less reliant on the underlying > > version of Debian) and you can switch to them as appropriate. > > I have to mention at this point, at risk of reharshing old debates, that > an > alternative in various scenarios (maybe not all) is the usage of > libguestfs > and its tools which modifies an existing base image. > > https://libguestfs.org/ > > We switched to it in Sahara for most of the guest images and that saved > some > headaches when building from a different host. > https://docs.openstack.org/sahara/latest/user/building-guest-images.html I like guestfish (a lot), but our IPA images are a bit awkward: we take a qcow2 image and convert it to a kernel/ramdisk pair. I guess we can use guestfish to get the former, but not the latter. It also means changing the approach that has been used and documented for years. Not impossible, but should not be done lightly. Dmitry > > > > I'd like to mention that libguestfs has been carrying a virt-dib tool for > a > while, but it has been tested only back to a certain version of dib: > https://libguestfs.org/virt-dib.1.html > > -- > Luigi > > > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Thu Oct 8 13:44:46 2020 From: marios at redhat.com (Marios Andreou) Date: Thu, 8 Oct 2020 16:44:46 +0300 Subject: [tripleo] deprecating Mistral service In-Reply-To: References: Message-ID: On Thu, Oct 8, 2020 at 3:34 PM Emilien Macchi wrote: > Hi folks, > > In our long term goal to simplify TripleO and deprecate the services that > aren't used by our community anymore, I propose that we deprecate Mistral > services. > Mistral was used on the Undercloud in the previous cycles but not anymore. > While the service could be deployed on the Overcloud, we haven't seen any > of our users doing it. If that would be the case, please let us know as > soon as possible. > Removing it from TripleO will help us with maintenance (container images, > THT/puppet integration, CI, etc). > Maybe we could deprecate it in Victoria and remove it in Wallaby? 
> +1 - at least for tripleo CI we aren't running mistral on anything newer than train > > Thanks, > -- > Emilien Macchi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johfulto at redhat.com Thu Oct 8 13:58:22 2020 From: johfulto at redhat.com (John Fulton) Date: Thu, 8 Oct 2020 09:58:22 -0400 Subject: [tripleo] deprecating Mistral service In-Reply-To: References: Message-ID: On Thu, Oct 8, 2020 at 9:47 AM Marios Andreou wrote: > On Thu, Oct 8, 2020 at 3:34 PM Emilien Macchi wrote: >> >> Hi folks, >> >> In our long term goal to simplify TripleO and deprecate the services that aren't used by our community anymore, I propose that we deprecate Mistral services. >> Mistral was used on the Undercloud in the previous cycles but not anymore. While the service could be deployed on the Overcloud, we haven't seen any of our users doing it. If that would be the case, please let us know as soon as possible. >> Removing it from TripleO will help us with maintenance (container images, THT/puppet integration, CI, etc). >> Maybe we could deprecate it in Victoria and remove it in Wallaby? > > > +1 - at least for tripleo CI we aren't running mistral on anything newer than train +1 In main branch (to be Victoria) Ceph and Derived parameters are not using Mistral anymore. > > > >> >> >> Thanks, >> -- >> Emilien Macchi From bharat at stackhpc.com Thu Oct 8 14:04:10 2020 From: bharat at stackhpc.com (Bharat Kunwar) Date: Thu, 8 Oct 2020 15:04:10 +0100 Subject: help with Openstack magnum in train In-Reply-To: References: Message-ID: Use fedora atomic 29 if you have to but it is end of life and no longer security patched. We recommend using fedora coreos instead. Sent from my iPhone > On 8 Oct 2020, at 13:44, Robert Duncan wrote: > >  > Hi, > > I have openstack train deployed by kolla-ansible and am trying to deploy a k8s cluster on Fedora Atomic 27 with magnum > it seems there is no podman binary in the Atomic 27 image > specifically, Fedora-Atomic-27-20180419.0.x86_64.qcow2 > > > I have set the label use_podman=false > > however, the template seems to ignore that which results in this error in the heat agent log: > > WARNING Attempt 12: Trying to install kubectl. Sleeping 5s > > i=12 > '[' 12 -gt 60 ']' > echo 'WARNING Attempt 12: Trying to install kubectl. Sleeping 5s' > sleep 5s > ssh -F /srv/magnum/.ssh/config root at localhost '/usr/bin/podman run --entrypoint /bin/bash --name install-kubectl --net host --privileged --rm --user root --volume /srv/magnum/bin:/host/srv/magnum/bin k8s.gcr.io/hyperkube:v1.15.7 -c '''cp /usr/local/bin/kubectl /host/srv/magnum/bin/kubectl'''' > bash: /usr/bin/podman: No such file or directory > [fedora at test-quznqfqfa5ld-master-0 ~]$ which podman > /usr/bin/which: no podman in (/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/fedora/.local/bin:/home/fedora/bin) > > > > I have tried a later version of atomic but the dependencies are closely coupled and it seems I must use version 27 - what am I missing? > > I'm following along with the Train documentation https://docs.openstack.org/magnum/train/install/launch-instance.html > > and was able to deploy a docker swarm cluster. > > > > thanks, > > Rob. > > > > > > > The information contained and transmitted in this e-mail is confidential information, and is intended only for the named recipient to which it is addressed. The content of this e-mail may not have been sent with the authority of National College of Ireland. 
Any views or opinions presented are solely those of the author and do not necessarily represent those of National College of Ireland. If the reader of this message is not the named recipient or a person responsible for delivering it to the named recipient, you are notified that the review, dissemination, distribution, transmission, printing or copying, forwarding, or any other use of this message or any part of it, including any attachments, is strictly prohibited. If you have received this communication in error, please delete the e-mail and destroy all record of this communication. Thank you for your assistance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From alifshit at redhat.com Thu Oct 8 14:24:17 2020 From: alifshit at redhat.com (Artom Lifshitz) Date: Thu, 8 Oct 2020 10:24:17 -0400 Subject: vPTG Oct 2020 Registration & Schedule In-Reply-To: References: Message-ID: Can I make a feature request? Would it be possible to get the calendar in iCal format (or whatever we can import/embed in our calendaring tool of choice), ideally with one calendar per project/track? It would help deal with time zones, and would be *really* appreciated. Thanks in advance! On Wed, Oct 7, 2020 at 5:46 PM Kendall Nelson wrote: > > Hey everyone, > > The October 2020 Project Teams Gathering is right around the corner! The official schedule has now been posted on the PTG website [1], the PTGbot has been updated[2], and we have also attached it to this email. > > Friendly reminder, if you have not already registered, please do so [3]. It is important that we get everyone to register for the event as this is how we will contact you about tooling information/passwords and other event details. > > Please let us know if you have any questions. > > Cheers, > The Kendalls > (diablo_rojo & wendallkaters) > > [1] PTG Website www.openstack.org/ptg > [2] PTGbot: http://ptg.openstack.org/ptg.html > [3] PTG Registration: https://october2020ptg.eventbrite.com From fungi at yuggoth.org Thu Oct 8 14:37:07 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 8 Oct 2020 14:37:07 +0000 Subject: vPTG Oct 2020 Registration & Schedule In-Reply-To: References: Message-ID: <20201008143707.eaklm73zjf4swjsa@yuggoth.org> On 2020-10-08 10:24:17 -0400 (-0400), Artom Lifshitz wrote: > Can I make a feature request? Would it be possible to get the calendar > in iCal format (or whatever we can import/embed in our calendaring > tool of choice), ideally with one calendar per project/track? It would > help deal with time zones, and would be *really* appreciated. [...] The live schedule at http://ptg.openstack.org/ptg.html is continuously updated by an IRC bot, and its source code lives here: https://opendev.org/openstack/ptgbot If someone wants, it would probably not be hard to integrate or lift some of the code from this tool: https://opendev.org/opendev/yaml2ical Just have the bot emit a ptg.ical file the same way it does its ptg.json file and then add a link for it in the ptg.html file it builds. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
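To make that suggestion concrete, a minimal sketch of what emitting a ptg.ics next to ptg.json could look like, using only the Python standard library; the ptg.json layout assumed in the comments is an illustration, not the bot's real schema:

import json
from datetime import datetime

ICS_TIME = "%Y%m%dT%H%M%SZ"

def slot_to_vevent(track, start, end):
    # Render one booked slot as an iCalendar VEVENT (start/end are UTC datetimes).
    return ("BEGIN:VEVENT\r\n"
            "UID:%s-%s@ptg.openstack.org\r\n"
            "DTSTART:%s\r\n"
            "DTEND:%s\r\n"
            "SUMMARY:PTG: %s\r\n"
            "END:VEVENT\r\n" % (track, start.strftime(ICS_TIME),
                                start.strftime(ICS_TIME),
                                end.strftime(ICS_TIME), track))

def json_to_ical(json_path="ptg.json", ics_path="ptg.ics"):
    # Assumed input shape (NOT the real ptgbot schema), times in UTC:
    # {"slots": [{"track": "nova", "start": "2020-10-26T13:00:00",
    #             "end": "2020-10-26T14:00:00"}, ...]}
    with open(json_path) as fp:
        data = json.load(fp)
    events = []
    for slot in data.get("slots", []):
        start = datetime.strptime(slot["start"], "%Y-%m-%dT%H:%M:%S")
        end = datetime.strptime(slot["end"], "%Y-%m-%dT%H:%M:%S")
        events.append(slot_to_vevent(slot["track"], start, end))
    with open(ics_path, "w") as fp:
        fp.write("BEGIN:VCALENDAR\r\nVERSION:2.0\r\n"
                 "PRODID:-//ptgbot//sketch//EN\r\n"
                 + "".join(events) + "END:VCALENDAR\r\n")

One calendar per project/track, as requested above, would just mean filtering the slots by track before writing each file.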
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From Robert.Duncan at ncirl.ie Thu Oct 8 14:36:26 2020 From: Robert.Duncan at ncirl.ie (Robert Duncan) Date: Thu, 8 Oct 2020 14:36:26 +0000 Subject: help with Openstack magnum in train In-Reply-To: References: , Message-ID: Thanks Bharat, yes Fedora 29 works!!, I will give coreos a try also - however the documentation says that the heat templates and machine images are closely coupled and the coreos links are broken, do you know specifically which version or coreos for magnum train release is correct? e.g. - this is broken link on Magnum user guide page http://beta.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2 and the support matrix lists coreos version 4.3.6 but I can't find a coreos with anything like that version now Rob. ________________________________ From: Bharat Kunwar Sent: Thursday 8 October 2020 15:04 To: Robert Duncan Cc: OpenStack Discuss Subject: Re: help with Openstack magnum in train Use fedora atomic 29 if you have to but it is end of life and no longer security patched. We recommend using fedora coreos instead. Sent from my iPhone On 8 Oct 2020, at 13:44, Robert Duncan wrote:  Hi, I have openstack train deployed by kolla-ansible and am trying to deploy a k8s cluster on Fedora Atomic 27 with magnum it seems there is no podman binary in the Atomic 27 image specifically, Fedora-Atomic-27-20180419.0.x86_64.qcow2 I have set the label use_podman=false however, the template seems to ignore that which results in this error in the heat agent log: WARNING Attempt 12: Trying to install kubectl. Sleeping 5s * i=12 * '[' 12 -gt 60 ']' * echo 'WARNING Attempt 12: Trying to install kubectl. Sleeping 5s' * sleep 5s * ssh -F /srv/magnum/.ssh/config root at localhost '/usr/bin/podman run --entrypoint /bin/bash --name install-kubectl --net host --privileged --rm --user root --volume /srv/magnum/bin:/host/srv/magnum/bin k8s.gcr.io/hyperkube:v1.15.7 -c '''cp /usr/local/bin/kubectl /host/srv/magnum/bin/kubectl'''' bash: /usr/bin/podman: No such file or directory [fedora at test-quznqfqfa5ld-master-0 ~]$ which podman /usr/bin/which: no podman in (/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/fedora/.local/bin:/home/fedora/bin) I have tried a later version of atomic but the dependencies are closely coupled and it seems I must use version 27 - what am I missing? I'm following along with the Train documentation https://docs.openstack.org/magnum/train/install/launch-instance.html and was able to deploy a docker swarm cluster. thanks, Rob. ________________________________ The information contained and transmitted in this e-mail is confidential information, and is intended only for the named recipient to which it is addressed. The content of this e-mail may not have been sent with the authority of National College of Ireland. Any views or opinions presented are solely those of the author and do not necessarily represent those of National College of Ireland. If the reader of this message is not the named recipient or a person responsible for delivering it to the named recipient, you are notified that the review, dissemination, distribution, transmission, printing or copying, forwarding, or any other use of this message or any part of it, including any attachments, is strictly prohibited. If you have received this communication in error, please delete the e-mail and destroy all record of this communication. Thank you for your assistance. 
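On the Fedora CoreOS side, the image mostly needs the right os_distro property for Magnum to pick its fedora-coreos driver (available from Train onward, as far as I can tell). A hedged example of the upload step; the image name and file name below are placeholders for whichever Fedora CoreOS OpenStack build you settle on:

openstack image create fedora-coreos \
  --disk-format qcow2 \
  --container-format bare \
  --property os_distro='fedora-coreos' \
  --file fedora-coreos-VERSION-openstack.x86_64.qcow2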
________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Thu Oct 8 15:38:57 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 08 Oct 2020 08:38:57 -0700 Subject: =?UTF-8?Q?Re:_[diskimage-builder][ironic-python-agent-builder][ci][focal?= =?UTF-8?Q?][ironic]_ipa-builder_CI_jobs_can't_migrate_to_ubuntu_focal_n?= =?UTF-8?Q?odeset?= In-Reply-To: <2889623.ktpJ11cQ8Q@whitebase.usersys.redhat.com> References: <20201008041835.GA1011725@fedora19.localdomain> <2889623.ktpJ11cQ8Q@whitebase.usersys.redhat.com> Message-ID: <33d43b9f-89e2-4c24-baf2-729fbad10ad7@www.fastmail.com> On Thu, Oct 8, 2020, at 1:17 AM, Luigi Toscano wrote: > On Thursday, 8 October 2020 06:18:35 CEST Ian Wienand wrote: > > On Wed, Oct 07, 2020 at 05:09:56PM +0200, Riccardo Pittau wrote: > > > This is possible using utilities (e.g. yumdownloader) included in packages > > > still present in the ubuntu repositories, such as yum-utils and rpm. > > > Starting from Ubuntu focal, the yum-utils package has been removed from > > > the > > > repositories because of lack of support of Python 2.x and there's no plan > > > to provide such support, at least to my knowledge. > > > > Yes, this is a problem for the "-minimal" elements that build an > > non-native chroot environment. Similar issues have occured with Suse > > and the zypper package manager not being available on the build host. > > > > The options I can see: > > > > - use the native build-host; i.e. build on centos as you described > > > > - the non-minimal, i.e. "centos" and "suse", for example, images might > > work under the current circumstances. They use the upsream ISO to > > create the initial chroot. These are generally bigger, and we've > > had stability issues in the past with the upstream images changing > > suddenly in various ways that were a maintenance headache. > > > > - use a container for dib. DIB doesn't have a specific container, but > > is part of the nodepool-builder container [1]. This is ultimately > > based on Debian buster [2] which has enough support to build > > everything ... for now. As noted this doesn't really solve the > > problem indefinitely, but certainly buys some time if you run dib > > out of that container (we could, of course, make a separate dib > > container; but it would be basically the same just without nodepool > > in it). This is what OpenDev production is using now, and all the > > CI is ultimately based on this container environment. > > > > - As clarkb has mentioned, probably the most promising alternative is > > to use the upstream container images as the basis for the initial > > chroot environments. jeblair has done most of this work with [3]. > > I'm fiddling with it to merge to master and see what's up ... I feel > > like maybe there were bootloader issues, although the basic > > extraction was working. This will allow the effort put into > > existing elements to not be lost. > > > > If I had to pick; I'd probably say that using the nodepool-builder > > container is the best path. That has the most momentum behind it > > because it's used for the OpenDev image builds. As we work on the > > container-image base elements, this work will be deployed into the > > container (meaning the container is less reliant on the underlying > > version of Debian) and you can switch to them as appropriate. 
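For concreteness, "run dib out of that container" boils down to something roughly like the sketch below. The image name, mount point and element list are illustrative assumptions rather than the documented invocation (check the nodepool docs for the real thing), and --privileged is there because dib wants loop devices and mounts:

docker run --rm --privileged \
  -v $(pwd)/images:/srv/images \
  zuul/nodepool-builder \
  disk-image-create -o /srv/images/test-image centos-minimal vm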
> > I have to mention at this point, at risk of reharshing old debates, that an > alternative in various scenarios (maybe not all) is the usage of libguestfs > and its tools which modifies an existing base image. > > https://libguestfs.org/ > > We switched to it in Sahara for most of the guest images and that saved some > headaches when building from a different host. > https://docs.openstack.org/sahara/latest/user/building-guest-images.html > > > I'd like to mention that libguestfs has been carrying a virt-dib tool for a > while, but it has been tested only back to a certain version of dib: > https://libguestfs.org/virt-dib.1.html The major issues with libguestfs is that it seem primarily used to modify existing images. This has all of the problems that ianw points out above with size and stability. I'm sure you can bootstrap a base image to use with it too, but would you end up in the same toolchain problem as with dib if you did that? Also, to be clear DIB supports the same use case of starting from an existing image (but using different tools) if you want to avoid the headaches with bootstrapping images yourself. I think the more specific issue we're trying to figure out is "how do you bootstrap images from scratch for one distro on top of another if they use different bootstrapping tools". I don't think libguestfs helps with that much. And if you are starting from an existing image then DIB or libguestfs should work fine. From ltoscano at redhat.com Thu Oct 8 15:57:55 2020 From: ltoscano at redhat.com (Luigi Toscano) Date: Thu, 08 Oct 2020 17:57:55 +0200 Subject: [diskimage-builder][ironic-python-agent-builder][ci][focal][ironic] ipa-builder CI jobs can't migrate to ubuntu focal nodeset In-Reply-To: <33d43b9f-89e2-4c24-baf2-729fbad10ad7@www.fastmail.com> References: <2889623.ktpJ11cQ8Q@whitebase.usersys.redhat.com> <33d43b9f-89e2-4c24-baf2-729fbad10ad7@www.fastmail.com> Message-ID: <4058831.ejJDZkT8p0@whitebase.usersys.redhat.com> On Thursday, 8 October 2020 17:38:57 CEST Clark Boylan wrote: > On Thu, Oct 8, 2020, at 1:17 AM, Luigi Toscano wrote: > > On Thursday, 8 October 2020 06:18:35 CEST Ian Wienand wrote: > > > On Wed, Oct 07, 2020 at 05:09:56PM +0200, Riccardo Pittau wrote: > > > > This is possible using utilities (e.g. yumdownloader) included in > > > > packages > > > > still present in the ubuntu repositories, such as yum-utils and rpm. > > > > Starting from Ubuntu focal, the yum-utils package has been removed > > > > from > > > > the > > > > repositories because of lack of support of Python 2.x and there's no > > > > plan > > > > to provide such support, at least to my knowledge. > > > > > > Yes, this is a problem for the "-minimal" elements that build an > > > non-native chroot environment. Similar issues have occured with Suse > > > and the zypper package manager not being available on the build host. > > > > > > The options I can see: > > > > > > - use the native build-host; i.e. build on centos as you described > > > > > > - the non-minimal, i.e. "centos" and "suse", for example, images might > > > > > > work under the current circumstances. They use the upsream ISO to > > > create the initial chroot. These are generally bigger, and we've > > > had stability issues in the past with the upstream images changing > > > suddenly in various ways that were a maintenance headache. > > > > > > - use a container for dib. DIB doesn't have a specific container, but > > > > > > is part of the nodepool-builder container [1]. 
This is ultimately > > > based on Debian buster [2] which has enough support to build > > > everything ... for now. As noted this doesn't really solve the > > > problem indefinitely, but certainly buys some time if you run dib > > > out of that container (we could, of course, make a separate dib > > > container; but it would be basically the same just without nodepool > > > in it). This is what OpenDev production is using now, and all the > > > CI is ultimately based on this container environment. > > > > > > - As clarkb has mentioned, probably the most promising alternative is > > > > > > to use the upstream container images as the basis for the initial > > > chroot environments. jeblair has done most of this work with [3]. > > > I'm fiddling with it to merge to master and see what's up ... I feel > > > like maybe there were bootloader issues, although the basic > > > extraction was working. This will allow the effort put into > > > existing elements to not be lost. > > > > > > If I had to pick; I'd probably say that using the nodepool-builder > > > container is the best path. That has the most momentum behind it > > > because it's used for the OpenDev image builds. As we work on the > > > container-image base elements, this work will be deployed into the > > > container (meaning the container is less reliant on the underlying > > > version of Debian) and you can switch to them as appropriate. > > > > I have to mention at this point, at risk of reharshing old debates, that > > an > > alternative in various scenarios (maybe not all) is the usage of > > libguestfs > > and its tools which modifies an existing base image. > > > > https://libguestfs.org/ > > > > We switched to it in Sahara for most of the guest images and that saved > > some headaches when building from a different host. > > https://docs.openstack.org/sahara/latest/user/building-guest-images.html > > > > > > I'd like to mention that libguestfs has been carrying a virt-dib tool for > > a > > while, but it has been tested only back to a certain version of dib: > > https://libguestfs.org/virt-dib.1.html > > The major issues with libguestfs is that it seem primarily used to modify > existing images. This has all of the problems that ianw points out above > with size and stability. I'm sure you can bootstrap a base image to use > with it too, but would you end up in the same toolchain problem as with dib > if you did that? > > Also, to be clear DIB supports the same use case of starting from an > existing image (but using different tools) if you want to avoid the > headaches with bootstrapping images yourself. > > I think the more specific issue we're trying to figure out is "how do you > bootstrap images from scratch for one distro on top of another if they use > different bootstrapping tools". I don't think libguestfs helps with that > much. And if you are starting from an existing image then DIB or libguestfs > should work fine. Uhm, maybe unattended virt-install could help there? 
-- Luigi From fungi at yuggoth.org Thu Oct 8 16:04:45 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 8 Oct 2020 16:04:45 +0000 Subject: [diskimage-builder][ironic-python-agent-builder][ci][focal][ironic] ipa-builder CI jobs can't migrate to ubuntu focal nodeset In-Reply-To: <4058831.ejJDZkT8p0@whitebase.usersys.redhat.com> References: <2889623.ktpJ11cQ8Q@whitebase.usersys.redhat.com> <33d43b9f-89e2-4c24-baf2-729fbad10ad7@www.fastmail.com> <4058831.ejJDZkT8p0@whitebase.usersys.redhat.com> Message-ID: <20201008160444.uug66lb5ihsc3qx3@yuggoth.org> On 2020-10-08 17:57:55 +0200 (+0200), Luigi Toscano wrote: [...] > Uhm, maybe unattended virt-install could help there? I expect that would be almost unworkable from within a virtual machine instance without nested virt acceleration. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From whayutin at redhat.com Thu Oct 8 16:36:02 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 8 Oct 2020 10:36:02 -0600 Subject: [tripleo] deprecating Mistral service In-Reply-To: References: Message-ID: +1 On Thu, Oct 8, 2020 at 8:04 AM John Fulton wrote: > On Thu, Oct 8, 2020 at 9:47 AM Marios Andreou wrote: > > On Thu, Oct 8, 2020 at 3:34 PM Emilien Macchi > wrote: > >> > >> Hi folks, > >> > >> In our long term goal to simplify TripleO and deprecate the services > that aren't used by our community anymore, I propose that we deprecate > Mistral services. > >> Mistral was used on the Undercloud in the previous cycles but not > anymore. While the service could be deployed on the Overcloud, we haven't > seen any of our users doing it. If that would be the case, please let us > know as soon as possible. > >> Removing it from TripleO will help us with maintenance (container > images, THT/puppet integration, CI, etc). > >> Maybe we could deprecate it in Victoria and remove it in Wallaby? > > > > > > +1 - at least for tripleo CI we aren't running mistral on anything newer > than train > > +1 In main branch (to be Victoria) Ceph and Derived parameters are > not using Mistral anymore. > > > > > > > > >> > >> > >> Thanks, > >> -- > >> Emilien Macchi > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Oct 8 18:54:04 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 8 Oct 2020 11:54:04 -0700 Subject: vPTG Oct 2020 Registration & Schedule In-Reply-To: <20201008143707.eaklm73zjf4swjsa@yuggoth.org> References: <20201008143707.eaklm73zjf4swjsa@yuggoth.org> Message-ID: This came up last time too lol. -Kendall (diablo_rojo) On Thu, Oct 8, 2020 at 7:38 AM Jeremy Stanley wrote: > On 2020-10-08 10:24:17 -0400 (-0400), Artom Lifshitz wrote: > > Can I make a feature request? Would it be possible to get the calendar > > in iCal format (or whatever we can import/embed in our calendaring > > tool of choice), ideally with one calendar per project/track? It would > > help deal with time zones, and would be *really* appreciated. > [...] 
> > The live schedule at http://ptg.openstack.org/ptg.html is > continuously updated by an IRC bot, and its source code lives here: > > https://opendev.org/openstack/ptgbot > > If someone wants, it would probably not be hard to integrate or lift > some of the code from this tool: > > https://opendev.org/opendev/yaml2ical > > Just have the bot emit a ptg.ical file the same way it does its > ptg.json file and then add a link for it in the ptg.html file it > builds. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Oct 8 19:45:30 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 8 Oct 2020 21:45:30 +0200 Subject: [neutron] Drivers meeting - 09.10.2020 Message-ID: <20201008194530.myypcxymz3fo633x@p1.internet.domowy> Hi, I don't have any new or updated RFEs to discuss for drivers team for this week. So lets cancel the meeting this week. Have a great weekend and see You all next week :) -- Slawek Kaplonski Principal Software Engineer Red Hat From pierre at stackhpc.com Thu Oct 8 19:56:12 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Thu, 8 Oct 2020 21:56:12 +0200 Subject: vPTG Oct 2020 Registration & Schedule In-Reply-To: References: <20201008143707.eaklm73zjf4swjsa@yuggoth.org> Message-ID: Sean Mooney wrote something last time which can probably be reused by just updating the csv: http://lists.openstack.org/pipermail/openstack-discuss/2020-April/014217.html On Thu, 8 Oct 2020 at 20:55, Kendall Nelson wrote: > > This came up last time too lol. > > -Kendall (diablo_rojo) > > On Thu, Oct 8, 2020 at 7:38 AM Jeremy Stanley wrote: >> >> On 2020-10-08 10:24:17 -0400 (-0400), Artom Lifshitz wrote: >> > Can I make a feature request? Would it be possible to get the calendar >> > in iCal format (or whatever we can import/embed in our calendaring >> > tool of choice), ideally with one calendar per project/track? It would >> > help deal with time zones, and would be *really* appreciated. >> [...] >> >> The live schedule at http://ptg.openstack.org/ptg.html is >> continuously updated by an IRC bot, and its source code lives here: >> >> https://opendev.org/openstack/ptgbot >> >> If someone wants, it would probably not be hard to integrate or lift >> some of the code from this tool: >> >> https://opendev.org/opendev/yaml2ical >> >> Just have the bot emit a ptg.ical file the same way it does its >> ptg.json file and then add a link for it in the ptg.html file it >> builds. >> -- >> Jeremy Stanley From walsh277072 at gmail.com Fri Oct 9 00:29:55 2020 From: walsh277072 at gmail.com (WALSH CHANG) Date: Fri, 9 Oct 2020 00:29:55 +0000 Subject: [ceilometer] How to install older version of gnocchi Message-ID: I use Ubuntu 18.04 to install https://docs.openstack.org/ceilometer/stein/install/install-base-ubuntu.html (ceilometer) When I run service gnocchi-api restart, I get Failed to restart gnocchi-api.service: Unit gnocchi-api.service not found. I can't find any solution to address this issue. Someone said the new version doesn't include the gnocchi-api, so I am trying to install the previous version, but I don't know where I can find the old version of gnocchi-api https://stackoverflow.com/questions/47520779/service-gnocchi-api-not-found Can anyone help? Thanks -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tkajinam at redhat.com Fri Oct 9 02:39:16 2020 From: tkajinam at redhat.com (Takashi Kajinami) Date: Fri, 9 Oct 2020 11:39:16 +0900 Subject: [tripleo] deprecating Mistral service In-Reply-To: References: Message-ID: +1 Will we deprecate Zaqar as well ? AFAIK Zaqar is a part of deployment method by Mistral, As I've never seen users have Zaqar deployed in overcloud I think we are good to deprecate it following Mistral. On Fri, Oct 9, 2020 at 1:38 AM Wesley Hayutin wrote: > +1 > > On Thu, Oct 8, 2020 at 8:04 AM John Fulton wrote: > >> On Thu, Oct 8, 2020 at 9:47 AM Marios Andreou wrote: >> > On Thu, Oct 8, 2020 at 3:34 PM Emilien Macchi >> wrote: >> >> >> >> Hi folks, >> >> >> >> In our long term goal to simplify TripleO and deprecate the services >> that aren't used by our community anymore, I propose that we deprecate >> Mistral services. >> >> Mistral was used on the Undercloud in the previous cycles but not >> anymore. While the service could be deployed on the Overcloud, we haven't >> seen any of our users doing it. If that would be the case, please let us >> know as soon as possible. >> >> Removing it from TripleO will help us with maintenance (container >> images, THT/puppet integration, CI, etc). >> >> Maybe we could deprecate it in Victoria and remove it in Wallaby? >> > >> > >> > +1 - at least for tripleo CI we aren't running mistral on anything >> newer than train >> >> +1 In main branch (to be Victoria) Ceph and Derived parameters are >> not using Mistral anymore. >> >> > >> > >> > >> >> >> >> >> >> Thanks, >> >> -- >> >> Emilien Macchi >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Fri Oct 9 02:54:12 2020 From: ramishra at redhat.com (Rabi Mishra) Date: Fri, 9 Oct 2020 08:24:12 +0530 Subject: [tripleo] deprecating Mistral service In-Reply-To: References: Message-ID: On Thu, Oct 8, 2020 at 6:08 PM Emilien Macchi wrote: > Hi folks, > > In our long term goal to simplify TripleO and deprecate the services that > aren't used by our community anymore, I propose that we deprecate Mistral > services. > Mistral was used on the Undercloud in the previous cycles but not anymore. > While the service could be deployed on the Overcloud, we haven't seen any > of our users doing it. If that would be the case, please let us know as > soon as possible. > I would be surprised if no one in the community has been deploying mistral on overcloud with TripleO. Though it's good to support as many _active_ services available in OpenStack, probably not worth the effort to maintain stuff that no one uses. > Removing it from TripleO will help us with maintenance (container images, > THT/puppet integration, CI, etc). > Maybe we could deprecate it in Victoria and remove it in Wallaby? > > Thanks, > -- > Emilien Macchi > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyppe at gmail.com Fri Oct 9 09:11:03 2020 From: tonyppe at gmail.com (Tony Pearce) Date: Fri, 9 Oct 2020 17:11:03 +0800 Subject: help with Openstack magnum in train In-Reply-To: References: Message-ID: Hi Rob, I didn't see any replies to your message. I am also trying to use Magnum and having issues. 
These labels are allowing me to deploy k8 cluster (although I have another issue with regards to storage/flavours which I'm coming back to soon):

cinder_csi_enabled=true
availability_zone=AZ_1
cloud_provider_enabled=true
heat_container_agent_tag=train-stable-3

Some time back when I asked the group here for help, I was informed about a bug and they suggested using "heat_container_agent_tag=train-stable-3". I have multi AZ so I needed to specify one. The other two have been found on the back of some research with one of our internal developers for something else but just mentioning here in case it helps. Good luck with this and I'd be grateful for your feedback - this has caused me much pain to get working.

The storage issue I currently have is with regards to the flavour being used for the cluster instances. I need to use 0MB disk so that all of the instance storage is set up on externally integrated array (Cinder / cinder iscsi storage driver). However, using this flavour causes the k8 create to fail because it errors on the 0MB. I have some steps to try from the community but have not been able to get to this yet.

Regards,
Tony Pearce

On Thu, 8 Oct 2020 at 20:44, Robert Duncan wrote: > Hi, > > I have openstack train deployed by kolla-ansible and am trying to deploy a > k8s cluster on Fedora Atomic 27 with magnum > it seems there is no podman binary in the Atomic 27 image > specifically, Fedora-Atomic-27-20180419.0.x86_64.qcow2 > > > I have set the label *use_podman=false* > > however, the template seems to ignore that which results in this error in > the heat agent log: > > WARNING Attempt 12: Trying to install kubectl. Sleeping 5s > > - i=12 > - '[' 12 -gt 60 ']' > - echo 'WARNING Attempt 12: Trying to install kubectl. Sleeping 5s' > - sleep 5s > - ssh -F /srv/magnum/.ssh/config root at localhost '/usr/bin/podman run > --entrypoint /bin/bash --name install-kubectl --net host --privileged --rm > --user root --volume /srv/magnum/bin:/host/srv/magnum/bin > k8s.gcr.io/hyperkube:v1.15.7 -c '''cp /usr/local/bin/kubectl > /host/srv/magnum/bin/kubectl'''' > bash: /usr/bin/podman: No such file or directory > > [fedora at test-quznqfqfa5ld-master-0 ~]$ which podman > */usr/bin/which: no podman in > (/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/fedora/.local/bin:/home/fedora/bin)* > > > *I have tried a later version of atomic but the dependencies are closely > coupled and it seems I must use version 27 - what am I missing?* > > *I'm following along with the Train > documentation https://docs.openstack.org/magnum/train/install/launch-instance.html > * > > and was able to deploy a docker swarm cluster. > > > thanks, > > Rob. > > > > > > ------------------------------ > > The information contained and transmitted in this e-mail is confidential > information, and is intended only for the named recipient to which it is > addressed. The content of this e-mail may not have been sent with the > authority of National College of Ireland. Any views or opinions presented > are solely those of the author and do not necessarily represent those of > National College of Ireland. If the reader of this message is not the named > recipient or a person responsible for delivering it to the named recipient, > you are notified that the review, dissemination, distribution, > transmission, printing or copying, forwarding, or any other use of this > message or any part of it, including any attachments, is strictly > prohibited.
If you have received this communication in error, please delete > the e-mail and destroy all record of this communication. Thank you for your > assistance. > ------------------------------ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Fri Oct 9 09:30:04 2020 From: geguileo at redhat.com (Gorka Eguileor) Date: Fri, 9 Oct 2020 11:30:04 +0200 Subject: [Ussuri] [openstack-ansible] [cinder] Can't attach volumes to instances In-Reply-To: <4b1d09a9-0f75-0f8e-1b14-d54b0b088cc3@dhbw-mannheim.de> References: <4b1d09a9-0f75-0f8e-1b14-d54b0b088cc3@dhbw-mannheim.de> Message-ID: <20201009093004.wjs7xz4up3azzpzl@localhost> On 07/10, Oliver Wenz wrote: > Hi, > I've deployed OpenStack successfully using openstack-ansible. I use > cinder with LVM backend and can create volumes. However, when I attach > them to an instance, they stay detached (though there's no Error > Message) both using CLI and the Dashboard. > > Looking for a solution I read that the cinder logs might contain > relevant information but in Ussuri they don't seem to be present under > /var/log/cinder... > > Here's the part of my openstack_user_config.yml regarding Cinder: > > ``` > storage_hosts: > lvm-storage1: > ip: 192.168.110.202 > container_vars: > cinder_backends: > lvm: > volume_backend_name: LVM_iSCSI > volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver > volume_group: cinder-volumes > iscsi_ip_address: 10.0.3.202 > limit_container_types: cinder_volume > ``` > > I've created cinder-volumes with vgcreate before the installation and > all cinder services are up: > > # openstack volume service list > +------------------+--------------------------------------+------+---------+-------+----------------------------+ > | Binary | Host | Zone | > Status | State | Updated At | > +------------------+--------------------------------------+------+---------+-------+----------------------------+ > | cinder-backup | bc1bl10 | nova | > enabled | up | 2020-10-07T11:24:10.000000 | > | cinder-volume | bc1bl10 at lvm | nova | > enabled | up | 2020-10-07T11:24:05.000000 | > | cinder-scheduler | infra1-cinder-api-container-1dacc920 | nova | > enabled | up | 2020-10-07T11:24:05.000000 | > +------------------+--------------------------------------+------+---------+-------+----------------------------+ > Hi, Configuration option iscsi_ip_address was removed a long time ago in Cinder, the new one is target_ip_address (I don't know if the playbook maps it or what). I recommend you run the attach request with the --debug flag to get the request id, that way you can easily track the request and see where it failed. Then you check the logs like Dmitriy mentions and see where things failed. It can fail on: - cinder-volume: if it cannot map the volume (unlikely) - nova-compute: on os-brick, so you'll have a traceback It's important that the target_ip_address can be accessed from the Nova compute using the interface for the IP defined in my_ip in nova.conf Assuming that iscsi_ip_address is not doing anything, then the LVM driver will probably use the one defined in ip (192.168.110.202). If my_ip is not defined in nova.conf, then you can see the default for Nova running in that compute node: python -c 'from oslo_utils import netutils; print(netutils.get_my_ipv4())' So make sure you can actually access from that interface the IP in Cinder. 
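For example, a quick sanity check from the compute node could be something like the sketch below (just a sketch: use whichever address the target actually ends up exported on, i.e. 192.168.110.202 if iscsi_ip_address is being ignored, or 10.0.3.202 if the playbook does map it; 3260 is the default iSCSI target port, and iscsiadm comes from open-iscsi):

  # run on the compute node
  ping -c 3 192.168.110.202
  # is the iSCSI target port reachable?
  nc -zv 192.168.110.202 3260
  # can we actually discover targets on it?
  iscsiadm -m discovery -t sendtargets -p 192.168.110.202:3260
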
I wouldn't bother with all that myself, I would just set debug log levels in cinder-volume and check the initialize_connection call in the logs to see the parameters in the entry call that Nova is sending (the IP it is going to be connecting from) and the return value where we can see the IP of the iSCSI target. Hope that helps. Cheers, Gorka. > > Thanks in advance! > > Kind regards, > Oliver > From bharat at stackhpc.com Fri Oct 9 09:40:44 2020 From: bharat at stackhpc.com (Bharat Kunwar) Date: Fri, 9 Oct 2020 10:40:44 +0100 Subject: help with Openstack magnum in train In-Reply-To: References: Message-ID: Please use fedora coreos, not coreos which is also eol now. Sent from my iPhone > On 8 Oct 2020, at 15:36, Robert Duncan wrote: > >  > Thanks Bharat, yes Fedora 29 works!!, I will give coreos a try also - however the documentation says that the heat templates and machine images are closely coupled and the coreos links are broken, do you know specifically which version or coreos for magnum train release is correct? > > e.g. - this is broken link on Magnum user guide page > http://beta.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2 > > and the support matrix lists coreos version 4.3.6 > but I can't find a coreos with anything like that version now > > Rob. > From: Bharat Kunwar > Sent: Thursday 8 October 2020 15:04 > To: Robert Duncan > Cc: OpenStack Discuss > Subject: Re: help with Openstack magnum in train > > Use fedora atomic 29 if you have to but it is end of life and no longer security patched. We recommend using fedora coreos instead. > > Sent from my iPhone > >>> On 8 Oct 2020, at 13:44, Robert Duncan wrote: >>> >>  >> Hi, >> >> I have openstack train deployed by kolla-ansible and am trying to deploy a k8s cluster on Fedora Atomic 27 with magnum >> it seems there is no podman binary in the Atomic 27 image >> specifically, Fedora-Atomic-27-20180419.0.x86_64.qcow2 >> >> >> I have set the label use_podman=false >> >> however, the template seems to ignore that which results in this error in the heat agent log: >> >> WARNING Attempt 12: Trying to install kubectl. Sleeping 5s >> >> i=12 >> '[' 12 -gt 60 ']' >> echo 'WARNING Attempt 12: Trying to install kubectl. Sleeping 5s' >> sleep 5s >> ssh -F /srv/magnum/.ssh/config root at localhost '/usr/bin/podman run --entrypoint /bin/bash --name install-kubectl --net host --privileged --rm --user root --volume /srv/magnum/bin:/host/srv/magnum/bin k8s.gcr.io/hyperkube:v1.15.7 -c '''cp /usr/local/bin/kubectl /host/srv/magnum/bin/kubectl'''' >> bash: /usr/bin/podman: No such file or directory >> [fedora at test-quznqfqfa5ld-master-0 ~]$ which podman >> /usr/bin/which: no podman in (/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/fedora/.local/bin:/home/fedora/bin) >> >> >> >> I have tried a later version of atomic but the dependencies are closely coupled and it seems I must use version 27 - what am I missing? >> >> I'm following along with the Train documentation https://docs.openstack.org/magnum/train/install/launch-instance.html >> >> and was able to deploy a docker swarm cluster. >> >> >> >> thanks, >> >> Rob. >> >> >> >> >> >> >> The information contained and transmitted in this e-mail is confidential information, and is intended only for the named recipient to which it is addressed. The content of this e-mail may not have been sent with the authority of National College of Ireland. 
Any views or opinions presented are solely those of the author and do not necessarily represent those of National College of Ireland. If the reader of this message is not the named recipient or a person responsible for delivering it to the named recipient, you are notified that the review, dissemination, distribution, transmission, printing or copying, forwarding, or any other use of this message or any part of it, including any attachments, is strictly prohibited. If you have received this communication in error, please delete the e-mail and destroy all record of this communication. Thank you for your assistance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Fri Oct 9 10:18:17 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Fri, 9 Oct 2020 12:18:17 +0200 Subject: [cloudkitty] Core team cleanup In-Reply-To: References: Message-ID: Not having heard back from zhangguoqing, I have removed them from the cloudkitty-core group. On Fri, 2 Oct 2020 at 18:39, Rafael Weingärtner wrote: > > Sounds good to me. > > On Fri, Oct 2, 2020 at 1:38 PM Pierre Riteau wrote: >> >> I should have said that I will wait until the end of next week before >> doing the cleanup. >> >> On Fri, 2 Oct 2020 at 18:36, Rafael Weingärtner >> wrote: >> > >> > I guess it is fine to do the cleanup. Maybe, we could wait 24/48 hours before doing so; just to give enough time for the person to respond to the e-mail (if they are still active in the community somehow). >> > >> > On Fri, Oct 2, 2020 at 1:18 PM Pierre Riteau wrote: >> >> >> >> Hello, >> >> >> >> In the cloudkitty-core team there is zhangguoqing, whose email address >> >> (zhang.guoqing at 99cloud.net) is bouncing with "551 5.1.1 recipient is >> >> not exist". >> >> I propose to remove them from the core team. >> >> >> >> zhangguoqing, if you read us and want to stay in the team, please contact me. >> >> >> >> Best wishes, >> >> Pierre Riteau (priteau) >> >> >> > >> > >> > -- >> > Rafael Weingärtner > > > > -- > Rafael Weingärtner From rafaelweingartner at gmail.com Fri Oct 9 10:47:41 2020 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Fri, 9 Oct 2020 07:47:41 -0300 Subject: [cloudkitty] Core team cleanup In-Reply-To: References: Message-ID: Ok, thanks. Em sex, 9 de out de 2020 07:18, Pierre Riteau escreveu: > Not having heard back from zhangguoqing, I have removed them from the > cloudkitty-core group. > > On Fri, 2 Oct 2020 at 18:39, Rafael Weingärtner > wrote: > > > > Sounds good to me. > > > > On Fri, Oct 2, 2020 at 1:38 PM Pierre Riteau > wrote: > >> > >> I should have said that I will wait until the end of next week before > >> doing the cleanup. > >> > >> On Fri, 2 Oct 2020 at 18:36, Rafael Weingärtner > >> wrote: > >> > > >> > I guess it is fine to do the cleanup. Maybe, we could wait 24/48 > hours before doing so; just to give enough time for the person to respond > to the e-mail (if they are still active in the community somehow). > >> > > >> > On Fri, Oct 2, 2020 at 1:18 PM Pierre Riteau > wrote: > >> >> > >> >> Hello, > >> >> > >> >> In the cloudkitty-core team there is zhangguoqing, whose email > address > >> >> (zhang.guoqing at 99cloud.net) is bouncing with "551 5.1.1 recipient is > >> >> not exist". > >> >> I propose to remove them from the core team. > >> >> > >> >> zhangguoqing, if you read us and want to stay in the team, please > contact me. 
> >> >> > >> >> Best wishes, > >> >> Pierre Riteau (priteau) > >> >> > >> > > >> > > >> > -- > >> > Rafael Weingärtner > > > > > > > > -- > > Rafael Weingärtner > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkopec at redhat.com Fri Oct 9 11:41:58 2020 From: mkopec at redhat.com (Martin Kopec) Date: Fri, 9 Oct 2020 13:41:58 +0200 Subject: [tempest][nova][interop] Deletion of test_reboot_server_soft from Tempest Message-ID: Hello, we'd like to inform you that we are going to remove the following compute test by this commit [1]: test_reboot_server_soft with id 4640e3ef-a5df-482e-95a1-ceeeb0faa84d The reasons are: - Nova has switched to hard reboot if the guest is not responding, there is no way to see the difference from the API - minimum scenario test uses soft reboot and the nova functional test also covers reboot - the test has been skipped on Tempest side for more than 6 years - the test is not part of any latest guideline of interop [2] [1] https://review.opendev.org/#/c/647718 [2] http://codesearch.openstack.org/?q=test_reboot_server_soft&i=nope&files=&repos= Please let us know in case of any objection on this test removal. Regards, -- Martin Kopec Software Quality Engineer Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: From Akshay.346 at hsc.com Fri Oct 9 04:58:55 2020 From: Akshay.346 at hsc.com (Akshay 346) Date: Fri, 9 Oct 2020 04:58:55 +0000 Subject: OpenStack Ironic Issue Message-ID: Hello Team, I hope you all are good. I am using openstack ironic deployment and have some issues and some observations. These are: Issue: At the time of "openstack server create" for launching baremetal node, I came across the following multiple observations: - Sometimes when I launch baremetal node on openstack, after one time pxe booting, the baremetal node goes down again and then comes up and goes into second time booting and gets stuck there in "Probing" state ( Seen on node's console) BUT according to openstack horizon, it is up and running and according to "openstack baremetal node show", it is in "Active" state. - And sometimes when i launch baremetal node on openstack, after one time pxe booting, the baremetal node goes down again and then comes up, the "spawning" state on openstack horizon goes into ERROR. Error seen in "nova-compute-ironic-0" container is : "ERROR nova.compute.manager [instance: edd447c6-12ac-49ba-b0bc-f419aff4892a] nova.exception.InstanceDeployFailure: Failed to provision instance edd447c6-12ac-49ba-b0bc-f419aff4892a: Timeout reached while waiting for callback for node 75210cc4-ad98-442d-ace1-89ce69467580" - The baremetal node always takes near about 2 hours to be in "available" state from "cleaning" and "clean-wait". Is it correct behaviour ? Please guide me how to resolve this. Regards Akshay DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. 
Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: From preetr463 at gmail.com Fri Oct 9 09:32:30 2020 From: preetr463 at gmail.com (Raman Preet) Date: Fri, 9 Oct 2020 15:02:30 +0530 Subject: VIM in Pending issue for VNF Deployment. Message-ID: Hey. I am facing exactly the same issue as mentioned in this link. but on stein version of openstack http://opendiscussione.blogspot.com/2018/08/re-openstack-tacker-vim-status-pending.html Do you have any resolution for this.? Thanks. Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From t.schulze at tu-berlin.de Fri Oct 9 12:50:26 2020 From: t.schulze at tu-berlin.de (thoralf schulze) Date: Fri, 9 Oct 2020 14:50:26 +0200 Subject: [sdk] empty image dict if volume is attached Message-ID: <4d8436f7-8580-3e2a-1b2e-ca5d85ad64a2@tu-berlin.de> hi there, openstack instances that have a cinder volume attached as their primary block device won't return any information regarding the image that has been used during their creation: openstack server show -f shell -c image -c volumes_attached test1 image="" volumes_attached="id='c0650208-f39c-4427-a73b-e1aa88531016'" vs. openstack server show -f shell -c image -c volumes_attached test2 image="Ubuntu 20.04 LTS (5cf81857-0dba-497e-ad9a-23a88dd97506)" volumes_attached="" as a result, this information won't be available in ansible inventories created with the relevant plugin¹ either … this is unfortunate, since i need to set certain variables for ansible based on the image resp. the operating system an instance uses. now, the necessary information seems to be available as a volume's metadata: openstack volume show -f shell -c volume_image_metadata c0650208-f39c-4427-a73b-e1aa88531016 volume_image_metadata="{'signature_verified': 'False', 'hw_rng_model': 'virtio', 'hypervisor_type': 'qemu', 'os_distro': 'centos', 'os_type': 'linux', 'image_id': '1572c62f-438c-4a70-be84-973fef0d3c77', 'image_name': 'CentOS 8', 'checksum': 'd89eb49f2c264d29225cecf2b6c83322', 'container_format': 'bare', 'disk_format': 'qcow2', 'min_disk': '10', 'min_ram': '1024', 'size': '716176896'}" … would it be a good idea to extend openstacksdk to look for image_name in the volumes attached to an instance and use this value for the image key? thank you very much & with kind regards, t. ¹ - https://docs.ansible.com/ansible/latest/collections/openstack/cloud/openstack_inventory.html , this plugin makes use of openstacksdk -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_0x0B3511419EA8A168.asc Type: application/pgp-keys Size: 3913 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 840 bytes Desc: OpenPGP digital signature URL: From arne.wiebalck at cern.ch Fri Oct 9 13:02:46 2020 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Fri, 9 Oct 2020 15:02:46 +0200 Subject: [ironic] Re: OpenStack Ironic Issue In-Reply-To: References: Message-ID: <2335781e-0984-2a42-bcd6-a7e5c967a731@cern.ch> Hi Akshay, On 09.10.20 06:58, Akshay 346 wrote: > Hello Team, > > I hope you all are good. > > I am using openstack ironic deployment and have some issues and some > observations. 
These are: > > Issue:  At the time of "openstack server create" for launching baremetal > node, I came across the following multiple observations: > > - Sometimes when I launch baremetal node on openstack, after one time > pxe booting, the baremetal node goes down again and then comes up and > goes into second time booting and gets stuck there in "Probing" state ( > Seen on node's console) BUT according to openstack horizon, it is up and > running and according to "openstack baremetal node show", it is in > "Active" state. Right: in order to deploy a node, Ironic will boot the node via PXE into a ramdisk (with the Ironic Python Agent) to download and install the user image. Once this is done, it boots the node from the just installed disk. These are the two boot events you see. At the moment when Ironic boots the node the second time, Ironic is done with the deployment. At this stage the node moves to active, which means there is now a user instance on this node. Whether or not the node is able to boot from this image does not affect this state. > > - And sometimes when i launch  baremetal node on openstack, after one > time pxe booting, the baremetal node goes down again and then comes up, > the "spawning" state on openstack horizon  goes into ERROR. > > Error seen in "nova-compute-ironic-0" container is : > > "ERROR nova.compute.manager [instance: > edd447c6-12ac-49ba-b0bc-f419aff4892a] > nova.exception.InstanceDeployFailure: Failed to provision instance > edd447c6-12ac-49ba-b0bc-f419aff4892a: Timeout reached while waiting for > callback for node 75210cc4-ad98-442d-ace1-89ce69467580" In this case, something went wrong during the deployment. The Ironic deploy logs will give some hint about the cause. The specific error you quote looks like Ironic timed out waiting for the node to call back. When the deployment fails, Ironic may try to clean the node and this is the second boot you see. > - The baremetal node always takes near about 2 hours to be in > "available" state from "cleaning" and "clean-wait". Is it correct > behaviour ? That depends on how you configured cleaning, but if Ironic, for instance, needs to erase all disks, cleaning can take a while. If you have added your keys to the IPA image, you can log into the node while it is cleaning and actually check what it is doing. HTH, Arne -- Arne Wiebalck CERN IT From marios at redhat.com Fri Oct 9 14:34:19 2020 From: marios at redhat.com (Marios Andreou) Date: Fri, 9 Oct 2020 17:34:19 +0300 Subject: [tripleo][ptg] PTG reminder please add topics this week Message-ID: Hi tripleo friends, reminder: PTG is coming 26th October - if you'd like to discuss something there please add topics at https://etherpad.opendev.org/p/tripleo-wallaby-topics - at the moment it doesn't look like we'll need more than the first day to cover what we have there. Ideally I'd like to socialise a tentative schedule at the end of next week so we can make changes depending on folks availability. thanks for your help, marios -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Fri Oct 9 14:47:50 2020 From: marios at redhat.com (Marios Andreou) Date: Fri, 9 Oct 2020 17:47:50 +0300 Subject: [tripleo][ptg] Wallaby PTG timezone - 1300-1700 UTC Message-ID: Hi folks, Unless there is disagreement and we can find a better time that suits all, the current plan is to use the same timezone as we had in the last virtual PTG - that is 1300-1700 UTC ( https://etherpad.opendev.org/p/tripleo-ptg-wallaby ). 
Please speak up if you would like to propose and discuss an alternative thanks, marios -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Fri Oct 9 15:03:02 2020 From: smooney at redhat.com (Sean Mooney) Date: Fri, 09 Oct 2020 16:03:02 +0100 Subject: vPTG Oct 2020 Registration & Schedule In-Reply-To: References: <20201008143707.eaklm73zjf4swjsa@yuggoth.org> Message-ID: <26d8d7113181b676cf7565c433fbd94c6ab6e3cf.camel@redhat.com> On Thu, 2020-10-08 at 21:56 +0200, Pierre Riteau wrote: > Sean Mooney wrote something last time which can probably be reused by > just updating the csv: > http://lists.openstack.org/pipermail/openstack-discuss/2020-April/014217.html ya i can proably try to udpdate it too. i havent looked at how the new adgenda is published but i can proably parse it and generate them if needed. where is teh data located that it is generated from? > > On Thu, 8 Oct 2020 at 20:55, Kendall Nelson > wrote: > > > > This came up last time too lol. > > > > -Kendall (diablo_rojo) > > > > On Thu, Oct 8, 2020 at 7:38 AM Jeremy Stanley > > wrote: > > > > > > On 2020-10-08 10:24:17 -0400 (-0400), Artom Lifshitz wrote: > > > > Can I make a feature request? Would it be possible to get the > > > > calendar > > > > in iCal format (or whatever we can import/embed in our > > > > calendaring > > > > tool of choice), ideally with one calendar per project/track? > > > > It would > > > > help deal with time zones, and would be *really* appreciated. > > > [...] > > > > > > The live schedule at http://ptg.openstack.org/ptg.html is > > > continuously updated by an IRC bot, and its source code lives > > > here: > > > > > >     https://opendev.org/openstack/ptgbot > > > > > > If someone wants, it would probably not be hard to integrate or > > > lift > > > some of the code from this tool: > > > > > >     https://opendev.org/opendev/yaml2ical > > > > > > Just have the bot emit a ptg.ical file the same way it does its > > > ptg.json file and then add a link for it in the ptg.html file it > > > builds. > > > -- > > > Jeremy Stanley > From fungi at yuggoth.org Fri Oct 9 15:32:49 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 9 Oct 2020 15:32:49 +0000 Subject: vPTG Oct 2020 Registration & Schedule In-Reply-To: <26d8d7113181b676cf7565c433fbd94c6ab6e3cf.camel@redhat.com> References: <20201008143707.eaklm73zjf4swjsa@yuggoth.org> <26d8d7113181b676cf7565c433fbd94c6ab6e3cf.camel@redhat.com> Message-ID: <20201009153248.ierrgseagffe4ewa@yuggoth.org> On 2020-10-09 16:03:02 +0100 (+0100), Sean Mooney wrote: > On Thu, 2020-10-08 at 21:56 +0200, Pierre Riteau wrote: > > Sean Mooney wrote something last time which can probably be reused by > > just updating the csv: > > http://lists.openstack.org/pipermail/openstack-discuss/2020-April/014217.html > > ya i can proably try to udpdate it too. > i havent looked at how the new adgenda is published but i can proably > parse it and generate them if needed. > > where is teh data located that it is generated from? [...] The http://ptg.openstack.org/ptg.json file is serialized persistent state which ptgbot reads at start and then replaces on disk as it receives commands updating the schedule. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From juliaashleykreger at gmail.com Fri Oct 9 16:11:59 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Fri, 9 Oct 2020 09:11:59 -0700 Subject: [ironic] Re: OpenStack Ironic Issue In-Reply-To: <2335781e-0984-2a42-bcd6-a7e5c967a731@cern.ch> References: <2335781e-0984-2a42-bcd6-a7e5c967a731@cern.ch> Message-ID: On Fri, Oct 9, 2020 at 6:09 AM Arne Wiebalck wrote: > > Hi Akshay, > > On 09.10.20 06:58, Akshay 346 wrote: > > Hello Team, > > > > I hope you all are good. > > > > I am using openstack ironic deployment and have some issues and some > > observations. These are: > > > > Issue: At the time of "openstack server create" for launching baremetal > > node, I came across the following multiple observations: > > > > - Sometimes when I launch baremetal node on openstack, after one time > > pxe booting, the baremetal node goes down again and then comes up and > > goes into second time booting and gets stuck there in "Probing" state ( > > Seen on node's console) BUT according to openstack horizon, it is up and > > running and according to "openstack baremetal node show", it is in > > "Active" state. > > Right: in order to deploy a node, Ironic will boot the node via PXE > into a ramdisk (with the Ironic Python Agent) to download and install > the user image. Once this is done, it boots the node from the just > installed disk. These are the two boot events you see. > > At the moment when Ironic boots the node the second time, Ironic is done > with the deployment. At this stage the node moves to active, which means > there is now a user instance on this node. Whether or not the node is > able to boot from this image does not affect this state. > > > > > - And sometimes when i launch baremetal node on openstack, after one > > time pxe booting, the baremetal node goes down again and then comes up, > > the "spawning" state on openstack horizon goes into ERROR. > > > > Error seen in "nova-compute-ironic-0" container is : > > > > "ERROR nova.compute.manager [instance: > > edd447c6-12ac-49ba-b0bc-f419aff4892a] > > nova.exception.InstanceDeployFailure: Failed to provision instance > > edd447c6-12ac-49ba-b0bc-f419aff4892a: Timeout reached while waiting for > > callback for node 75210cc4-ad98-442d-ace1-89ce69467580" > One thing worth noting is callback timeout failures are typically a result of the physical networking or some process involving the physical infrastucture. A good first step is to watch the physical machine's console if you can and see if it network boots. The next step is to make sure it is actually able to perform it's lookup and heartbeat operation to the ironic API. Routing issues or firewall issues from your provisioning network to your API endpoints can cause deployments to fail like this. > In this case, something went wrong during the deployment. The Ironic > deploy logs will give some hint about the cause. The specific error > you quote looks like Ironic timed out waiting for the node to call > back. > When the deployment fails, Ironic may try to clean the node and this > is the second boot you see. > > > - The baremetal node always takes near about 2 hours to be in > > "available" state from "cleaning" and "clean-wait". Is it correct > > behaviour ? > > That depends on how you configured cleaning, but if Ironic, for > instance, needs to erase all disks, cleaning can take a while. 
> If you have added your keys to the IPA image, you can log into the > node while it is cleaning and actually check what it is doing. > > HTH, > Arne > > -- > Arne Wiebalck > CERN IT > From smooney at redhat.com Fri Oct 9 16:25:21 2020 From: smooney at redhat.com (Sean Mooney) Date: Fri, 09 Oct 2020 17:25:21 +0100 Subject: vPTG Oct 2020 Registration & Schedule In-Reply-To: <20201009153248.ierrgseagffe4ewa@yuggoth.org> References: <20201008143707.eaklm73zjf4swjsa@yuggoth.org> <26d8d7113181b676cf7565c433fbd94c6ab6e3cf.camel@redhat.com> <20201009153248.ierrgseagffe4ewa@yuggoth.org> Message-ID: <9cf90be21270d5b1c1f73d116e5743e296ea0f7b.camel@redhat.com> On Fri, 2020-10-09 at 15:32 +0000, Jeremy Stanley wrote: > On 2020-10-09 16:03:02 +0100 (+0100), Sean Mooney wrote: > > On Thu, 2020-10-08 at 21:56 +0200, Pierre Riteau wrote: > > > Sean Mooney wrote something last time which can probably be > > > reused by > > > just updating the csv: > > > http://lists.openstack.org/pipermail/openstack-discuss/2020-April/014217.html > > > > ya i can proably try to udpdate it too. > > i havent looked at how the new adgenda is published but i can > > proably > > parse it and generate them if needed. > > > > where is teh data located that it is generated from? > [...] > > The http://ptg.openstack.org/ptg.json file is serialized persistent > state which ptgbot reads at start and then replaces on disk as it > receives commands updating the schedule. thansk the json file will be much simpler to parse then the csv my script currently is set up for. ill try and see if i can update data it to work with that at teh weekend. From sean.mcginnis at gmx.com Fri Oct 9 16:25:22 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 9 Oct 2020 11:25:22 -0500 Subject: [release] Release countdown for week R-0 Oct 12-16 Message-ID: <20201009162522.GA801888@sm-workstation> Development Focus ----------------- We will be releasing the coordinated OpenStack Victoria release next week, on October 14. Thanks to everyone involved in the Victoria cycle! We are now in pre-release freeze, so no new deliverable will be created until final release, unless a release-critical regression is spotted. Otherwise, teams attending the virtual PTG should start to plan what they will be discussing there, by creating and filling team etherpads. You can access the list of PTG etherpads at: http://ptg.openstack.org/etherpads.html General Information ------------------- On release day, the release team will produce final versions of deliverables following the cycle-with-rc release model, by re-tagging the commit used for the last RC. A patch doing just that will be proposed. PTLs and release liaisons should watch for that final release patch from the release team. While not required, we would appreciate having an ack from each team before we approve it on the 16th, so that their approval is included in the metadata that goes onto the signed tag. Upcoming Deadlines & Dates -------------------------- Final Victoria release: October 14 Open Infra Summit: October 19-23 Wallaby PTG: October 26-30 From beagles at redhat.com Fri Oct 9 16:36:59 2020 From: beagles at redhat.com (Brent Eagles) Date: Fri, 9 Oct 2020 14:06:59 -0230 Subject: [tripleo] deprecating Mistral service In-Reply-To: References: Message-ID: Hi, On Thu, Oct 8, 2020 at 10:06 AM Emilien Macchi wrote: > Hi folks, > > In our long term goal to simplify TripleO and deprecate the services that > aren't used by our community anymore, I propose that we deprecate Mistral > services. 
> Mistral was used on the Undercloud in the previous cycles but not anymore. > While the service could be deployed on the Overcloud, we haven't seen any > of our users doing it. If that would be the case, please let us know as > soon as possible. > Removing it from TripleO will help us with maintenance (container images, > THT/puppet integration, CI, etc). > Maybe we could deprecate it in Victoria and remove it in Wallaby? > > Thanks, > -- > Emilien Macchi > +1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Fri Oct 9 16:53:34 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 9 Oct 2020 18:53:34 +0200 Subject: [Openstack][cinder] dell unity iscsi faulty devices Message-ID: Hello Stackers, I am using dell emc iscsi driver on my centos 7 queens openstack. It works and instances work as well but on compute nodes I got a lot a faulty device reported by multipath il comand. I do know why this happens, probably attacching and detaching volumes and live migrating instances do not close something well. I read this can cause serious performances problems on compute nodes. Please, any workaround and/or patch is suggested ? Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From feilong at catalyst.net.nz Fri Oct 9 17:27:56 2020 From: feilong at catalyst.net.nz (feilong) Date: Sat, 10 Oct 2020 06:27:56 +1300 Subject: [Magnum][kolla-ansible][kayobe] Information gathering for 2 blocking issues In-Reply-To: References: Message-ID: <85b58630-9342-e0f2-c4fb-57d25fd0aebd@catalyst.net.nz> Hi Tony, Firstly, please reminder me what's your Magnum version. I would suggest use stable/victoria or at least stable/train. As for the output you posted: | eaccb23e-226b-4a36-a2a5-5b9d9bbb4fea | delta-23-cluster-p4khbhat4sgp-node-2 | ACTIVE | private=10.0.0.250, 192.168.20.71 | Fedora-Atomic-27 | kube_node_10gb | | c7d42436-7c27-4a11-897e-d09eb716e9b9 | delta-23-cluster-p4khbhat4sgp-node-0 | ACTIVE | private=10.0.0.88, 192.168.20.52 | Fedora-Atomic-27 | kube_node_10gb | Given you can still see the image name from the OpenStack instance list command, that means the config "boot_volume_size = 10" is not being used for some reasons. In other words, I think I'm confident that the config should resolve your local disk consuming issue. But there is another issue which is preventing it works. On 16/09/20 12:58 am, Tony Pearce wrote: > Hi Ionut, thank you for your reply. Do you know if this configuration > prevents consuming of local disk on the compute node for instance > storage eg OS or swap etc? > > Kind regards > > On Tue, 15 Sep 2020, 20:53 Ionut Biru, > wrote: > > Hi, > > To boot minions or master from volume, I use the following labels: > > boot_volume_size = 20  > boot_volume_type = ssd  > availability_zone = nova                   > > volume type and zone might differ on your setup. > >                    > > On Tue, Sep 15, 2020 at 11:23 AM Tony Pearce > wrote: > > Hi Feilong, I hope you are keeping well.  > > Thank you for sticking with me on this issue to try and help > me here. I really appreciate it!  > > I tried creating a new flavour like you suggested and using > 10GB for root volume [1]. The cluster does start to be created > (no error about 0mb disk) but while being created, I can check > the compute node and see that the instance disk is being > provisioned on the compute node [2]. I assume that this is the > 10GB root volume that is specified in the flavour.  
> > When I list the volumes which have been created, I do not see > the 10GB disk allocated on the compute node, but I do see the > iSCSI network volume that has been created and attached to the > instance (eg master) [3]. This is 15GB volume and this 15GB is > coming from the kubernetes cluster template, under "Docker > Volume Size (GB)" in the "node spec" section. There is very > little data written to this volume at the time of master > instance booted.  > > Eventually, kube cluster failed to create with error "Status > Create_Failed: Resource CREATE failed: Error: > resources.kube_minions.resources[0].resources.node_config_deployment: > Deployment to server failed: deploy_status_code: Deployment > exited with non-zero status code: 1". I'll try and find the > root cause of this later.   > > What are your thoughts on this outcome? Is it possible to > avoid consuming compute node disk? I require it because it > cannot scale. > > [1] http://paste.openstack.org/show/797862/ > [2] http://paste.openstack.org/show/797865/ > [3] http://paste.openstack.org/show/797863/ > > Kind regards, > Tony > > Tony Pearce > > > > On Mon, 14 Sep 2020 at 17:44, feilong > wrote: > > Hi Tony, > > Does your Magnum support this config > https://github.com/openstack/magnum/blob/master/magnum/conf/cinder.py#L47 > can you try to change it from 0 to 10? 10 means the root > disk volume size for the k8s node. By default the 0 means > the node will be based on image instead of volume. > > > On 14/09/20 9:37 pm, Tony Pearce wrote: >> Hi Feilong, sure. The flavour I used has 2 CPU and 2GB >> memory. All other values either unset or 0mb.  >> I also used the same fedora 27 image that is being used >> for the kubernetes cluster.  >> >> Thank you >> Tony >> >> On Mon, 14 Sep 2020, 17:20 feilong, >> > > wrote: >> >> Hi Tony, >> >> Could you please let me know  your flavor details? I >> would like to test it in my devstack environment >> (based on LVM). Thanks. >> >> >> On 14/09/20 8:27 pm, Tony Pearce wrote: >>> Hi feilong, hope you are keeping well. Thank you for >>> the info!   >>> >>> For issue 1. Maybe this should be with the >>> kayobe/kolla-ansible team. Thanks for the insight :)  >>> >>> For the 2nd one, I was able to run the HOT template >>> in your link. There's no issues at all running that >>> multiple times concurrently while using the 0MB disk >>> flavour. I tried four times with the last three >>> executing one after the other so that they ran >>> parallelly.  All were successful and completed and >>> did not complain about the 0MB disk issue.  >>> >>> Does this conclude that the error and create-failed >>> issue relates to Magnum or could you suggest other >>> steps to test on my side?  >>> >>> Best regards, >>> >>> Tony Pearce >>> >>> >>> >>> >>> On Thu, 10 Sep 2020 at 16:01, feilong >>> >> > wrote: >>> >>> Hi Tony, >>> >>> Sorry for the late response for your thread. >>> >>> For you HTTPS issue, we (Catalyst Cloud) are >>> using Magnum with HTTPS and it works. >>> >>> For the 2nd issue, I think we were >>> misunderstanding the nodes disk capacity. I was >>> assuming you're talking about the k8s nodes, but >>> seems you're talking about the physical compute >>> host. I still don't think it's a Magnum issue >>> because a k8s master/worker nodes are just >>> normal Nova instances and managed by Heat. 
So I >>> would suggest you use a simple HOT to test it, >>> you can use this >>> https://gist.github.com/openstacker/26e31c9715d52cc502397b65d3cebab6 >>> >>> Most of the cloud providers or organizations who >>> have adopted Magnum are using Ceph as far as I >>> know, just FYI. >>> >>> >>> On 10/09/20 4:35 pm, Tony Pearce wrote: >>>> Hi all, hope you are all keeping safe and well. >>>> I am looking for information on the following >>>> two issues that I have which surrounds Magnum >>>> project: >>>> >>>> 1. Magnum does not support Openstack API with HTTPS >>>> 2. Magnum forces compute nodes to consume disk >>>> capacity for instance data >>>> >>>> My environment: Openstack Train deployed using >>>> Kayobe (Kolla-ansible).  >>>> >>>> With regards to the HTTPS issue, Magnum stops >>>> working after enabling HTTPS because the >>>> certificate / CA certificate is not trusted by >>>> Magnum. The certificate which I am using is one >>>> that was purchased from GoDaddy and is trusted >>>> in web browsers (and is valid), just not >>>> trusted by the Magnum component.  >>>> >>>> Regarding compute node disk consumption issue - >>>> I'm at a loss with regards to this and so I'm >>>> looking for more information about why this is >>>> being done and is there any way that I could >>>> avoid it?  I have storage provided by a Cinder >>>> integration and so the consumption of compute >>>> node disk for instance data I need to avoid.  >>>> >>>> Any information the community could provide to >>>> me with regards to the above would be much >>>> appreciated. I would very much like to use the >>>> Magnum project in this deployment for >>>> Kubernetes deployment within projects.  >>>> >>>> Thanks in advance,  >>>> >>>> Regards, >>>> >>>> Tony >>> >>> -- >>> Cheers & Best regards, >>> Feilong Wang (王飞龙) >>> ------------------------------------------------------ >>> Senior Cloud Software Engineer >>> Tel: +64-48032246 >>> Email: flwang at catalyst.net.nz >>> Catalyst IT Limited >>> Level 6, Catalyst House, 150 Willis Street, Wellington >>> ------------------------------------------------------ >>> >> -- >> Cheers & Best regards, >> Feilong Wang (王飞龙) >> ------------------------------------------------------ >> Senior Cloud Software Engineer >> Tel: +64-48032246 >> Email: flwang at catalyst.net.nz >> Catalyst IT Limited >> Level 6, Catalyst House, 150 Willis Street, Wellington >> ------------------------------------------------------ >> > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > ------------------------------------------------------ > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > ------------------------------------------------------ > > > > -- > Ionut Biru - https://fleio.com > -- Cheers & Best regards, Feilong Wang (王飞龙) ------------------------------------------------------ Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kevin at cloudnull.com Fri Oct 9 19:55:21 2020 From: kevin at cloudnull.com (Carter, Kevin) Date: Fri, 9 Oct 2020 14:55:21 -0500 Subject: [tripleo] deprecating Mistral service In-Reply-To: References: Message-ID: +1 On Thu, Oct 8, 2020 at 07:37 Emilien Macchi wrote: > Hi folks, > > In our long term goal to simplify TripleO and deprecate the services that > aren't used by our community anymore, I propose that we deprecate Mistral > services. > Mistral was used on the Undercloud in the previous cycles but not anymore. > While the service could be deployed on the Overcloud, we haven't seen any > of our users doing it. If that would be the case, please let us know as > soon as possible. > Removing it from TripleO will help us with maintenance (container images, > THT/puppet integration, CI, etc). > Maybe we could deprecate it in Victoria and remove it in Wallaby? > > Thanks, > > -- > Emilien Macchi > -- Kevin Carter IRC: Cloudnull -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sat Oct 10 00:02:40 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 09 Oct 2020 19:02:40 -0500 Subject: [all][tc][goals] Migrate CI/CD jobs to new Ubuntu LTS Focal : Final update Message-ID: <1750fd10ff4.11eeb40d62187.4995733906513686670@ghanshyammann.com> Hello Everyone, With the Integration testing moved to Ubuntu Focal, I am marking the 'Ubuntu Focal migration' community goal as complete. Tracking: https://storyboard.openstack.org/#!/story/2007865 Summary: ======== * Lot of work was involved in this. There are many changes required for this migration as compare to previous migration from Xenial to Bionic. ** Many dependencies lower constraint compatible with py3.8 and Focal distro had to be updated. Almost in all the repos. ** Mysql 8.0 caused many incompatible DB issues. * I missed the original deadline of m-2 to complete this goal due to falling gates and to avoid any gate block for any projects. * More than 300 repo had been fixed or tested in advance before migration happened. This helped a lot to keep gate green during this migration. * We had to keep few jobs in Bionic due to time constraints and move the rest of the other projects jobs on Focal. These are under work by the respective team. ** Manila, freezer-tempest, magnum-tempest-plugin-tests-api, ironic-python-agent-builder due to diskimage-builder[1] etc * Deployment projects are communicated (openstack-helm, tripleo if needed) or already in-progress (puppet-openstack) for this migration and the respective team need to check if any updates needed. I would like to convey special Thanks to everyone who helped in this goal and made it possible to complete it in Victoria cycle itself. [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-October/017843.html -gmann From Arkady.Kanevsky at dell.com Sat Oct 10 19:32:23 2020 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Sat, 10 Oct 2020 19:32:23 +0000 Subject: [all][tc][goals] Migrate CI/CD jobs to new Ubuntu LTS Focal : Final update In-Reply-To: <1750fd10ff4.11eeb40d62187.4995733906513686670@ghanshyammann.com> References: <1750fd10ff4.11eeb40d62187.4995733906513686670@ghanshyammann.com> Message-ID: Great job Ghanshyam and everybody. Thank you! 
-----Original Message----- From: Ghanshyam Mann Sent: Friday, October 9, 2020 7:03 PM To: openstack-discuss Subject: [all][tc][goals] Migrate CI/CD jobs to new Ubuntu LTS Focal : Final update [EXTERNAL EMAIL] Hello Everyone, With the Integration testing moved to Ubuntu Focal, I am marking the 'Ubuntu Focal migration' community goal as complete. Tracking: https://storyboard.openstack.org/#!/story/2007865 Summary: ======== * Lot of work was involved in this. There are many changes required for this migration as compare to previous migration from Xenial to Bionic. ** Many dependencies lower constraint compatible with py3.8 and Focal distro had to be updated. Almost in all the repos. ** Mysql 8.0 caused many incompatible DB issues. * I missed the original deadline of m-2 to complete this goal due to falling gates and to avoid any gate block for any projects. * More than 300 repo had been fixed or tested in advance before migration happened. This helped a lot to keep gate green during this migration. * We had to keep few jobs in Bionic due to time constraints and move the rest of the other projects jobs on Focal. These are under work by the respective team. ** Manila, freezer-tempest, magnum-tempest-plugin-tests-api, ironic-python-agent-builder due to diskimage-builder[1] etc * Deployment projects are communicated (openstack-helm, tripleo if needed) or already in-progress (puppet-openstack) for this migration and the respective team need to check if any updates needed. I would like to convey special Thanks to everyone who helped in this goal and made it possible to complete it in Victoria cycle itself. [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-October/017843.html -gmann From ekultails at gmail.com Sat Oct 10 22:27:10 2020 From: ekultails at gmail.com (Luke Short) Date: Sat, 10 Oct 2020 16:27:10 -0600 Subject: [tripleo] deprecating Mistral service In-Reply-To: References: Message-ID: +1 I believe it is good for us to focus on the core OpenStack services that still have an active community and usage in production. For better or worse, Mistral has dropped in popularity due to other similar swiss-army-knife workflow automation tools. On Thu, Oct 8, 2020, 6:34 AM Emilien Macchi wrote: > Hi folks, > > In our long term goal to simplify TripleO and deprecate the services that > aren't used by our community anymore, I propose that we deprecate Mistral > services. > Mistral was used on the Undercloud in the previous cycles but not anymore. > While the service could be deployed on the Overcloud, we haven't seen any > of our users doing it. If that would be the case, please let us know as > soon as possible. > Removing it from TripleO will help us with maintenance (container images, > THT/puppet integration, CI, etc). > Maybe we could deprecate it in Victoria and remove it in Wallaby? > > Thanks, > -- > Emilien Macchi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.wittling at gmail.com Sat Oct 10 22:25:39 2020 From: mark.wittling at gmail.com (Mark Wittling) Date: Sat, 10 Oct 2020 18:25:39 -0400 Subject: Fwd: DPDK+OVS with OpenStack In-Reply-To: References: Message-ID: Looking for someone who knows OpenStack with OpenVSwitch, and in addition to that, DPDK with OpenStack and OVS.I am using OpenStack Queens, with OpenVSwitch. 
The architecture I am using is documented here:
https://docs.openstack.org/neutron/queens/admin/deploy-ovs-provider.html

The OVS I am using on the Compute Node is compiled with DPDK, and I have enabled the netdev (DPDK) datapath on br-prv (provider network bridge) and br-tun (tunneling bridge). But these two bridges, br-tun and br-prv, are patched into another OpenStack bridge, called br-int. I wasn’t actually sure about whether to tinker with this bridge, and wondered what datapath it was using. Then, I realized there is a parameter in the openvswitch_agent.ini file, which I will list here:

# OVS datapath to use. 'system' is the default value and corresponds to the
# kernel datapath. To enable the userspace datapath set this value to 'netdev'.
# (string value)
# Possible values:
# system -
# netdev -
#datapath_type = system
datapath_type = netdev

So in tinkering with this, what I realized is that when you set this datapath_type to system or netdev, it will adjust the br-int bridge to that datapath type.

So here is my question: how can I launch a non-DPDK VM, if all of the bridges are using the netdev datapath type?

Here is another question: what if one of the flavors doesn’t have the largepages property set on it? I assumed OpenStack would revert to a system datapath and not use DPDK for those VM interfaces. Well, I found out in testing, that is not the case. If you set all your bridges up for netdev, and you don’t set the property on the Flavor of the VM (largepages), the VM will launch, but it simply won’t work.

Is there no way, on a specific Compute Host, to support both DPDK (netdev datapaths) and non-DPDK (system datapaths)? Either on a VM interface level (VM has one interface that is netdev DPDK and another that is system datapath non-DPDK)? Or on a VM by VM basis (VM 1 has 1 or more netdev datapath interfaces and VM 2 has 1 or more system datapath interfaces)?

Am I right here? Once you set up a Compute Host for DPDK, it’s DPDK or nothing on that Compute Host?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From fungi at yuggoth.org Sun Oct 11 23:50:16 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 11 Oct 2020 23:50:16 +0000 Subject: [all][elections][ptl][tc] Combined PTL/TC Voting 48 Hours Remain Message-ID: <20201011235016.gpzhlukjqsq65ms6 at yuggoth.org> We are coming down to the last hours for voting in the TC and Telemetry elections. Voting ends Oct 13, 2020 23:45 UTC. Search your Gerrit preferred email address[0] for the messages with subjects like... Poll: OpenStack Wallaby Cycle Technical Committee Election Poll That is your ballot and links you to the voting application. Please vote. If you have voted, please encourage your colleagues to vote. Candidate statements are linked to the names of all confirmed candidates: https://governance.openstack.org/election/ What to do if you don't see the email and have a commit in at least one of the official project teams' deliverable repositories[1]: * check the trash of your Gerrit Preferred Email address[0], in case it went into trash or spam * find the ID of at least one commit merged to an official deliverable repo[1] over the current or previous cycle, confirm you are an OpenStack Foundation Individual Member[2], and then email the election officials[3] or get in touch in the #openstack-elections channel on the Freenode IRC network. If we can confirm that you are entitled to vote, we will add you to the voters list and you will be emailed a ballot. Please vote!
Thank you, [0] Sign into review.openstack.org and go to Settings > Contact Information. Look at the email listed as your Preferred Email. That is where the ballot has been sent. [1] https://opendev.org/openstack/governance/src/tag/0.8.0/reference/projects.yaml [2] https://www.openstack.org/profile/ [3] https://governance.openstack.org/election/#election-officials -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From dangerzonen at gmail.com Mon Oct 12 06:15:24 2020 From: dangerzonen at gmail.com (dangerzone ar) Date: Mon, 12 Oct 2020 14:15:24 +0800 Subject: [Openstack][Cinder] Message-ID: Hi Team, May I know if anyone has deployed Huawei OceanStor SAN storage during an overcloud deployment? (i) Do you need to define or download a specific driver in order to deploy it? (ii) Can I deploy the SAN storage separately after the overcloud deployment? I mean, if a month after the OpenStack deployment I want to add SAN storage to my infrastructure, is that possible? (iii) Please advise me how to deploy Huawei OceanStor SAN storage. Please advise further. Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From t.schulze at tu-berlin.de Mon Oct 12 08:17:08 2020 From: t.schulze at tu-berlin.de (thoralf schulze) Date: Mon, 12 Oct 2020 10:17:08 +0200 Subject: [sdk] empty image dict if volume is attached In-Reply-To: <4d8436f7-8580-3e2a-1b2e-ca5d85ad64a2@tu-berlin.de> References: <4d8436f7-8580-3e2a-1b2e-ca5d85ad64a2@tu-berlin.de> Message-ID: <1e244c43-d8bb-7056-39c1-35932df6e515@tu-berlin.de> hi there, On 10/9/20 2:50 PM, thoralf schulze wrote: > openstack instances that have a cinder volume attached as their > primary block device won't return any information regarding the image > that has been used during their creation: […] > … would it be a good idea to extend openstacksdk to look for > image_name in the volumes attached to an instance and use this value > for the image key? Before I start hacking something together: is this behaviour intended, and will implementing the change outlined above break anything else? thank you very much & with kind regards, t. -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_0x0B3511419EA8A168.asc Type: application/pgp-keys Size: 3913 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 840 bytes Desc: OpenPGP digital signature URL: From zhangbailin at inspur.com Mon Oct 12 09:37:12 2020 From: zhangbailin at inspur.com (Brin Zhang(张百林)) Date: Mon, 12 Oct 2020 09:37:12 +0000 Subject: Re: [sent via lists.openstack.org] Re: [placement][nova][cinder][neutron][blazar][tc][zun] Placement governance switch(back) In-Reply-To: References: <51776398bc77d6ec24a230ab4c2a5913@sslemail.net> Message-ID: <3e09b1015433411e9752d3543d63d331@inspur.com> Hi gibi, I am a contributor to nova and placement and have worked on many features since the Stein release. I would like to take on the role of *Release liaison*[1] in Placement to help more people.
Brin Zhang -----Original Message----- From: Balázs Gibizer [mailto:balazs.gibizer at est.tech] Sent: 5 October 2020 23:40 To: Luigi Toscano Cc: openstack-discuss Subject: [sent via lists.openstack.org] Re: [placement][nova][cinder][neutron][blazar][tc][zun] Placement governance switch(back) On Mon, Sep 28, 2020 at 13:04, Balázs Gibizer wrote: > > > On Thu, Sep 24, 2020 at 18:12, Luigi Toscano > wrote: >> On Thursday, 24 September 2020 17:23:36 CEST Stephen Finucane wrote: >> >>> Assuming no one steps forward for the Placement PTL role, it would >>> appear to me that we have two options. Either we look at >>> transitioning Placement to a PTL-less project, or we move it back >>> under nova governance. To be honest, given how important placement >>> is to nova and other projects now, I'm uncomfortable with the >>> idea of not having a point person who is ultimately responsible for >>> things like cutting a release (yes, delegation is encouraged but >>> someone needs to herd the cats). At the same time, I do realize >>> that placement is used by more that nova now so nova cores and >>> what's left of the separate placement core team shouldn't be the >>> only ones making this decision. >>> >>> So, assuming the worst happens and placement is left without a PTL >>> for Victoria, what do we want to do? >> >> I mentioned this on IRC, but just for completeness, there is another >> option: >> have the Nova candidate PTL (I assume there is just one) also apply >> for Placement PTL, and handle the 2 realms in a personal union. > > As far as I know I'm the only nova PTL candidate so basically you > asking me to take the Placement PTL role as well. This is a valid > option. Still, first, I would like to give a chance to the DPL concept > in Placement in a way yoctozepto suggested. Bump. Do we have 2-3 developers interested in running the Placement project in distributed project leadership[1] mode in Wallaby? [1] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html > > Cheers, > gibi > >> >> Ciao >> -- >> Luigi >> >> >> >> >> > > > From hberaud at redhat.com Mon Oct 12 11:21:01 2020 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 12 Oct 2020 13:21:01 +0200 Subject: [release][ptg] Wallaby PTG planning Message-ID: Hello, As you know, our PTG starts on 26th October. We already have some topics to bring there; if you'd like to discuss something else during the session, please add topics at https://etherpad.opendev.org/p/relmgmt-wallaby-ptg . If you plan to attend our meeting, don't hesitate to add your name to the attendee list. I already proposed a date, but if it doesn't fit your availability don't hesitate to propose another date.
Cheers, -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliver.wenz at dhbw-mannheim.de Mon Oct 12 11:28:10 2020 From: oliver.wenz at dhbw-mannheim.de (Oliver Wenz) Date: Mon, 12 Oct 2020 13:28:10 +0200 Subject: [Ussuri] [openstack-ansible] [cinder] Can't attach volumes to instances In-Reply-To: References: Message-ID: > I recommend you run the attach request with the --debug flag to get the > request id, that way you can easily track the request and see where it > failed. > > Then you check the logs like Dmitriy mentions and see where things > failed. Running with the --debug flag returned the following ``` Starting new HTTP connection (1): 192.168.110.201:8774 http://192.168.110.201:8774 "GET /v2.1/servers/2fd482ac-7a93-4626-955f-77b78f783d54 HTTP/1.1" 200 1781 RESP: [200] Connection: close Content-Length: 1781 Content-Type: application/json OpenStack-API-Version: compute 2.1 Vary: OpenStack-API-Version, X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-56b2bf8f-f338-4d25-a54e-ce9cfb4e71d6 x-openstack-request-id: req-56b2bf8f-f338-4d25-a54e-ce9cfb4e71d6 RESP BODY: {"server": {"id": "2fd482ac-7a93-4626-955f-77b78f783d54", "name": "ubuntu_test", "status": "ACTIVE", "tenant_id": "5fbba868d04b47fa87a72b9dd821ee12", "user_id": "f78c008ff22d40d2870f1c34919c93ad", "metadata": {}, "hostId": "7d7e25a2fe22479152026feff6bd71cf32a46ea2ee78ec841c5973f8", "image": {"id": "84e94029-c737-4bb6-84a7-195002e6dbe9", "links": [{"rel": "bookmark", "href": "http://192.168.110.201:8774/images/84e94029-c737-4bb6-84a7-195002e6dbe9"}]}, "flavor": {"id": "94b75111-dfa2-4565-af5b-64e25e220861", "links": [{"rel": "bookmark", "href": "http://192.168.110.201:8774/flavors/94b75111-dfa2-4565-af5b-64e25e220861"}]}, "created": "2020-10-07T09:48:09Z", "updated": "2020-10-07T10:03:23Z", "addresses": {"test-001": [{"version": 4, "addr": "192.168.32.170", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:9b:a5:fe"}, {"version": 4, "addr": "192.168.113.29", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:9b:a5:fe"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "http://192.168.110.201:8774/v2.1/servers/2fd482ac-7a93-4626-955f-77b78f783d54"}, {"rel": "bookmark", "href": "http://192.168.110.201:8774/servers/2fd482ac-7a93-4626-955f-77b78f783d54"}], "OS-DCF:diskConfig": "AUTO", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "key_name": "hiwi-PC", "OS-SRV-USG:launched_at": "2020-10-07T10:03:22.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": 
"bc1blade14", "OS-EXT-SRV-ATTR:instance_name": "instance-00000007", "OS-EXT-SRV-ATTR:hypervisor_hostname": "bc1blade14.openstack.local", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} GET call to compute for http://192.168.110.201:8774/v2.1/servers/2fd482ac-7a93-4626-955f-77b78f783d54 used request id req-56b2bf8f-f338-4d25-a54e-ce9cfb4e71d6 REQ: curl -g -i -X GET http://192.168.110.201:8776/v3/5fbba868d04b47fa87a72b9dd821ee12/volumes/a9e188a4-7981-4174-a31d-a258de2a7b3d -H "Accept: application/json" -H "User-Agent: python-cinderclient" -H "X-Auth-Token: {SHA256}7fa6e3ac57b32176603c393dd97a5ea04813739efb0d1d4bff4f9680ede427c1" Starting new HTTP connection (1): 192.168.110.201:8776 http://192.168.110.201:8776 "GET /v3/5fbba868d04b47fa87a72b9dd821ee12/volumes/a9e188a4-7981-4174-a31d-a258de2a7b3d HTTP/1.1" 200 1033 RESP: [200] Connection: close Content-Length: 1033 Content-Type: application/json OpenStack-API-Version: volume 3.0 Vary: OpenStack-API-Version x-compute-request-id: req-3d35fab2-88b2-46cf-a39a-f8942f3ee1c1 x-openstack-request-id: req-3d35fab2-88b2-46cf-a39a-f8942f3ee1c1 RESP BODY: {"volume": {"id": "a9e188a4-7981-4174-a31d-a258de2a7b3d", "status": "available", "size": 30, "availability_zone": "nova", "created_at": "2020-10-07T11:38:01.000000", "updated_at": "2020-10-12T09:57:53.000000", "attachments": [], "name": "test_volume_002", "description": "", "volume_type": "lvm", "snapshot_id": null, "source_volid": null, "metadata": {}, "links": [{"rel": "self", "href": "http://192.168.110.201:8776/v3/5fbba868d04b47fa87a72b9dd821ee12/volumes/a9e188a4-7981-4174-a31d-a258de2a7b3d"}, {"rel": "bookmark", "href": "http://192.168.110.201:8776/5fbba868d04b47fa87a72b9dd821ee12/volumes/a9e188a4-7981-4174-a31d-a258de2a7b3d"}], "user_id": "f78c008ff22d40d2870f1c34919c93ad", "bootable": "false", "encrypted": false, "replication_status": null, "consistencygroup_id": null, "multiattach": false, "migration_status": null, "os-vol-tenant-attr:tenant_id": "5fbba868d04b47fa87a72b9dd821ee12", "os-vol-host-attr:host": "bc1bl10 at lvm#LVM_iSCSI", "os-vol-mig-status-attr:migstat": null, "os-vol-mig-status-attr:name_id": null}} GET call to volumev3 for http://192.168.110.201:8776/v3/5fbba868d04b47fa87a72b9dd821ee12/volumes/a9e188a4-7981-4174-a31d-a258de2a7b3d used request id req-3d35fab2-88b2-46cf-a39a-f8942f3ee1c1 REQ: curl -g -i -X POST http://192.168.110.201:8774/v2.1/servers/2fd482ac-7a93-4626-955f-77b78f783d54/os-volume_attachments -H "Accept: application/json" -H "Content-Type: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}7fa6e3ac57b32176603c393dd97a5ea04813739efb0d1d4bff4f9680ede427c1" -H "X-OpenStack-Nova-API-Version: 2.1" -d '{"volumeAttachment": {"volumeId": "a9e188a4-7981-4174-a31d-a258de2a7b3d", "device": "/dev/vdb"}}' Resetting dropped connection: 192.168.110.201 http://192.168.110.201:8774 "POST /v2.1/servers/2fd482ac-7a93-4626-955f-77b78f783d54/os-volume_attachments HTTP/1.1" 200 194 RESP: [200] Connection: close Content-Length: 194 Content-Type: application/json OpenStack-API-Version: compute 2.1 Vary: OpenStack-API-Version, X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-87ba31ae-3511-41be-a2b9-f6a0ccdb092a x-openstack-request-id: req-87ba31ae-3511-41be-a2b9-f6a0ccdb092a RESP BODY: {"volumeAttachment": {"id": "a9e188a4-7981-4174-a31d-a258de2a7b3d", "serverId": "2fd482ac-7a93-4626-955f-77b78f783d54", "volumeId": 
"a9e188a4-7981-4174-a31d-a258de2a7b3d", "device": "/dev/vdb"}} POST call to compute for http://192.168.110.201:8774/v2.1/servers/2fd482ac-7a93-4626-955f-77b78f783d54/os-volume_attachments used request id req-87ba31ae-3511-41be-a2b9-f6a0ccdb092a clean_up AddServerVolume: END return value: 0 ``` However, I could not read the logs the way Dmitriy suggested, it simply returns -- Logs begin at Mon 2020-10-05 14:30:22 UTC, end at Mon 2020-10-12 11:17:20 UTC. -- -- No entries -- so I couldn't investigate the request. ( I tried this on both utility and cinder api containers on the management node and on the storage and respective compute node to be sure) > It's important that the target_ip_address can be accessed from the Nova > compute using the interface for the IP defined in my_ip in nova.conf my_ip in nova.conf seems to be the address of the nova_api container on the management node, target_ip_address from cinder.conf seems to be the (storage-net) address of the cinder-api container on the management node (the value is different from the iscsi_ip_address specified in the playbook). > IIRC, for lvm storage, cinder-volumes should be launched on every nova > compute node in order to attach volumes > to instances, since it's not shared storage. I did not try this yet since the documentation didn't mention it and for our current production system (mitaka, which someone else set up ages ago) this doesn't seem to be the case but volumes can still be attached. Kind regards, Oliver From smooney at redhat.com Mon Oct 12 12:03:40 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 12 Oct 2020 13:03:40 +0100 Subject: Fwd: DPDK+OVS with OpenStack In-Reply-To: References: Message-ID: On Sat, 2020-10-10 at 18:25 -0400, Mark Wittling wrote: > Looking for someone who knows OpenStack with OpenVSwitch, and in > addition > to that, DPDK with OpenStack and OVS.I am using OpenStack Queens, > with > OpenVSwitch. The architecture I am using is documented here: > https://docs.openstack.org/neutron/queens/admin/deploy-ovs-provider.htmlThe > OVS I am using on the Compute Node, is compiled with DPDK, and I have > enabled the datapath to netdev (DPDK) on br-prv (provider network > bridge), > and br-tun (tunneling bridge). But these two bridges, br-tun and br- > prv, > are patched into another OpenStack bridge, called br-int. I wasn’t > actually > sure about whether to tinker with this bridge, and wondered what > datapath > it was using.Then, I realized there is a parameter in the > openvswitch_agent.ini file, which I will list here: > > # OVS datapath to use. 'system' is the default value and corresponds > to the > # kernel datapath. To enable the userspace datapath set this value to > 'netdev'. > # (string value) > # Possible values: > # system - > # netdev - > #datapath_type = system > datapath_type = netdev > > So in tinkering with this, what I realized, is that when you set this > datapath_type to system or netdev, it will adjust the br-int bridge > to that > datapath type.So here is my question. How can I launch a non-DPDK VM, > if > all of the bridges are using the netdev datapath type? you cant we intentionally do not support miking the kernel datapath and dpdk datapath on the same host. incidentally patch port only function between bridge of the same data path type. so the br-int, br-tun and br-prv shoudl all be set to netdev. also if you wnat dpdk to process your tunnel trafic you need to assign the tunnel local endpoint ip to the br-prv assuming that is where the dpdk physical interface is. 
If you do not do this, tunnel traffic (e.g. VXLAN) will be processed by the OVS main thread without any DPDK or kernel acceleration. > Here is another > question. What if one of the flavors don’t have the largepages > property set > on them? They will not get network connectivity. vhost-user requires shared, mmapped memory with an open file descriptor that is pre-mapped and contiguous. In nova you can only get this in one of two ways: 1) use hugepages, or 2) use file-backed memory. The second approach, while it should work, has never actually been tested in nova with OVS-DPDK; it was added to libvirt for OVS-DPDK without hugepages, and was added to nova for security tools that scan VM memory externally looking for active viruses and other threats. > I assumed OpenStack would revert to a system datapath and not use > DPDK for those VM interfaces. No, that would break the operation of all patch ports, so it can't. > Well, I found out in testing, that is not the > case. If you set all your bridges up for netdev, and you don’t set > the > property on the Flavor of the VM (largepages), the VM will launch, > but it > simply won’t work. Yes, without the file-backed memory that is created as I said above, DPDK will not be able to map the virtio rings from the guest into its process space to tx/rx packets. > Is there no way, on a specific Compute Host, to support > both DPDK (netdev datapaths) and non-DPDK (system datapaths)?Either > on a VM > interface level (VM has one interface that is netdev DPDK and another > that > is system datapath non-DPDK)? Correct, there is no way to support both on the same host with OpenStack. Until relatively recently it was not supported by the OVS community either: it could be configured, but it was not tested, supported or recommended by the OVS community, and it is not supported with OpenStack. > Or on a VM by VM basis (VM 1 has 1 or more > netdev datapath interfaces and VM 2 has 1 or more system datapath > interfaces)?Am I right here? Once you set up a Compute Host for DPDK, > it’s > DPDK or nothing on that Compute Host? (edited) Yes, you can mix in the same cloud, just not on the same host. For example, if you are not using DVR, we generally recommend DPDK only on the compute hosts and kernel OVS on the controller/networking nodes where the L3 agents are running. From renat.akhmerov at gmail.com Mon Oct 12 12:54:09 2020 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Mon, 12 Oct 2020 19:54:09 +0700 Subject: [tripleo] deprecating Mistral service In-Reply-To: References: Message-ID: Hi, Although the decision was made long ago (about 1.5 years AFAIK) it’s still disappointing for me to see it happening and I feel like saying something. It may sound weird but, in the best attempt to be technically honest, I actually agree with the decision to remove Mistral from TripleO. As far as I know, TripleO never used the strongest distinguishing sides of Mistral (like running workflows at scale, using it for very long running workflows etc). Mistral was mostly used just to extend the configurability and customizability of TripleO. Red Hat’s engineers who I had a great pleasure to work with on Mistral a few times told me “Hey, we are allocated to Mistral but we just don’t know what else to work on that our company would need. It just works for us. Works very well for the case we use it. Bugs are very rare, maintenance doesn’t require 4 people from Red Hat. So we’ll be shrinking our presence.” So it happened, as expected. I realized there was no point in trying to keep them on the project.
Then some new engineers came and said: “Why can’t we use something else, more well-known, like Ansible?” So, honestly, for this use case, yes. However, I heard many times that Ansible is essentially the same thing as Mistral but it is more mature, well maintained, etc etc. And every time I had to explain why these two technologies are fundamentally different, have different purposes, different method of solving problems. I’m saying this now because I feel it is my fault that I failed to explain these differences clearly when we started actively promoting Mistral years ago. And it would be bad if misunderstanding of the technology was the real reason behind this decision. For what it’s worth, if you ever want me to elaborate on that, let me know. This thread is not a good place for that, I apologize. And finally, I want to reassure you that the project is still maintained. More than that, it keeps evolving in a number of ways and new functionality is consistently being added. The number of active contributors is now lower than in our best times (Also true I believe for OpenStack in general) but it’s now promising to change. Again, my apologies for writing it here. Thanks Renat Akhmerov @Nokia 11 окт. 2020 г., 05:28 +0700, Luke Short , писал: > +1 I believe it is good for us to focus on the core OpenStack services that still have an active community and usage in production. For better or worse, Mistral has dropped in popularity due to other similar swiss-army-knife workflow automation tools. > > > On Thu, Oct 8, 2020, 6:34 AM Emilien Macchi wrote: > > > Hi folks, > > > > > > In our long term goal to simplify TripleO and deprecate the services that aren't used by our community anymore, I propose that we deprecate Mistral services. > > > Mistral was used on the Undercloud in the previous cycles but not anymore. While the service could be deployed on the Overcloud, we haven't seen any of our users doing it. If that would be the case, please let us know as soon as possible. > > > Removing it from TripleO will help us with maintenance (container images, THT/puppet integration, CI, etc). > > > Maybe we could deprecate it in Victoria and remove it in Wallaby? > > > > > > Thanks, > > > -- > > > Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliver.wenz at dhbw-mannheim.de Mon Oct 12 13:02:12 2020 From: oliver.wenz at dhbw-mannheim.de (Oliver Wenz) Date: Mon, 12 Oct 2020 15:02:12 +0200 Subject: [Ussuri] [openstack-ansible] [cinder] Can't attach volumes to instances In-Reply-To: References: Message-ID: > However, I could not read the logs the way Dmitriy suggested, it simply > returns > > -- Logs begin at Mon 2020-10-05 14:30:22 UTC, end at Mon 2020-10-12 > 11:17:20 UTC. -- > -- No entries -- > > so I couldn't investigate the request. ( I tried this on both utility > and cinder api containers on the management node and on the storage and > respective compute node to be sure) ... 
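(For reference, the kind of log query I mean is roughly the following; the unit name is an assumption on my side and may differ in other deployments:)

  # on the node or container that runs the service, filtered to cinder-volume
  journalctl -u cinder-volume --since "1 hour ago"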
it seems that I've missed the storage node, since I just tried again and got something: Oct 12 12:57:09 bc1bl10 cinder-volume[30198]: 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server [req-e9175ddf-8ee4-4135-b89b-6e9c61f63999 2223bca5520044b7b7646e2f8141a440 5fbba868d04b47fa87a72b9dd821ee12 - default default] Exception during message handling: cinder.exception.InvalidInput: Invalid input received: Connector doesn't have required information: initiator 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server Traceback (most recent call last): 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server File "/openstack/venvs/cinder-21.0.1/lib/python3.6/site-packages/cinder/volume/manager.py", line 4471, in _connection_create 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server self.driver.validate_connector(connector) 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server File "/openstack/venvs/cinder-21.0.1/lib/python3.6/site-packages/cinder/volume/drivers/lvm.py", line 859, in validate_connector 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server return self.target_driver.validate_connector(connector) 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server File "/openstack/venvs/cinder-21.0.1/lib/python3.6/site-packages/cinder/volume/targets/iscsi.py", line 297, in validate_connector 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server raise exception.InvalidConnectorException(missing='initiator') 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server cinder.exception.InvalidConnectorException: Connector doesn't have required information: initiator 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server During handling of the above exception, another exception occurred: 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server Traceback (most recent call last): 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server File "/openstack/venvs/cinder-21.0.1/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server File "/openstack/venvs/cinder-21.0.1/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 276, in dispatch 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server File "/openstack/venvs/cinder-21.0.1/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 196, in _do_dispatch 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server File "/openstack/venvs/cinder-21.0.1/lib/python3.6/site-packages/cinder/volume/manager.py", line 4554, in attachment_update 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server connector) 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server File "/openstack/venvs/cinder-21.0.1/lib/python3.6/site-packages/cinder/volume/manager.py", line 4473, in _connection_create 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server raise exception.InvalidInput(reason=six.text_type(err)) 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server cinder.exception.InvalidInput: Invalid input received: 
Connector doesn't have required information: initiator 2020-10-12 12:57:09.554 30198 ERROR oslo_messaging.rpc.server Kind regards, Oliver From juliaashleykreger at gmail.com Mon Oct 12 16:20:41 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 12 Oct 2020 09:20:41 -0700 Subject: [ironic] Meetings cancelled Week of Summit and PTG Message-ID: Greetings fellow engineers and providers of irony... err I mean Ironic community members! We agreed in our weekly meeting this morning that we will cancel our weekly meetings for the next two weeks and resume November due to overlapping schedules and ultimately the pre-existing commitment to meet during the PTG. If you have any questions, please let us know! -Julia From nate.johnston at redhat.com Mon Oct 12 17:17:45 2020 From: nate.johnston at redhat.com (Nate Johnston) Date: Mon, 12 Oct 2020 13:17:45 -0400 Subject: [neutron] bug deputy report Message-ID: <20201012171745.5h3jksxcjb77jgid@firewall> Nate bug deputy notes - 2020-10-05 to 2020-10-12 ------------------------------------------------ Overall there were a stready stream of bugs this week, but also a lot of activity in fixing them. I would like to point out the 4 unowned High/Critical bugs, three of which are tagged "gate-failure" as a focus for special attention. Critical: - "Creation of the QoS policy takes ages" - URL: https://bugs.launchpad.net/bugs/1898748 - State: Confirmed - Assignee: none - Version: stable/train - Tags: api gate-failure qos - "OVN based scenario jobs failing 100% of times" - URL: https://bugs.launchpad.net/bugs/1898863 - Status: Confirmed - Assignee: none - Tags: gate-failure ovn - "[OVN][QoS] "qos-fip" extension always loaded even without ML2 "qos", error while processing extensions" - URL: https://bugs.launchpad.net/bugs/1898842 - State: Fix Released - Assignee: ralonsoh - Fix: https://review.opendev.org/756483 - Tags: ovn qos - "[ovn] Failure in post_fork_initialize leaves worker processes without OVN IDL connections" - URL: https://bugs.launchpad.net/bugs/1898882 - State: In Progress - Assignee: jlibosva - Fix: https://review.opendev.org/756523 - Tags: ovn High: - "bandwidth resource allocation is deleted from placement during unrelated port update" - URL: https://bugs.launchpad.net/bugs/1898994 - Status: In Progress - Assignee: lajoskatona - Tags: qos placement - "Job neutron-ovn-tempest-ovs-release-ipv6-only is failing 100% of times" - URL: https://bugs.launchpad.net/bugs/1898862 - Status: Confirmed - Assignee: none - Tags: gate-failure ovn - "Wrong mac address of ARP entry setting for allowed_address_pairs in DVR router" - URL: https://bugs.launchpad.net/bugs/1899006 - State: Triaged - Version: stable/stein - Assignee: none - Tags: l3-dvr-backlog Medium: - "[OVN][Docs] admin/config-dns-res.html should be updated for OVN" - URL: https://bugs.launchpad.net/neutron/+bug/1899207 - Status: Triaged - Assigned: none - Tags: ovn doc - "formatting error causes exception in running agent" - URL: https://bugs.launchpad.net/bugs/1898789 - Status: Fix Released - Assigned: njohnston - Fix: https://review.opendev.org/756610 - thanks to the reporter for sending in a patch - "BGP peer is not working" - URL: https://bugs.launchpad.net/bugs/1898634 - Status: In Progress - Version: stable/ussuri - Project: neutron-dynamic-routing - Assignee: njohnston - Fix: https://review.opendev.org/757178 - "[OVN] "test_agent_show" failing, agent not found" - URL: https://bugs.launchpad.net/bugs/1899004 - Status: In Progress - Assignee: ralonsoh - Fix: 
https://review.opendev.org/756668 - Tags: ovn - "ML2OVN migration plugin does not support SR-IOV" - URL: https://bugs.launchpad.net/bugs/1899009 - Status: In Progress - Assignee: rsafrono - Fix: https://review.opendev.org/756678 - Tags: ovn sriov-pci-pt - "Linuxbridge agent NetlinkError: (13, 'Permission denied') after Stein upgrade" - URL: https://bugs.launchpad.net/bugs/1899141 - Status: In Progress - Assignee: ralonsoh - Fix: https://review.opendev.org/757107 - Tags: linuxbridge - "router_centralized_snat ports do not have project_id" - URL: https://bugs.launchpad.net/bugs/1899502 - Status: New - Assignee: arnaud-morin - Fix: https://review.opendev.org/757599 - Tags: l3-dvr-backlog - Note: possibly invalid, see Liu Yulong's comment on the bug Reassigned: - LB health monitor deletion fails with exception "Server-side error: "'NoneType' object has no attribute 'load_balancer_id'" - URL: https://bugs.launchpad.net/neutron/+bug/1898657 - Reassigned to vmware-nsx after checking with the octavia team From Adam.Schappell at SteelToadConsulting.com Mon Oct 12 13:53:22 2020 From: Adam.Schappell at SteelToadConsulting.com (Adam Schappell) Date: Mon, 12 Oct 2020 13:53:22 +0000 Subject: Cinder Issues Message-ID: <0FEE4FB7-F1F8-463E-BA95-25409B972102@steeltoadconsulting.com> Hello Everyone. I am having a ton of trouble trying to get cinder working. Here is output of command with –debug enabled. hon-cinderclient" -H "X-Auth-Token: {SHA1}129e96e53eae89cfda31be3c0fcec26477597615" Starting new HTTP connection (1): 10.10.1.53 http://10.10.1.53:8776 "GET /v2/7929351d491347788f5de228e135e67c/os-services HTTP/1.1" 503 218 RESP: [503] Connection: keep-alive Content-Length: 218 Content-Type: application/json Date: Mon, 12 Oct 2020 13:50:26 GMT X-Openstack-Request-Id: req-1db26997-7450-4586-a106-70fc9804ee82 RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.

\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} GET call to volumev2 for http://10.10.1.53:8776/v2/7929351d491347788f5de228e135e67c/os-services used request id req-1db26997-7450-4586-a106-70fc9804ee82 The server is currently unavailable. Please try again at a later time.

The Keystone service is temporarily unavailable. (HTTP 503) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/cliff/app.py", line 402, in run_subcommand result = cmd.run(parsed_args) File "/usr/lib/python2.7/site-packages/osc_lib/command/command.py", line 41, in run return super(Command, self).run(parsed_args) File "/usr/lib/python2.7/site-packages/cliff/display.py", line 116, in run column_names, data = self.take_action(parsed_args) File "/usr/lib/python2.7/site-packages/openstackclient/volume/v2/service.py", line 71, in take_action parsed_args.service) File "/usr/lib/python2.7/site-packages/cinderclient/v2/services.py", line 47, in list return self._list(url, "services") File "/usr/lib/python2.7/site-packages/cinderclient/base.py", line 84, in _list resp, body = self.api.client.get(url) File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 200, in get return self._cs_request(url, 'GET', **kwargs) File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 191, in _cs_request return self.request(url, method, **kwargs) File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 177, in request raise exceptions.from_response(resp, body) ClientException: The server is currently unavailable. Please try again at a later time.

The Keystone service is temporarily unavailable. (HTTP 503) clean_up ListService: The server is currently unavailable. Please try again at a later time.

The Keystone service is temporarily unavailable. (HTTP 503) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 135, in run ret_val = super(OpenStackShell, self).run(argv) File "/usr/lib/python2.7/site-packages/cliff/app.py", line 281, in run result = self.run_subcommand(remainder) File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 175, in run_subcommand ret_value = super(OpenStackShell, self).run_subcommand(argv) File "/usr/lib/python2.7/site-packages/cliff/app.py", line 402, in run_subcommand result = cmd.run(parsed_args) File "/usr/lib/python2.7/site-packages/osc_lib/command/command.py", line 41, in run return super(Command, self).run(parsed_args) File "/usr/lib/python2.7/site-packages/cliff/display.py", line 116, in run column_names, data = self.take_action(parsed_args) File "/usr/lib/python2.7/site-packages/openstackclient/volume/v2/service.py", line 71, in take_action parsed_args.service) File "/usr/lib/python2.7/site-packages/cinderclient/v2/services.py", line 47, in list return self._list(url, "services") File "/usr/lib/python2.7/site-packages/cinderclient/base.py", line 84, in _list resp, body = self.api.client.get(url) File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 200, in get return self._cs_request(url, 'GET', **kwargs) File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 191, in _cs_request return self.request(url, method, **kwargs) File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 177, in request raise exceptions.from_response(resp, body) ClientException: The server is currently unavailable. Please try again at a later time.

The Keystone service is temporarily unavailable. (HTTP 503) END return value: 1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ChenSa at radware.com Mon Oct 12 15:37:27 2020 From: ChenSa at radware.com (Chen Sagi) Date: Mon, 12 Oct 2020 15:37:27 +0000 Subject: external network on rhosp 16.1 all in one installation Message-ID: Hi all, I have an All in one installation of RHOSP for evaluation purposes and I am trying to understand external network and the Octavia service. I have a VM with the openstack installed on, with 2 NICs in the management network as required by Red hat. I tried using the RHOSP Octavia guide, and also tried to create an external network but I seem to be missing something. Notice: the network that I am using is a flat network and is not configured for VLANs at all, but is a routable address. Thanks in advance, Chen Sagi. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Mon Oct 12 20:32:47 2020 From: eblock at nde.ag (Eugen Block) Date: Mon, 12 Oct 2020 20:32:47 +0000 Subject: Cinder Issues In-Reply-To: <0FEE4FB7-F1F8-463E-BA95-25409B972102@steeltoadconsulting.com> Message-ID: <20201012203247.Horde.UkdTt4ZjZVjzEJueeieqZQF@webmail.nde.ag> Hi, did it ever work? Check the cinder logs, the service is apparently not running. Double check your cinder configuration and keystone endpoint for cinder. And I think python2 is deprecated, are you trying to install an older version? Regards Eugen Zitat von Adam Schappell : > Hello Everyone. > > I am having a ton of trouble trying to get cinder working. Here is > output of command with –debug enabled. > hon-cinderclient" -H "X-Auth-Token: > {SHA1}129e96e53eae89cfda31be3c0fcec26477597615" > Starting new HTTP connection (1): 10.10.1.53 > http://10.10.1.53:8776 "GET > /v2/7929351d491347788f5de228e135e67c/os-services HTTP/1.1" 503 218 > RESP: [503] Connection: keep-alive Content-Length: 218 Content-Type: > application/json Date: Mon, 12 Oct 2020 13:50:26 GMT > X-Openstack-Request-Id: req-1db26997-7450-4586-a106-70fc9804ee82 > RESP BODY: {"message": "The server is currently unavailable. Please > try again at a later time.

\nThe Keystone service is > temporarily unavailable.\n\n", "code": "503 Service Unavailable", > "title": "Service Unavailable"} > GET call to volumev2 for > http://10.10.1.53:8776/v2/7929351d491347788f5de228e135e67c/os-services used > request id req-1db26997-7450-4586-a106-70fc9804ee82 > The server is currently unavailable. Please try again at a later > time.

> The Keystone service is temporarily unavailable. > > (HTTP 503) > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/cliff/app.py", line 402, in > run_subcommand > result = cmd.run(parsed_args) > File > "/usr/lib/python2.7/site-packages/osc_lib/command/command.py", line > 41, in run > return super(Command, self).run(parsed_args) > File "/usr/lib/python2.7/site-packages/cliff/display.py", line 116, in run > column_names, data = self.take_action(parsed_args) > File > "/usr/lib/python2.7/site-packages/openstackclient/volume/v2/service.py", > line 71, in take_action > parsed_args.service) > File > "/usr/lib/python2.7/site-packages/cinderclient/v2/services.py", line > 47, in list > return self._list(url, "services") > File "/usr/lib/python2.7/site-packages/cinderclient/base.py", line > 84, in _list > resp, body = self.api.client.get(url) > File "/usr/lib/python2.7/site-packages/cinderclient/client.py", > line 200, in get > return self._cs_request(url, 'GET', **kwargs) > File "/usr/lib/python2.7/site-packages/cinderclient/client.py", > line 191, in _cs_request > return self.request(url, method, **kwargs) > File "/usr/lib/python2.7/site-packages/cinderclient/client.py", > line 177, in request > raise exceptions.from_response(resp, body) > ClientException: The server is currently unavailable. Please try > again at a later time.

> The Keystone service is temporarily unavailable. > > (HTTP 503) > clean_up ListService: The server is currently unavailable. Please > try again at a later time.

> The Keystone service is temporarily unavailable. > > (HTTP 503) > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 135, in run > ret_val = super(OpenStackShell, self).run(argv) > File "/usr/lib/python2.7/site-packages/cliff/app.py", line 281, in run > result = self.run_subcommand(remainder) > File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line > 175, in run_subcommand > ret_value = super(OpenStackShell, self).run_subcommand(argv) > File "/usr/lib/python2.7/site-packages/cliff/app.py", line 402, in > run_subcommand > result = cmd.run(parsed_args) > File > "/usr/lib/python2.7/site-packages/osc_lib/command/command.py", line > 41, in run > return super(Command, self).run(parsed_args) > File "/usr/lib/python2.7/site-packages/cliff/display.py", line 116, in run > column_names, data = self.take_action(parsed_args) > File > "/usr/lib/python2.7/site-packages/openstackclient/volume/v2/service.py", > line 71, in take_action > parsed_args.service) > File > "/usr/lib/python2.7/site-packages/cinderclient/v2/services.py", line > 47, in list > return self._list(url, "services") > File "/usr/lib/python2.7/site-packages/cinderclient/base.py", line > 84, in _list > resp, body = self.api.client.get(url) > File "/usr/lib/python2.7/site-packages/cinderclient/client.py", > line 200, in get > return self._cs_request(url, 'GET', **kwargs) > File "/usr/lib/python2.7/site-packages/cinderclient/client.py", > line 191, in _cs_request > return self.request(url, method, **kwargs) > File "/usr/lib/python2.7/site-packages/cinderclient/client.py", > line 177, in request > raise exceptions.from_response(resp, body) > ClientException: The server is currently unavailable. Please try > again at a later time.

> The Keystone service is temporarily unavailable. > > (HTTP 503) > > END return value: 1 From sean.mcginnis at gmx.com Mon Oct 12 21:47:23 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 12 Oct 2020 16:47:23 -0500 Subject: Cinder Issues In-Reply-To: <20201012203247.Horde.UkdTt4ZjZVjzEJueeieqZQF@webmail.nde.ag> References: <20201012203247.Horde.UkdTt4ZjZVjzEJueeieqZQF@webmail.nde.ag> Message-ID: <91161ad2-0f69-bd27-d436-4b4ab01ab6a7@gmx.com> > did it ever work? Check the cinder logs, the service is apparently not > running. Double check your cinder configuration and keystone endpoint > for cinder. Based on these logs, it actually appears to be an issue with the keystone deployment, not Cinder. So not to say there isn't something going on with Cinder too, but from this output it is never even getting there since it doesn't get past the first step of authenticating. >> RESP: [503] Connection: keep-alive Content-Length: 218 Content-Type: >> application/json Date: Mon, 12 Oct 2020 13:50:26 GMT >> X-Openstack-Request-Id: req-1db26997-7450-4586-a106-70fc9804ee82 >> RESP BODY: {"message": "The server is currently unavailable. Please >> try again at a later time.

\nThe Keystone service is >> temporarily unavailable.\n\n", "code": "503 Service Unavailable", >> "title": "Service Unavailable"} >> GET call to volumev2 for >> http://10.10.1.53:8776/v2/7929351d491347788f5de228e135e67c/os-services >> used request id req-1db26997-7450-4586-a106-70fc9804ee82 >> The server is currently unavailable. Please try again at a later >> time.

>> The Keystone service is temporarily unavailable. >> >> (HTTP 503) From fungi at yuggoth.org Mon Oct 12 23:48:44 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 12 Oct 2020 23:48:44 +0000 Subject: [all][elections][ptl][tc] Combined PTL/TC Voting 24 Hours Remain Message-ID: <20201012234844.xepjqeykilzvzf77@yuggoth.org> We are coming down to the last hours for voting in the TC and Telemetry elections. Voting ends Oct 13, 2020 23:45 UTC. Search your Gerrit preferred email address[0] for the messages with subjects like... Poll: OpenStack Wallaby Cycle Technical Committee Election Poll That is your ballot and links you to the voting application. Please vote. If you have voted, please encourage your colleagues to vote. Candidate statements are linked to the names of all confirmed candidates: https://governance.openstack.org/election/ What to do if you don't see the email and have a commit in at least one of the official project teams' deliverable repositories[1]: * check the trash of your Gerrit Preferred Email address[0], in case it went into trash or spam * find the ID of at least one commit merged to an official deliverable repo[1] over the current or previous cycle, confirm you are an OpenStack Foundation Individual Member[2], and then email the election officials[3] or get in touch in the #openstack-elections channel on the Freenode IRC network. If we can confirm that you are entitled to vote, we will add you to the voters list and you will be emailed a ballot. Please vote! Thank you, [0] Sign into review.openstack.org and go to Settings > Contact Information. Look at the email listed as your Preferred Email. That is where the ballot has been sent. [1] https://opendev.org/openstack/governance/src/tag/0.8.0/reference/projects.yaml [2] https://www.openstack.org/profile/ [3] https://governance.openstack.org/election/#election-officials -- Jeremy Stanley on behalf of the OpenStack Technical Election Officials -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From rosmaita.fossdev at gmail.com Tue Oct 13 02:11:46 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 12 Oct 2020 22:11:46 -0400 Subject: [cinder] reminder: type-checking hackfest on Wednesday Message-ID: <1fbb8734-6212-e379-972b-f68513e27d9e@gmail.com> This is a reminder that the Cinder team will celebrate the Victoria release on Wednesday by holding a hackfest to review the patches Eric has posted for using mypy for type checking in the cinder code, and to add type annotations elsewhere in the code. We will be meeting from 1300-1600 UTC on Wednesday 14 October in the opendev Jitsi: https://meetpad.opendev.org/cinder-type-checking-hackfest If you want an advance look, you can start here: - https://review.opendev.org/#/c/733620/ - https://review.opendev.org/#/c/733621/ For general information: - https://docs.python.org/3/library/typing.html - https://www.python.org/dev/peps/pep-3107/ - http://mypy-lang.org/index.html See you on Wednesday! 
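For anyone who hasn't worked with the annotations yet, here is a tiny standalone sketch (not taken from the cinder tree, the names are made up) of the kind of mismatch mypy will flag once a function is annotated:

  def attach_volume(volume_id: str, host: str) -> dict:
      # mypy checks every call site against these annotations
      return {'volume_id': volume_id, 'host': host}

  # mypy: error: Argument 2 to "attach_volume" has incompatible type "int"; expected "str"
  attach_volume('a9e188a4', 42)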
brian From mark at stackhpc.com Tue Oct 13 09:08:38 2020 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 13 Oct 2020 10:08:38 +0100 Subject: [release][networking-ansible][vmware-nsx][kolla] Projects lacking stable branches Message-ID: Hi, In the Kolla project we're trying to switch our dependencies to stable/victoria, and I noticed we're lacking stable branches for the following (unofficial) projects: networking-ansible vmware-nsx vmware-nsxlib Cheers, Mark From thierry at openstack.org Tue Oct 13 09:34:39 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 13 Oct 2020 11:34:39 +0200 Subject: Re: Re: [sent via lists.openstack.org] Re: [placement][nova][cinder][neutron][blazar][tc][zun] Placement governance switch(back) In-Reply-To: <3e09b1015433411e9752d3543d63d331@inspur.com> References: <51776398bc77d6ec24a230ab4c2a5913@sslemail.net> <3e09b1015433411e9752d3543d63d331@inspur.com> Message-ID: <56f987f0-20a5-7163-062e-9e02a0f99057@openstack.org> Brin Zhang(张百林) wrote: > Hi, gibi, I am a contributor for nova and placement, and do many feature from Stein release, I would like to take on the role of *Release liaison*[1] in Placement to help more people. Thanks Brin Zhang for volunteering! We still need (at least) one volunteer for security liaison and one volunteer for TaCT SIG (infra) liaison. Anyone else interested in helping? -- Thierry Carrez (ttx) From dcha94 at dcn.ssu.ac.kr Tue Oct 13 11:01:59 2020 From: dcha94 at dcn.ssu.ac.kr (차동헌) Date: Tue, 13 Oct 2020 20:01:59 +0900 Subject: [Vitrage] I have a problem with alertmanager's alarm in vitrage Message-ID: Hi, I have been testing Vitrage with Prometheus and Alertmanager, but I have a little problem with it. When I trigger an alarm with Alertmanager on Ussuri and master (also devstack), the Vitrage API can't receive the Alertmanager alarm because of an auth issue (error 401). However, it works properly when the devstack and Vitrage versions are Rocky. In my personal opinion, upgrading the devstack version changes the endpoint of the vitrage service from http://localhost:8999/v1/event to http://localhost/rca/v1/event, but the Vitrage services still point at the old endpoint. Can I get a solution or tips for this problem? Thank you. Regards, Donghun ========================================================== Donghun Cha (차동헌) DCN Lab - Distributed Cloud and Network Research Lab 46, Sangdo-ro, Dongjak-gu, 07027, Seoul, Republic of Korea Tel : +82-2-820-0841 Mobile : +82-10-4121-5662 E-mail : dcha94 at dcn.ssu.ac.kr ========================================================== From oliver.wenz at dhbw-mannheim.de Tue Oct 13 12:09:58 2020 From: oliver.wenz at dhbw-mannheim.de (Oliver Wenz) Date: Tue, 13 Oct 2020 14:09:58 +0200 Subject: [Ussuri] [openstack-ansible] [cinder] Can't attach volumes to instances (Oliver Wenz) In-Reply-To: References: Message-ID: I solved the problem by connecting the compute nodes to the iSCSI controller (without mapping them to the block storage). The fact that in the old production system our compute nodes used a shared file system independent from cinder led me to believe that I didn't have to connect them to the iSCSI controller at all, since I abandoned the shared storage. Thank you Dmitriy and Gorka!
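In case it helps anyone hitting the same symptoms: a quick way to check that a compute node can actually reach the LVM/iSCSI target (the IP and port here are placeholders, and open-iscsi must be installed on the compute node for the initiator file to exist):

  # the initiator name that the cinder connector reports comes from here
  cat /etc/iscsi/initiatorname.iscsi
  # discover the targets exposed by the cinder-volume (LVM/iSCSI) host
  iscsiadm -m discovery -t sendtargets -p 192.168.100.10:3260
  # list active iSCSI sessions after an attach
  iscsiadm -m session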
Kind regards, Oliver From emilien at redhat.com Tue Oct 13 12:53:43 2020 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 13 Oct 2020 08:53:43 -0400 Subject: [tripleo] deprecating Mistral service In-Reply-To: References: Message-ID: On Mon, Oct 12, 2020 at 8:54 AM Renat Akhmerov wrote: > Hi, > > Although the decision was made long ago (about 1.5 years AFAIK) it’s still > disappointing for me to see it happening and I feel like saying something. > > It may sound weird but, in the best attempt to be technically honest, I > actually agree with the decision to remove Mistral from TripleO. As far as > I know, TripleO never used the strongest distinguishing sides of Mistral > (like running workflows at scale, using it for very long running workflows > etc). Mistral was mostly used just to extend configurability > customizability of TripleO. Red Hat’s engineers who I had a great pleasure > to work with on Mistral a few times told me “Hey, we are allocated to > Mistral but we just don’t know what else to work on that our company would > need. It just works for us. Works very well for the case we use it. Bugs > are very rare, maintenance doesn’t require 4 people from Red Hat. So we’ll > be shrinking our presence.” So it happened, as expected. I realized there > was no point in trying to keep them on the project. Then some new engineers > came and said: “Why can’t we use something else, more well-known, like > Ansible?” So, honestly, for this use case, yes. However, I heard many times > that Ansible is essentially the same thing as Mistral but it is more > mature, well maintained, etc etc. And every time I had to explain why these > two technologies are fundamentally different, have different purposes, > different method of solving problems. > > I’m saying this now because I feel it is my fault that I failed to explain > these differences clearly when we started actively promoting Mistral years > ago. And it would be bad if misunderstanding of the technology was the real > reason behind this decision. For what it’s worth, if you ever want me to > elaborate on that, let me know. This thread is not a good place for that, I > apologize. > > And finally, I want to reassure you that the project is still maintained. > More than that, it keeps evolving in a number of ways and new functionality > is consistently being added. The number of active contributors is now lower > than in our best times (Also true I believe for OpenStack in general) but > it’s now promising to change. > > Again, my apologies for writing it here. > Renat, I don't think you need to apologize here. Your team has done excellent work at maintaining Mistral over the years. For us, the major reason to not use Mistral anymore is that we have no UI anymore; which was the main reason why we wanted to use Mistral workflows to control the deployment from both UI & CLI with unified experience. Our deployment framework has shifted toward Ansible, and without UI we rewrote our workflows in pure Python, called by Ansible modules via playbooks. Again, your message is appreciated, thanks for the clarification on your side as well! -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Adam.Schappell at SteelToadConsulting.com Tue Oct 13 13:39:49 2020 From: Adam.Schappell at SteelToadConsulting.com (Adam Schappell) Date: Tue, 13 Oct 2020 13:39:49 +0000 Subject: Cinder Issues In-Reply-To: <20201012203247.Horde.UkdTt4ZjZVjzEJueeieqZQF@webmail.nde.ag> References: <0FEE4FB7-F1F8-463E-BA95-25409B972102@steeltoadconsulting.com> <20201012203247.Horde.UkdTt4ZjZVjzEJueeieqZQF@webmail.nde.ag> Message-ID: <7F2F7D2F-8F61-4ECD-92CD-0B127259C615@steeltoadconsulting.com> So I was able to get Keystone working. I had to add v3 to the end of the urls and changed the service project....Confusing to me. Now I see this in the cinder volume logs on the storage node. No idea where the error is coming from: 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume Traceback (most recent call last): 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume File "/usr/lib/python2.7/site-packages/cinder/cmd/volume.py", line 104, in _launch_service 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume cluster=cluster) 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume File "/usr/lib/python2.7/site-packages/cinder/service.py", line 392, in create 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume cluster=cluster, **kwargs) 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume File "/usr/lib/python2.7/site-packages/cinder/service.py", line 155, in __init__ 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume *args, **kwargs) 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 194, in __init__ 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume *args, **kwargs) 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume File "/usr/lib/python2.7/site-packages/cinder/manager.py", line 183, in __init__ 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume self.scheduler_rpcapi = scheduler_rpcapi.SchedulerAPI() 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume File "/usr/lib/python2.7/site-packages/cinder/rpc.py", line 207, in __init__ 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume obj_version_cap = self.determine_obj_version_cap() 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume File "/usr/lib/python2.7/site-packages/cinder/rpc.py", line 260, in determine_obj_version_cap 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume cinder.context.get_admin_context(), cls.BINARY) 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume File "/usr/lib/python2.7/site-packages/cinder/context.py", line 251, in get_admin_context 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume overwrite=False) 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume File "/usr/lib/python2.7/site-packages/cinder/context.py", line 101, in __init__ 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume super(RequestContext, self).__init__(is_admin=is_admin, **kwargs) 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume File "/usr/lib/python2.7/site-packages/positional/__init__.py", line 108, in inner 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume return wrapped(*args, **kwargs) 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume TypeError: __init__() got an unexpected keyword argument 'project_id' 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume 2020-10-13 09:37:49.095 15002 ERROR cinder.cmd.volume [-] No volume service(s) started successfully, terminating. On 10/12/20, 4:33 PM, "Eugen Block" wrote: Hi, did it ever work? Check the cinder logs, the service is apparently not running. 
Double check your cinder configuration and keystone endpoint for cinder. And I think python2 is deprecated, are you trying to install an older version? Regards Eugen Zitat von Adam Schappell : > Hello Everyone. > > I am having a ton of trouble trying to get cinder working. Here is > output of command with –debug enabled. > hon-cinderclient" -H "X-Auth-Token: > {SHA1}129e96e53eae89cfda31be3c0fcec26477597615" > Starting new HTTP connection (1): 10.10.1.53 > http://10.10.1.53:8776 "GET > /v2/7929351d491347788f5de228e135e67c/os-services HTTP/1.1" 503 218 > RESP: [503] Connection: keep-alive Content-Length: 218 Content-Type: > application/json Date: Mon, 12 Oct 2020 13:50:26 GMT > X-Openstack-Request-Id: req-1db26997-7450-4586-a106-70fc9804ee82 > RESP BODY: {"message": "The server is currently unavailable. Please > try again at a later time.
\nThe Keystone service is > temporarily unavailable.\n\n", "code": "503 Service Unavailable", > "title": "Service Unavailable"} > GET call to volumev2 for > http://10.10.1.53:8776/v2/7929351d491347788f5de228e135e67c/os-services used > request id req-1db26997-7450-4586-a106-70fc9804ee82 > The server is currently unavailable. Please try again at a later > time.
> The Keystone service is temporarily unavailable. > > (HTTP 503) > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/cliff/app.py", line 402, in > run_subcommand > result = cmd.run(parsed_args) > File > "/usr/lib/python2.7/site-packages/osc_lib/command/command.py", line > 41, in run > return super(Command, self).run(parsed_args) > File "/usr/lib/python2.7/site-packages/cliff/display.py", line 116, in run > column_names, data = self.take_action(parsed_args) > File > "/usr/lib/python2.7/site-packages/openstackclient/volume/v2/service.py", > line 71, in take_action > parsed_args.service) > File > "/usr/lib/python2.7/site-packages/cinderclient/v2/services.py", line > 47, in list > return self._list(url, "services") > File "/usr/lib/python2.7/site-packages/cinderclient/base.py", line > 84, in _list > resp, body = self.api.client.get(url) > File "/usr/lib/python2.7/site-packages/cinderclient/client.py", > line 200, in get > return self._cs_request(url, 'GET', **kwargs) > File "/usr/lib/python2.7/site-packages/cinderclient/client.py", > line 191, in _cs_request > return self.request(url, method, **kwargs) > File "/usr/lib/python2.7/site-packages/cinderclient/client.py", > line 177, in request > raise exceptions.from_response(resp, body) > ClientException: The server is currently unavailable. Please try > again at a later time.
> The Keystone service is temporarily unavailable. > > (HTTP 503) > clean_up ListService: The server is currently unavailable. Please > try again at a later time.
> The Keystone service is temporarily unavailable. > > (HTTP 503) > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 135, in run > ret_val = super(OpenStackShell, self).run(argv) > File "/usr/lib/python2.7/site-packages/cliff/app.py", line 281, in run > result = self.run_subcommand(remainder) > File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line > 175, in run_subcommand > ret_value = super(OpenStackShell, self).run_subcommand(argv) > File "/usr/lib/python2.7/site-packages/cliff/app.py", line 402, in > run_subcommand > result = cmd.run(parsed_args) > File > "/usr/lib/python2.7/site-packages/osc_lib/command/command.py", line > 41, in run > return super(Command, self).run(parsed_args) > File "/usr/lib/python2.7/site-packages/cliff/display.py", line 116, in run > column_names, data = self.take_action(parsed_args) > File > "/usr/lib/python2.7/site-packages/openstackclient/volume/v2/service.py", > line 71, in take_action > parsed_args.service) > File > "/usr/lib/python2.7/site-packages/cinderclient/v2/services.py", line > 47, in list > return self._list(url, "services") > File "/usr/lib/python2.7/site-packages/cinderclient/base.py", line > 84, in _list > resp, body = self.api.client.get(url) > File "/usr/lib/python2.7/site-packages/cinderclient/client.py", > line 200, in get > return self._cs_request(url, 'GET', **kwargs) > File "/usr/lib/python2.7/site-packages/cinderclient/client.py", > line 191, in _cs_request > return self.request(url, method, **kwargs) > File "/usr/lib/python2.7/site-packages/cinderclient/client.py", > line 177, in request > raise exceptions.from_response(resp, body) > ClientException: The server is currently unavailable. Please try > again at a later time.
> The Keystone service is temporarily unavailable. > > (HTTP 503) > > END return value: 1 From elfosardo at gmail.com Tue Oct 13 13:59:52 2020 From: elfosardo at gmail.com (Riccardo Pittau) Date: Tue, 13 Oct 2020 15:59:52 +0200 Subject: [diskimage-builder][ironic-python-agent-builder][ci][focal][ironic] ipa-builder CI jobs can't migrate to ubuntu focal nodeset In-Reply-To: <20201008160444.uug66lb5ihsc3qx3@yuggoth.org> References: <2889623.ktpJ11cQ8Q@whitebase.usersys.redhat.com> <33d43b9f-89e2-4c24-baf2-729fbad10ad7@www.fastmail.com> <4058831.ejJDZkT8p0@whitebase.usersys.redhat.com> <20201008160444.uug66lb5ihsc3qx3@yuggoth.org> Message-ID: Hello again! Thanks everyone for joining the discussion and for the advice, much appreciated. To summarize the current situation: - at the moment it's not possible to build any rpm-based image on ubuntu focal using diskimage-builder with the centos-minimal element; - so far, we've successfully built the centos8 based ipa-ramdisk using diskimage-builder in ironic-python-agent-builder on centos-8 nodeset, but using the same nodeset also for testing the image (so running devstack) would be very time expensive because of the changes involved; - during the last ironic meeting, it has been decided to switch to the centos element and give that a try using the ubuntu focal nodeset; main concern about this approach is the final size of the ipa ramdisk, tests are ongoing and it looks promising: https://review.opendev.org/757808 https://review.opendev.org/757811 https://review.opendev.org/757812 Thanks again! Riccardo On Thu, Oct 8, 2020 at 6:11 PM Jeremy Stanley wrote: > On 2020-10-08 17:57:55 +0200 (+0200), Luigi Toscano wrote: > [...] > > Uhm, maybe unattended virt-install could help there? > > I expect that would be almost unworkable from within a virtual > machine instance without nested virt acceleration. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Tue Oct 13 15:03:25 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 13 Oct 2020 17:03:25 +0200 Subject: [neutron] team meeting in next 2 weeks Message-ID: <3996474.jyjY3xaTxG@p1> Hi, As we agreed during today meeting, team meetings in next 2 weeks are cancelled due to OpenInfra summit and PTG virtual events. We will have our next team meeting on Tuesday 3.11.2020. -- Slawek Kaplonski Principal Software Engineer Red Hat From eblock at nde.ag Tue Oct 13 15:07:09 2020 From: eblock at nde.ag (Eugen Block) Date: Tue, 13 Oct 2020 15:07:09 +0000 Subject: Cinder Issues In-Reply-To: <7F2F7D2F-8F61-4ECD-92CD-0B127259C615@steeltoadconsulting.com> References: <0FEE4FB7-F1F8-463E-BA95-25409B972102@steeltoadconsulting.com> <20201012203247.Horde.UkdTt4ZjZVjzEJueeieqZQF@webmail.nde.ag> <7F2F7D2F-8F61-4ECD-92CD-0B127259C615@steeltoadconsulting.com> Message-ID: <20201013150709.Horde.ggBkTMRSrP21zot5CT9tOzC@webmail.nde.ag> Could you share your endpoint list? Zitat von Adam Schappell : > So I was able to get Keystone working. I had to add v3 to the end of > the urls and changed the service project....Confusing to me. > > Now I see this in the cinder volume logs on the storage node. 
No > idea where the error is coming from: > > 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume Traceback > (most recent call last): > 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume File > "/usr/lib/python2.7/site-packages/cinder/cmd/volume.py", line 104, > in _launch_service > 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume cluster=cluster) > 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume File > "/usr/lib/python2.7/site-packages/cinder/service.py", line 392, in > create > 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume > cluster=cluster, **kwargs) > 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume File > "/usr/lib/python2.7/site-packages/cinder/service.py", line 155, in > __init__ > 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume *args, **kwargs) > 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume File > "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line > 194, in __init__ > 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume *args, **kwargs) > 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume File > "/usr/lib/python2.7/site-packages/cinder/manager.py", line 183, in > __init__ > 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume > self.scheduler_rpcapi = scheduler_rpcapi.SchedulerAPI() > 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume File > "/usr/lib/python2.7/site-packages/cinder/rpc.py", line 207, in > __init__ > 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume > obj_version_cap = self.determine_obj_version_cap() > 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume File > "/usr/lib/python2.7/site-packages/cinder/rpc.py", line 260, in > determine_obj_version_cap > 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume > cinder.context.get_admin_context(), cls.BINARY) > 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume File > "/usr/lib/python2.7/site-packages/cinder/context.py", line 251, in > get_admin_context > 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume overwrite=False) > 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume File > "/usr/lib/python2.7/site-packages/cinder/context.py", line 101, in > __init__ > 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume > super(RequestContext, self).__init__(is_admin=is_admin, **kwargs) > 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume File > "/usr/lib/python2.7/site-packages/positional/__init__.py", line 108, > in inner > 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume return > wrapped(*args, **kwargs) > 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume TypeError: > __init__() got an unexpected keyword argument 'project_id' > 2020-10-13 09:37:49.091 15002 ERROR cinder.cmd.volume > 2020-10-13 09:37:49.095 15002 ERROR cinder.cmd.volume [-] No volume > service(s) started successfully, terminating. > > On 10/12/20, 4:33 PM, "Eugen Block" wrote: > > Hi, > > did it ever work? Check the cinder logs, the service is apparently not > running. Double check your cinder configuration and keystone endpoint > for cinder. > And I think python2 is deprecated, are you trying to install an > older version? > > Regards > Eugen > > > Zitat von Adam Schappell : > > > Hello Everyone. > > > > I am having a ton of trouble trying to get cinder working. Here is > > output of command with –debug enabled. 
> > hon-cinderclient" -H "X-Auth-Token: > > {SHA1}129e96e53eae89cfda31be3c0fcec26477597615" > > Starting new HTTP connection (1): 10.10.1.53 > > http://10.10.1.53:8776 "GET > > /v2/7929351d491347788f5de228e135e67c/os-services HTTP/1.1" 503 218 > > RESP: [503] Connection: keep-alive Content-Length: 218 Content-Type: > > application/json Date: Mon, 12 Oct 2020 13:50:26 GMT > > X-Openstack-Request-Id: req-1db26997-7450-4586-a106-70fc9804ee82 > > RESP BODY: {"message": "The server is currently unavailable. Please > > try again at a later time.
\nThe Keystone service is > > temporarily unavailable.\n\n", "code": "503 Service Unavailable", > > "title": "Service Unavailable"} > > GET call to volumev2 for > > > http://10.10.1.53:8776/v2/7929351d491347788f5de228e135e67c/os-services > used > > request id req-1db26997-7450-4586-a106-70fc9804ee82 > > The server is currently unavailable. Please try again at a later > > time.
> > The Keystone service is temporarily unavailable. > > > > (HTTP 503) > > Traceback (most recent call last): > > File "/usr/lib/python2.7/site-packages/cliff/app.py", line 402, in > > run_subcommand > > result = cmd.run(parsed_args) > > File > > "/usr/lib/python2.7/site-packages/osc_lib/command/command.py", line > > 41, in run > > return super(Command, self).run(parsed_args) > > File "/usr/lib/python2.7/site-packages/cliff/display.py", > line 116, in run > > column_names, data = self.take_action(parsed_args) > > File > > > "/usr/lib/python2.7/site-packages/openstackclient/volume/v2/service.py", > > line 71, in take_action > > parsed_args.service) > > File > > "/usr/lib/python2.7/site-packages/cinderclient/v2/services.py", line > > 47, in list > > return self._list(url, "services") > > File "/usr/lib/python2.7/site-packages/cinderclient/base.py", line > > 84, in _list > > resp, body = self.api.client.get(url) > > File "/usr/lib/python2.7/site-packages/cinderclient/client.py", > > line 200, in get > > return self._cs_request(url, 'GET', **kwargs) > > File "/usr/lib/python2.7/site-packages/cinderclient/client.py", > > line 191, in _cs_request > > return self.request(url, method, **kwargs) > > File "/usr/lib/python2.7/site-packages/cinderclient/client.py", > > line 177, in request > > raise exceptions.from_response(resp, body) > > ClientException: The server is currently unavailable. Please try > > again at a later time.
> > The Keystone service is temporarily unavailable. > > > > (HTTP 503) > > clean_up ListService: The server is currently unavailable. Please > > try again at a later time.
> > The Keystone service is temporarily unavailable. > > > > (HTTP 503) > > Traceback (most recent call last): > > File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", > line 135, in run > > ret_val = super(OpenStackShell, self).run(argv) > > File "/usr/lib/python2.7/site-packages/cliff/app.py", line > 281, in run > > result = self.run_subcommand(remainder) > > File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line > > 175, in run_subcommand > > ret_value = super(OpenStackShell, self).run_subcommand(argv) > > File "/usr/lib/python2.7/site-packages/cliff/app.py", line 402, in > > run_subcommand > > result = cmd.run(parsed_args) > > File > > "/usr/lib/python2.7/site-packages/osc_lib/command/command.py", line > > 41, in run > > return super(Command, self).run(parsed_args) > > File "/usr/lib/python2.7/site-packages/cliff/display.py", > line 116, in run > > column_names, data = self.take_action(parsed_args) > > File > > > "/usr/lib/python2.7/site-packages/openstackclient/volume/v2/service.py", > > line 71, in take_action > > parsed_args.service) > > File > > "/usr/lib/python2.7/site-packages/cinderclient/v2/services.py", line > > 47, in list > > return self._list(url, "services") > > File "/usr/lib/python2.7/site-packages/cinderclient/base.py", line > > 84, in _list > > resp, body = self.api.client.get(url) > > File "/usr/lib/python2.7/site-packages/cinderclient/client.py", > > line 200, in get > > return self._cs_request(url, 'GET', **kwargs) > > File "/usr/lib/python2.7/site-packages/cinderclient/client.py", > > line 191, in _cs_request > > return self.request(url, method, **kwargs) > > File "/usr/lib/python2.7/site-packages/cinderclient/client.py", > > line 177, in request > > raise exceptions.from_response(resp, body) > > ClientException: The server is currently unavailable. Please try > > again at a later time.
> > The Keystone service is temporarily unavailable. > > > > (HTTP 503) > > > > END return value: 1 From bjoernputtmann at netprojects.de Tue Oct 13 15:30:12 2020 From: bjoernputtmann at netprojects.de (bjoernputtmann at netprojects.de) Date: Tue, 13 Oct 2020 17:30:12 +0200 Subject: Problem with octavia LBaaS and nova availability zones Message-ID: <468d79df2c8ba8f3b9ae29c515f231e8@netprojects.de> Hi one and all! Currently, we have an ussuri openstack installation via kolla-ansible. The installation consists of three control nodes and two availability zones (az-1, az-2), each with three compute nodes: - os-cpt-10[1-3] > az-1 - os-cpt-20[1-3] > az-2 az-1 is the default compute availability zone. The compute nodes are hardwarewise equipped exactly the same. As storage system a ceph cluster is used. We wanted to also use Octavia LBaaS and got it up and running after some experimentation. We also wanted to be able to choose the availibility zones when starting a loadbalancer. We added the az info with: openstack --os-cloud service_octavia loadbalancer availabilityzoneprofile create --name az-1 --provider amphora --availability-zone-data '{"compute_zone": "az-1"}' openstack --os-cloud service_octavia loadbalancer availabilityzoneprofile create --name az-2 --provider amphora --availability-zone-data '{"compute_zone": "az-2"}' openstack --os-cloud service_octavia loadbalancer availabilityzone create --name az-1 --availabilityzoneprofile az-1 openstack --os-cloud service_octavia loadbalancer availabilityzone create --name az-2 --availabilityzoneprofile az-2 Creating a loadbalancer via cli: openstack --os-cloud $LB_PROJECT loadbalancer create --flavor $LB_FLAVOR --name $LB_NAME --vip-subnet-id $(openstack --os-cloud $LB_PROJECT subnet list --name $LB_SUBNET -f value -c ID) --availability-zone $LB_AZ works, as long as $LB_AZ == az-1. If we want to start the loadbalancer in az-2, this fails with an error in octavia-worker.log: ... 2020-09-30 07:21:01.482 34 ERROR oslo_messaging.rpc.server octavia.common.exceptions.ComputeBuildException: Failed to build compute instance due to: {'code': 500, 'created': '2020-09-30T07:20:55Z', 'message': 'No valid host was found. There are not enough hosts available.', 'details': 'Traceback (most recent call last):\n File "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 1463, in schedule_and_build_instances\n instance_uuids, return_alternates=True)\n File "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 870, in _schedule_instances\n return_alternates=return_alternates)\n File "/usr/lib/python3.6/site-packages/nova/scheduler/client/query.py", line 42, in select_destinations\n instance_uuids, return_objects, return_alternates)\n File "/usr/lib/python3.6/site-packages/nova/scheduler/rpcapi.py", line 160, in select_destinations\n return cctxt.call(ctxt, \'select_destinations\', **msg_args)\n File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/client.py", line 181, in call\n transport_options=self.transport_options)\n File "/usr/lib/python3.6/site-packages/oslo_messaging/transport.py", line 129, in _send\n transport_options=transport_options)\n File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 654, in send\n transport_options=transport_options)\n File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 644, in _send\n raise result\nnova.exception_Remote.NoValidHost_Remote: No valid host was found. 
There are not enough hosts available.\nTraceback (most recent call last):\n\n File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 241, in inner\n return func(*args, **kwargs)\n\n File "/usr/lib/python3.6/site-packages/nova/scheduler/manager.py", line 215, in select_destinations\n allocation_request_version, return_alternates)\n\n File "/usr/lib/python3.6/site-packages/nova/scheduler/filter_scheduler.py", line 96, in select_destinations\n allocation_request_version, return_alternates)\n\n File "/usr/lib/python3.6/site-packages/nova/scheduler/filter_scheduler.py", line 265, in _schedule\n claimed_instance_uuids)\n\n File "/usr/lib/python3.6/site-packages/nova/scheduler/filter_scheduler.py", line 302, in _ensure_sufficient_hosts\n raise exception.NoValidHost(reason=reason)\n\nnova.exception.NoValidHost: No valid host was found. There are not enough hosts available.\n\n'} ... We enabled debug logging for nova and found: ... 2020-10-13 13:50:49.193 33 DEBUG oslo_concurrency.lockutils [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Lock "7b9846a9-f8be-4d60-92ad-0fb531e48e64" acquired by "nova.context.set_target_cell..get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:358 2020-10-13 13:50:49.196 33 DEBUG oslo_concurrency.lockutils [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Lock "7b9846a9-f8be-4d60-92ad-0fb531e48e64" released by "nova.context.set_target_cell..get_or_set_cached_cell_and_set_connections" :: held 0.003s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:370 2020-10-13 13:50:49.217 33 DEBUG oslo_db.sqlalchemy.engines [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python3.6/site-packages/oslo_db/sqlalchemy/engines.py:304 2020-10-13 13:50:49.245 33 DEBUG oslo_concurrency.lockutils [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Lock "('os-cpt-103', 'os-cpt-103')" acquired by "nova.scheduler.host_manager.HostState.update.._locked_update" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:358 2020-10-13 13:50:49.246 33 DEBUG nova.scheduler.host_manager [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Update host state from compute node: ComputeNode(cpu_allocation_ratio=4.0,cpu_info='{"arch": "x86_64", "model": "EPYC-IBPB", "vendor": "AMD", "topology": {"cells": 1, "sockets": 1, "cores": 24, "threads": 2}, "features": ["bmi1", "smep", "sha-ni", "fsgsbase", "adx", "cmov", "invtsc", "fpu", "mmx", "sep", "abm", "pni", "msr", "xsavec", "f16c", "fma", "nx", "pat", "sse4.1", "rdrand", "wbnoinvd", "vme", "lahf_lm", "cr8legacy", "xsave", "bmi2", "clzero", "mtrr", "arat", "amd-ssbd", "aes", "avx", "avx2", "cx8", "umip", "de", "ibpb", "misalignsse", "osvw", "perfctr_core", "pse36", "mce", "skinit", "syscall", "sse2", "apic", "fxsr_opt", "pse", "ht", "pdpe1gb", "rdtscp", "xsaves", "clflushopt", "pclmuldq", "sse4.2", "movbe", "smap", "ibs", "clwb", 
"xgetbv1", "sse4a", "cx16", "extapic", "wdt", "perfctr_nb", "tsc", "mca", "topoext", "pae", "fxsr", "lm", "cmp_legacy", "monitor", "3dnowprefetch", "ssse3", "pge", "popcnt", "rdseed", "mmxext", "tce", "clflush", "xsaveopt", "svm", "sse"]}',created_at=2020-09-08T09:22:00Z,current_workload=0,deleted=False,deleted_at=None,disk_allocation_ratio=1.0,disk_available_least=94,free_disk_gb=99,free_ram_mb=206453,host='os-cpt-103',host_ip=172.20.1.13,hypervisor_hostname='os-cpt-103',hypervisor_type='QEMU',hypervisor_version=4002000,id=9,local_gb=99,local_gb_used=0,mapped=0,memory_mb=257653,memory_mb_used=51200,metrics='[{"name": "cpu.user.percent", "timestamp": "2020-10-13T13:50:07.846401", "source": "libvirt.LibvirtDriver", "value": 0.0}, {"name": "cpu.kernel.percent", "timestamp": "2020-10-13T13:50:07.846401", "source": "libvirt.LibvirtDriver", "value": 0.01}, {"name": "cpu.iowait.percent", "timestamp": "2020-10-13T13:50:07.846401", "source": "libvirt.LibvirtDriver", "value": 0.0}, {"name": "cpu.kernel.time", "timestamp": "2020-10-13T13:50:07.846401", "source": "libvirt.LibvirtDriver", "value": 1373295770000000}, {"name": "cpu.percent", "timestamp": "2020-10-13T13:50:07.846401", "source": "libvirt.LibvirtDriver", "value": 0.01}, {"name": "cpu.frequency", "timestamp": "2020-10-13T13:50:07.846401", "source": "libvirt.LibvirtDriver", "value": 1796}, {"name": "cpu.user.time", "timestamp": "2020-10-13T13:50:07.846401", "source": "libvirt.LibvirtDriver", "value": 386827280000000}, {"name": "cpu.idle.time", "timestamp": "2020-10-13T13:50:07.846401", "source": "libvirt.LibvirtDriver", "value": 139696145240000000}, {"name": "cpu.idle.percent", "timestamp": "2020-10-13T13:50:07.846401", "source": "libvirt.LibvirtDriver", "value": 0.98}, {"name": "cpu.iowait.time", "timestamp": "2020-10-13T13:50:07.846401", "source": "libvirt.LibvirtDriver", "value": 3390650000000}]',numa_topology='{"nova_object.name": "NUMATopology", "nova_object.namespace": "nova", "nova_object.version": "1.2", "nova_object.data": {"cells": [{"nova_object.name": "NUMACell", "nova_object.namespace": "nova", "nova_object.version": "1.4", "nova_object.data": {"id": 0, "cpuset": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47], "pcpuset": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47], "memory": 257653, "cpu_usage": 0, "memory_usage": 0, "pinned_cpus": [], "siblings": [[12, 36], [10, 34], [2, 26], [1, 25], [9, 33], [0, 24], [8, 32], [11, 35], [13, 37], [19, 43], [21, 45], [18, 42], [38, 14], [46, 22], [4, 28], [20, 44], [15, 39], [17, 41], [23, 47], [16, 40], [6, 30], [7, 31], [3, 27], [5, 29]], "mempages": [{"nova_object.name": "NUMAPagesTopology", "nova_object.namespace": "nova", "nova_object.version": "1.1", "nova_object.data": {"size_kb": 4, "total": 65959381, "used": 0, "reserved": 0}, "nova_object.changes": ["used", "reserved", "size_kb", "total"]}, {"nova_object.name": "NUMAPagesTopology", "nova_object.namespace": "nova", "nova_object.version": "1.1", "nova_object.data": {"size_kb": 2048, "total": 0, "used": 0, "reserved": 0}, "nova_object.changes": ["used", "reserved", "size_kb", "total"]}, {"nova_object.name": "NUMAPagesTopology", "nova_object.namespace": "nova", "nova_object.version": "1.1", "nova_object.data": {"size_kb": 1048576, "total": 0, "used": 0, "reserved": 0}, 
"nova_object.changes": ["used", "reserved", "size_kb", "total"]}], "network_metadata": {"nova_object.name": "NetworkMetadata", "nova_object.namespace": "nova", "nova_object.version": "1.0", "nova_object.data": {"physnets": [], "tunneled": false}, "nova_object.changes": ["physnets", "tunneled"]}}, "nova_object.changes": ["mempages", "cpu_usage", "memory", "memory_usage", "id", "pinned_cpus", "pcpuset", "network_metadata", "siblings", "cpuset"]}]}, "nova_object.changes": ["cells"]}',pci_device_pools=PciDevicePoolList,ram_allocation_ratio=1.2,running_vms=3,service_id=None,stats={failed_builds='0',io_workload='0',num_instances='3',num_os_type_None='3',num_proj_e06101e290f24563a836e16909146ea0='3',num_task_None='3',num_vm_active='3'},supported_hv_specs=[HVSpec,HVSpec,HVSpec,HVSpec],updated_at=2020-10-13T13:50:07Z,uuid=34993097-d180-49a0-ae3c-9111e6aa8968,vcpus=48,vcpus_used=7) _locked_update /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:172 2020-10-13 13:50:49.250 33 DEBUG nova.scheduler.host_manager [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Update host state with aggregates: [Aggregate(created_at=2020-09-08T14:01:05Z,deleted=False,deleted_at=None,hosts=['os-cpt-101','os-cpt-102','os-cpt-103'],id=3,metadata={availability_zone='az-1'},name='az-1',updated_at=None,uuid=8f37e9f3-506d-40da-8997-38915f1dfe67)] _locked_update /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:175 2020-10-13 13:50:49.251 33 DEBUG nova.scheduler.host_manager [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Update host state with service dict: {'id': 27, 'uuid': '62ea00b1-e61d-4f84-8b35-c1d3d7a333e7', 'host': 'os-cpt-103', 'binary': 'nova-compute', 'topic': 'compute', 'report_count': 294508, 'disabled': False, 'disabled_reason': None, 'last_seen_up': datetime.datetime(2020, 10, 13, 13, 50, 45, tzinfo=), 'forced_down': False, 'version': 51, 'created_at': datetime.datetime(2020, 9, 8, 9, 22, tzinfo=), 'updated_at': datetime.datetime(2020, 10, 13, 13, 50, 45, tzinfo=), 'deleted_at': None, 'deleted': False} _locked_update /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:178 2020-10-13 13:50:49.251 33 DEBUG nova.scheduler.host_manager [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Update host state with instances: ['c88b77f7-2c59-497b-9633-139e62ba2bb1', '635fbd95-dd64-4e0a-9767-a0e0a6958610', 'a02369ab-9002-4bf8-b65c-03e93cbc7df5'] _locked_update /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:182 2020-10-13 13:50:49.252 33 DEBUG oslo_concurrency.lockutils [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Lock "('os-cpt-103', 'os-cpt-103')" released by "nova.scheduler.host_manager.HostState.update.._locked_update" :: held 0.007s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:370 2020-10-13 13:50:49.253 33 DEBUG oslo_concurrency.lockutils [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Lock "('os-cpt-101', 'os-cpt-101')" acquired by "nova.scheduler.host_manager.HostState.update.._locked_update" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:358 2020-10-13 13:50:49.253 33 DEBUG nova.scheduler.host_manager 
[req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Update host state from compute node: ComputeNode(cpu_allocation_ratio=4.0,cpu_info='{"arch": "x86_64", "model": "EPYC-IBPB", "vendor": "AMD", "topology": {"cells": 1, "sockets": 1, "cores": 24, "threads": 2}, "features": ["3dnowprefetch", "fpu", "sse4.1", "fma", "arat", "avx2", "bmi1", "sse2", "extapic", "adx", "aes", "sha-ni", "tce", "osvw", "invtsc", "sse", "xsave", "de", "movbe", "pse", "pdpe1gb", "clflush", "mmx", "wdt", "cmov", "perfctr_core", "skinit", "umip", "bmi2", "cx8", "amd-ssbd", "cmp_legacy", "perfctr_nb", "rdrand", "ibpb", "monitor", "mtrr", "clflushopt", "smap", "msr", "sep", "f16c", "pat", "avx", "xsavec", "mca", "apic", "pni", "xsaves", "cr8legacy", "popcnt", "svm", "clzero", "pae", "lm", "pclmuldq", "pge", "rdseed", "xgetbv1", "sse4.2", "ht", "rdtscp", "fxsr", "lahf_lm", "vme", "sse4a", "tsc", "misalignsse", "abm", "fxsr_opt", "mce", "syscall", "ssse3", "cx16", "ibs", "smep", "fsgsbase", "topoext", "wbnoinvd", "xsaveopt", "mmxext", "nx", "pse36", "clwb"]}',created_at=2020-09-08T09:22:00Z,current_workload=0,deleted=False,deleted_at=None,disk_allocation_ratio=1.0,disk_available_least=64,free_disk_gb=79,free_ram_mb=207477,host='os-cpt-101',host_ip=172.20.1.11,hypervisor_hostname='os-cpt-101',hypervisor_type='QEMU',hypervisor_version=4002000,id=6,local_gb=99,local_gb_used=20,mapped=0,memory_mb=257653,memory_mb_used=50176,metrics='[{"name": "cpu.iowait.percent", "timestamp": "2020-10-13T13:50:21.645565", "source": "libvirt.LibvirtDriver", "value": 0.0}, {"name": "cpu.percent", "timestamp": "2020-10-13T13:50:21.645565", "source": "libvirt.LibvirtDriver", "value": 0.01}, {"name": "cpu.idle.percent", "timestamp": "2020-10-13T13:50:21.645565", "source": "libvirt.LibvirtDriver", "value": 0.98}, {"name": "cpu.frequency", "timestamp": "2020-10-13T13:50:21.645565", "source": "libvirt.LibvirtDriver", "value": 1796}, {"name": "cpu.idle.time", "timestamp": "2020-10-13T13:50:21.645565", "source": "libvirt.LibvirtDriver", "value": 139958595980000000}, {"name": "cpu.user.percent", "timestamp": "2020-10-13T13:50:21.645565", "source": "libvirt.LibvirtDriver", "value": 0.0}, {"name": "cpu.kernel.percent", "timestamp": "2020-10-13T13:50:21.645565", "source": "libvirt.LibvirtDriver", "value": 0.01}, {"name": "cpu.user.time", "timestamp": "2020-10-13T13:50:21.645565", "source": "libvirt.LibvirtDriver", "value": 388866090000000}, {"name": "cpu.kernel.time", "timestamp": "2020-10-13T13:50:21.645565", "source": "libvirt.LibvirtDriver", "value": 1118743070000000}, {"name": "cpu.iowait.time", "timestamp": "2020-10-13T13:50:21.645565", "source": "libvirt.LibvirtDriver", "value": 3637460000000}]',numa_topology='{"nova_object.name": "NUMATopology", "nova_object.namespace": "nova", "nova_object.version": "1.2", "nova_object.data": {"cells": [{"nova_object.name": "NUMACell", "nova_object.namespace": "nova", "nova_object.version": "1.4", "nova_object.data": {"id": 0, "cpuset": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47], "pcpuset": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47], "memory": 257653, "cpu_usage": 0, "memory_usage": 0, "pinned_cpus": [], "siblings": [[12, 36], [10, 34], [2, 26], [1, 25], [9, 33], [0, 
24], [8, 32], [11, 35], [13, 37], [19, 43], [21, 45], [18, 42], [38, 14], [46, 22], [4, 28], [20, 44], [15, 39], [17, 41], [23, 47], [16, 40], [6, 30], [7, 31], [3, 27], [5, 29]], "mempages": [{"nova_object.name": "NUMAPagesTopology", "nova_object.namespace": "nova", "nova_object.version": "1.1", "nova_object.data": {"size_kb": 4, "total": 65959383, "used": 0, "reserved": 0}, "nova_object.changes": ["reserved", "used", "total", "size_kb"]}, {"nova_object.name": "NUMAPagesTopology", "nova_object.namespace": "nova", "nova_object.version": "1.1", "nova_object.data": {"size_kb": 2048, "total": 0, "used": 0, "reserved": 0}, "nova_object.changes": ["reserved", "used", "total", "size_kb"]}, {"nova_object.name": "NUMAPagesTopology", "nova_object.namespace": "nova", "nova_object.version": "1.1", "nova_object.data": {"size_kb": 1048576, "total": 0, "used": 0, "reserved": 0}, "nova_object.changes": ["reserved", "used", "total", "size_kb"]}], "network_metadata": {"nova_object.name": "NetworkMetadata", "nova_object.namespace": "nova", "nova_object.version": "1.0", "nova_object.data": {"physnets": [], "tunneled": false}, "nova_object.changes": ["physnets", "tunneled"]}}, "nova_object.changes": ["id", "cpu_usage", "pcpuset", "memory", "cpuset", "siblings", "mempages", "network_metadata", "memory_usage", "pinned_cpus"]}]}, "nova_object.changes": ["cells"]}',pci_device_pools=PciDevicePoolList,ram_allocation_ratio=1.2,running_vms=4,service_id=None,stats={failed_builds='0',io_workload='0',num_instances='4',num_os_type_None='4',num_proj_05af2e78169748f69938d01b5238fc8b='1',num_proj_882ea7b2a43a41819ef796f797cdbe82='1',num_proj_d52e2dccbc7e46d8b518120fa7c8753a='1',num_proj_e06101e290f24563a836e16909146ea0='1',num_task_None='4',num_vm_active='4'},supported_hv_specs=[HVSpec,HVSpec,HVSpec,HVSpec],updated_at=2020-10-13T13:50:21Z,uuid=5ab97e91-6b03-48f3-a320-c8cbb032cd3a,vcpus=48,vcpus_used=7) _locked_update /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:172 2020-10-13 13:50:49.257 33 DEBUG nova.scheduler.host_manager [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Update host state with aggregates: [Aggregate(created_at=2020-09-08T14:01:05Z,deleted=False,deleted_at=None,hosts=['os-cpt-101','os-cpt-102','os-cpt-103'],id=3,metadata={availability_zone='az-1'},name='az-1',updated_at=None,uuid=8f37e9f3-506d-40da-8997-38915f1dfe67)] _locked_update /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:175 2020-10-13 13:50:49.257 33 DEBUG nova.scheduler.host_manager [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Update host state with service dict: {'id': 21, 'uuid': 'c178b502-9b8c-43f3-9053-e3d0e2405c89', 'host': 'os-cpt-101', 'binary': 'nova-compute', 'topic': 'compute', 'report_count': 294504, 'disabled': False, 'disabled_reason': None, 'last_seen_up': datetime.datetime(2020, 10, 13, 13, 50, 43, tzinfo=), 'forced_down': False, 'version': 51, 'created_at': datetime.datetime(2020, 9, 8, 9, 22, tzinfo=), 'updated_at': datetime.datetime(2020, 10, 13, 13, 50, 43, tzinfo=), 'deleted_at': None, 'deleted': False} _locked_update /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:178 2020-10-13 13:50:49.258 33 DEBUG nova.scheduler.host_manager [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Update host state with instances: 
['96f4c101-d75b-44b6-8af9-dd75068d207d', '121c5df5-2d8b-48e0-a270-9f6f79193dd6', 'e235063d-f7a8-4f38-8294-d7a289167348', '425b53c3-f6f5-43e9-beeb-25c9b6ead1f9'] _locked_update /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:182 2020-10-13 13:50:49.258 33 DEBUG oslo_concurrency.lockutils [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Lock "('os-cpt-101', 'os-cpt-101')" released by "nova.scheduler.host_manager.HostState.update.._locked_update" :: held 0.005s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:370 2020-10-13 13:50:49.259 33 DEBUG oslo_concurrency.lockutils [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Lock "('os-cpt-102', 'os-cpt-102')" acquired by "nova.scheduler.host_manager.HostState.update.._locked_update" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:358 2020-10-13 13:50:49.260 33 DEBUG nova.scheduler.host_manager [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Update host state from compute node: ComputeNode(cpu_allocation_ratio=4.0,cpu_info='{"arch": "x86_64", "model": "EPYC-IBPB", "vendor": "AMD", "topology": {"cells": 1, "sockets": 1, "cores": 24, "threads": 2}, "features": ["lahf_lm", "clflush", "osvw", "extapic", "vme", "wdt", "monitor", "msr", "adx", "pse36", "sse4a", "fma", "pat", "mce", "sse2", "nx", "f16c", "mca", "xsaveopt", "avx", "syscall", "rdrand", "clwb", "ssse3", "xsavec", "invtsc", "bmi2", "fpu", "movbe", "aes", "bmi1", "cr8legacy", "mmxext", "ibpb", "amd-ssbd", "skinit", "topoext", "umip", "cmp_legacy", "arat", "lm", "svm", "fxsr", "ibs", "pae", "misalignsse", "mtrr", "sep", "ht", "smap", "xgetbv1", "clzero", "pdpe1gb", "apic", "abm", "pge", "pni", "tsc", "xsaves", "wbnoinvd", "sse4.2", "cmov", "cx8", "pse", "rdtscp", "cx16", "sse", "sse4.1", "fxsr_opt", "popcnt", "sha-ni", "perfctr_core", "fsgsbase", "avx2", "mmx", "rdseed", "clflushopt", "pclmuldq", "perfctr_nb", "smep", "de", "3dnowprefetch", "xsave", "tce"]}',created_at=2020-09-08T09:21:58Z,current_workload=0,deleted=False,deleted_at=None,disk_allocation_ratio=1.0,disk_available_least=93,free_disk_gb=99,free_ram_mb=216693,host='os-cpt-102',host_ip=172.20.1.12,hypervisor_hostname='os-cpt-102',hypervisor_type='QEMU',hypervisor_version=4002000,id=3,local_gb=99,local_gb_used=0,mapped=0,memory_mb=257653,memory_mb_used=40960,metrics='[{"name": "cpu.user.time", "timestamp": "2020-10-13T13:50:03.450583", "source": "libvirt.LibvirtDriver", "value": 161246250000000}, {"name": "cpu.kernel.percent", "timestamp": "2020-10-13T13:50:03.450583", "source": "libvirt.LibvirtDriver", "value": 0.0}, {"name": "cpu.user.percent", "timestamp": "2020-10-13T13:50:03.450583", "source": "libvirt.LibvirtDriver", "value": 0.0}, {"name": "cpu.idle.percent", "timestamp": "2020-10-13T13:50:03.450583", "source": "libvirt.LibvirtDriver", "value": 0.99}, {"name": "cpu.percent", "timestamp": "2020-10-13T13:50:03.450583", "source": "libvirt.LibvirtDriver", "value": 0.0}, {"name": "cpu.idle.time", "timestamp": "2020-10-13T13:50:03.450583", "source": "libvirt.LibvirtDriver", "value": 140551238730000000}, {"name": "cpu.frequency", "timestamp": "2020-10-13T13:50:03.450583", "source": "libvirt.LibvirtDriver", "value": 3343}, {"name": "cpu.kernel.time", "timestamp": "2020-10-13T13:50:03.450583", "source": "libvirt.LibvirtDriver", 
"value": 366274340000000}, {"name": "cpu.iowait.time", "timestamp": "2020-10-13T13:50:03.450583", "source": "libvirt.LibvirtDriver", "value": 3829690000000}, {"name": "cpu.iowait.percent", "timestamp": "2020-10-13T13:50:03.450583", "source": "libvirt.LibvirtDriver", "value": 0.0}]',numa_topology='{"nova_object.name": "NUMATopology", "nova_object.namespace": "nova", "nova_object.version": "1.2", "nova_object.data": {"cells": [{"nova_object.name": "NUMACell", "nova_object.namespace": "nova", "nova_object.version": "1.4", "nova_object.data": {"id": 0, "cpuset": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47], "pcpuset": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47], "memory": 257653, "cpu_usage": 0, "memory_usage": 0, "pinned_cpus": [], "siblings": [[12, 36], [10, 34], [2, 26], [1, 25], [9, 33], [0, 24], [8, 32], [11, 35], [13, 37], [19, 43], [21, 45], [18, 42], [38, 14], [46, 22], [4, 28], [20, 44], [15, 39], [17, 41], [23, 47], [16, 40], [6, 30], [7, 31], [3, 27], [5, 29]], "mempages": [{"nova_object.name": "NUMAPagesTopology", "nova_object.namespace": "nova", "nova_object.version": "1.1", "nova_object.data": {"size_kb": 4, "total": 65959382, "used": 0, "reserved": 0}, "nova_object.changes": ["total", "used", "reserved", "size_kb"]}, {"nova_object.name": "NUMAPagesTopology", "nova_object.namespace": "nova", "nova_object.version": "1.1", "nova_object.data": {"size_kb": 2048, "total": 0, "used": 0, "reserved": 0}, "nova_object.changes": ["total", "used", "reserved", "size_kb"]}, {"nova_object.name": "NUMAPagesTopology", "nova_object.namespace": "nova", "nova_object.version": "1.1", "nova_object.data": {"size_kb": 1048576, "total": 0, "used": 0, "reserved": 0}, "nova_object.changes": ["total", "used", "reserved", "size_kb"]}], "network_metadata": {"nova_object.name": "NetworkMetadata", "nova_object.namespace": "nova", "nova_object.version": "1.0", "nova_object.data": {"physnets": [], "tunneled": false}, "nova_object.changes": ["physnets", "tunneled"]}}, "nova_object.changes": ["cpu_usage", "pinned_cpus", "mempages", "memory", "siblings", "network_metadata", "cpuset", "memory_usage", "id", "pcpuset"]}]}, "nova_object.changes": ["cells"]}',pci_device_pools=PciDevicePoolList,ram_allocation_ratio=1.2,running_vms=0,service_id=None,stats={failed_builds='0'},supported_hv_specs=[HVSpec,HVSpec,HVSpec,HVSpec],updated_at=2020-10-13T13:50:03Z,uuid=81c2e888-56e4-4469-ab93-f204fa85a5c5,vcpus=48,vcpus_used=2) _locked_update /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:172 2020-10-13 13:50:49.263 33 DEBUG nova.scheduler.host_manager [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Update host state with aggregates: [Aggregate(created_at=2020-09-08T14:01:05Z,deleted=False,deleted_at=None,hosts=['os-cpt-101','os-cpt-102','os-cpt-103'],id=3,metadata={availability_zone='az-1'},name='az-1',updated_at=None,uuid=8f37e9f3-506d-40da-8997-38915f1dfe67)] _locked_update /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:175 2020-10-13 13:50:49.264 33 DEBUG nova.scheduler.host_manager [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Update host state with service dict: 
{'id': 18, 'uuid': '342a2d9b-4b32-45a2-ba16-b808343b3b8f', 'host': 'os-cpt-102', 'binary': 'nova-compute', 'topic': 'compute', 'report_count': 294494, 'disabled': False, 'disabled_reason': None, 'last_seen_up': datetime.datetime(2020, 10, 13, 13, 50, 49, tzinfo=), 'forced_down': False, 'version': 51, 'created_at': datetime.datetime(2020, 9, 8, 9, 21, 58, tzinfo=), 'updated_at': datetime.datetime(2020, 10, 13, 13, 50, 49, tzinfo=), 'deleted_at': None, 'deleted': False} _locked_update /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:178 2020-10-13 13:50:49.264 33 DEBUG nova.scheduler.host_manager [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Update host state with instances: [] _locked_update /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:182 2020-10-13 13:50:49.265 33 DEBUG oslo_concurrency.lockutils [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Lock "('os-cpt-102', 'os-cpt-102')" released by "nova.scheduler.host_manager.HostState.update.._locked_update" :: held 0.005s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:370 2020-10-13 13:50:49.266 33 DEBUG nova.filters [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Starting with 3 host(s) get_filtered_objects /usr/lib/python3.6/site-packages/nova/filters.py:70 2020-10-13 13:50:49.266 33 DEBUG nova.scheduler.filters.availability_zone_filter [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Availability Zone 'az-2' requested. (os-cpt-101, os-cpt-101) ram: 207477MB disk: 65536MB io_ops: 0 instances: 4 has AZs: {'az-1'} host_passes /usr/lib/python3.6/site-packages/nova/scheduler/filters/availability_zone_filter.py:61 2020-10-13 13:50:49.267 33 DEBUG nova.scheduler.filters.availability_zone_filter [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Availability Zone 'az-2' requested. (os-cpt-102, os-cpt-102) ram: 216693MB disk: 95232MB io_ops: 0 instances: 0 has AZs: {'az-1'} host_passes /usr/lib/python3.6/site-packages/nova/scheduler/filters/availability_zone_filter.py:61 2020-10-13 13:50:49.267 33 DEBUG nova.scheduler.filters.availability_zone_filter [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Availability Zone 'az-2' requested. (os-cpt-103, os-cpt-103) ram: 206453MB disk: 96256MB io_ops: 0 instances: 3 has AZs: {'az-1'} host_passes /usr/lib/python3.6/site-packages/nova/scheduler/filters/availability_zone_filter.py:61 2020-10-13 13:50:49.268 33 INFO nova.filters [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Filter AvailabilityZoneFilter returned 0 hosts 2020-10-13 13:50:49.269 33 DEBUG nova.filters [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Filtering removed all hosts for the request with instance ID 'd18bf174-9ccb-4721-b7d8-36cd31834f62'. 
Filter results: [('AvailabilityZoneFilter', None)] get_filtered_objects /usr/lib/python3.6/site-packages/nova/filters.py:115 2020-10-13 13:50:49.269 33 INFO nova.filters [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Filtering removed all hosts for the request with instance ID 'd18bf174-9ccb-4721-b7d8-36cd31834f62'. Filter results: ['AvailabilityZoneFilter: (start: 3, end: 0)'] 2020-10-13 13:50:49.270 33 DEBUG nova.scheduler.filter_scheduler [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] Filtered [] _get_sorted_hosts /usr/lib/python3.6/site-packages/nova/scheduler/filter_scheduler.py:443 2020-10-13 13:50:49.270 33 DEBUG nova.scheduler.filter_scheduler [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - default default] There are 0 hosts available but 1 instances requested to build. _ensure_sufficient_hosts /usr/lib/python3.6/site-packages/nova/scheduler/filter_scheduler.py:300 ... So nova seems to only consider compute nodes in az-1. The amphora image is available in both availability zones. Starting an instance from the image directly in az-2 works. Does anybody have any idea on how to debug this any further? Any hints, tips or ideas would be greatly appreciated! With kind regards, Björn Puttmann From hberaud at redhat.com Tue Oct 13 16:13:28 2020 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 13 Oct 2020 18:13:28 +0200 Subject: [oslo] Project leadership In-Reply-To: <2710df64-1e16-631f-640b-48b6d81769bc@openstack.org> References: <328f380e-c16a-7fd4-a1fd-154b07ede01d@nemebean.com> <2710df64-1e16-631f-640b-48b6d81769bc@openstack.org> Message-ID: Hello, According to our latest meeting/discussions I submitted a patch [1] to move oslo under the DPL governance model. Indeed a few weeks ago Ben officially announced his stepping down from its role of oslo PTL [2], at this point nobody volunteered to become the new PTL of oslo during Wallaby. Accordingly to our latest discussions [3] and our latest meeting [4] we decided to adopt the DPL governance model [5]. Accordingly to this governance model we assigned the required roles [6][7]. During the lastest oslo meeting we decided to create groups of paired liaison [7], especially for release liaison role and the TaCT SIG liaison role [6][7]. To continue the DPL process this patch [1] should be validated by the current PTL (Ben) and all liaison people [7]. Final note, I would like to personally thank Ben for assuming the PTL role during the previous cycles and for the works he have done at this position and for oslo in general, especially during the previous cycle where he was a volunteer even if oslo wasn't its main topic at work. Best regards [1] https://review.opendev.org/757906 [2] http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017491.html [3] http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017692.html [4] http://eavesdrop.openstack.org/meetings/oslo/2020/oslo.2020-10-12-15.00.log.txt [5] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html [6] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html#required-roles [7] https://wiki.openstack.org/wiki/Oslo#Project_Leadership_Liaisons Le jeu. 1 oct. 
2020 à 11:54, Thierry Carrez a écrit : > Ben Nemec wrote: > > The general consensus from the people I've talked to seems to be a > > distributed model. To kick off that discussion, here's a list of roles > > that I think should be filled in some form: > > > > * Release liaison > > * Security point-of-contact > > * TC liaison > > * Cross-project point-of-contact > > * PTG/Forum coordinator > > * Meeting chair > > * Community goal liaison (almost forgot this one since I haven't > > actually been doing it ;-). > > Note that only three roles absolutely need to be filled, per the TC > resolution[1]: > > - Release liaison > - tact-sig liaison (historically named the “infra Liaison”) > - Security point of contact > > The others are just recommended. > > [1] > > https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html > > -- > Thierry > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From abishop at redhat.com Tue Oct 13 16:50:24 2020 From: abishop at redhat.com (Alan Bishop) Date: Tue, 13 Oct 2020 09:50:24 -0700 Subject: [Openstack][Cinder] In-Reply-To: References: Message-ID: On Sun, Oct 11, 2020 at 11:20 PM dangerzone ar wrote: > Hi Team, May I know if anyone has deployed Huawei oceanstor SAN storage > during overcloud deployment? > Hi, Your use of the term "overcloud" suggests you are using TripleO. My response assumes that is true, and you should probably ignore my response if you're not using TripleO. I have not deployed TripleO with a Huawei SAN for the cinder backend, but it should be possible. > (i) Do you need a specific driver to define or download in order to deploy > it? > See [1] for many details related to deploying the Huawei cinder driver. [1] https://docs.openstack.org/cinder/rocky/configuration/block-storage/drivers/huawei-storage-driver.html > (ii) Can I deploy separately SAN storage after overcloud deployment, I > mean after a month of openstack deployment i want to add SAN storage to my > infrastructure. Is it possible? > The short answer is yes. TripleO has the ability to deploy additional cinder storage backends via what's known as a stack (i.e. the overcloud) update. The initial overcloud deployment can be done using another cinder backend X, and later you can add a Huawei backend so the overcloud has two backends (X + Huawei). > (iii) Please advise me how to deploy Huawei oceanstor SAN storage. > > TripleO does not have specific support for deploying a Huawei SAN, but you can still deploy one by following [2]. That doc describes the technique for how to deploy the Huawei SAN as a "custom" block storage device. 
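Just to sketch the shape of it (untested, and the backend name, driver class and option values below are assumptions taken from [1] that you would need to verify against your array and your release):

parameter_defaults:
  ControllerExtraConfig:
    cinder_user_enabled_backends: ['huawei1']
    cinder::config::cinder_config:
      huawei1/volume_driver:
        value: cinder.volume.drivers.huawei.huawei_driver.HuaweiISCSIDriver
      huawei1/cinder_huawei_conf_file:
        value: /etc/cinder/cinder_huawei_conf.xml
      huawei1/volume_backend_name:
        value: huawei1

For FC you would point volume_driver at the FC variant of the driver instead.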
The document provides an example for deploying two NetApp backends, but the concept will be the same for you to deploy a single Huawei backend. The key will be crafting the TripleO environment file to configure the Huawei settings described in [1]. [2] https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/cinder_custom_backend.html One additional thing to note is that I see the Huawei driver requires access to an XML file that contains additional configuration settings. You'll need to get this file onto the overcloud node(s) where the cinder-volume service runs. And if your overcloud services run in containers (as modern TripleO releases do), then you'll need to provide a way for the containerized cinder-volume service to have access to the XML file. Fortunately this can be achieved using a TripleO parameter by including something like this in one of the overcloud deployment's env file: parameter_defaults: CinderVolumeOptVolumes: - /etc/cinder/cinder_huawei_conf.xml:/etc/cinder/cinder_huawei_conf.xml:ro That will allow the /etc/cinder/cinder_huawei_conf.xml file installed on the overcloud host to be visible to the cinder-volume service running inside a container. Alan > Please advise further. Thank you > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From abishop at redhat.com Tue Oct 13 16:52:46 2020 From: abishop at redhat.com (Alan Bishop) Date: Tue, 13 Oct 2020 09:52:46 -0700 Subject: [Openstack][Cinder] In-Reply-To: References: Message-ID: On Tue, Oct 13, 2020 at 9:50 AM Alan Bishop wrote: > > > On Sun, Oct 11, 2020 at 11:20 PM dangerzone ar > wrote: > >> Hi Team, May I know if anyone has deployed Huawei oceanstor SAN storage >> during overcloud deployment? >> > > Hi, > > Your use of the term "overcloud" suggests you are using TripleO. My > response assumes that is true, and you should probably ignore my response > if you're not using TripleO. > > I have not deployed TripleO with a Huawei SAN for the cinder backend, but > it should be possible. > > >> (i) Do you need a specific driver to define or download in order to >> deploy it? >> > > See [1] for many details related to deploying the Huawei cinder driver. > > [1] > https://docs.openstack.org/cinder/rocky/configuration/block-storage/drivers/huawei-storage-driver.html > That's the Rocky documentation. I don't know if the documentation has changed, but here's a link to the latest version: https://docs.openstack.org/cinder/latest/configuration/block-storage/drivers/huawei-storage-driver.html > >> (ii) Can I deploy separately SAN storage after overcloud deployment, I >> mean after a month of openstack deployment i want to add SAN storage to my >> infrastructure. Is it possible? >> > > The short answer is yes. TripleO has the ability to deploy additional > cinder storage backends via what's known as a stack (i.e. the overcloud) > update. The initial overcloud deployment can be done using another cinder > backend X, and later you can add a Huawei backend so the overcloud has two > backends (X + Huawei). > > >> (iii) Please advise me how to deploy Huawei oceanstor SAN storage. >> >> > TripleO does not have specific support for deploying a Huawei SAN, but you > can still deploy one by following [2]. That doc describes the technique for > how to deploy the Huawei SAN as a "custom" block storage device. The > document provides an example for deploying two NetApp backends, but the > concept will be the same for you to deploy a single Huawei backend. 
The key > will be crafting the TripleO environment file to configure the Huawei > settings described in [1]. > > [2] > https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/cinder_custom_backend.html > > One additional thing to note is that I see the Huawei driver requires > access to an XML file that contains additional configuration settings. > You'll need to get this file onto the overcloud node(s) where the > cinder-volume service runs. And if your overcloud services run in > containers (as modern TripleO releases do), then you'll need to provide a > way for the containerized cinder-volume service to have access to the XML > file. Fortunately this can be achieved using a TripleO parameter by > including something like this in one of the overcloud deployment's env file: > > parameter_defaults: > CinderVolumeOptVolumes: > - > /etc/cinder/cinder_huawei_conf.xml:/etc/cinder/cinder_huawei_conf.xml:ro > > That will allow the /etc/cinder/cinder_huawei_conf.xml file installed on > the overcloud host to be visible to the cinder-volume service running > inside a container. > > Alan > > >> Please advise further. Thank you >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Tue Oct 13 20:48:30 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 13 Oct 2020 16:48:30 -0400 Subject: [cinder][manila][swift] Edge discussions at the upcoming PTG In-Reply-To: <7BE99174-AC9B-4889-A86A-CC5A647C6353@gmail.com> References: <7BE99174-AC9B-4889-A86A-CC5A647C6353@gmail.com> Message-ID: On 10/7/20 11:27 AM, Ildiko Vancsa wrote: > Hi, > > We’ve started to have discussions in the area of object storage needs and solutions for edge use cases at the last PTG in June. I’m reaching out with the intention to continue this chat at the upcoming PTG in a few weeks. > > The OSF Edge Computing Group is meeting during the first three days of the PTG like last time. We are planning to have edge reference architecture models and testing type of discussions in the first two days (October 26-27) and have a cross-project and cross-community day on Wednesday (October 28). We would like to have a dedicated section for storage either on Monday or Tuesday. > > I think it might also be time to revisit other storage options as well if there’s interest. > > What do people think? I asked around the Cinder community a bit, and we don't have any particular topics to discuss at this point. But if you scheduled the storage discussion on Monday, some of us would be interested in attending just to hear what edge people are currently talking about storage-wise. (Cinder is meeting Tuesday-Friday.) If something does come up that the Edge group would like to talk over with the Cinder team, we can make time for that on Wednesday. cheers, brian > > For reference: > * Our planning etherpad is here: https://etherpad.opendev.org/p/ecg-vptg-october-2020 > * Notes from the previous PTG is here: https://etherpad.opendev.org/p/ecg_virtual_ptg_planning_june_2020 > > Thanks, > Ildikó > > > From mnaser at vexxhost.com Tue Oct 13 21:24:40 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 13 Oct 2020 17:24:40 -0400 Subject: [tc] weekly update Message-ID: Hi everyone, Here's an update for what happened in the OpenStack TC this week. You can get more information by checking for changes in openstack/governance repository. 
# Patches ## Open Reviews - Adopting the DPL governance model for oslo https://review.opendev.org/757906 - Add election schedule exceptions in charter https://review.opendev.org/751941 - Clarify impact on releases for SIGs https://review.opendev.org/752699 - Add assert:supports-standalone https://review.opendev.org/722399 - Select Prvisep as the Wallaby Goal https://review.opendev.org/755590 ## Project Updates - Add Ironic charms to OpenStack charms https://review.opendev.org/754099 # Other Reminders - PTG Brainstorming: https://etherpad.opendev.org/p/tc-wallaby-ptg - PTG Registration: https://october2020ptg.eventbrite.com Thanks for reading! Mohammed & Kendall -- Mohammed Naser VEXXHOST, Inc. From johnsomor at gmail.com Tue Oct 13 22:06:20 2020 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 13 Oct 2020 15:06:20 -0700 Subject: Problem with octavia LBaaS and nova availability zones In-Reply-To: <468d79df2c8ba8f3b9ae29c515f231e8@netprojects.de> References: <468d79df2c8ba8f3b9ae29c515f231e8@netprojects.de> Message-ID: Hi Björn, Yeah, I don't see a reason in that nova log snippet, so I can 't point to the exact cause. It might be higher in the logs than the snippet included. That said, it might be that the AZ definition in the AZ profile does not include the appropriate lb-mgmt-net ID for az-2. Without that defined, Octavia will use the default lb-mgmt-net ID from the configuration file, which is likely only available in az-1. I would try defining the management_network and valid_vip_networks for az-2. Michael On Tue, Oct 13, 2020 at 8:48 AM wrote: > > Hi one and all! > > Currently, we have an ussuri openstack installation via kolla-ansible. > The installation consists of three control nodes and two availability > zones (az-1, az-2), each with three compute nodes: > - os-cpt-10[1-3] > az-1 > - os-cpt-20[1-3] > az-2 > > az-1 is the default compute availability zone. > > The compute nodes are hardwarewise equipped exactly the same. > As storage system a ceph cluster is used. > > We wanted to also use Octavia LBaaS and got it up and running after some > experimentation. > We also wanted to be able to choose the availibility zones when starting > a loadbalancer. We added the az info with: > > openstack --os-cloud service_octavia loadbalancer > availabilityzoneprofile create --name az-1 --provider amphora > --availability-zone-data '{"compute_zone": "az-1"}' > openstack --os-cloud service_octavia loadbalancer > availabilityzoneprofile create --name az-2 --provider amphora > --availability-zone-data '{"compute_zone": "az-2"}' > openstack --os-cloud service_octavia loadbalancer availabilityzone > create --name az-1 --availabilityzoneprofile az-1 > openstack --os-cloud service_octavia loadbalancer availabilityzone > create --name az-2 --availabilityzoneprofile az-2 > > Creating a loadbalancer via cli: > > openstack --os-cloud $LB_PROJECT loadbalancer create --flavor $LB_FLAVOR > --name $LB_NAME --vip-subnet-id $(openstack --os-cloud $LB_PROJECT > subnet list --name $LB_SUBNET -f value -c ID) --availability-zone $LB_AZ > > works, as long as $LB_AZ == az-1. > > If we want to start the loadbalancer in az-2, this fails with an error > in octavia-worker.log: > ... > 2020-09-30 07:21:01.482 34 ERROR oslo_messaging.rpc.server > octavia.common.exceptions.ComputeBuildException: Failed to build compute > instance due to: {'code': 500, 'created': '2020-09-30T07:20:55Z', > 'message': 'No valid host was found. 
There are not enough hosts > available.', 'details': 'Traceback (most recent call last):\n File > "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 1463, > in schedule_and_build_instances\n instance_uuids, > return_alternates=True)\n File > "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 870, > in _schedule_instances\n return_alternates=return_alternates)\n File > "/usr/lib/python3.6/site-packages/nova/scheduler/client/query.py", line > 42, in select_destinations\n instance_uuids, return_objects, > return_alternates)\n File > "/usr/lib/python3.6/site-packages/nova/scheduler/rpcapi.py", line 160, > in select_destinations\n return cctxt.call(ctxt, > \'select_destinations\', **msg_args)\n File > "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/client.py", line > 181, in call\n transport_options=self.transport_options)\n File > "/usr/lib/python3.6/site-packages/oslo_messaging/transport.py", line > 129, in _send\n transport_options=transport_options)\n File > "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", > line 654, in send\n transport_options=transport_options)\n File > "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", > line 644, in _send\n raise > result\nnova.exception_Remote.NoValidHost_Remote: No valid host was > found. There are not enough hosts available.\nTraceback (most recent > call last):\n\n File > "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line > 241, in inner\n return func(*args, **kwargs)\n\n File > "/usr/lib/python3.6/site-packages/nova/scheduler/manager.py", line 215, > in select_destinations\n allocation_request_version, > return_alternates)\n\n File > "/usr/lib/python3.6/site-packages/nova/scheduler/filter_scheduler.py", > line 96, in select_destinations\n allocation_request_version, > return_alternates)\n\n File > "/usr/lib/python3.6/site-packages/nova/scheduler/filter_scheduler.py", > line 265, in _schedule\n claimed_instance_uuids)\n\n File > "/usr/lib/python3.6/site-packages/nova/scheduler/filter_scheduler.py", > line 302, in _ensure_sufficient_hosts\n raise > exception.NoValidHost(reason=reason)\n\nnova.exception.NoValidHost: No > valid host was found. There are not enough hosts available.\n\n'} > ... > > We enabled debug logging for nova and found: > > ... 
> 2020-10-13 13:50:49.193 33 DEBUG oslo_concurrency.lockutils > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Lock "7b9846a9-f8be-4d60-92ad-0fb531e48e64" acquired by > "nova.context.set_target_cell..get_or_set_cached_cell_and_set_connections" > :: waited 0.000s inner > /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:358 > 2020-10-13 13:50:49.196 33 DEBUG oslo_concurrency.lockutils > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Lock "7b9846a9-f8be-4d60-92ad-0fb531e48e64" released by > "nova.context.set_target_cell..get_or_set_cached_cell_and_set_connections" > :: held 0.003s inner > /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:370 > 2020-10-13 13:50:49.217 33 DEBUG oslo_db.sqlalchemy.engines > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] MySQL server mode set to > STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION > _check_effective_sql_mode > /usr/lib/python3.6/site-packages/oslo_db/sqlalchemy/engines.py:304 > 2020-10-13 13:50:49.245 33 DEBUG oslo_concurrency.lockutils > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Lock "('os-cpt-103', 'os-cpt-103')" acquired by > "nova.scheduler.host_manager.HostState.update.._locked_update" > :: waited 0.000s inner > /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:358 > 2020-10-13 13:50:49.246 33 DEBUG nova.scheduler.host_manager > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Update host state from compute node: > ComputeNode(cpu_allocation_ratio=4.0,cpu_info='{"arch": "x86_64", > "model": "EPYC-IBPB", "vendor": "AMD", "topology": {"cells": 1, > "sockets": 1, "cores": 24, "threads": 2}, "features": ["bmi1", "smep", > "sha-ni", "fsgsbase", "adx", "cmov", "invtsc", "fpu", "mmx", "sep", > "abm", "pni", "msr", "xsavec", "f16c", "fma", "nx", "pat", "sse4.1", > "rdrand", "wbnoinvd", "vme", "lahf_lm", "cr8legacy", "xsave", "bmi2", > "clzero", "mtrr", "arat", "amd-ssbd", "aes", "avx", "avx2", "cx8", > "umip", "de", "ibpb", "misalignsse", "osvw", "perfctr_core", "pse36", > "mce", "skinit", "syscall", "sse2", "apic", "fxsr_opt", "pse", "ht", > "pdpe1gb", "rdtscp", "xsaves", "clflushopt", "pclmuldq", "sse4.2", > "movbe", "smap", "ibs", "clwb", "xgetbv1", "sse4a", "cx16", "extapic", > "wdt", "perfctr_nb", "tsc", "mca", "topoext", "pae", "fxsr", "lm", > "cmp_legacy", "monitor", "3dnowprefetch", "ssse3", "pge", "popcnt", > "rdseed", "mmxext", "tce", "clflush", "xsaveopt", "svm", > "sse"]}',created_at=2020-09-08T09:22:00Z,current_workload=0,deleted=False,deleted_at=None,disk_allocation_ratio=1.0,disk_available_least=94,free_disk_gb=99,free_ram_mb=206453,host='os-cpt-103',host_ip=172.20.1.13,hypervisor_hostname='os-cpt-103',hypervisor_type='QEMU',hypervisor_version=4002000,id=9,local_gb=99,local_gb_used=0,mapped=0,memory_mb=257653,memory_mb_used=51200,metrics='[{"name": > "cpu.user.percent", "timestamp": "2020-10-13T13:50:07.846401", "source": > "libvirt.LibvirtDriver", "value": 0.0}, {"name": "cpu.kernel.percent", > "timestamp": "2020-10-13T13:50:07.846401", "source": > "libvirt.LibvirtDriver", "value": 
0.01}, {"name": "cpu.iowait.percent", > "timestamp": "2020-10-13T13:50:07.846401", "source": > "libvirt.LibvirtDriver", "value": 0.0}, {"name": "cpu.kernel.time", > "timestamp": "2020-10-13T13:50:07.846401", "source": > "libvirt.LibvirtDriver", "value": 1373295770000000}, {"name": > "cpu.percent", "timestamp": "2020-10-13T13:50:07.846401", "source": > "libvirt.LibvirtDriver", "value": 0.01}, {"name": "cpu.frequency", > "timestamp": "2020-10-13T13:50:07.846401", "source": > "libvirt.LibvirtDriver", "value": 1796}, {"name": "cpu.user.time", > "timestamp": "2020-10-13T13:50:07.846401", "source": > "libvirt.LibvirtDriver", "value": 386827280000000}, {"name": > "cpu.idle.time", "timestamp": "2020-10-13T13:50:07.846401", "source": > "libvirt.LibvirtDriver", "value": 139696145240000000}, {"name": > "cpu.idle.percent", "timestamp": "2020-10-13T13:50:07.846401", "source": > "libvirt.LibvirtDriver", "value": 0.98}, {"name": "cpu.iowait.time", > "timestamp": "2020-10-13T13:50:07.846401", "source": > "libvirt.LibvirtDriver", "value": > 3390650000000}]',numa_topology='{"nova_object.name": "NUMATopology", > "nova_object.namespace": "nova", "nova_object.version": "1.2", > "nova_object.data": {"cells": [{"nova_object.name": "NUMACell", > "nova_object.namespace": "nova", "nova_object.version": "1.4", > "nova_object.data": {"id": 0, "cpuset": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, > 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, > 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, > 46, 47], "pcpuset": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, > 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, > 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47], "memory": > 257653, "cpu_usage": 0, "memory_usage": 0, "pinned_cpus": [], > "siblings": [[12, 36], [10, 34], [2, 26], [1, 25], [9, 33], [0, 24], [8, > 32], [11, 35], [13, 37], [19, 43], [21, 45], [18, 42], [38, 14], [46, > 22], [4, 28], [20, 44], [15, 39], [17, 41], [23, 47], [16, 40], [6, 30], > [7, 31], [3, 27], [5, 29]], "mempages": [{"nova_object.name": > "NUMAPagesTopology", "nova_object.namespace": "nova", > "nova_object.version": "1.1", "nova_object.data": {"size_kb": 4, > "total": 65959381, "used": 0, "reserved": 0}, "nova_object.changes": > ["used", "reserved", "size_kb", "total"]}, {"nova_object.name": > "NUMAPagesTopology", "nova_object.namespace": "nova", > "nova_object.version": "1.1", "nova_object.data": {"size_kb": 2048, > "total": 0, "used": 0, "reserved": 0}, "nova_object.changes": ["used", > "reserved", "size_kb", "total"]}, {"nova_object.name": > "NUMAPagesTopology", "nova_object.namespace": "nova", > "nova_object.version": "1.1", "nova_object.data": {"size_kb": 1048576, > "total": 0, "used": 0, "reserved": 0}, "nova_object.changes": ["used", > "reserved", "size_kb", "total"]}], "network_metadata": > {"nova_object.name": "NetworkMetadata", "nova_object.namespace": "nova", > "nova_object.version": "1.0", "nova_object.data": {"physnets": [], > "tunneled": false}, "nova_object.changes": ["physnets", "tunneled"]}}, > "nova_object.changes": ["mempages", "cpu_usage", "memory", > "memory_usage", "id", "pinned_cpus", "pcpuset", "network_metadata", > "siblings", "cpuset"]}]}, "nova_object.changes": > 
["cells"]}',pci_device_pools=PciDevicePoolList,ram_allocation_ratio=1.2,running_vms=3,service_id=None,stats={failed_builds='0',io_workload='0',num_instances='3',num_os_type_None='3',num_proj_e06101e290f24563a836e16909146ea0='3',num_task_None='3',num_vm_active='3'},supported_hv_specs=[HVSpec,HVSpec,HVSpec,HVSpec],updated_at=2020-10-13T13:50:07Z,uuid=34993097-d180-49a0-ae3c-9111e6aa8968,vcpus=48,vcpus_used=7) > _locked_update > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:172 > 2020-10-13 13:50:49.250 33 DEBUG nova.scheduler.host_manager > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Update host state with aggregates: > [Aggregate(created_at=2020-09-08T14:01:05Z,deleted=False,deleted_at=None,hosts=['os-cpt-101','os-cpt-102','os-cpt-103'],id=3,metadata={availability_zone='az-1'},name='az-1',updated_at=None,uuid=8f37e9f3-506d-40da-8997-38915f1dfe67)] > _locked_update > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:175 > 2020-10-13 13:50:49.251 33 DEBUG nova.scheduler.host_manager > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Update host state with service dict: {'id': 27, 'uuid': > '62ea00b1-e61d-4f84-8b35-c1d3d7a333e7', 'host': 'os-cpt-103', 'binary': > 'nova-compute', 'topic': 'compute', 'report_count': 294508, 'disabled': > False, 'disabled_reason': None, 'last_seen_up': datetime.datetime(2020, > 10, 13, 13, 50, 45, tzinfo=), 'forced_down': False, > 'version': 51, 'created_at': datetime.datetime(2020, 9, 8, 9, 22, > tzinfo=), 'updated_at': datetime.datetime(2020, 10, 13, 13, > 50, 45, tzinfo=), 'deleted_at': None, 'deleted': False} > _locked_update > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:178 > 2020-10-13 13:50:49.251 33 DEBUG nova.scheduler.host_manager > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Update host state with instances: > ['c88b77f7-2c59-497b-9633-139e62ba2bb1', > '635fbd95-dd64-4e0a-9767-a0e0a6958610', > 'a02369ab-9002-4bf8-b65c-03e93cbc7df5'] _locked_update > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:182 > 2020-10-13 13:50:49.252 33 DEBUG oslo_concurrency.lockutils > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Lock "('os-cpt-103', 'os-cpt-103')" released by > "nova.scheduler.host_manager.HostState.update.._locked_update" > :: held 0.007s inner > /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:370 > 2020-10-13 13:50:49.253 33 DEBUG oslo_concurrency.lockutils > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Lock "('os-cpt-101', 'os-cpt-101')" acquired by > "nova.scheduler.host_manager.HostState.update.._locked_update" > :: waited 0.000s inner > /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:358 > 2020-10-13 13:50:49.253 33 DEBUG nova.scheduler.host_manager > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Update host state from compute node: > ComputeNode(cpu_allocation_ratio=4.0,cpu_info='{"arch": "x86_64", > "model": "EPYC-IBPB", "vendor": "AMD", "topology": {"cells": 1, > "sockets": 1, "cores": 24, "threads": 2}, "features": ["3dnowprefetch", > "fpu", 
"sse4.1", "fma", "arat", "avx2", "bmi1", "sse2", "extapic", > "adx", "aes", "sha-ni", "tce", "osvw", "invtsc", "sse", "xsave", "de", > "movbe", "pse", "pdpe1gb", "clflush", "mmx", "wdt", "cmov", > "perfctr_core", "skinit", "umip", "bmi2", "cx8", "amd-ssbd", > "cmp_legacy", "perfctr_nb", "rdrand", "ibpb", "monitor", "mtrr", > "clflushopt", "smap", "msr", "sep", "f16c", "pat", "avx", "xsavec", > "mca", "apic", "pni", "xsaves", "cr8legacy", "popcnt", "svm", "clzero", > "pae", "lm", "pclmuldq", "pge", "rdseed", "xgetbv1", "sse4.2", "ht", > "rdtscp", "fxsr", "lahf_lm", "vme", "sse4a", "tsc", "misalignsse", > "abm", "fxsr_opt", "mce", "syscall", "ssse3", "cx16", "ibs", "smep", > "fsgsbase", "topoext", "wbnoinvd", "xsaveopt", "mmxext", "nx", "pse36", > "clwb"]}',created_at=2020-09-08T09:22:00Z,current_workload=0,deleted=False,deleted_at=None,disk_allocation_ratio=1.0,disk_available_least=64,free_disk_gb=79,free_ram_mb=207477,host='os-cpt-101',host_ip=172.20.1.11,hypervisor_hostname='os-cpt-101',hypervisor_type='QEMU',hypervisor_version=4002000,id=6,local_gb=99,local_gb_used=20,mapped=0,memory_mb=257653,memory_mb_used=50176,metrics='[{"name": > "cpu.iowait.percent", "timestamp": "2020-10-13T13:50:21.645565", > "source": "libvirt.LibvirtDriver", "value": 0.0}, {"name": > "cpu.percent", "timestamp": "2020-10-13T13:50:21.645565", "source": > "libvirt.LibvirtDriver", "value": 0.01}, {"name": "cpu.idle.percent", > "timestamp": "2020-10-13T13:50:21.645565", "source": > "libvirt.LibvirtDriver", "value": 0.98}, {"name": "cpu.frequency", > "timestamp": "2020-10-13T13:50:21.645565", "source": > "libvirt.LibvirtDriver", "value": 1796}, {"name": "cpu.idle.time", > "timestamp": "2020-10-13T13:50:21.645565", "source": > "libvirt.LibvirtDriver", "value": 139958595980000000}, {"name": > "cpu.user.percent", "timestamp": "2020-10-13T13:50:21.645565", "source": > "libvirt.LibvirtDriver", "value": 0.0}, {"name": "cpu.kernel.percent", > "timestamp": "2020-10-13T13:50:21.645565", "source": > "libvirt.LibvirtDriver", "value": 0.01}, {"name": "cpu.user.time", > "timestamp": "2020-10-13T13:50:21.645565", "source": > "libvirt.LibvirtDriver", "value": 388866090000000}, {"name": > "cpu.kernel.time", "timestamp": "2020-10-13T13:50:21.645565", "source": > "libvirt.LibvirtDriver", "value": 1118743070000000}, {"name": > "cpu.iowait.time", "timestamp": "2020-10-13T13:50:21.645565", "source": > "libvirt.LibvirtDriver", "value": > 3637460000000}]',numa_topology='{"nova_object.name": "NUMATopology", > "nova_object.namespace": "nova", "nova_object.version": "1.2", > "nova_object.data": {"cells": [{"nova_object.name": "NUMACell", > "nova_object.namespace": "nova", "nova_object.version": "1.4", > "nova_object.data": {"id": 0, "cpuset": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, > 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, > 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, > 46, 47], "pcpuset": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, > 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, > 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47], "memory": > 257653, "cpu_usage": 0, "memory_usage": 0, "pinned_cpus": [], > "siblings": [[12, 36], [10, 34], [2, 26], [1, 25], [9, 33], [0, 24], [8, > 32], [11, 35], [13, 37], [19, 43], [21, 45], [18, 42], [38, 14], [46, > 22], [4, 28], [20, 44], [15, 39], [17, 41], [23, 47], [16, 40], [6, 30], > [7, 31], [3, 27], [5, 29]], "mempages": [{"nova_object.name": > "NUMAPagesTopology", "nova_object.namespace": "nova", > 
"nova_object.version": "1.1", "nova_object.data": {"size_kb": 4, > "total": 65959383, "used": 0, "reserved": 0}, "nova_object.changes": > ["reserved", "used", "total", "size_kb"]}, {"nova_object.name": > "NUMAPagesTopology", "nova_object.namespace": "nova", > "nova_object.version": "1.1", "nova_object.data": {"size_kb": 2048, > "total": 0, "used": 0, "reserved": 0}, "nova_object.changes": > ["reserved", "used", "total", "size_kb"]}, {"nova_object.name": > "NUMAPagesTopology", "nova_object.namespace": "nova", > "nova_object.version": "1.1", "nova_object.data": {"size_kb": 1048576, > "total": 0, "used": 0, "reserved": 0}, "nova_object.changes": > ["reserved", "used", "total", "size_kb"]}], "network_metadata": > {"nova_object.name": "NetworkMetadata", "nova_object.namespace": "nova", > "nova_object.version": "1.0", "nova_object.data": {"physnets": [], > "tunneled": false}, "nova_object.changes": ["physnets", "tunneled"]}}, > "nova_object.changes": ["id", "cpu_usage", "pcpuset", "memory", > "cpuset", "siblings", "mempages", "network_metadata", "memory_usage", > "pinned_cpus"]}]}, "nova_object.changes": > ["cells"]}',pci_device_pools=PciDevicePoolList,ram_allocation_ratio=1.2,running_vms=4,service_id=None,stats={failed_builds='0',io_workload='0',num_instances='4',num_os_type_None='4',num_proj_05af2e78169748f69938d01b5238fc8b='1',num_proj_882ea7b2a43a41819ef796f797cdbe82='1',num_proj_d52e2dccbc7e46d8b518120fa7c8753a='1',num_proj_e06101e290f24563a836e16909146ea0='1',num_task_None='4',num_vm_active='4'},supported_hv_specs=[HVSpec,HVSpec,HVSpec,HVSpec],updated_at=2020-10-13T13:50:21Z,uuid=5ab97e91-6b03-48f3-a320-c8cbb032cd3a,vcpus=48,vcpus_used=7) > _locked_update > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:172 > 2020-10-13 13:50:49.257 33 DEBUG nova.scheduler.host_manager > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Update host state with aggregates: > [Aggregate(created_at=2020-09-08T14:01:05Z,deleted=False,deleted_at=None,hosts=['os-cpt-101','os-cpt-102','os-cpt-103'],id=3,metadata={availability_zone='az-1'},name='az-1',updated_at=None,uuid=8f37e9f3-506d-40da-8997-38915f1dfe67)] > _locked_update > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:175 > 2020-10-13 13:50:49.257 33 DEBUG nova.scheduler.host_manager > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Update host state with service dict: {'id': 21, 'uuid': > 'c178b502-9b8c-43f3-9053-e3d0e2405c89', 'host': 'os-cpt-101', 'binary': > 'nova-compute', 'topic': 'compute', 'report_count': 294504, 'disabled': > False, 'disabled_reason': None, 'last_seen_up': datetime.datetime(2020, > 10, 13, 13, 50, 43, tzinfo=), 'forced_down': False, > 'version': 51, 'created_at': datetime.datetime(2020, 9, 8, 9, 22, > tzinfo=), 'updated_at': datetime.datetime(2020, 10, 13, 13, > 50, 43, tzinfo=), 'deleted_at': None, 'deleted': False} > _locked_update > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:178 > 2020-10-13 13:50:49.258 33 DEBUG nova.scheduler.host_manager > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Update host state with instances: > ['96f4c101-d75b-44b6-8af9-dd75068d207d', > '121c5df5-2d8b-48e0-a270-9f6f79193dd6', > 'e235063d-f7a8-4f38-8294-d7a289167348', > '425b53c3-f6f5-43e9-beeb-25c9b6ead1f9'] _locked_update > 
/usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:182 > 2020-10-13 13:50:49.258 33 DEBUG oslo_concurrency.lockutils > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Lock "('os-cpt-101', 'os-cpt-101')" released by > "nova.scheduler.host_manager.HostState.update.._locked_update" > :: held 0.005s inner > /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:370 > 2020-10-13 13:50:49.259 33 DEBUG oslo_concurrency.lockutils > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Lock "('os-cpt-102', 'os-cpt-102')" acquired by > "nova.scheduler.host_manager.HostState.update.._locked_update" > :: waited 0.000s inner > /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:358 > 2020-10-13 13:50:49.260 33 DEBUG nova.scheduler.host_manager > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Update host state from compute node: > ComputeNode(cpu_allocation_ratio=4.0,cpu_info='{"arch": "x86_64", > "model": "EPYC-IBPB", "vendor": "AMD", "topology": {"cells": 1, > "sockets": 1, "cores": 24, "threads": 2}, "features": ["lahf_lm", > "clflush", "osvw", "extapic", "vme", "wdt", "monitor", "msr", "adx", > "pse36", "sse4a", "fma", "pat", "mce", "sse2", "nx", "f16c", "mca", > "xsaveopt", "avx", "syscall", "rdrand", "clwb", "ssse3", "xsavec", > "invtsc", "bmi2", "fpu", "movbe", "aes", "bmi1", "cr8legacy", "mmxext", > "ibpb", "amd-ssbd", "skinit", "topoext", "umip", "cmp_legacy", "arat", > "lm", "svm", "fxsr", "ibs", "pae", "misalignsse", "mtrr", "sep", "ht", > "smap", "xgetbv1", "clzero", "pdpe1gb", "apic", "abm", "pge", "pni", > "tsc", "xsaves", "wbnoinvd", "sse4.2", "cmov", "cx8", "pse", "rdtscp", > "cx16", "sse", "sse4.1", "fxsr_opt", "popcnt", "sha-ni", "perfctr_core", > "fsgsbase", "avx2", "mmx", "rdseed", "clflushopt", "pclmuldq", > "perfctr_nb", "smep", "de", "3dnowprefetch", "xsave", > "tce"]}',created_at=2020-09-08T09:21:58Z,current_workload=0,deleted=False,deleted_at=None,disk_allocation_ratio=1.0,disk_available_least=93,free_disk_gb=99,free_ram_mb=216693,host='os-cpt-102',host_ip=172.20.1.12,hypervisor_hostname='os-cpt-102',hypervisor_type='QEMU',hypervisor_version=4002000,id=3,local_gb=99,local_gb_used=0,mapped=0,memory_mb=257653,memory_mb_used=40960,metrics='[{"name": > "cpu.user.time", "timestamp": "2020-10-13T13:50:03.450583", "source": > "libvirt.LibvirtDriver", "value": 161246250000000}, {"name": > "cpu.kernel.percent", "timestamp": "2020-10-13T13:50:03.450583", > "source": "libvirt.LibvirtDriver", "value": 0.0}, {"name": > "cpu.user.percent", "timestamp": "2020-10-13T13:50:03.450583", "source": > "libvirt.LibvirtDriver", "value": 0.0}, {"name": "cpu.idle.percent", > "timestamp": "2020-10-13T13:50:03.450583", "source": > "libvirt.LibvirtDriver", "value": 0.99}, {"name": "cpu.percent", > "timestamp": "2020-10-13T13:50:03.450583", "source": > "libvirt.LibvirtDriver", "value": 0.0}, {"name": "cpu.idle.time", > "timestamp": "2020-10-13T13:50:03.450583", "source": > "libvirt.LibvirtDriver", "value": 140551238730000000}, {"name": > "cpu.frequency", "timestamp": "2020-10-13T13:50:03.450583", "source": > "libvirt.LibvirtDriver", "value": 3343}, {"name": "cpu.kernel.time", > "timestamp": "2020-10-13T13:50:03.450583", "source": > "libvirt.LibvirtDriver", "value": 366274340000000}, {"name": > "cpu.iowait.time", "timestamp": 
"2020-10-13T13:50:03.450583", "source": > "libvirt.LibvirtDriver", "value": 3829690000000}, {"name": > "cpu.iowait.percent", "timestamp": "2020-10-13T13:50:03.450583", > "source": "libvirt.LibvirtDriver", "value": > 0.0}]',numa_topology='{"nova_object.name": "NUMATopology", > "nova_object.namespace": "nova", "nova_object.version": "1.2", > "nova_object.data": {"cells": [{"nova_object.name": "NUMACell", > "nova_object.namespace": "nova", "nova_object.version": "1.4", > "nova_object.data": {"id": 0, "cpuset": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, > 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, > 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, > 46, 47], "pcpuset": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, > 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, > 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47], "memory": > 257653, "cpu_usage": 0, "memory_usage": 0, "pinned_cpus": [], > "siblings": [[12, 36], [10, 34], [2, 26], [1, 25], [9, 33], [0, 24], [8, > 32], [11, 35], [13, 37], [19, 43], [21, 45], [18, 42], [38, 14], [46, > 22], [4, 28], [20, 44], [15, 39], [17, 41], [23, 47], [16, 40], [6, 30], > [7, 31], [3, 27], [5, 29]], "mempages": [{"nova_object.name": > "NUMAPagesTopology", "nova_object.namespace": "nova", > "nova_object.version": "1.1", "nova_object.data": {"size_kb": 4, > "total": 65959382, "used": 0, "reserved": 0}, "nova_object.changes": > ["total", "used", "reserved", "size_kb"]}, {"nova_object.name": > "NUMAPagesTopology", "nova_object.namespace": "nova", > "nova_object.version": "1.1", "nova_object.data": {"size_kb": 2048, > "total": 0, "used": 0, "reserved": 0}, "nova_object.changes": ["total", > "used", "reserved", "size_kb"]}, {"nova_object.name": > "NUMAPagesTopology", "nova_object.namespace": "nova", > "nova_object.version": "1.1", "nova_object.data": {"size_kb": 1048576, > "total": 0, "used": 0, "reserved": 0}, "nova_object.changes": ["total", > "used", "reserved", "size_kb"]}], "network_metadata": > {"nova_object.name": "NetworkMetadata", "nova_object.namespace": "nova", > "nova_object.version": "1.0", "nova_object.data": {"physnets": [], > "tunneled": false}, "nova_object.changes": ["physnets", "tunneled"]}}, > "nova_object.changes": ["cpu_usage", "pinned_cpus", "mempages", > "memory", "siblings", "network_metadata", "cpuset", "memory_usage", > "id", "pcpuset"]}]}, "nova_object.changes": > ["cells"]}',pci_device_pools=PciDevicePoolList,ram_allocation_ratio=1.2,running_vms=0,service_id=None,stats={failed_builds='0'},supported_hv_specs=[HVSpec,HVSpec,HVSpec,HVSpec],updated_at=2020-10-13T13:50:03Z,uuid=81c2e888-56e4-4469-ab93-f204fa85a5c5,vcpus=48,vcpus_used=2) > _locked_update > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:172 > 2020-10-13 13:50:49.263 33 DEBUG nova.scheduler.host_manager > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Update host state with aggregates: > [Aggregate(created_at=2020-09-08T14:01:05Z,deleted=False,deleted_at=None,hosts=['os-cpt-101','os-cpt-102','os-cpt-103'],id=3,metadata={availability_zone='az-1'},name='az-1',updated_at=None,uuid=8f37e9f3-506d-40da-8997-38915f1dfe67)] > _locked_update > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:175 > 2020-10-13 13:50:49.264 33 DEBUG nova.scheduler.host_manager > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Update 
host state with service dict: {'id': 18, 'uuid': > '342a2d9b-4b32-45a2-ba16-b808343b3b8f', 'host': 'os-cpt-102', 'binary': > 'nova-compute', 'topic': 'compute', 'report_count': 294494, 'disabled': > False, 'disabled_reason': None, 'last_seen_up': datetime.datetime(2020, > 10, 13, 13, 50, 49, tzinfo=), 'forced_down': False, > 'version': 51, 'created_at': datetime.datetime(2020, 9, 8, 9, 21, 58, > tzinfo=), 'updated_at': datetime.datetime(2020, 10, 13, 13, > 50, 49, tzinfo=), 'deleted_at': None, 'deleted': False} > _locked_update > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:178 > 2020-10-13 13:50:49.264 33 DEBUG nova.scheduler.host_manager > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Update host state with instances: [] _locked_update > /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:182 > 2020-10-13 13:50:49.265 33 DEBUG oslo_concurrency.lockutils > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Lock "('os-cpt-102', 'os-cpt-102')" released by > "nova.scheduler.host_manager.HostState.update.._locked_update" > :: held 0.005s inner > /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:370 > 2020-10-13 13:50:49.266 33 DEBUG nova.filters > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Starting with 3 host(s) get_filtered_objects > /usr/lib/python3.6/site-packages/nova/filters.py:70 > 2020-10-13 13:50:49.266 33 DEBUG > nova.scheduler.filters.availability_zone_filter > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Availability Zone 'az-2' requested. (os-cpt-101, > os-cpt-101) ram: 207477MB disk: 65536MB io_ops: 0 instances: 4 has AZs: > {'az-1'} host_passes > /usr/lib/python3.6/site-packages/nova/scheduler/filters/availability_zone_filter.py:61 > 2020-10-13 13:50:49.267 33 DEBUG > nova.scheduler.filters.availability_zone_filter > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Availability Zone 'az-2' requested. (os-cpt-102, > os-cpt-102) ram: 216693MB disk: 95232MB io_ops: 0 instances: 0 has AZs: > {'az-1'} host_passes > /usr/lib/python3.6/site-packages/nova/scheduler/filters/availability_zone_filter.py:61 > 2020-10-13 13:50:49.267 33 DEBUG > nova.scheduler.filters.availability_zone_filter > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Availability Zone 'az-2' requested. (os-cpt-103, > os-cpt-103) ram: 206453MB disk: 96256MB io_ops: 0 instances: 3 has AZs: > {'az-1'} host_passes > /usr/lib/python3.6/site-packages/nova/scheduler/filters/availability_zone_filter.py:61 > 2020-10-13 13:50:49.268 33 INFO nova.filters > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Filter AvailabilityZoneFilter returned 0 hosts > 2020-10-13 13:50:49.269 33 DEBUG nova.filters > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Filtering removed all hosts for the request with > instance ID 'd18bf174-9ccb-4721-b7d8-36cd31834f62'. 
Filter results: > [('AvailabilityZoneFilter', None)] get_filtered_objects > /usr/lib/python3.6/site-packages/nova/filters.py:115 > 2020-10-13 13:50:49.269 33 INFO nova.filters > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Filtering removed all hosts for the request with > instance ID 'd18bf174-9ccb-4721-b7d8-36cd31834f62'. Filter results: > ['AvailabilityZoneFilter: (start: 3, end: 0)'] > 2020-10-13 13:50:49.270 33 DEBUG nova.scheduler.filter_scheduler > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] Filtered [] _get_sorted_hosts > /usr/lib/python3.6/site-packages/nova/scheduler/filter_scheduler.py:443 > 2020-10-13 13:50:49.270 33 DEBUG nova.scheduler.filter_scheduler > [req-390e67a0-5078-4b5d-a2b5-0111e8a1527b > f181ae7dec9b493f9e9311f3c5cc60e2 882ea7b2a43a41819ef796f797cdbe82 - > default default] There are 0 hosts available but 1 instances requested > to build. _ensure_sufficient_hosts > /usr/lib/python3.6/site-packages/nova/scheduler/filter_scheduler.py:300 > ... > > So nova seems to only consider compute nodes in az-1. > > The amphora image is available in both availability zones. Starting an > instance from the image directly in az-2 works. > > Does anybody have any idea on how to debug this any further? Any hints, > tips or ideas would be greatly appreciated! > > With kind regards, > Björn Puttmann > > From zaitcev at redhat.com Tue Oct 13 23:23:50 2020 From: zaitcev at redhat.com (Pete Zaitcev) Date: Tue, 13 Oct 2020 18:23:50 -0500 Subject: [tripleo] Overcloud deploy fails with an unclear error Message-ID: <20201013182350.10685d44@suzdal.zaitcev.lan> Hello: I'm thinking about doing some work in TripleO, so to that end I started by trying to use it to install a small cluster, in VMs: 3 controllers and 1 compute. Cruised through preparation, images, undercloud, and introspection, but overcloud deploy fails like this: Ansible execution failed. playbook: /usr/share/ansible/tripleo-playbooks/cli-deploy-deployment-plan.yaml, Run Status: timeout, Return Code: 254 Exception occured while running the command Traceback (most recent call last): It looks like it tried to print the offending command, but tracebacked in the process. The ansible.log ends like this: 2020-10-09 19:40:22,540 p=986646 u=stack n=ansible | 2020-10-09 19:40:22.539658 | 52540071-b621-0e31-a2a5-000000000012 | TASK | Deploy Plan 2020-10-09 19:41:54,718 p=986646 u=stack n=ansible | 2020-10-09 19:41:54.716712 | 52540071-b621-0e31-a2a5-000000000012 | CHANGED | Deploy Plan | localhost 2020-10-09 19:41:54,734 p=986646 u=stack n=ansible | 2020-10-09 19:41:54.733720 | 52540071-b621-0e31-a2a5-000000000013 | TASK | Wait for stack status So, what now? Anyone has any ideas? Greetings, -- Pete From gmann at ghanshyammann.com Tue Oct 13 23:39:35 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 13 Oct 2020 18:39:35 -0500 Subject: [oslo] Project leadership In-Reply-To: References: <328f380e-c16a-7fd4-a1fd-154b07ede01d@nemebean.com> <2710df64-1e16-631f-640b-48b6d81769bc@openstack.org> Message-ID: <17524555d4a.ed3be8d4141037.2056419423940737726@ghanshyammann.com> ---- On Tue, 13 Oct 2020 11:13:28 -0500 Herve Beraud wrote ---- > Hello, > According to our latest meeting/discussions I submitted a patch [1] to move oslo under the DPL governance model. 
> Indeed a few weeks ago Ben officially announced his stepping down from its role of oslo PTL [2], at this point nobody volunteered to become the new PTL of oslo during Wallaby. > Accordingly to our latest discussions [3] and our latest meeting [4] we decided to adopt the DPL governance model [5]. > Accordingly to this governance model we assigned the required roles [6][7]. > > During the lastest oslo meeting we decided to create groups of paired liaison [7], especially for release liaison role and the TaCT SIG liaison role [6][7]. > > To continue the DPL process this patch [1] should be validated by the current PTL (Ben) and all liaison people [7]. Thanks hberaud, we need few changes in governance tooling and schema to implement the DPL model. I have pushed the below change to do that, you can fill the liaisons for DPL model in the new subfield of 'Liaisons' field. https://review.opendev.org/#/c/757966/1 NOTE, we can make the projects data schema more strict but that can be done later and to facilitate the oslo change 757966 should be fine. -gmann > > Final note, I would like to personally thank Ben for assuming the PTL role during the previous cycles and for the works he have done at this position and for oslo in general, especially during the previous cycle where he was a volunteer even if oslo wasn't its main topic at work. > Best regards > > [1] https://review.opendev.org/757906[2] http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017491.html[3] http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017692.html > [4] http://eavesdrop.openstack.org/meetings/oslo/2020/oslo.2020-10-12-15.00.log.txt > [5] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html > [6] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html#required-roles > [7] https://wiki.openstack.org/wiki/Oslo#Project_Leadership_Liaisons > > Le jeu. 1 oct. 2020 à 11:54, Thierry Carrez a écrit : > Ben Nemec wrote: > > The general consensus from the people I've talked to seems to be a > > distributed model. To kick off that discussion, here's a list of roles > > that I think should be filled in some form: > > > > * Release liaison > > * Security point-of-contact > > * TC liaison > > * Cross-project point-of-contact > > * PTG/Forum coordinator > > * Meeting chair > > * Community goal liaison (almost forgot this one since I haven't > > actually been doing it ;-). > > Note that only three roles absolutely need to be filled, per the TC > resolution[1]: > > - Release liaison > - tact-sig liaison (historically named the “infra Liaison”) > - Security point of contact > > The others are just recommended. 
> > [1] > https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html > > -- > Thierry > > > > -- > Hervé BeraudSenior Software Engineer > Red Hat - Openstack Osloirc: hberaud-----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > From fungi at yuggoth.org Wed Oct 14 00:30:24 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 14 Oct 2020 00:30:24 +0000 Subject: [all][elections][ptl] Project Team Lead Election Conclusion and Results Message-ID: <20201014003023.bhkxr65a2337u7cs@yuggoth.org> Thank you to the electorate, to all those who voted and to all candidates who put their name forward for Project Team Lead (PTL) in this election. A healthy, open process breeds trust in our decision making capability thank you to all those who make this process possible. Now for the results of the PTL election process, please join me in extending congratulations to the following PTLs: * Adjutant : Adrian Turjak * Barbican : Douglas Mendizábal * Blazar : Pierre Riteau * Cinder : Brian Rosmaita * Cyborg : Yumeng Bao * Designate : Michael Johnson * Ec2 Api : Andrey Pavlov * Freezer : cai hui * Glance : Abhishek Kekane * Heat : Rico Lin * Horizon : Ivan Kolodyazhny * Ironic : Julia Kreger * Keystone : Kristi Nikolla * Kolla : Mark Goddard * Kuryr : Maysa de Macedo Souza * Magnum : Spyros Trigazis * Manila : Goutham Pacha Ravi * Masakari : Radosław Piliszek * Mistral : Renat Akhmerov * Monasca : Martin Chacon Piza * Murano : Rong Zhu * Neutron : Sławek Kapłoński * Nova : Balazs Gibizer * Openstack Chef : Lance Albertson * OpenStack Helm : Gage Hugo * OpenStackAnsible : Dmitriy Rabotyagov * OpenStackSDK : Artem Goncharov * Puppet OpenStack : Shengping Zhong * Quality Assurance : Masayuki Igawa * Rally : Andrey Kurilin * Release Management : Hervé Beraud * Requirements : Matthew Thode * Sahara : Jeremy Freudberg * Solum : Rong Zhu * Storlets : Takashi Kajinami * Swift : Tim Burke * Tacker : Yasufumi Ogawa * Telemetry : Matthias Runge * Tripleo : Marios Andreou * Trove : Lingxian Kong * Vitrage : Eyal Bar-Ilan * Watcher : canwei li * Winstackers : Lucian Petrut * Zaqar : wang hao * Zun : Feng Shengqin Elections: * Telemetry: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_2428aa2ecc67cc17 Election process details and results are also available here: https://governance.openstack.org/election/ Thank you to all involved in the PTL election process, -- Jeremy Stanley on behalf of the OpenStack Technical Election Officials -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Wed Oct 14 00:30:40 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 14 Oct 2020 00:30:40 +0000 Subject: [all][elections][tc] Technical Committee Election Results Message-ID: <20201014003040.davu3545i4anre5d@yuggoth.org> Please join me in congratulating the 4 newly elected members of the Technical Committe (TC). Dan Smith Ghanshyam Mann (gmann) Jay Bryant (jungleboyj) Kendall Nelson (diablo_rojo) Full results: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f3d92f86f4254553 Election process details and results are also available here: https://governance.openstack.org/election/ Thank you to all of the candidates, having a good group of candidates helps engage the community in our democratic process. Thank you to all who voted and who encouraged others to vote. We need to ensure your voice is heard. Thank you for another great round. -- Jeremy Stanley on behalf of the OpenStack Technical Election Officials -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From renat.akhmerov at gmail.com Wed Oct 14 04:36:18 2020 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Wed, 14 Oct 2020 11:36:18 +0700 Subject: [tripleo] deprecating Mistral service In-Reply-To: References: Message-ID: <36c3ecdd-f571-432d-a647-fbc52e9a4319@Spark> > Renat, I don't think you need to apologize here. > Your team has done excellent work at maintaining Mistral over the years. > For us, the major reason to not use Mistral anymore is that we have no UI anymore; which was the main reason why we wanted to use Mistral workflows to control the deployment from both UI & CLI with unified experience. > Our deployment framework has shifted toward Ansible, and without UI we rewrote our workflows in pure Python, called by Ansible modules via playbooks. > > Again, your message is appreciated, thanks for the clarification on your side as well! Emilien, makes sense. Thanks and the best luck to you and your team :) Renat -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Wed Oct 14 06:37:15 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 14 Oct 2020 08:37:15 +0200 Subject: [cinder][manila][swift] Edge discussions at the upcoming PTG In-Reply-To: References: <7BE99174-AC9B-4889-A86A-CC5A647C6353@gmail.com> Message-ID: <207BFEF5-C35F-4378-AA31-5D7641F2005D@gmail.com> Hi Brian, Sounds good, thank you for checking! I think it should work to schedule storage discussions for Monday initially, I will mark that on the etherpad. We will start the day with a bit of a reminder to where we left off in June and take the discussions forward from there. If any topic/question comes to mind till the PTG please don’t hesitate to add that to the etherpad. Thanks, Ildikó > On Oct 13, 2020, at 22:48, Brian Rosmaita wrote: > > On 10/7/20 11:27 AM, Ildiko Vancsa wrote: >> Hi, >> We’ve started to have discussions in the area of object storage needs and solutions for edge use cases at the last PTG in June. I’m reaching out with the intention to continue this chat at the upcoming PTG in a few weeks. >> The OSF Edge Computing Group is meeting during the first three days of the PTG like last time. 
We are planning to have edge reference architecture models and testing type of discussions in the first two days (October 26-27) and have a cross-project and cross-community day on Wednesday (October 28). We would like to have a dedicated section for storage either on Monday or Tuesday. >> I think it might also be time to revisit other storage options as well if there’s interest. >> What do people think? > > I asked around the Cinder community a bit, and we don't have any particular topics to discuss at this point. But if you scheduled the storage discussion on Monday, some of us would be interested in attending just to hear what edge people are currently talking about storage-wise. (Cinder is meeting Tuesday-Friday.) > > If something does come up that the Edge group would like to talk over with the Cinder team, we can make time for that on Wednesday. > > > cheers, > brian > >> For reference: >> * Our planning etherpad is here: https://etherpad.opendev.org/p/ecg-vptg-october-2020 >> * Notes from the previous PTG is here: https://etherpad.opendev.org/p/ecg_virtual_ptg_planning_june_2020 >> Thanks, >> Ildikó > From ruslanas at lpic.lt Wed Oct 14 09:37:43 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Wed, 14 Oct 2020 11:37:43 +0200 Subject: [tripleo][ussuri][centos8] fails to introspect: my fsm encountered an exception In-Reply-To: References: Message-ID: Hi all, I have re-deployed undercloud with older *ironic* docker images. Still same issues. I am not sure how it is done and how all the things work, BUT. when doing "baremetal inspect NODE" it gives me: | last_error | ironic-inspector inspection failed: The PXE filter driver DnsmasqFilter, state=uninitialized: my fsm encountered an exception: Can not transition from state 'uninitialized' on event 'sync' (no defined transition) any hints? This is what I see in /var.log/containers/ironic-inspector/dnsmasq.log: Oct 14 11:01:15 dnsmasq[8]: started, version 2.79 DNS disabled Oct 14 11:01:15 dnsmasq[8]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify Oct 14 11:01:15 dnsmasq-dhcp[8]: DHCP, IP range 10.40.1.230 -- 10.40.1.249, lease time 10m Oct 14 11:01:15 dnsmasq-dhcp[8]: read /var/lib/ironic-inspector/dhcp-hostsdir/unknown_hosts_filter Oct 14 12:09:07 dnsmasq[8]: inotify, new or changed file /var/lib/ironic-inspector/dhcp-hostsdir/24:6e:96:66:34:2a Oct 14 12:09:07 dnsmasq-dhcp[8]: read /var/lib/ironic-inspector/dhcp-hostsdir/24:6e:96:66:34:2a Also, I think this part in log, might be interesting [1], fails on ironic_inspector.pxe_filter.dnsmasq with message: join() argument must be str or bytes, not 'NoneType'; resetting the filter: TypeError: join() argument must be str or bytes, not 'NoneType' [1] http://paste.openstack.org/show/AnBXBP2p8frdsHqzsBse/ On Wed, 7 Oct 2020 at 17:24, Julia Kreger wrote: > You're really in the territory of TripleO at this point. As such I'm > replying with an altered subject to get their attention. > > On Tue, Oct 6, 2020 at 7:57 AM Ruslanas Gžibovskis > wrote: > > > > I am curious, could I somehow use my last known working version? > > It was: > docker.io/tripleou/centos-binary-ironic-inspector at sha256:ad5d58c4cce48ed0c660a0be7fed69f53202a781e75b1037dcee96147e9b8c4b > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From geguileo at redhat.com Wed Oct 14 11:34:08 2020 From: geguileo at redhat.com (Gorka Eguileor) Date: Wed, 14 Oct 2020 13:34:08 +0200 Subject: [Ussuri] [openstack-ansible] [cinder] Can't attach volumes to instances In-Reply-To: References: Message-ID: <20201014113408.tkeks7myscm76xe6@localhost> On 12/10, Oliver Wenz wrote: > > I recommend you run the attach request with the --debug flag to get the > > request id, that way you can easily track the request and see where it > > failed. > > > > Then you check the logs like Dmitriy mentions and see where things > > failed. > > Running with the --debug flag returned the following > > ``` > Starting new HTTP connection (1): 192.168.110.201:8774 > http://192.168.110.201:8774 "GET > /v2.1/servers/2fd482ac-7a93-4626-955f-77b78f783d54 HTTP/1.1" 200 1781 > RESP: [200] Connection: close Content-Length: 1781 Content-Type: > application/json OpenStack-API-Version: compute 2.1 Vary: > OpenStack-API-Version, X-OpenStack-Nova-API-Version > X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: > req-56b2bf8f-f338-4d25-a54e-ce9cfb4e71d6 x-openstack-request-id: > req-56b2bf8f-f338-4d25-a54e-ce9cfb4e71d6 > RESP BODY: {"server": {"id": "2fd482ac-7a93-4626-955f-77b78f783d54", > "name": "ubuntu_test", "status": "ACTIVE", "tenant_id": > "5fbba868d04b47fa87a72b9dd821ee12", "user_id": > "f78c008ff22d40d2870f1c34919c93ad", "metadata": {}, "hostId": > "7d7e25a2fe22479152026feff6bd71cf32a46ea2ee78ec841c5973f8", "image": > {"id": "84e94029-c737-4bb6-84a7-195002e6dbe9", "links": [{"rel": > "bookmark", "href": > "http://192.168.110.201:8774/images/84e94029-c737-4bb6-84a7-195002e6dbe9"}]}, > "flavor": {"id": "94b75111-dfa2-4565-af5b-64e25e220861", "links": > [{"rel": "bookmark", "href": > "http://192.168.110.201:8774/flavors/94b75111-dfa2-4565-af5b-64e25e220861"}]}, > "created": "2020-10-07T09:48:09Z", "updated": "2020-10-07T10:03:23Z", > "addresses": {"test-001": [{"version": 4, "addr": "192.168.32.170", > "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": > "fa:16:3e:9b:a5:fe"}, {"version": 4, "addr": "192.168.113.29", > "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": > "fa:16:3e:9b:a5:fe"}]}, "accessIPv4": "", "accessIPv6": "", "links": > [{"rel": "self", "href": > "http://192.168.110.201:8774/v2.1/servers/2fd482ac-7a93-4626-955f-77b78f783d54"}, > {"rel": "bookmark", "href": > "http://192.168.110.201:8774/servers/2fd482ac-7a93-4626-955f-77b78f783d54"}], > "OS-DCF:diskConfig": "AUTO", "progress": 0, > "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "key_name": > "hiwi-PC", "OS-SRV-USG:launched_at": "2020-10-07T10:03:22.000000", > "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": > "default"}], "OS-EXT-SRV-ATTR:host": "bc1blade14", > "OS-EXT-SRV-ATTR:instance_name": "instance-00000007", > "OS-EXT-SRV-ATTR:hypervisor_hostname": "bc1blade14.openstack.local", > "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", > "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} > GET call to compute for > http://192.168.110.201:8774/v2.1/servers/2fd482ac-7a93-4626-955f-77b78f783d54 > used request id req-56b2bf8f-f338-4d25-a54e-ce9cfb4e71d6 > REQ: curl -g -i -X GET > http://192.168.110.201:8776/v3/5fbba868d04b47fa87a72b9dd821ee12/volumes/a9e188a4-7981-4174-a31d-a258de2a7b3d > -H "Accept: application/json" -H "User-Agent: python-cinderclient" -H > "X-Auth-Token: > {SHA256}7fa6e3ac57b32176603c393dd97a5ea04813739efb0d1d4bff4f9680ede427c1" > Starting new HTTP connection (1): 192.168.110.201:8776 > 
http://192.168.110.201:8776 "GET > /v3/5fbba868d04b47fa87a72b9dd821ee12/volumes/a9e188a4-7981-4174-a31d-a258de2a7b3d > HTTP/1.1" 200 1033 > RESP: [200] Connection: close Content-Length: 1033 Content-Type: > application/json OpenStack-API-Version: volume 3.0 Vary: > OpenStack-API-Version x-compute-request-id: > req-3d35fab2-88b2-46cf-a39a-f8942f3ee1c1 x-openstack-request-id: > req-3d35fab2-88b2-46cf-a39a-f8942f3ee1c1 > RESP BODY: {"volume": {"id": "a9e188a4-7981-4174-a31d-a258de2a7b3d", > "status": "available", "size": 30, "availability_zone": "nova", > "created_at": "2020-10-07T11:38:01.000000", "updated_at": > "2020-10-12T09:57:53.000000", "attachments": [], "name": > "test_volume_002", "description": "", "volume_type": "lvm", > "snapshot_id": null, "source_volid": null, "metadata": {}, "links": > [{"rel": "self", "href": > "http://192.168.110.201:8776/v3/5fbba868d04b47fa87a72b9dd821ee12/volumes/a9e188a4-7981-4174-a31d-a258de2a7b3d"}, > {"rel": "bookmark", "href": > "http://192.168.110.201:8776/5fbba868d04b47fa87a72b9dd821ee12/volumes/a9e188a4-7981-4174-a31d-a258de2a7b3d"}], > "user_id": "f78c008ff22d40d2870f1c34919c93ad", "bootable": "false", > "encrypted": false, "replication_status": null, "consistencygroup_id": > null, "multiattach": false, "migration_status": null, > "os-vol-tenant-attr:tenant_id": "5fbba868d04b47fa87a72b9dd821ee12", > "os-vol-host-attr:host": "bc1bl10 at lvm#LVM_iSCSI", > "os-vol-mig-status-attr:migstat": null, > "os-vol-mig-status-attr:name_id": null}} > GET call to volumev3 for > http://192.168.110.201:8776/v3/5fbba868d04b47fa87a72b9dd821ee12/volumes/a9e188a4-7981-4174-a31d-a258de2a7b3d > used request id req-3d35fab2-88b2-46cf-a39a-f8942f3ee1c1 > REQ: curl -g -i -X POST > http://192.168.110.201:8774/v2.1/servers/2fd482ac-7a93-4626-955f-77b78f783d54/os-volume_attachments > -H "Accept: application/json" -H "Content-Type: application/json" -H > "User-Agent: python-novaclient" -H "X-Auth-Token: > {SHA256}7fa6e3ac57b32176603c393dd97a5ea04813739efb0d1d4bff4f9680ede427c1" > -H "X-OpenStack-Nova-API-Version: 2.1" -d '{"volumeAttachment": > {"volumeId": "a9e188a4-7981-4174-a31d-a258de2a7b3d", "device": "/dev/vdb"}}' > Resetting dropped connection: 192.168.110.201 > http://192.168.110.201:8774 "POST > /v2.1/servers/2fd482ac-7a93-4626-955f-77b78f783d54/os-volume_attachments > HTTP/1.1" 200 194 > RESP: [200] Connection: close Content-Length: 194 Content-Type: > application/json OpenStack-API-Version: compute 2.1 Vary: > OpenStack-API-Version, X-OpenStack-Nova-API-Version > X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: > req-87ba31ae-3511-41be-a2b9-f6a0ccdb092a x-openstack-request-id: > req-87ba31ae-3511-41be-a2b9-f6a0ccdb092a > RESP BODY: {"volumeAttachment": {"id": > "a9e188a4-7981-4174-a31d-a258de2a7b3d", "serverId": > "2fd482ac-7a93-4626-955f-77b78f783d54", "volumeId": > "a9e188a4-7981-4174-a31d-a258de2a7b3d", "device": "/dev/vdb"}} > POST call to compute for > http://192.168.110.201:8774/v2.1/servers/2fd482ac-7a93-4626-955f-77b78f783d54/os-volume_attachments > used request id req-87ba31ae-3511-41be-a2b9-f6a0ccdb092a > clean_up AddServerVolume: > END return value: 0 > ``` > > However, I could not read the logs the way Dmitriy suggested, it simply > returns > > -- Logs begin at Mon 2020-10-05 14:30:22 UTC, end at Mon 2020-10-12 > 11:17:20 UTC. -- > -- No entries -- > > so I couldn't investigate the request. 
( I tried this on both utility > and cinder api containers on the management node and on the storage and > respective compute node to be sure) Hi, I really don't know anything about those Ansible playbooks, so you'll have to either check them, check how the container is being run, check the cinder configuration to see if there's a specific logging option set, or check the output of the stdout of the container on startup to see the logging options output by Cinder on startup (on DEBUG mode). > > > It's important that the target_ip_address can be accessed from the Nova > > compute using the interface for the IP defined in my_ip in nova.conf > > my_ip in nova.conf seems to be the address of the nova_api container on > the management node, target_ip_address from cinder.conf seems to be the > (storage-net) address of the cinder-api container on the management node > (the value is different from the iscsi_ip_address specified in the > playbook). Can you ping from the address defined in my_ip on your compute node to the target_ip_address being used by Cinder? If you cannot, then that's the problem. Cheers, Gorka. > > > IIRC, for lvm storage, cinder-volumes should be launched on every nova > > compute node in order to attach volumes > > to instances, since it's not shared storage. > > I did not try this yet since the documentation didn't mention it and for > our current production system (mitaka, which someone else set up ages > ago) this doesn't seem to be the case but volumes can still be attached. > > Kind regards, > Oliver > > From geguileo at redhat.com Wed Oct 14 11:41:48 2020 From: geguileo at redhat.com (Gorka Eguileor) Date: Wed, 14 Oct 2020 13:41:48 +0200 Subject: [Openstack][cinder] dell unity iscsi faulty devices In-Reply-To: References: Message-ID: <20201014114148.w5f427us5pyiro6g@localhost> On 09/10, Ignazio Cassano wrote: > Hello Stackers, I am using dell emc iscsi driver on my centos 7 queens > openstack. It works and instances work as well but on compute nodes I got a > lot a faulty device reported by multipath il comand. > I do know why this happens, probably attacching and detaching volumes and > live migrating instances do not close something well. > I read this can cause serious performances problems on compute nodes. > Please, any workaround and/or patch is suggested ? > Regards > Ignazio Hi, There are many, many, many things that could be happening there, and it's not usually trivial doing the RCA, so the following questions are just me hoping this is something "easy" to find out. What os-brick version from Queens are you running? Latest (2.3.9), or maybe one older than 2.3.3? When you say you have faulty devices reported, are these faulty devices alone in the multipath DM? Or do you have some faulty ones with some that are ok? If there are some OK and some that aren't, are they consecutive devices? (as in /dev/sda /dev/sdb etc). Cheers, Gorka. From ionut at fleio.com Wed Oct 14 12:27:34 2020 From: ionut at fleio.com (Ionut Biru) Date: Wed, 14 Oct 2020 15:27:34 +0300 Subject: [magnum]monitoring enabled label problem Message-ID: hi guys Currently I'm using the latest from stable/ussuri. 
I have deployed a private cluster(floating ip disabled) with monitoring_enabled=true and it seems that the services related to monitoring fail to deploy with an error: Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Endpoints.subsets[0].addresses[0]): missing required field "ip" in io.k8s.api.core.v1.EndpointAddress If I enable floating ip, everything is deployed correctly. Is there a workaround that I have to use in this type of scenario? https://paste.xinu.at/Wim/ -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Wed Oct 14 12:55:16 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 14 Oct 2020 08:55:16 -0400 Subject: [ops][cinder] brocade fczm driver situation update Message-ID: <06aabd32-b676-df99-83a6-7f6cea3cecc2@gmail.com> Happy Victoria Release Day! This is a follow-up to my 26 June 2020 email concerning the proposed removal from cinder of the Brocade Fibre Channel Zone Manager driver [0], which, you may recall, the vendor no longer supports after the Train release and which was not operational in a Python 3 environment. The Cinder community has decided to assume the maintenance of this driver on a best-effort basis. See the "Known Issues" and "Other Notes" sections of the cinder victoria release notes [1] and the statement in the driver documentation [2] for what exactly this means. Special thanks to Cinder core Gorka Eguileor for driving this effort. Gorka fixed the bugs that hitherto prevented the driver from operating in Python 3, and these fixes have been backported to stable/ussuri and stable/train. Gorka also ran CI on driver against cinder victoria RC-1. The Brocade FCZM driver does not have ongoing third-party CI. If you use this driver, and would be interested in running third-party CI for it, please contact the Cinder project team. And if you have a relationship with the vendor, you may wish to ask them to reconsider their stance on this driver. [0] http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015692.html [1] https://docs.openstack.org/releasenotes/cinder/victoria.html [2] https://docs.openstack.org/cinder/victoria/configuration/block-storage/fc-zoning.html#brocade-fibre-channel-zone-driver [3] https://review.opendev.org/#/q/topic:brocade+(status:open+OR+status:merged) From deepa.kr at fingent.com Wed Oct 14 13:08:16 2020 From: deepa.kr at fingent.com (Deepa KR) Date: Wed, 14 Oct 2020 18:38:16 +0530 Subject: Multiple backend for Openstack Message-ID: Hi All Good Day >From documentation we are able to understand that we can configure multiple backends for cinder (Ceph ,External Storage device or NFS etc) Is there any way we can choose the backend while instance launching .Say instance1 backend should be from an external storage device like EMC and instance2 to launch and have backend volume from ceph .Can this be achieved using cinder availability zone implementation or any other way? I have gone through below link (*section Configure Block Storage scheduler multi back end)* https://docs.openstack.org/cinder/latest/admin/blockstorage-multi-backend.html Suggestions please .. Regards, Deepa K R -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From dbengt at redhat.com  Wed Oct 14 08:44:58 2020
From: dbengt at redhat.com (Daniel Bengtsson)
Date: Wed, 14 Oct 2020 10:44:58 +0200
Subject: [oslo] Project leadership
In-Reply-To: 
References: <328f380e-c16a-7fd4-a1fd-154b07ede01d@nemebean.com>
 <2710df64-1e16-631f-640b-48b6d81769bc@openstack.org>
Message-ID: 

On 13/10/2020 at 18:13, Herve Beraud wrote:
> Final note, I would like to personally thank Ben for assuming the PTL
> role during the previous cycles and for the works he have done at this
> position and for oslo in general, especially during the previous cycle
> where he was a volunteer even if oslo wasn't its main topic at work.

Thanks a lot to Ben. Thanks a lot to Hervé for the mail. I'm happy to be a
liaison, join the first DPL, and help.

From openstack at nemebean.com  Wed Oct 14 13:46:38 2020
From: openstack at nemebean.com (Ben Nemec)
Date: Wed, 14 Oct 2020 08:46:38 -0500
Subject: [oslo] PTG Update
Message-ID: <04098962-75a4-5eea-9c46-51c73bf70a95@nemebean.com>

Hi,

Based on the lack of topic proposals in the etherpad[0], we are
currently planning to skip the Oslo PTG session for this cycle. If you
have something to discuss please contact the team ASAP so we have time
to set something up.

Thanks.

-Ben

0: https://etherpad.opendev.org/p/oslo-wallaby-topics

From smooney at redhat.com  Wed Oct 14 13:58:59 2020
From: smooney at redhat.com (Sean Mooney)
Date: Wed, 14 Oct 2020 14:58:59 +0100
Subject: Multiple backend for Openstack
In-Reply-To: 
References: 
Message-ID: 

On Wed, 2020-10-14 at 18:38 +0530, Deepa KR wrote:
> Hi All
> 
> Good Day
> 
> From documentation we are able to understand that we can configure
> multiple
> backends for cinder (Ceph ,External Storage device or NFS etc)
> Is there any way we can choose the backend while instance launching
> .Say
> instance1 backend should be from an external storage device like EMC
> and
> instance2 to launch and have backend volume from ceph
Kind of: you can use volume types to do this indirectly.
End users should generally not be aware of whether it is Ceph or an EMC SAN,
unless you name the volume types "ceph" and "emc", but that is an operator
choice. You can map each volume type to a specific backend in the cinder
config file.
> .Can this be achieved
> using cinder availability zone implementation or any other way?
> 
> I have gone through below link (*section Configure Block Storage
> scheduler
> multi back end)*
> 
> 
> https://docs.openstack.org/cinder/latest/admin/blockstorage-multi-backend.html

Well, that basically covers what you have to do.

Your cinder config for lvm, emc and ceph might look like this:

[DEFAULT]
enabled_backends=lvm,emc,ceph

[lvm]
volume_group = cinder-volume-1
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver

[emc]
use_multipath_for_image_xfer = true
volume_driver = cinder.volume.drivers.emc.emc_smis_fc.EMCSMISFCDriver
volume_backend_name = emcfc

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1

Or you might have 3 different config files, one for each.

You then create 3 different volume types and associate each with a backend:

openstack --os-username admin --os-tenant-name admin volume type create lvm
openstack --os-username admin --os-tenant-name admin volume type create emc
openstack --os-username admin --os-tenant-name admin volume type create ceph

openstack --os-username admin --os-tenant-name admin volume type set lvm --property volume_backend_name=lvm
openstack --os-username admin --os-tenant-name admin volume type set emc --property volume_backend_name=emc
openstack --os-username admin --os-tenant-name admin volume type set ceph --property volume_backend_name=ceph

Then you can create volumes with those types:

openstack volume create --size 1 --type lvm my_lvm_volume
openstack volume create --size 1 --type ceph my_ceph_volume

and boot a server with them:

openstack server create --volume my_lvm_volume --volume my_ceph_volume ...

If you have cross-AZ attach disabled in nova and you want every type/backend
to be accessible in all AZs, you need to deploy an instance of the cinder
volume driver for each of the types in each AZ.

I don't know if that answers your question or not, but volume types are the
way to request a specific backend at the API level.

> 
> Suggestions please ..
> 
> Regards,
> Deepa K R

From sean.mcginnis at gmx.com  Wed Oct 14 14:11:42 2020
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Wed, 14 Oct 2020 09:11:42 -0500
Subject: Multiple backend for Openstack
In-Reply-To: 
References: 
Message-ID: <43ae628e-af50-431f-24b0-58d5f99af799@gmx.com>

> Good Day
>
> From documentation we are able to understand that we can configure
> multiple backends for cinder (Ceph ,External Storage device or NFS etc)
> Is there any way we can choose the backend while instance launching
> .Say instance1 backend should be from an external storage device like
> EMC  and instance2 to launch and have backend volume from ceph .Can
> this be achieved using cinder availability zone implementation or any
> other way?
>
> I have gone through below link (*section Configure Block Storage
> scheduler multi back end)*
>
> https://docs.openstack.org/cinder/latest/admin/blockstorage-multi-backend.html
>
>
>   Suggestions please ..
>
> Regards,
> Deepa K R
>
Hi Deepa,

Backend selection is controlled by the scheduler, with the end user being
able to choose different volume types configured by the administrator to
tell the scheduler what characteristics are needed for the selected storage.
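As a rough sketch of what that looks like from the user side (the type, AZ,
volume, and server names below are only placeholders, not anything already
defined in this thread):

# see which volume types the administrator has published
openstack volume type list

# create a volume of a given type, optionally pinning it to an availability zone
openstack volume create --size 10 --type ceph-backed --availability-zone az-1 data-vol

# attach it to an already running instance
openstack server add volume my-instance data-vol

The scheduler then takes care of mapping the chosen type to a backend that
satisfies it.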
The Volume Type section in the Multibackend docs briefly describe some of this: https://docs.openstack.org/cinder/latest/admin/blockstorage-multi-backend.html#volume-type These volume types can be configured with extra specs the explicitly declare a specific backend name to use for creating volumes of that type, or it can contain extra specs that just define other properties (such as just stating the protocol needs to be iscsi) and the scheduler will use those to decide where to place the volume. Some description of defining availability zones in extra specs (which I'm seeing could really use some updates) can be found here: https://docs.openstack.org/cinder/latest/admin/blockstorage-availability-zone-type.html From the command line, you can also explicitly state which availability zone you want the volume created in. See bullets 2 and 3 here: https://docs.openstack.org/cinder/latest/cli/cli-manage-volumes.html#create-a-volume Good luck! Sean -------------- next part -------------- An HTML attachment was scrubbed... URL: From moguimar at redhat.com Wed Oct 14 14:14:55 2020 From: moguimar at redhat.com (Moises Guimaraes de Medeiros) Date: Wed, 14 Oct 2020 16:14:55 +0200 Subject: [oslo] PTG Update In-Reply-To: <04098962-75a4-5eea-9c46-51c73bf70a95@nemebean.com> References: <04098962-75a4-5eea-9c46-51c73bf70a95@nemebean.com> Message-ID: Hey y'all, I think we should have at least the retrospective during the regular weekly meeting time if no other topic comes up. [ ]'s On Wed, Oct 14, 2020 at 3:47 PM Ben Nemec wrote: > Hi, > > Based on the lack of topic proposals in the etherpad[0], we are > currently planning to skip the Oslo PTG session for this cycle. If you > have something to discuss please contact the team ASAP so we have time > to set something up. > > Thanks. > > -Ben > > 0: https://etherpad.opendev.org/p/oslo-wallaby-topics > > -- Moisés Guimarães Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Wed Oct 14 14:35:28 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 14 Oct 2020 09:35:28 -0500 Subject: OpenStack Victoria is officially released! Message-ID: <20201014143528.GB137875@sm-workstation> The official OpenStack Victoria release announcement has been sent out: http://lists.openstack.org/pipermail/openstack-announce/2020-October/002041.html Thanks to all who were a part of the Victoria development cycle! This marks the official opening of the releases repo for Wallaby, and freezes are now lifted. Victoria is now a fully normal stable branch, and the normal stable policy now applies. Thanks! Sean From Arkady.Kanevsky at dell.com Wed Oct 14 14:56:47 2020 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Wed, 14 Oct 2020 14:56:47 +0000 Subject: [all][elections][tc] Technical Committee Election Results In-Reply-To: <20201014003040.davu3545i4anre5d@yuggoth.org> References: <20201014003040.davu3545i4anre5d@yuggoth.org> Message-ID: Congrats to new TC! -----Original Message----- From: Jeremy Stanley Sent: Tuesday, October 13, 2020 7:31 PM To: openstack-discuss at lists.openstack.org; openstack-announce at lists.openstack.org Subject: [all][elections][tc] Technical Committee Election Results Please join me in congratulating the 4 newly elected members of the Technical Committe (TC). 
Dan Smith Ghanshyam Mann (gmann) Jay Bryant (jungleboyj) Kendall Nelson (diablo_rojo) Full results: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f3d92f86f4254553 Election process details and results are also available here: https://governance.openstack.org/election/ Thank you to all of the candidates, having a good group of candidates helps engage the community in our democratic process. Thank you to all who voted and who encouraged others to vote. We need to ensure your voice is heard. Thank you for another great round. -- Jeremy Stanley on behalf of the OpenStack Technical Election Officials From Arkady.Kanevsky at dell.com Wed Oct 14 14:59:00 2020 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Wed, 14 Oct 2020 14:59:00 +0000 Subject: OpenStack Victoria is officially released! In-Reply-To: <20201014143528.GB137875@sm-workstation> References: <20201014143528.GB137875@sm-workstation> Message-ID: Congrats to all. And many thanks to Sean for leading it! -----Original Message----- From: Sean McGinnis Sent: Wednesday, October 14, 2020 9:35 AM To: openstack-discuss at lists.openstack.org Subject: OpenStack Victoria is officially released! [EXTERNAL EMAIL] The official OpenStack Victoria release announcement has been sent out: http://lists.openstack.org/pipermail/openstack-announce/2020-October/002041.html Thanks to all who were a part of the Victoria development cycle! This marks the official opening of the releases repo for Wallaby, and freezes are now lifted. Victoria is now a fully normal stable branch, and the normal stable policy now applies. Thanks! Sean From amy at demarco.com Wed Oct 14 15:16:16 2020 From: amy at demarco.com (Amy Marrich) Date: Wed, 14 Oct 2020 10:16:16 -0500 Subject: OpenStack Victoria is officially released! In-Reply-To: <20201014143528.GB137875@sm-workstation> References: <20201014143528.GB137875@sm-workstation> Message-ID: Great work everyone!!!! Amy (spotz) On Wed, Oct 14, 2020 at 9:38 AM Sean McGinnis wrote: > The official OpenStack Victoria release announcement has been sent out: > > > http://lists.openstack.org/pipermail/openstack-announce/2020-October/002041.html > > Thanks to all who were a part of the Victoria development cycle! > > This marks the official opening of the releases repo for Wallaby, and > freezes > are now lifted. Victoria is now a fully normal stable branch, and the > normal > stable policy now applies. > > Thanks! > Sean > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Wed Oct 14 15:23:59 2020 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 14 Oct 2020 10:23:59 -0500 Subject: [oslo] PTG Update In-Reply-To: References: <04098962-75a4-5eea-9c46-51c73bf70a95@nemebean.com> Message-ID: Are you thinking IRC or video call for that? On 10/14/20 9:14 AM, Moises Guimaraes de Medeiros wrote: > Hey y'all, > > I think we should have at least the retrospective during the regular > weekly meeting time if no other topic comes up. > > [ ]'s > > On Wed, Oct 14, 2020 at 3:47 PM Ben Nemec > wrote: > > Hi, > > Based on the lack of topic proposals in the etherpad[0], we are > currently planning to skip the Oslo PTG session for this cycle. If you > have something to discuss please contact the team ASAP so we have time > to set something up. > > Thanks. 
> > -Ben > > 0: https://etherpad.opendev.org/p/oslo-wallaby-topics > > > > -- > > Moisés Guimarães > > Software Engineer > > Red Hat > > > From mark at stackhpc.com Wed Oct 14 16:28:06 2020 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 14 Oct 2020 17:28:06 +0100 Subject: [kolla] Reversing deprecation of VMware integration Message-ID: Hi, In the Ussuri release we deprecated support for VMware due to Nova deprecating their VMware driver, a lack of CI, and a lack of interest from the community. Since then, Nova has reversed the deprecation, and we know of at least one interested user. We are therefore reversing our deprecation of VMware support in Kolla. If you are interested in this driver, please help out with maintenance since we are unable to verify changes without CI. Regards, Mark From bjoernputtmann at netprojects.de Wed Oct 14 16:50:34 2020 From: bjoernputtmann at netprojects.de (bjoernputtmann at netprojects.de) Date: Wed, 14 Oct 2020 18:50:34 +0200 Subject: Problem with octavia LBaaS and nova availability zones In-Reply-To: References: <468d79df2c8ba8f3b9ae29c515f231e8@netprojects.de> Message-ID: <9a3956a3854c8ad7bf10a3c9e55223f7@netprojects.de> Hi Michael, thanks for taking the time reading this! The internal octavia network is a vlan on a physical network on the hosts. openstack --os-cloud admin subnet list --long gives: +--------------------------------------+----------------------------------------------------+--------+----------------------------------+-------+--------+--------------------------------------+--------------+-------------+--------------------+------+ | ID | Name | Status | Project | State | Shared | Subnets | Network Type | Router Type | Availability Zones | Tags | +--------------------------------------+----------------------------------------------------+--------+----------------------------------+-------+--------+--------------------------------------+--------------+-------------+--------------------+------+ ... | fe73f93f-9e76-4bd9-9ce4-e60882d313d9 | internal-octavia | ACTIVE | d52e2dccbc7e46d8b518120fa7c8753a | UP | False | fd10e49c-359a-45d0-b207-89ba3d438a68 | vlan | Internal | az-1, az-2 | | ... +--------------------------------------+----------------------------------------------------+--------+----------------------------------+-------+--------+--------------------------------------+--------------+-------------+--------------------+------+ so, Openstack seems to know the the network and it should work in both az. openstack --os-cloud admin subnet list --long gives: +--------------------------------------+---------------------------------------------------+--------------------------------------+--------------------+----------------------------------+-------+--------------+---------------------------------+-------------+------------+-----------------+---------------+------+ | ID | Name | Network | Subnet | Project | DHCP | Name Servers | Allocation Pools | Host Routes | IP Version | Gateway | Service Types | Tags | +--------------------------------------+---------------------------------------------------+--------------------------------------+--------------------+----------------------------------+-------+--------------+---------------------------------+-------------+------------+-----------------+---------------+------+ ... | fd10e49c-359a-45d0-b207-89ba3d438a68 | 172_20_48_0_20 | fe73f93f-9e76-4bd9-9ce4-e60882d313d9 | 172.20.48.0/20 | d52e2dccbc7e46d8b518120fa7c8753a | False | | 172.20.48.50-172.20.63.254 | | 4 | 172.20.48.1 | | | ... 
+--------------------------------------+---------------------------------------------------+--------------------------------------+--------------------+----------------------------------+-------+--------------+---------------------------------+-------------+------------+-----------------+---------------+------+ BTW, starting an instance manually in the service project with the correct network and the octavia image works. I recreated the availabilityzoneprofile via: openstack --os-cloud service_octavia loadbalancer availabilityzoneprofile create --name az-1 --provider amphora --availability-zone-data '{"compute_zone": "az-1", "management_network": "fe73f93f-9e76-4bd9-9ce4-e60882d313d9"}' openstack --os-cloud service_octavia loadbalancer availabilityzoneprofile create --name az-2 --provider amphora --availability-zone-data '{"compute_zone": "az-2", "management_network": "fe73f93f-9e76-4bd9-9ce4-e60882d313d9"}' but still no success. Am 2020-10-14 00:06, schrieb Michael Johnson: > Hi Björn, > > Yeah, I don't see a reason in that nova log snippet, so I can 't point > to the exact cause. It might be higher in the logs than the snippet > included. > > That said, it might be that the AZ definition in the AZ profile does > not include the appropriate lb-mgmt-net ID for az-2. Without that > defined, Octavia will use the default lb-mgmt-net ID from the > configuration file, which is likely only available in az-1. I would > try defining the management_network and valid_vip_networks for az-2. > > Michael From ignaziocassano at gmail.com Wed Oct 14 16:56:47 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 14 Oct 2020 18:56:47 +0200 Subject: [Openstack][cinder] dell unity iscsi faulty devices In-Reply-To: <20201014114148.w5f427us5pyiro6g@localhost> References: <20201014114148.w5f427us5pyiro6g@localhost> Message-ID: Hello, thank you for the answer. 
I am using os-brick 2.3.8 but I got same issues on stein with os.brick 2.8 For explain better the situation I send you the output of multipath -ll on a compute node: root at podvc-kvm01 ansible]# multipath -ll Oct 14 18:50:01 | sdbg: alua not supported Oct 14 18:50:01 | sdbe: alua not supported Oct 14 18:50:01 | sdbd: alua not supported Oct 14 18:50:01 | sdbf: alua not supported 360060160f0d049007ab7275f743d0286 dm-11 DGC ,VRAID size=30G features='1 retain_attached_hw_handler' hwhandler='1 alua' wp=rw |-+- policy='round-robin 0' prio=0 status=enabled | |- 15:0:0:71 sdbg 67:160 failed faulty running | `- 12:0:0:71 sdbe 67:128 failed faulty running `-+- policy='round-robin 0' prio=0 status=enabled |- 11:0:0:71 sdbd 67:112 failed faulty running `- 13:0:0:71 sdbf 67:144 failed faulty running 360060160f0d049004cdb615f52343fdb dm-8 DGC ,VRAID size=80G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw |-+- policy='round-robin 0' prio=50 status=active | |- 15:0:0:210 sdau 66:224 active ready running | `- 12:0:0:210 sdas 66:192 active ready running `-+- policy='round-robin 0' prio=10 status=enabled |- 11:0:0:210 sdar 66:176 active ready running `- 13:0:0:210 sdat 66:208 active ready running 360060160f0d0490034aa645fe52265eb dm-12 DGC ,VRAID size=100G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw |-+- policy='round-robin 0' prio=50 status=active | |- 12:0:0:177 sdbi 67:192 active ready running | `- 15:0:0:177 sdbk 67:224 active ready running `-+- policy='round-robin 0' prio=10 status=enabled |- 11:0:0:177 sdbh 67:176 active ready running `- 13:0:0:177 sdbj 67:208 active ready running 360060160f0d04900159f225fd6126db9 dm-6 DGC ,VRAID size=40G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw |-+- policy='round-robin 0' prio=50 status=active | |- 11:0:0:26 sdaf 65:240 active ready running | `- 13:0:0:26 sdah 66:16 active ready running `-+- policy='round-robin 0' prio=10 status=enabled |- 12:0:0:26 sdag 66:0 active ready running `- 15:0:0:26 sdai 66:32 active ready running Oct 14 18:50:01 | sdba: alua not supported Oct 14 18:50:01 | sdbc: alua not supported Oct 14 18:50:01 | sdaz: alua not supported Oct 14 18:50:01 | sdbb: alua not supported 360060160f0d049007eb7275f93937511 dm-10 DGC ,VRAID size=40G features='1 retain_attached_hw_handler' hwhandler='1 alua' wp=rw |-+- policy='round-robin 0' prio=0 status=enabled | |- 12:0:0:242 sdba 67:64 failed faulty running | `- 15:0:0:242 sdbc 67:96 failed faulty running `-+- policy='round-robin 0' prio=0 status=enabled |- 11:0:0:242 sdaz 67:48 failed faulty running `- 13:0:0:242 sdbb 67:80 failed faulty running 360060160f0d049003a567c5fb72201e8 dm-7 DGC ,VRAID size=40G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw |-+- policy='round-robin 0' prio=50 status=active | |- 12:0:0:57 sdbq 68:64 active ready running | `- 15:0:0:57 sdbs 68:96 active ready running `-+- policy='round-robin 0' prio=10 status=enabled |- 11:0:0:57 sdbp 68:48 active ready running `- 13:0:0:57 sdbr 68:80 active ready running 360060160f0d04900c120625f802ea1fa dm-9 DGC ,VRAID size=25G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw |-+- policy='round-robin 0' prio=50 status=active | |- 11:0:0:234 sdav 66:240 active ready running | `- 13:0:0:234 sdax 67:16 active ready running `-+- policy='round-robin 0' prio=10 status=enabled |- 15:0:0:234 sday 67:32 active ready running `- 12:0:0:234 sdaw 67:0 active ready running 
360060160f0d04900b8b0615fb14ef1bd dm-3 DGC ,VRAID size=50G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw |-+- policy='round-robin 0' prio=50 status=active | |- 11:0:0:11 sdan 66:112 active ready running | `- 13:0:0:11 sdap 66:144 active ready running `-+- policy='round-robin 0' prio=10 status=enabled |- 12:0:0:11 sdao 66:128 active ready running `- 15:0:0:11 sdaq 66:160 active ready running The active running are related to running virtual machines. The faulty are related to virtual macnines migrated on other kvm nodes. Every volume has 4 path because iscsi on unity needs two different vlans, each one with 2 addresses. I think this issue can be related to os-brick because when I migrate a virtual machine from host A host B in the cova compute log on host A I read: 2020-10-13 10:31:02.769 118727 DEBUG os_brick.initiator.connectors.iscsi [req-771ede8c-6e1b-4f3f-ad4a-1f6ed820a55c 66adb965bef64eaaab2af93ade87e2ca 85cace94dcc7484c85ff9337eb1d0c4c - default default] *Disconnecting from: []* Ignazio Il giorno mer 14 ott 2020 alle ore 13:41 Gorka Eguileor ha scritto: > On 09/10, Ignazio Cassano wrote: > > Hello Stackers, I am using dell emc iscsi driver on my centos 7 queens > > openstack. It works and instances work as well but on compute nodes I > got a > > lot a faulty device reported by multipath il comand. > > I do know why this happens, probably attacching and detaching volumes and > > live migrating instances do not close something well. > > I read this can cause serious performances problems on compute nodes. > > Please, any workaround and/or patch is suggested ? > > Regards > > Ignazio > > Hi, > > There are many, many, many things that could be happening there, and > it's not usually trivial doing the RCA, so the following questions are > just me hoping this is something "easy" to find out. > > What os-brick version from Queens are you running? Latest (2.3.9), or > maybe one older than 2.3.3? > > When you say you have faulty devices reported, are these faulty devices > alone in the multipath DM? Or do you have some faulty ones with some > that are ok? > > If there are some OK and some that aren't, are they consecutive devices? > (as in /dev/sda /dev/sdb etc). > > Cheers, > Gorka. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Oct 14 16:59:25 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 14 Oct 2020 16:59:25 +0000 Subject: [elections][tc][telemetry] Wallaby Cycle PTL and TC polling stats Message-ID: <20201014165924.jq4i3xijvq2idobt@yuggoth.org> Find attached historical analysis of election turnout (electorate size and number of ballots returned) updated with the data from the Wallaby Cycle TC and Telemetry PTL polls. I'm happy to answer any questions folks may have about these, or dig into additional related statistics anyone may be interested in seeing. 
-- Jeremy Stanley -------------- next part -------------- +----------------------------------+-----------------------+-------------------+-----------------------+ | Election | Electorate (delta %) | Voted (delta %) | Turnout % (delta %) | +----------------------------------+-----------------------+-------------------+-----------------------+ | Ocata - Freezer | 39 ( nan) | 34 ( nan) | 87.18 ( nan) | | Ocata - Ironic | 222 ( nan) | 96 ( nan) | 43.24 ( nan) | | Ocata - Keystone | 229 ( nan) | 96 ( nan) | 41.92 ( nan) | | Ocata - Kolla | 174 ( nan) | 77 ( nan) | 44.25 ( nan) | | Ocata - Magnum | 140 ( nan) | 63 ( nan) | 45.00 ( nan) | | Ocata - Quality_Assurance | 418 ( nan) | 138 ( nan) | 33.01 ( nan) | | Pike - Keystone | 185 ( -19.21) | 75 ( -21.88) | 40.54 ( -3.29) | | Pike - Ironic | 210 ( -5.41) | 112 ( 16.67) | 53.33 ( 23.33) | | Pike - Neutron | 380 ( nan) | 161 ( nan) | 42.37 ( nan) | | Pike - Quality_Assurance | 330 ( -21.05) | 134 ( -2.90) | 40.61 ( 23.00) | | Pike - Stable_Branch_Maintenance | 805 ( nan) | 249 ( nan) | 30.93 ( nan) | | Queens - Documentation | 271 ( nan) | 73 ( nan) | 26.94 ( nan) | | Queens - Ironic | 221 ( 5.24) | 72 ( -35.71) | 32.58 ( -38.91) | | Rocky - Kolla | 186 ( 6.90) | 71 ( -7.79) | 38.17 ( -13.74) | | Rocky - Mistral | 81 ( nan) | 35 ( nan) | 43.21 ( nan) | | Rocky - Quality_Assurance | 269 ( -18.48) | 117 ( -12.69) | 43.49 ( 7.11) | | Stein - Senlin | 59 ( nan) | 16 ( nan) | 27.12 ( nan) | | Stein - Tacker | 54 ( nan) | 20 ( nan) | 37.04 ( nan) | | Train - Nova | 194 ( nan) | 81 ( nan) | 41.75 ( nan) | | Train - OpenStack_Charms | 68 ( nan) | 23 ( nan) | 33.82 ( nan) | | Wallaby - Telemetry | 27 ( nan) | 14 ( nan) | 51.85 ( nan) | +----------------------------------+-----------------------+-------------------+-----------------------+ Election CIVS links Ocata - Freezer: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_662fe1dfea3b2980 Ocata - Ironic: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_5bbba65c5879783c Ocata - Keystone: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f0432662b678f99f Ocata - Kolla: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_9fa13adc6f6e7148 Ocata - Magnum: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_2fd00175baa579a6 Ocata - Quality_Assurance: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_745c895dcf12c405 Pike - Keystone: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_1d2eef76fdfc1ec4 Pike - Ironic: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_08ff4080d37365ba Pike - Neutron: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_6df9c8e056680402 Pike - Quality_Assurance: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_99413ba03ba1c6b3 Pike - Stable_Branch_Maintenance: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_b719c6a5a681033b Queens - Documentation: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_d5d9fb5a2354e2a0 Queens - Ironic: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_0fb06bb4edfd3d08 Rocky - Kolla: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_eb44669f6742dd4b Rocky - Mistral: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_74983fd83cf5adab Rocky - Quality_Assurance: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_274f37d8e5497358 Stein - Senlin: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_5655e3b3821ece95 Stein - Tacker: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_fe41cc8acc6ead91 Train - Nova: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_b03df704c3012e18 Train - OpenStack_Charms: 
https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_ca2c11f0f83ce84d Wallaby - Telemetry: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_2428aa2ecc67cc17 -------------- next part -------------- +----------+-----------------------+-------------------+-----------------------+ | Election | Electorate (delta %) | Voted (delta %) | Turnout % (delta %) | +----------+-----------------------+-------------------+-----------------------+ | 10/2013 | 1106 ( nan) | 342 ( nan) | 30.92 ( nan) | | 04/2014 | 1510 ( 36.53) | 448 ( 30.99) | 29.67 ( -4.05) | | 10/2014 | 1893 ( 25.36) | 506 ( 12.95) | 26.73 ( -9.91) | | 04/2015 | 2169 ( 14.58) | 548 ( 8.30) | 25.27 ( -5.48) | | 10/2015 | 2759 ( 27.20) | 619 ( 12.96) | 22.44 ( -11.20) | | 04/2016 | 3284 ( 19.03) | 652 ( 5.33) | 19.85 ( -11.51) | | 10/2016 | 3517 ( 7.10) | 801 ( 22.85) | 22.78 ( 14.71) | | 04/2017 | 3191 ( -9.27) | 427 ( -46.69) | 13.38 ( -41.25) | | 10/2017 | 2430 ( -23.85) | 420 ( -1.64) | 17.28 ( 29.16) | | 04/2018 | 2025 ( -16.67) | 384 ( -8.57) | 18.96 ( 9.71) | | 09/2018 | 1636 ( -19.21) | 403 ( 4.95) | 24.63 ( 29.90) | | 03/2019 | 1390 ( -15.04) | 279 ( -30.77) | 20.07 ( -18.52) | | 04/2020 | 808 ( -41.87) | 208 ( -25.45) | 25.74 ( 28.25) | | 10/2020 | 619 ( -23.39) | 181 ( -12.98) | 29.24 ( 13.59) | +----------+-----------------------+-------------------+-----------------------+ Election CIVS links 10/2014: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_c105db929e6c11f4 04/2015: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_ef1379fee7b94688 10/2015: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_4ef58718618691a0 04/2016: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_fef5cc22eb3dc27a 10/2016: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_356e6c1b16904010 04/2017: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_072c4cd7ff0673b5 10/2017: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_ce86063991ef8aae 04/2018: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_98430d99fc2ed59d 09/2018: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f773fda2d0695864 03/2019: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_6c71f84caff2b37c 04/2020: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_95cb11614fb23566 10/2020: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f3d92f86f4254553 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From nate.johnston at redhat.com Wed Oct 14 17:09:59 2020 From: nate.johnston at redhat.com (Nate Johnston) Date: Wed, 14 Oct 2020 13:09:59 -0400 Subject: [all][elections][tc] Technical Committee Election Results In-Reply-To: <20201014003040.davu3545i4anre5d@yuggoth.org> References: <20201014003040.davu3545i4anre5d@yuggoth.org> Message-ID: <20201014170959.xeelx6i3bp2javxk@grind.home> Congratulations to everyone who ran for the TC, it was great to see so many candidates. All of your time and thought are appreciated. And special congratulations to the victors! Nate On Wed, Oct 14, 2020 at 12:30:40AM +0000, Jeremy Stanley wrote: > Please join me in congratulating the 4 newly elected members of the > Technical Committe (TC). 
> > Dan Smith > Ghanshyam Mann (gmann) > Jay Bryant (jungleboyj) > Kendall Nelson (diablo_rojo) > > Full results: > https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f3d92f86f4254553 > > Election process details and results are also available here: > https://governance.openstack.org/election/ > > Thank you to all of the candidates, having a good group of > candidates helps engage the community in our democratic process. > > Thank you to all who voted and who encouraged others to vote. We > need to ensure your voice is heard. > > Thank you for another great round. > > -- > Jeremy Stanley > on behalf of the OpenStack Technical Election Officials From deepa.kr at fingent.com Wed Oct 14 17:10:03 2020 From: deepa.kr at fingent.com (Deepa KR) Date: Wed, 14 Oct 2020 22:40:03 +0530 Subject: Multiple backend for Openstack In-Reply-To: References: Message-ID: Hello Sean Thanks much for the detailed response. I am still unclear on below points if you have cross az attach disable in nova and you want all type/backend to be accessible in all AZ you need to deploy a instance of the cinder volume driver for each of the types in each az. "deploy a instance of the cinder volume driver for each of the types in each az" <<<< How can i achieve above mentioned thing i don't know if that answers your question or not but volume types are the way to request a specific backend at the api level. <<< i see volume type option while creating volume .. But when I am trying to launch an instance I don't see that parameter in Horizon .. may be in the command line we have (not sure though) . Is this something that needs to be handled at AZ level ? On Wed, Oct 14, 2020 at 7:29 PM Sean Mooney wrote: > On Wed, 2020-10-14 at 18:38 +0530, Deepa KR wrote: > > Hi All > > > > Good Day > > > > From documentation we are able to understand that we can configure > > multiple > > backends for cinder (Ceph ,External Storage device or NFS etc) > > Is there any way we can choose the backend while instance launching > > .Say > > > instance1 backend should be from an external storage device like EMC > > and > > instance2 to launch and have backend volume from ceph > kind of you can use volume types to do this indrectly. > end users shoudl generally not be aware if its ceph or an emc san you > less you name the volumens tyeps "cpeh" and "emc" but that is a > operator choice. you can map the volume type to specific backend using > there config file. > > > .Can this be achieved > > using cinder availability zone implementation or any other way? 
> > > > I have gone through below link (*section Configure Block Storage > > scheduler > > multi back end)* > > > > > > > https://docs.openstack.org/cinder/latest/admin/blockstorage-multi-backend.html > > well that basically covers what you have to do > > your cinder config for lvm emc and ceph might look like this > > > [DEFAULT] > enabled_backends=lvm,emc,ceph > > [lvm] > volume_group = cinder-volume-1 > volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver > > [emc] > use_multipath_for_image_xfer = true > volume_driver = cinder.volume.drivers.emc.emc_smis_fc.EMCSMISFCDriver > volume_backend_name = emcfc > > [ceph] > volume_driver = cinder.volume.drivers.rbd.RBDDriver > volume_backend_name = ceph > rbd_pool = volumes > rbd_ceph_conf = /etc/ceph/ceph.conf > rbd_flatten_volume_from_snapshot = false > rbd_max_clone_depth = 5 > rbd_store_chunk_size = 4 > rados_connect_timeout = -1 > > or you might have 3 diffent config one for each > > you then create 3 different volume types and assocatie each with a > backend > > openstack --os-username admin --os-tenant-name admin volume type create > lvm > openstack --os-username admin --os-tenant-name admin volume type create > emc > openstack --os-username admin --os-tenant-name admin volume type create > ceph > > openstack --os-username admin --os-tenant-name admin volume type set > lvm --property volume_backend_name=lvm > openstack --os-username admin --os-tenant-name admin volume type set > emc --property volume_backend_name=emc > openstack --os-username admin --os-tenant-name admin volume type set > ceph --property volume_backend_name=ceph > > > then you can create volumes with those types > > openstack volume create --size 1 --type lvm my_lvm_volume > openstack volume create --size 1 --type ceph my_ceph_volume > > and boot a server with them. > > openstack server create --volume my_lvm_volume --volume my_ceph_volume > ... > > > if you have cross az attach disabel in nova and you want all > type/backend to be accesable in all AZ you need to deploy a instance of > the cinder volume driver for each of the tyeps in each az. > > i dont know if that answers your question or not but volume types are > the way to requst a specific backend at the api level. > > > > > Suggestions please .. > > > > Regards, > > Deepa K R > > > -- Regards, Deepa K R | DevOps Team Lead USA | UAE | INDIA | AUSTRALIA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: logo_for_signature.png Type: image/png Size: 10509 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature-1.gif Type: image/gif Size: 566 bytes Desc: not available URL: From feilong at catalyst.net.nz Wed Oct 14 17:30:11 2020 From: feilong at catalyst.net.nz (feilong) Date: Thu, 15 Oct 2020 06:30:11 +1300 Subject: [magnum]monitoring enabled label problem In-Reply-To: References: Message-ID: <21faaec0-9901-5b35-1ed1-18fd4cd5ac7a@catalyst.net.nz> Hi Ionut, I didn't see this error before. Could you please share your cluster template so that I can reproduce? Thanks. On 15/10/20 1:27 am, Ionut Biru wrote: > hi guys > > Currently I'm using the latest from stable/ussuri. 
> > I have deployed a private cluster(floating ip disabled) with > monitoring_enabled=true and it seems that the services related to > monitoring fail to deploy with an error: Error: unable to build > kubernetes objects from release manifest: error validating "": error > validating data: ValidationError(Endpoints.subsets[0].addresses[0]): > missing required field "ip" in io.k8s.api.core.v1.EndpointAddress > > If I enable floating ip, everything is deployed correctly.  > > Is there a workaround that I have to use in this type of scenario? > > https://paste.xinu.at/Wim/ > > -- > Ionut Biru - https://fleio.com -- Cheers & Best regards, Feilong Wang (王飞龙) ------------------------------------------------------ Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From tpb at dyncloud.net Tue Oct 13 21:14:49 2020 From: tpb at dyncloud.net (Tom Barron) Date: Tue, 13 Oct 2020 17:14:49 -0400 Subject: [cinder][manila][swift] Edge discussions at the upcoming PTG In-Reply-To: References: <7BE99174-AC9B-4889-A86A-CC5A647C6353@gmail.com> Message-ID: <20201013211449.yd7z2lkphpone2zt@barron.net> On 13/10/20 16:48 -0400, Brian Rosmaita wrote: >On 10/7/20 11:27 AM, Ildiko Vancsa wrote: >>Hi, >> >>We’ve started to have discussions in the area of object storage needs and solutions for edge use cases at the last PTG in June. I’m reaching out with the intention to continue this chat at the upcoming PTG in a few weeks. >> >>The OSF Edge Computing Group is meeting during the first three days of the PTG like last time. We are planning to have edge reference architecture models and testing type of discussions in the first two days (October 26-27) and have a cross-project and cross-community day on Wednesday (October 28). We would like to have a dedicated section for storage either on Monday or Tuesday. >> >>I think it might also be time to revisit other storage options as well if there’s interest. >> >>What do people think? > >I asked around the Cinder community a bit, and we don't have any >particular topics to discuss at this point. But if you scheduled the >storage discussion on Monday, some of us would be interested in >attending just to hear what edge people are currently talking about >storage-wise. (Cinder is meeting Tuesday-Friday.) > >If something does come up that the Edge group would like to talk over >with the Cinder team, we can make time for that on Wednesday. > > >cheers, >brian > >> >>For reference: >>* Our planning etherpad is here: https://etherpad.opendev.org/p/ecg-vptg-october-2020 >>* Notes from the previous PTG is here: https://etherpad.opendev.org/p/ecg_virtual_ptg_planning_june_2020 >> >>Thanks, >>Ildikó Ildiko, We in Manila have discussed Edge deployment of shared file system service quite a bit. Currently we think of the problem primarily as how to provide safe, multi-tenant shared file system infrastructure local to each Edge site so that the data path remains available to consumers at the Edge site even when it is disconnected from the core. We'd like the storage to be available both to workloads running in VMs and to workloads running in containers at the edge (whether the containers reside in VMs or on bare metal edge hosts). 
I'm interested in calibrating this view of the problem set with actual Edge use cases and deployment perspectives and I'm sure Manila folks (like our PTL Goutham Pacha Ravi) would be happy to join in a cross-project session. -- Tom Barron From florian at datalounges.com Wed Oct 14 18:36:41 2020 From: florian at datalounges.com (Florian Rommel) Date: Wed, 14 Oct 2020 21:36:41 +0300 Subject: Multiple backend for Openstack In-Reply-To: References: Message-ID: <2DA13240-1B2B-4748-8D77-9C74CAD92CEB@datalounges.com> Hey Deepa, We have done this in the past with an old HP MSA appliance and ceph. Deploying this with cross az disabled will work fine, as long as the compute nodes have access to the storage backend and the necessary config is done. In order to create a volume type, you need to go to the admin section in the dashboard or via the Cli and create a new volume type. Then add extra specs and add backend_name = the name you gave it on the cinder config. Then you can select where you want to create the volume from the drop down or via cli. Bear in mind you need to create the volume type for each backen of you make more than one available. Hope this helps. //florian > On 14. Oct 2020, at 20.13, Deepa KR wrote: > >  > Hello Sean > > Thanks much for the detailed response. > > I am still unclear on below points > > if you have cross az attach disable in nova and you want all > type/backend to be accessible in all AZ you need to deploy a instance of > the cinder volume driver for each of the types in each az. > > "deploy a instance of > the cinder volume driver for each of the types in each az" <<<< How can i achieve above mentioned thing > > i don't know if that answers your question or not but volume types are > the way to request a specific backend at the api level. <<< i see volume type option while creating volume .. > But when I am trying to launch an instance I don't see that parameter in Horizon .. may be in the command line we have (not sure though) . > Is this something that needs to be handled at AZ level ? > > > >> On Wed, Oct 14, 2020 at 7:29 PM Sean Mooney wrote: >> On Wed, 2020-10-14 at 18:38 +0530, Deepa KR wrote: >> > Hi All >> > >> > Good Day >> > >> > From documentation we are able to understand that we can configure >> > multiple >> > backends for cinder (Ceph ,External Storage device or NFS etc) >> > Is there any way we can choose the backend while instance launching >> > .Say >> >> > instance1 backend should be from an external storage device like EMC >> > and >> > instance2 to launch and have backend volume from ceph >> kind of you can use volume types to do this indrectly. >> end users shoudl generally not be aware if its ceph or an emc san you >> less you name the volumens tyeps "cpeh" and "emc" but that is a >> operator choice. you can map the volume type to specific backend using >> there config file. >> >> > .Can this be achieved >> > using cinder availability zone implementation or any other way? 
>> > >> > I have gone through below link (*section Configure Block Storage >> > scheduler >> > multi back end)* >> > >> > >> > https://docs.openstack.org/cinder/latest/admin/blockstorage-multi-backend.html >> >> well that basically covers what you have to do >> >> your cinder config for lvm emc and ceph might look like this >> >> >> [DEFAULT] >> enabled_backends=lvm,emc,ceph >> >> [lvm] >> volume_group = cinder-volume-1 >> volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver >> >> [emc] >> use_multipath_for_image_xfer = true >> volume_driver = cinder.volume.drivers.emc.emc_smis_fc.EMCSMISFCDriver >> volume_backend_name = emcfc >> >> [ceph] >> volume_driver = cinder.volume.drivers.rbd.RBDDriver >> volume_backend_name = ceph >> rbd_pool = volumes >> rbd_ceph_conf = /etc/ceph/ceph.conf >> rbd_flatten_volume_from_snapshot = false >> rbd_max_clone_depth = 5 >> rbd_store_chunk_size = 4 >> rados_connect_timeout = -1 >> >> or you might have 3 diffent config one for each >> >> you then create 3 different volume types and assocatie each with a >> backend >> >> openstack --os-username admin --os-tenant-name admin volume type create >> lvm >> openstack --os-username admin --os-tenant-name admin volume type create >> emc >> openstack --os-username admin --os-tenant-name admin volume type create >> ceph >> >> openstack --os-username admin --os-tenant-name admin volume type set >> lvm --property volume_backend_name=lvm >> openstack --os-username admin --os-tenant-name admin volume type set >> emc --property volume_backend_name=emc >> openstack --os-username admin --os-tenant-name admin volume type set >> ceph --property volume_backend_name=ceph >> >> >> then you can create volumes with those types >> >> openstack volume create --size 1 --type lvm my_lvm_volume >> openstack volume create --size 1 --type ceph my_ceph_volume >> >> and boot a server with them. >> >> openstack server create --volume my_lvm_volume --volume my_ceph_volume >> ... >> >> >> if you have cross az attach disabel in nova and you want all >> type/backend to be accesable in all AZ you need to deploy a instance of >> the cinder volume driver for each of the tyeps in each az. >> >> i dont know if that answers your question or not but volume types are >> the way to requst a specific backend at the api level. >> >> > >> > Suggestions please .. >> > >> > Regards, >> > Deepa K R >> >> > > > -- > > Regards, > Deepa K R | DevOps Team Lead > > > > USA | UAE | INDIA | AUSTRALIA > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Wed Oct 14 18:40:21 2020 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 14 Oct 2020 13:40:21 -0500 Subject: [oslo] Project leadership In-Reply-To: References: <328f380e-c16a-7fd4-a1fd-154b07ede01d@nemebean.com> <2710df64-1e16-631f-640b-48b6d81769bc@openstack.org> Message-ID: On 10/13/20 11:13 AM, Herve Beraud wrote: > Hello, > > According to our latest meeting/discussions I submitted a patch [1] to > move oslo under the DPL governance model. > > Indeed a few weeks ago Ben officially announced his stepping down from > its role of oslo PTL [2], at this point nobody volunteered to become the > new PTL of oslo during Wallaby. > > Accordingly to our latest discussions [3] and our latest meeting [4] we > decided to adopt the DPL governance model [5]. > > Accordingly to this governance model we assigned the required roles [6][7]. 
> > During the lastest oslo meeting we decided to create groups of paired > liaison [7], especially for release liaison role and the TaCT SIG > liaison role [6][7]. > > To continue the DPL process this patch [1] should be validated by the > current PTL (Ben) and all liaison people [7]. > > Final note, I would like to personally thank Ben for assuming the PTL > role during the previous cycles and for the works he have done at this > position and for oslo in general, especially during the previous cycle > where he was a volunteer even if oslo wasn't its main topic at work. I'm either foolish or dedicated because this is the second time I've worked on Oslo while not really being paid for it. Determining which is more accurate is left as an exercise for the reader. ;-) > > Best regards > > [1] https://review.opendev.org/757906 > [2] > http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017491.html > [3] > http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017692.html > [4] > http://eavesdrop.openstack.org/meetings/oslo/2020/oslo.2020-10-12-15.00.log.txt > [5] > https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html > [6] > https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html#required-roles > [7] https://wiki.openstack.org/wiki/Oslo#Project_Leadership_Liaisons > > Le jeu. 1 oct. 2020 à 11:54, Thierry Carrez > a écrit : > > Ben Nemec wrote: > > The general consensus from the people I've talked to seems to be a > > distributed model. To kick off that discussion, here's a list of > roles > > that I think should be filled in some form: > > > > * Release liaison > > * Security point-of-contact > > * TC liaison > > * Cross-project point-of-contact > > * PTG/Forum coordinator > > * Meeting chair > > * Community goal liaison (almost forgot this one since I haven't > > actually been doing it ;-). > > Note that only three roles absolutely need to be filled, per the TC > resolution[1]: > > - Release liaison > - tact-sig liaison (historically named the “infra Liaison”) > - Security point of contact > > The others are just recommended. > > [1] > https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html > > -- > Thierry > > > > -- > Hervé Beraud > Senior Software Engineer > Red Hat - Openstack Oslo > irc: hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > From florian at datalounges.com Wed Oct 14 18:52:14 2020 From: florian at datalounges.com (Florian Rommel) Date: Wed, 14 Oct 2020 21:52:14 +0300 Subject: =?utf-8?Q?Sorry_for_the_stupid_question..._it=E2=80=99s_been_a_w?= =?utf-8?Q?hile?= Message-ID: Hey everyone. 
So, I have developed OpenStack, written installers and fixed parts of the raw
Python... but... so I am, for fun and to refresh my memory, building an
OpenStack deployment by hand, the good old-fashioned way. I have a few servers
and I get everything working perfectly EXCEPT neutron OVS with DVR
integration. I am using Ubuntu Focal with the Ussuri release.
When I deploy neutron, the agents show up and register as UP, but the
neutron-openvswitch-agent crashes with "physical network floating to br-public
not found". I am wondering how I need to configure the bridge in order to make
this work. I have an empty interface (internet facing) which I added to
br-public, but where do I define the floating network name?
Yes, it's stupid, but I've been banging my head against the wall :)
Thanks already for the help, and sorry to everyone else for the dumb question :)
//Florian

From tpb at dyncloud.net  Wed Oct 14 20:03:12 2020
From: tpb at dyncloud.net (Tom Barron)
Date: Wed, 14 Oct 2020 16:03:12 -0400
Subject: [cinder][manila][swift] Edge discussions at the upcoming PTG
In-Reply-To: <20201013211449.yd7z2lkphpone2zt@barron.net>
References: <7BE99174-AC9B-4889-A86A-CC5A647C6353@gmail.com>
 <20201013211449.yd7z2lkphpone2zt@barron.net>
Message-ID: <20201014200312.mhtfmlb62fsywz6c@barron.net>

Sorry, I somehow dropped the list address in the following, replying
all now with it added.

On 13/10/20 17:14 -0400, Tom Barron wrote:
>On 13/10/20 16:48 -0400, Brian Rosmaita wrote:
>>On 10/7/20 11:27 AM, Ildiko Vancsa wrote:
>>>Hi,
>>>
>>>We've started to have discussions in the area of object storage needs and solutions for edge use cases at the last PTG in June. I'm reaching out with the intention to continue this chat at the upcoming PTG in a few weeks.
>>>
>>>The OSF Edge Computing Group is meeting during the first three days of the PTG like last time. We are planning to have edge reference architecture models and testing type of discussions in the first two days (October 26-27) and have a cross-project and cross-community day on Wednesday (October 28). We would like to have a dedicated section for storage either on Monday or Tuesday.
>>>
>>>I think it might also be time to revisit other storage options as well if there's interest.
>>>
>>>What do people think?
>>
>>I asked around the Cinder community a bit, and we don't have any
>>particular topics to discuss at this point. But if you scheduled the
>>storage discussion on Monday, some of us would be interested in
>>attending just to hear what edge people are currently talking about
>>storage-wise. (Cinder is meeting Tuesday-Friday.)
>>
>>If something does come up that the Edge group would like to talk over
>>with the Cinder team, we can make time for that on Wednesday.
>>
>>
>>cheers,
>>brian
>>
>>>
>>>For reference:
>>>* Our planning etherpad is here: https://etherpad.opendev.org/p/ecg-vptg-october-2020
>>>* Notes from the previous PTG is here: https://etherpad.opendev.org/p/ecg_virtual_ptg_planning_june_2020
>>>
>>>Thanks,
>>>Ildikó
>
>Ildiko,
>
>We in Manila have discussed Edge deployment of shared file system
>service quite a bit.  Currently we think of the problem primarily as
>how to provide safe, multi-tenant shared file system infrastructure
>local to each Edge site so that the data path remains available to
>consumers at the Edge site even when it is disconnected from the core.
>We'd like the storage to be available both to workloads running in VMs >and to workloads running in containers at the edge (whether the >containers reside in VMs or on bare metal edge hosts). > >I'm interested in calibrating this view of the problem set with actual >Edge use cases and deployment perspectives and I'm sure Manila folks >(like our PTL Goutham Pacha Ravi) would be happy to join in a >cross-project session. > >-- Tom Barron > From tpb at dyncloud.net Wed Oct 14 20:04:16 2020 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 14 Oct 2020 16:04:16 -0400 Subject: [cinder][manila][swift] Edge discussions at the upcoming PTG In-Reply-To: References: <7BE99174-AC9B-4889-A86A-CC5A647C6353@gmail.com> <20201013211449.yd7z2lkphpone2zt@barron.net> Message-ID: <20201014200415.frscnlko27gjwupm@barron.net> [adding the list address back end, I was the one who dropped it by accident] On 14/10/20 08:42 +0200, Ildiko Vancsa wrote: >Hi Tom, > >Sounds good! I think that topic would resonate well with people in the edge wg and it would be an interesting discussion especially when it comes to the containers part. > >Adjusting to what Brian was saying in his mail if that works for you too we could start with storage discussions on Monday to see how the landscape looks and what the needs and thoughts are. On top of that if needed we could go into cross-project sync with Manila to sync on the project specific roadmaps and action items if any on Wednesday. Does that make sense? > >Thanks, >Ildikó > > >> On Oct 13, 2020, at 23:14, Tom Barron wrote: >> >> On 13/10/20 16:48 -0400, Brian Rosmaita wrote: >>> On 10/7/20 11:27 AM, Ildiko Vancsa wrote: >>>> Hi, >>>> >>>> We’ve started to have discussions in the area of object storage needs and solutions for edge use cases at the last PTG in June. I’m reaching out with the intention to continue this chat at the upcoming PTG in a few weeks. >>>> >>>> The OSF Edge Computing Group is meeting during the first three days of the PTG like last time. We are planning to have edge reference architecture models and testing type of discussions in the first two days (October 26-27) and have a cross-project and cross-community day on Wednesday (October 28). We would like to have a dedicated section for storage either on Monday or Tuesday. >>>> >>>> I think it might also be time to revisit other storage options as well if there’s interest. >>>> >>>> What do people think? >>> >>> I asked around the Cinder community a bit, and we don't have any particular topics to discuss at this point. But if you scheduled the storage discussion on Monday, some of us would be interested in attending just to hear what edge people are currently talking about storage-wise. (Cinder is meeting Tuesday-Friday.) >>> >>> If something does come up that the Edge group would like to talk over with the Cinder team, we can make time for that on Wednesday. >>> >>> >>> cheers, >>> brian >>> >>>> >>>> For reference: >>>> * Our planning etherpad is here: https://etherpad.opendev.org/p/ecg-vptg-october-2020 >>>> * Notes from the previous PTG is here: https://etherpad.opendev.org/p/ecg_virtual_ptg_planning_june_2020 >>>> >>>> Thanks, >>>> Ildikó >> >> Ildiko, >> >> We in Manila have discussed Edge deployment of shared file system service quite a bit. Currently we think of the problem primarily as how to provide safe, multi-tenant shared file system infrastructure local to each Edge site so that the data path remains available to consumers at the Edge site even when it is disconnected from the core. 
>> We'd like the storage to be available both to workloads running in VMs and to workloads running in containers at the edge (whether the containers reside in VMs or on bare metal edge hosts). >> >> I'm interested in calibrating this view of the problem set with actual Edge use cases and deployment perspectives and I'm sure Manila folks (like our PTL Goutham Pacha Ravi) would be happy to join in a cross-project session. >> >> -- Tom Barron >> > From gmann at ghanshyammann.com Wed Oct 14 21:32:40 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 14 Oct 2020 16:32:40 -0500 Subject: [elections][tc][telemetry] Wallaby Cycle PTL and TC polling stats In-Reply-To: <20201014165924.jq4i3xijvq2idobt@yuggoth.org> References: <20201014165924.jq4i3xijvq2idobt@yuggoth.org> Message-ID: <17529078632.12469ea5f191654.4671485591001477809@ghanshyammann.com> Thanks Jeremy for taking care of Election and another perfect job. -gmann ---- On Wed, 14 Oct 2020 11:59:25 -0500 Jeremy Stanley wrote ---- > Find attached historical analysis of election turnout (electorate > size and number of ballots returned) updated with the data from the > Wallaby Cycle TC and Telemetry PTL polls. I'm happy to answer any > questions folks may have about these, or dig into additional related > statistics anyone may be interested in seeing. > -- > Jeremy Stanley > From helena at openstack.org Wed Oct 14 21:51:54 2020 From: helena at openstack.org (helena at openstack.org) Date: Wed, 14 Oct 2020 17:51:54 -0400 (EDT) Subject: [ptl] Victoria Release Community Meeting Message-ID: <1602712314.02751333@apps.rackspace.com> Hi Everyone, We are looking to do a community meeting following The Open Infrastructure Summit and PTG to discuss the Victoria Release. If you’re a PTL, please let me know if you’re interested in doing a prerecorded explanation of your project’s key features for the release. We will show a compilation of these recordings at the community meeting and follow it with a live Q&A session. Post community meeting we will have this recording live in the project navigator. Cheers, Helena -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Wed Oct 14 22:02:48 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 14 Oct 2020 15:02:48 -0700 Subject: [ptl] Victoria Release Community Meeting In-Reply-To: <1602712314.02751333@apps.rackspace.com> References: <1602712314.02751333@apps.rackspace.com> Message-ID: Hello! You can think of these pre-recorded snippets as Project Updates since we aren't doing a summit for each release anymore. We hope to get them higher visibility by having them recorded and posted in the project navigator. Hope this helps :) -Kendall Nelson (diablo_rojo) On Wed, Oct 14, 2020 at 2:52 PM helena at openstack.org wrote: > Hi Everyone, > > > > We are looking to do a community meeting following The Open Infrastructure > Summit and PTG to discuss the Victoria Release. If you’re a PTL, please let > me know if you’re interested in doing a prerecorded explanation of your > project’s key features for the release. We will show a compilation of these > recordings at the community meeting and follow it with a live Q&A session. > Post community meeting we will have this recording live in the project > navigator. > > > > Cheers, > > Helena > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From gouthampravi at gmail.com  Wed Oct 14 23:08:15 2020
From: gouthampravi at gmail.com (Goutham Pacha Ravi)
Date: Wed, 14 Oct 2020 16:08:15 -0700
Subject: [ptl] Victoria Release Community Meeting
In-Reply-To: 
References: <1602712314.02751333@apps.rackspace.com>
 
Message-ID: 

Helena / Kendall,

Interested! A couple of follow-up questions: Should we adhere to a
time limit / slide format? Is a date set for the community meeting?
When are these recordings due?

Thanks,
Goutham

On Wed, Oct 14, 2020 at 3:08 PM Kendall Nelson  wrote:
> Hello!
>
> You can think of these pre-recorded snippets as Project Updates since we
> aren't doing a summit for each release anymore. We hope to get them higher
> visibility by having them recorded and posted in the project navigator.
>
> Hope this helps :)
>
> -Kendall Nelson (diablo_rojo)
>
> On Wed, Oct 14, 2020 at 2:52 PM helena at openstack.org  wrote:
>
>> Hi Everyone,
>>
>> We are looking to do a community meeting following The Open Infrastructure
>> Summit and PTG to discuss the Victoria Release. If you’re a PTL, please let
>> me know if you’re interested in doing a prerecorded explanation of your
>> project’s key features for the release. We will show a compilation of these
>> recordings at the community meeting and follow it with a live Q&A session.
>> Post community meeting we will have this recording live in the project
>> navigator.
>>
>> Cheers,
>>
>> Helena
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From skaplons at redhat.com  Thu Oct 15 06:50:29 2020
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Thu, 15 Oct 2020 08:50:29 +0200
Subject: Sorry for the stupid question... it’s been a while
In-Reply-To: 
References: 
Message-ID: <2362955.0FsDiDE9NB@p1>

Hi,

I think that you need to configure bridge_mappings. See [1] for details.

On Wednesday, 14 October 2020 20:52:14 CEST, Florian Rommel wrote:
> Hey everyone.
> So, I have developed openstack, written installers and fixed parts in the
> raw Python code... but... so I am, for fun and to refresh my memory,
> building an openstack deployment by hand, the good old-fashioned way. I
> have a few servers and I get everything working perfectly EXCEPT neutron
> ovs with dvr integration. I am using Ubuntu focal with the ussuri release.
> When I deploy neutron I get the agents to show up and register as UP. The
> neutron-openvswitch-agent crashes with an error that the physical network
> "floating" for br-public is not found. I am wondering how I need to
> configure the bridge in order to make this work. I have an empty interface
> (internet facing) which I added to br-public, but where do I define the
> floating network name?
>
> Yes, it’s a stupid question, but I’ve been banging my head against the wall :)
>
> Thanks already for the help, and sorry to anyone else for the dumb question :)
> //Florian

[1] https://docs.openstack.org/neutron/latest/admin/deploy-ovs-provider.html

-- 
Slawek Kaplonski
Principal Software Engineer
Red Hat

From sboyron at redhat.com  Thu Oct 15 08:15:55 2020
From: sboyron at redhat.com (Sebastien Boyron)
Date: Thu, 15 Oct 2020 10:15:55 +0200
Subject: [requirements][oslo] Explicit requirement to setuptools.
In-Reply-To: <52e15126-b5fc-1a4d-b21e-de7f7a37abf3@redhat.com>
References: <52e15126-b5fc-1a4d-b21e-de7f7a37abf3@redhat.com>
Message-ID: 

Hi,

Thanks Daniel for taking care of this point and contributing to it.
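In concrete terms, and purely as an illustration of the proposal: a project
that really does "import setuptools" (typically in its setup.py) would gain
an explicit entry in its requirements.txt, roughly along these lines (the
lower bound and markers below are placeholders, still to be agreed):

  # requirements.txt -- only where setuptools is imported directly
  setuptools>=X.Y.Z;python_version>='3'   # placeholder bound, to be aligned with global-requirements
  setuptools<45.0.0;python_version<'3'    # setuptools dropped Python 2.7 support in 45.0.0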
Daniel already opened some reviews on this subject : https://review.opendev.org/#/c/758028/ This can be tracked using topic "*setuptools-explicit*" ( https://review.opendev.org/#/q/topic:setuptools-explicit) Hervé Beraud made a remark on review 758028: ~~~ The rationale behind these changes LGTM. However I've some concerns related to pbr: pbr rely on setuptools [1] and still support python2.7 [2] setuptools 50.3.0 only support python3 [3] So I wonder if we should also define a version which support python2.7 to avoid issues on with this context. setuptools dropped the support of python 2 with 45.0.0 [4] so we could use the version 44.1.1 [5] for this use case. [1] https://opendev.org/openstack/pbr/src/branch/master/setup.py#L16 [2] https://opendev.org/openstack/pbr/src/branch/master/setup.cfg#L25 [3] https://pypi.org/project/setuptools/50.3.0/ [4] https://setuptools.readthedocs.io/en/latest/history.html#v45-0-0 [5] https://pypi.org/project/setuptools/44.1.1/ ~~~ I think it could be worth defining the version or a rule (py2 vs py3) here before performing a large series of patches. Cheers, *SEBASTIEN BOYRON* Red Hat On Mon, Oct 5, 2020 at 10:31 AM Daniel Bengtsson wrote: > > > Le 02/10/2020 à 15:40, Sebastien Boyron a écrit : > > I am opening the discussion and pointing to this right now, but I think > > we should wait for the Wallaby release before doing anything on that > > point to insert this modification > > into the regular development cycle. On a release point of view all the > > changes related to this proposal will be released through the classic > > release process > > and they will be landed with other projects changes, in other words it > > will not require a range of specific releases for projects. > It's a good idea. I agree explicit is better than implicit. I'm > interesting to help on this subject. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Thu Oct 15 08:55:53 2020 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Thu, 15 Oct 2020 10:55:53 +0200 Subject: [ptl][release][stable][EM] Extended Maintenance - Stein Message-ID: Hi, As Victoria was released yesterday and we are in a less busy period, it is a good opportunity to call your attention to the following: In less than a month Stein is planned to enter into Extended Maintenance phase [1] (planned date: 2020-11-11). I have generated the list of *open* and *unreleased* changes in *stable/stein* for the follows-policy tagged repositories [2]. These lists could help the teams, who are planning to do a *final* release on Stein before moving stable/stein branches to Extended Maintenance. Feel free to edit and extend these lists to track your progress! * At the transition date the Release Team will tag the *latest* (Stein)   releases of repositories with *stein-em* tag. * After the transition stable/stein will be still open for bugfixes,   but there won't be any official releases. NOTE: teams, please focus on wrapping up your libraries first if there is any concern about the changes, in order to avoid broken releases! 
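For teams that have not proposed one before: a final release is just a patch
to the openstack/releases repository adding one more entry to the Stein
deliverable file, roughly of this shape (version, repo and hash below are
placeholders):

  # deliverables/stein/<your-deliverable>.yaml
  releases:
    - version: x.y.z
      projects:
        - repo: openstack/<your-repo>
          hash: <commit sha on stable/stein>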
Thanks, Előd [1] https://releases.openstack.org [2] https://etherpad.openstack.org/p/stein-final-release-before-em From geguileo at redhat.com Thu Oct 15 08:57:30 2020 From: geguileo at redhat.com (Gorka Eguileor) Date: Thu, 15 Oct 2020 10:57:30 +0200 Subject: [Openstack][cinder] dell unity iscsi faulty devices In-Reply-To: References: <20201014114148.w5f427us5pyiro6g@localhost> Message-ID: <20201015085730.f3i4u3qfcfgfwrww@localhost> On 14/10, Ignazio Cassano wrote: > Hello, thank you for the answer. > I am using os-brick 2.3.8 but I got same issues on stein with os.brick 2.8 > For explain better the situation I send you the output of multipath -ll on > a compute node: > root at podvc-kvm01 ansible]# multipath -ll > Oct 14 18:50:01 | sdbg: alua not supported > Oct 14 18:50:01 | sdbe: alua not supported > Oct 14 18:50:01 | sdbd: alua not supported > Oct 14 18:50:01 | sdbf: alua not supported > 360060160f0d049007ab7275f743d0286 dm-11 DGC ,VRAID > size=30G features='1 retain_attached_hw_handler' hwhandler='1 alua' wp=rw > |-+- policy='round-robin 0' prio=0 status=enabled > | |- 15:0:0:71 sdbg 67:160 failed faulty running > | `- 12:0:0:71 sdbe 67:128 failed faulty running > `-+- policy='round-robin 0' prio=0 status=enabled > |- 11:0:0:71 sdbd 67:112 failed faulty running > `- 13:0:0:71 sdbf 67:144 failed faulty running > 360060160f0d049004cdb615f52343fdb dm-8 DGC ,VRAID > size=80G features='2 queue_if_no_path retain_attached_hw_handler' > hwhandler='1 alua' wp=rw > |-+- policy='round-robin 0' prio=50 status=active > | |- 15:0:0:210 sdau 66:224 active ready running > | `- 12:0:0:210 sdas 66:192 active ready running > `-+- policy='round-robin 0' prio=10 status=enabled > |- 11:0:0:210 sdar 66:176 active ready running > `- 13:0:0:210 sdat 66:208 active ready running > 360060160f0d0490034aa645fe52265eb dm-12 DGC ,VRAID > size=100G features='2 queue_if_no_path retain_attached_hw_handler' > hwhandler='1 alua' wp=rw > |-+- policy='round-robin 0' prio=50 status=active > | |- 12:0:0:177 sdbi 67:192 active ready running > | `- 15:0:0:177 sdbk 67:224 active ready running > `-+- policy='round-robin 0' prio=10 status=enabled > |- 11:0:0:177 sdbh 67:176 active ready running > `- 13:0:0:177 sdbj 67:208 active ready running > 360060160f0d04900159f225fd6126db9 dm-6 DGC ,VRAID > size=40G features='2 queue_if_no_path retain_attached_hw_handler' > hwhandler='1 alua' wp=rw > |-+- policy='round-robin 0' prio=50 status=active > | |- 11:0:0:26 sdaf 65:240 active ready running > | `- 13:0:0:26 sdah 66:16 active ready running > `-+- policy='round-robin 0' prio=10 status=enabled > |- 12:0:0:26 sdag 66:0 active ready running > `- 15:0:0:26 sdai 66:32 active ready running > Oct 14 18:50:01 | sdba: alua not supported > Oct 14 18:50:01 | sdbc: alua not supported > Oct 14 18:50:01 | sdaz: alua not supported > Oct 14 18:50:01 | sdbb: alua not supported > 360060160f0d049007eb7275f93937511 dm-10 DGC ,VRAID > size=40G features='1 retain_attached_hw_handler' hwhandler='1 alua' wp=rw > |-+- policy='round-robin 0' prio=0 status=enabled > | |- 12:0:0:242 sdba 67:64 failed faulty running > | `- 15:0:0:242 sdbc 67:96 failed faulty running > `-+- policy='round-robin 0' prio=0 status=enabled > |- 11:0:0:242 sdaz 67:48 failed faulty running > `- 13:0:0:242 sdbb 67:80 failed faulty running > 360060160f0d049003a567c5fb72201e8 dm-7 DGC ,VRAID > size=40G features='2 queue_if_no_path retain_attached_hw_handler' > hwhandler='1 alua' wp=rw > |-+- policy='round-robin 0' prio=50 status=active > | |- 12:0:0:57 sdbq 68:64 active ready running > | `- 
15:0:0:57 sdbs 68:96 active ready running > `-+- policy='round-robin 0' prio=10 status=enabled > |- 11:0:0:57 sdbp 68:48 active ready running > `- 13:0:0:57 sdbr 68:80 active ready running > 360060160f0d04900c120625f802ea1fa dm-9 DGC ,VRAID > size=25G features='2 queue_if_no_path retain_attached_hw_handler' > hwhandler='1 alua' wp=rw > |-+- policy='round-robin 0' prio=50 status=active > | |- 11:0:0:234 sdav 66:240 active ready running > | `- 13:0:0:234 sdax 67:16 active ready running > `-+- policy='round-robin 0' prio=10 status=enabled > |- 15:0:0:234 sday 67:32 active ready running > `- 12:0:0:234 sdaw 67:0 active ready running > 360060160f0d04900b8b0615fb14ef1bd dm-3 DGC ,VRAID > size=50G features='2 queue_if_no_path retain_attached_hw_handler' > hwhandler='1 alua' wp=rw > |-+- policy='round-robin 0' prio=50 status=active > | |- 11:0:0:11 sdan 66:112 active ready running > | `- 13:0:0:11 sdap 66:144 active ready running > `-+- policy='round-robin 0' prio=10 status=enabled > |- 12:0:0:11 sdao 66:128 active ready running > `- 15:0:0:11 sdaq 66:160 active ready running > > The active running are related to running virtual machines. > The faulty are related to virtual macnines migrated on other kvm nodes. > Every volume has 4 path because iscsi on unity needs two different vlans, > each one with 2 addresses. > I think this issue can be related to os-brick because when I migrate a > virtual machine from host A host B in the cova compute log on host A I read: > 2020-10-13 10:31:02.769 118727 DEBUG os_brick.initiator.connectors.iscsi > [req-771ede8c-6e1b-4f3f-ad4a-1f6ed820a55c 66adb965bef64eaaab2af93ade87e2ca > 85cace94dcc7484c85ff9337eb1d0c4c - default default] *Disconnecting from: []* > > Ignazio Hi, That's definitely the right clue!! Though I don't fully agree with this being an os-brick issue just yet. ;-) Like I mentioned before, RCA is usually non-trivial, and explaining how to debug these issues over email is close to impossible, but if this were my system, and assuming you have tested normal attach/detach procedure and is working fine, this is what I would do: - Enable DEBUG logs on Nova compute node (I believe you already have) - Attach a new device to an instance on that node with --debug to get the request id - Get the connection information dictionary that os-brick receives on the call to connect_volume for that request, and the data that os-brick returns to Nova on that method call completion. - Check if the returned data to Nova is a multipathed device or not (in 'path'), and whether we have the wwn or not (in 'scsi_wwn'). It should be a multipath device, and then I would check its status in the multipath daemon. - Now do the live migration (with --debug to get the request id) and see what information Nova passes in that request to os-brick's disconnect_volume. - Is it the same? Then it's likely an os-brick issue, and I can have a look at the logs if you put the logs for that os-brick detach process in a pastebin [1]. - Is it different? Then it's either a Nova bug or a Cinder driver specific bug. - Is there a call from Nova to Cinder, in the migration request, for that same volume to initialize_connection passing the source host connector info (info from the host that is currently attached)? If there is a call, check if the returned data is different from the one we used to do the attach, if that's the case then it's a Nova and Cinder driver bug that was solved on the Nova side in 17.0.10 [2]. - If there's no call to Cinder's initialize_connection, the it's most likely a Nova bug. 
Try to find out if this connection info makes any sense for that host (LUN, target, etc.) or if this is the one from the destination volume. I hope this somehow helps. Cheers, Gorka. [1]: http://paste.openstack.org/ [2]: https://review.opendev.org/#/c/637827/ > > Il giorno mer 14 ott 2020 alle ore 13:41 Gorka Eguileor > ha scritto: > > > On 09/10, Ignazio Cassano wrote: > > > Hello Stackers, I am using dell emc iscsi driver on my centos 7 queens > > > openstack. It works and instances work as well but on compute nodes I > > got a > > > lot a faulty device reported by multipath il comand. > > > I do know why this happens, probably attacching and detaching volumes and > > > live migrating instances do not close something well. > > > I read this can cause serious performances problems on compute nodes. > > > Please, any workaround and/or patch is suggested ? > > > Regards > > > Ignazio > > > > Hi, > > > > There are many, many, many things that could be happening there, and > > it's not usually trivial doing the RCA, so the following questions are > > just me hoping this is something "easy" to find out. > > > > What os-brick version from Queens are you running? Latest (2.3.9), or > > maybe one older than 2.3.3? > > > > When you say you have faulty devices reported, are these faulty devices > > alone in the multipath DM? Or do you have some faulty ones with some > > that are ok? > > > > If there are some OK and some that aren't, are they consecutive devices? > > (as in /dev/sda /dev/sdb etc). > > > > Cheers, > > Gorka. > > > > From radoslaw.piliszek at gmail.com Thu Oct 15 09:15:28 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 15 Oct 2020 11:15:28 +0200 Subject: [kolla][kayobe] The upcoming Kolla meetings (IRC, Klub, Kall) *cancelled* due to OpenStack-wise events (Open Infra Summit, PTG) Message-ID: Hi Folks! The subject is pretty self-explanatory. Next week we have Open Infrastructure Summit [1]. And the following week - PTG [2]. As a cycle-trailing project we are now also working on our Victoria release. Hence, we decided to *cancel* the regular meetings on the following days (all at 15 UTC): Oct 15 - Kolla Kall Oct 21 - Kolla IRC meeting Oct 22 - Kolla Klub Oct 28 - Kolla IRC meeting Oct 29 - Kolla Kall Please join us for the Kolla PTG sessions instead! [3] We will also coordinate asynchronously as we do most of the usual working week on IRC. We will be restoring the normal schedule after PTG ends. [1] https://www.openstack.org/summit/ [2] https://www.openstack.org/ptg/ [3] https://etherpad.opendev.org/p/kolla-wallaby-ptg -yoctozepto From yumeng_bao at yahoo.com Thu Oct 15 09:39:49 2020 From: yumeng_bao at yahoo.com (yumeng bao) Date: Thu, 15 Oct 2020 17:39:49 +0800 Subject: [cyborg] Edge sync up at the PTG In-Reply-To: References: Message-ID: Hi Ildikó, Sorry for late response, I was just back from holidays. I saw your maillist thread this afternoon and was about to response when I saw this mail. ^^ Wednesday (October 28) at 1300 UTC - 1400 UTC has been occupied by nova-cyborg integration discussion[1]. Can we pick up another time? Like Tuesday (October 27) at 1300 UTC - 1400 UTC? Thanks, Yumeng [1]https://etherpad.opendev.org/p/cyborg-wallaby-goals Regards, Yumeng > On Oct 14, 2020, at 2:52 PM, Ildiko Vancsa wrote: > > Hi Yumeng, > > I hope you are doing good. > > I’m reaching out to you to check if you saw my email on the openstack-discuss mailing list about having a joint session with the edge wg again at the upcoming PTG? 
> > Thanks, > Ildikó > > >> Begin forwarded message: >> >> From: Ildiko Vancsa >> Subject: [cyborg] Edge sync up at the PTG >> Date: October 7, 2020 at 17:17:00 GMT+2 >> To: OpenStack Discuss >> >> Hi Cyborg Team, >> >> I’m reaching out to check if you are available during the upcoming PTG to continue discussions we were having in June to see where Cyborg evolved since then and how we can continue collaborating. >> >> Like last time, the OSF Edge Computing Group is meeting on the first three days and we have a slot reserved to sync up with OpenStack projects such as Cyborg on Wednesday (October 28) at 1300 UTC - 1400 UTC. Would the team be available to join at that time? >> >> Our planning etherpad is here: https://etherpad.opendev.org/p/ecg-vptg-october-2020 >> >> Thanks, >> Ildikó >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Thu Oct 15 09:57:40 2020 From: zigo at debian.org (Thomas Goirand) Date: Thu, 15 Oct 2020 11:57:40 +0200 Subject: Python 3.9 is here in Debian Sid Message-ID: <12e2eefa-21c5-4b2e-67a9-662e04bd38c7@debian.org> Hi there! Matthias Klose decided that uploading Python 3.9 on the 14th of this month (yes, the same day of the OpenStack release) would be fun. I am now really having fun ... fixing bugs! :) What's not so funny for me, is that some key packages like Greenlet or lxml aren't built yet for Python 3.9, even though they should. As a consequence, I just need to wait to be able to upload to Unstable. So this will delay my usual announcement for the release GA in Debian (even though the unofficial Victoria repo for Buster is ready and I could spawn VMs on Buster + Victoria). Anyways, all of this will settle slowly. Though Python 3.9 is here to stay, and to soon reach Ubuntu as well (it's planned for the next release). It'd be nice if projects were starting to investigate gating on Python 3.9 as early as possible. As much as I can tell, there's not so many issues to fix. The first one I fixed is here: https://review.opendev.org/758237 but I don't expect much more, and hopefully, I'll be able to propose more patch as I find the issues. Must we wait until Python 3.9 reaches Ubuntu to enable gating in the OpenStack projects? FYI, I do expect Victoria to run over Python 3.9 in Bullseye. Please don't reply with the usual "Victoria doesn't support it", because that's irrelevant. I'm fine with backporting patches, as long as they also exist in Wallaby. Cheers, Thomas Goirand (zigo) From zigo at debian.org Thu Oct 15 09:59:33 2020 From: zigo at debian.org (Thomas Goirand) Date: Thu, 15 Oct 2020 11:59:33 +0200 Subject: [oslo] Project leadership In-Reply-To: References: <328f380e-c16a-7fd4-a1fd-154b07ede01d@nemebean.com> <20200930214825.v7hvjec2ejffay55@yuggoth.org> Message-ID: <15859f8e-6ee8-39e7-3630-9e4220a13b44@debian.org> On 10/1/20 10:59 AM, Herve Beraud wrote: > Hello, > > First, thanks Ben for this email. > > I'm personally in favor of experimenting with the DPL on oslo during W. Though I've heard that the Debian Project Leader is busy... :) Thomas From james.page at canonical.com Thu Oct 15 10:32:47 2020 From: james.page at canonical.com (James Page) Date: Thu, 15 Oct 2020 11:32:47 +0100 Subject: Python 3.9 is here in Debian Sid In-Reply-To: <12e2eefa-21c5-4b2e-67a9-662e04bd38c7@debian.org> References: <12e2eefa-21c5-4b2e-67a9-662e04bd38c7@debian.org> Message-ID: On Thu, Oct 15, 2020 at 10:59 AM Thomas Goirand wrote: > Hi there! 
> > Matthias Klose decided that uploading Python 3.9 on the 14th of this > month (yes, the same day of the OpenStack release) would be fun. I am > now really having fun ... fixing bugs! :) > > What's not so funny for me, is that some key packages like Greenlet or > lxml aren't built yet for Python 3.9, even though they should. As a > consequence, I just need to wait to be able to upload to Unstable. So > this will delay my usual announcement for the release GA in Debian (even > though the unofficial Victoria repo for Buster is ready and I could > spawn VMs on Buster + Victoria). > > Anyways, all of this will settle slowly. Though Python 3.9 is here to > stay, and to soon reach Ubuntu as well (it's planned for the next release). > > It'd be nice if projects were starting to investigate gating on Python > 3.9 as early as possible. As much as I can tell, there's not so many > issues to fix. The first one I fixed is here: > > https://review.opendev.org/758237 > > but I don't expect much more, and hopefully, I'll be able to propose > more patch as I find the issues. > > Must we wait until Python 3.9 reaches Ubuntu to enable gating in the > OpenStack projects? > 3.9 is already available in Ubuntu 20.04 (albeit at an RC but it will be updated) so that should not block enabling test gates. Cheers James -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Thu Oct 15 11:22:36 2020 From: zigo at debian.org (Thomas Goirand) Date: Thu, 15 Oct 2020 13:22:36 +0200 Subject: [requirements][oslo] Explicit requirement to setuptools. In-Reply-To: References: Message-ID: <21808778-ede3-bba2-36b4-d48a9bb07fa3@debian.org> On 10/2/20 3:40 PM, Sebastien Boyron wrote: > Hey all, > > Almost all openstack projects are using pbr and setuptools. > > A great majority are directly importing setuptools in the code > (setup.py) while not explicitly requiring it in the requirements. > In these cases , setuptools is only installed thanks to pbr dependency. > > > Example 1: Having a look on nova code : > http://codesearch.openstack.org/?q=setuptools&i=nope&files=&repos=openstack/nova > > > We can see that setuptools is importer in setup.py to requires pbr > whereas neither in *requirements.txt nor in *constraints.txt > > > Example 2: This is exactly the same for neutron : > http://codesearch.openstack.org/?q=setuptools&i=nope&files=&repos=openstack/neutron > > > I discovered this while making some cleaning on rpm-packaging spec files. > Spec files should reflect the content  of explicits requirements of the > related project. > Until now there is no issue on that, but to make it proper, setuptools > has been removed from all the projects rpm dependencies and > relies only on pbr rpm dependency except if there is an explicit > requirement on it in the project. If tomorrow, unlikely, >  pbr evolves and no longer requires setuptools, many things can fail: > - All explicits imports in python code should break. > - RPM generation should break too since it won't be present as a > BuildRequirement. > - RPM installation of old versions will pull the latest pbr version > which will not require anymore and can break the execution. > - RPM build can be held by distribute if there is not setuptools > buildRequired anymore. > > As the python PEP20 claims "Explicit is better than implicit." and it > should be our mantra on Openstack, especially with this kind of nasty case. 
> https://www.python.org/dev/peps/pep-0020/ > > I think we should explicitly require setuptools if, and only if, we need > to make an explicit import on it. > > This will help to have the right requirements into the RPMs while still > staying simple and logical; keeping project requirements and > RPM requirements in phase. > > I am opening the discussion and pointing to this right now, but I think > we should wait for the Wallaby release before doing anything on that > point to insert this modification > into the regular development cycle. On a release point of view all the > changes related to this proposal will be released through the classic > release process > and they will be landed with other projects changes, in other words it > will not require a range of specific releases for projects. > > *SEBASTIEN BOYRON* > Red Hat Hi, IMO, this is a problem in downstream distributions, not in OpenStack (unless this influences pip3). In Debian, I did hard-wire an explicit build-dependency on python3-setuptools on each and every package. I don't understand why you haven't done the same thing. If projects want to add an implicit dependency on setuptools, I'm not against it, but IMO this isn't worth the effort. Cheers, Thomas Goirand (zigo) From ionut at fleio.com Thu Oct 15 11:30:20 2020 From: ionut at fleio.com (Ionut Biru) Date: Thu, 15 Oct 2020 14:30:20 +0300 Subject: [magnum]monitoring enabled label problem In-Reply-To: <21faaec0-9901-5b35-1ed1-18fd4cd5ac7a@catalyst.net.nz> References: <21faaec0-9901-5b35-1ed1-18fd4cd5ac7a@catalyst.net.nz> Message-ID: Hi, The config: https://paste.xinu.at/0ZzG/ Here is the log: https://paste.xinu.at/J5qDq/ I could replicate without selinux_mode=disabled, is there because i was trying to test something. On Wed, Oct 14, 2020 at 8:30 PM feilong wrote: > Hi Ionut, > > I didn't see this error before. Could you please share your cluster > template so that I can reproduce? Thanks. > > > On 15/10/20 1:27 am, Ionut Biru wrote: > > hi guys > > Currently I'm using the latest from stable/ussuri. > > I have deployed a private cluster(floating ip disabled) with > monitoring_enabled=true and it seems that the services related to > monitoring fail to deploy with an error: Error: unable to build kubernetes > objects from release manifest: error validating "": error validating data: > ValidationError(Endpoints.subsets[0].addresses[0]): missing required field > "ip" in io.k8s.api.core.v1.EndpointAddress > > If I enable floating ip, everything is deployed correctly. > > Is there a workaround that I have to use in this type of scenario? > > https://paste.xinu.at/Wim/ > > -- > Ionut Biru - https://fleio.com > > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > ------------------------------------------------------ > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > ------------------------------------------------------ > > -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Thu Oct 15 12:02:08 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 15 Oct 2020 14:02:08 +0200 Subject: [Openstack][cinder] dell unity iscsi faulty devices In-Reply-To: <20201015085730.f3i4u3qfcfgfwrww@localhost> References: <20201014114148.w5f427us5pyiro6g@localhost> <20201015085730.f3i4u3qfcfgfwrww@localhost> Message-ID: Ok, thanks. I am going to apply steps you suggested. 
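For reference, a rough sketch of those checks and of the cleanup on the compute
node, using the WWID and one of the failed paths from my earlier output as
placeholders (the nova-compute log path may differ per deployment):

  # what os-brick received/returned for the attach and detach (DEBUG logs)
  grep -E 'connect_volume|disconnect_volume' /var/log/nova/nova-compute.log | grep <request-id>

  # state of the multipath map for one volume
  multipath -ll 360060160f0d049007ab7275f743d0286

  # only once the volume is really detached: flush the leftover map and drop the stale paths
  multipath -f 360060160f0d049007ab7275f743d0286
  echo 1 > /sys/block/sdbd/device/delete   # repeat for sdbe, sdbf, sdbg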
Ignazio Il Gio 15 Ott 2020, 10:57 Gorka Eguileor ha scritto: > On 14/10, Ignazio Cassano wrote: > > Hello, thank you for the answer. > > I am using os-brick 2.3.8 but I got same issues on stein with os.brick > 2.8 > > For explain better the situation I send you the output of multipath -ll > on > > a compute node: > > root at podvc-kvm01 ansible]# multipath -ll > > Oct 14 18:50:01 | sdbg: alua not supported > > Oct 14 18:50:01 | sdbe: alua not supported > > Oct 14 18:50:01 | sdbd: alua not supported > > Oct 14 18:50:01 | sdbf: alua not supported > > 360060160f0d049007ab7275f743d0286 dm-11 DGC ,VRAID > > size=30G features='1 retain_attached_hw_handler' hwhandler='1 alua' wp=rw > > |-+- policy='round-robin 0' prio=0 status=enabled > > | |- 15:0:0:71 sdbg 67:160 failed faulty running > > | `- 12:0:0:71 sdbe 67:128 failed faulty running > > `-+- policy='round-robin 0' prio=0 status=enabled > > |- 11:0:0:71 sdbd 67:112 failed faulty running > > `- 13:0:0:71 sdbf 67:144 failed faulty running > > 360060160f0d049004cdb615f52343fdb dm-8 DGC ,VRAID > > size=80G features='2 queue_if_no_path retain_attached_hw_handler' > > hwhandler='1 alua' wp=rw > > |-+- policy='round-robin 0' prio=50 status=active > > | |- 15:0:0:210 sdau 66:224 active ready running > > | `- 12:0:0:210 sdas 66:192 active ready running > > `-+- policy='round-robin 0' prio=10 status=enabled > > |- 11:0:0:210 sdar 66:176 active ready running > > `- 13:0:0:210 sdat 66:208 active ready running > > 360060160f0d0490034aa645fe52265eb dm-12 DGC ,VRAID > > size=100G features='2 queue_if_no_path retain_attached_hw_handler' > > hwhandler='1 alua' wp=rw > > |-+- policy='round-robin 0' prio=50 status=active > > | |- 12:0:0:177 sdbi 67:192 active ready running > > | `- 15:0:0:177 sdbk 67:224 active ready running > > `-+- policy='round-robin 0' prio=10 status=enabled > > |- 11:0:0:177 sdbh 67:176 active ready running > > `- 13:0:0:177 sdbj 67:208 active ready running > > 360060160f0d04900159f225fd6126db9 dm-6 DGC ,VRAID > > size=40G features='2 queue_if_no_path retain_attached_hw_handler' > > hwhandler='1 alua' wp=rw > > |-+- policy='round-robin 0' prio=50 status=active > > | |- 11:0:0:26 sdaf 65:240 active ready running > > | `- 13:0:0:26 sdah 66:16 active ready running > > `-+- policy='round-robin 0' prio=10 status=enabled > > |- 12:0:0:26 sdag 66:0 active ready running > > `- 15:0:0:26 sdai 66:32 active ready running > > Oct 14 18:50:01 | sdba: alua not supported > > Oct 14 18:50:01 | sdbc: alua not supported > > Oct 14 18:50:01 | sdaz: alua not supported > > Oct 14 18:50:01 | sdbb: alua not supported > > 360060160f0d049007eb7275f93937511 dm-10 DGC ,VRAID > > size=40G features='1 retain_attached_hw_handler' hwhandler='1 alua' wp=rw > > |-+- policy='round-robin 0' prio=0 status=enabled > > | |- 12:0:0:242 sdba 67:64 failed faulty running > > | `- 15:0:0:242 sdbc 67:96 failed faulty running > > `-+- policy='round-robin 0' prio=0 status=enabled > > |- 11:0:0:242 sdaz 67:48 failed faulty running > > `- 13:0:0:242 sdbb 67:80 failed faulty running > > 360060160f0d049003a567c5fb72201e8 dm-7 DGC ,VRAID > > size=40G features='2 queue_if_no_path retain_attached_hw_handler' > > hwhandler='1 alua' wp=rw > > |-+- policy='round-robin 0' prio=50 status=active > > | |- 12:0:0:57 sdbq 68:64 active ready running > > | `- 15:0:0:57 sdbs 68:96 active ready running > > `-+- policy='round-robin 0' prio=10 status=enabled > > |- 11:0:0:57 sdbp 68:48 active ready running > > `- 13:0:0:57 sdbr 68:80 active ready running > > 360060160f0d04900c120625f802ea1fa dm-9 DGC 
,VRAID > > size=25G features='2 queue_if_no_path retain_attached_hw_handler' > > hwhandler='1 alua' wp=rw > > |-+- policy='round-robin 0' prio=50 status=active > > | |- 11:0:0:234 sdav 66:240 active ready running > > | `- 13:0:0:234 sdax 67:16 active ready running > > `-+- policy='round-robin 0' prio=10 status=enabled > > |- 15:0:0:234 sday 67:32 active ready running > > `- 12:0:0:234 sdaw 67:0 active ready running > > 360060160f0d04900b8b0615fb14ef1bd dm-3 DGC ,VRAID > > size=50G features='2 queue_if_no_path retain_attached_hw_handler' > > hwhandler='1 alua' wp=rw > > |-+- policy='round-robin 0' prio=50 status=active > > | |- 11:0:0:11 sdan 66:112 active ready running > > | `- 13:0:0:11 sdap 66:144 active ready running > > `-+- policy='round-robin 0' prio=10 status=enabled > > |- 12:0:0:11 sdao 66:128 active ready running > > `- 15:0:0:11 sdaq 66:160 active ready running > > > > The active running are related to running virtual machines. > > The faulty are related to virtual macnines migrated on other kvm nodes. > > Every volume has 4 path because iscsi on unity needs two different vlans, > > each one with 2 addresses. > > I think this issue can be related to os-brick because when I migrate a > > virtual machine from host A host B in the cova compute log on host A I > read: > > 2020-10-13 10:31:02.769 118727 DEBUG os_brick.initiator.connectors.iscsi > > [req-771ede8c-6e1b-4f3f-ad4a-1f6ed820a55c > 66adb965bef64eaaab2af93ade87e2ca > > 85cace94dcc7484c85ff9337eb1d0c4c - default default] *Disconnecting from: > []* > > > > Ignazio > > Hi, > > That's definitely the right clue!! Though I don't fully agree with this > being an os-brick issue just yet. ;-) > > Like I mentioned before, RCA is usually non-trivial, and explaining how > to debug these issues over email is close to impossible, but if this > were my system, and assuming you have tested normal attach/detach > procedure and is working fine, this is what I would do: > > - Enable DEBUG logs on Nova compute node (I believe you already have) > - Attach a new device to an instance on that node with --debug to get > the request id > - Get the connection information dictionary that os-brick receives on > the call to connect_volume for that request, and the data that > os-brick returns to Nova on that method call completion. > - Check if the returned data to Nova is a multipathed device or not (in > 'path'), and whether we have the wwn or not (in 'scsi_wwn'). It > should be a multipath device, and then I would check its status in the > multipath daemon. > - Now do the live migration (with --debug to get the request id) and see > what information Nova passes in that request to os-brick's > disconnect_volume. > - Is it the same? Then it's likely an os-brick issue, and I can have a > look at the logs if you put the logs for that os-brick detach > process in a pastebin [1]. > - Is it different? Then it's either a Nova bug or a Cinder driver > specific bug. > - Is there a call from Nova to Cinder, in the migration request, for > that same volume to initialize_connection passing the source host > connector info (info from the host that is currently attached)? > If there is a call, check if the returned data is different from > the one we used to do the attach, if that's the case then it's a > Nova and Cinder driver bug that was solved on the Nova side in > 17.0.10 [2]. > - If there's no call to Cinder's initialize_connection, the it's > most likely a Nova bug. Try to find out if this connection info > makes any sense for that host (LUN, target, etc.) 
or if this is > the one from the destination volume. > > I hope this somehow helps. > > Cheers, > Gorka. > > > [1]: http://paste.openstack.org/ > [2]: https://review.opendev.org/#/c/637827/ > > > > > Il giorno mer 14 ott 2020 alle ore 13:41 Gorka Eguileor < > geguileo at redhat.com> > > ha scritto: > > > > > On 09/10, Ignazio Cassano wrote: > > > > Hello Stackers, I am using dell emc iscsi driver on my centos 7 > queens > > > > openstack. It works and instances work as well but on compute nodes I > > > got a > > > > lot a faulty device reported by multipath il comand. > > > > I do know why this happens, probably attacching and detaching > volumes and > > > > live migrating instances do not close something well. > > > > I read this can cause serious performances problems on compute nodes. > > > > Please, any workaround and/or patch is suggested ? > > > > Regards > > > > Ignazio > > > > > > Hi, > > > > > > There are many, many, many things that could be happening there, and > > > it's not usually trivial doing the RCA, so the following questions are > > > just me hoping this is something "easy" to find out. > > > > > > What os-brick version from Queens are you running? Latest (2.3.9), or > > > maybe one older than 2.3.3? > > > > > > When you say you have faulty devices reported, are these faulty devices > > > alone in the multipath DM? Or do you have some faulty ones with some > > > that are ok? > > > > > > If there are some OK and some that aren't, are they consecutive > devices? > > > (as in /dev/sda /dev/sdb etc). > > > > > > Cheers, > > > Gorka. > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Thu Oct 15 12:03:40 2020 From: zigo at debian.org (Thomas Goirand) Date: Thu, 15 Oct 2020 14:03:40 +0200 Subject: Python 3.9 is here in Debian Sid In-Reply-To: References: <12e2eefa-21c5-4b2e-67a9-662e04bd38c7@debian.org> Message-ID: On 10/15/20 12:32 PM, James Page wrote: > 3.9 is already available in Ubuntu 20.04 (albeit at an RC but it will be > updated) so that should not block enabling test gates. > > Cheers > > James Fantastic news, thanks for sharing! Thomas From smooney at redhat.com Thu Oct 15 12:43:27 2020 From: smooney at redhat.com (Sean Mooney) Date: Thu, 15 Oct 2020 13:43:27 +0100 Subject: Python 3.9 is here in Debian Sid In-Reply-To: References: <12e2eefa-21c5-4b2e-67a9-662e04bd38c7@debian.org> Message-ID: On Thu, 2020-10-15 at 11:32 +0100, James Page wrote: > On Thu, Oct 15, 2020 at 10:59 AM Thomas Goirand > wrote: > > > Hi there! > > > > Matthias Klose decided that uploading Python 3.9 on the 14th of > > this > > month (yes, the same day of the OpenStack release) would be fun. I > > am > > now really having fun ... fixing bugs! :) > > > > What's not so funny for me, is that some key packages like Greenlet > > or > > lxml aren't built yet for Python 3.9, even though they should. As a > > consequence, I just need to wait to be able to upload to Unstable. > > So > > this will delay my usual announcement for the release GA in Debian > > (even > > though the unofficial Victoria repo for Buster is ready and I could > > spawn VMs on Buster + Victoria). > > > > Anyways, all of this will settle slowly. Though Python 3.9 is here > > to > > stay, and to soon reach Ubuntu as well (it's planned for the next > > release). > > > > It'd be nice if projects were starting to investigate gating on > > Python > > 3.9 as early as possible. As much as I can tell, there's not so > > many > > issues to fix. 
The first one I fixed is here: > > > > https://review.opendev.org/758237 > > > > but I don't expect much more, and hopefully, I'll be able to > > propose > > more patch as I find the issues. > > > > Must we wait until Python 3.9 reaches Ubuntu to enable gating in > > the > > OpenStack projects? > > > > 3.9 is already available in Ubuntu 20.04 (albeit at an RC but it will > be > updated) so that should not block enabling test gates. rhel 9 wont be around for quite a while yet but i think its also going to be based on 3.9 so having early gating on 20.04 would be quite useful as i think we should be trying to make 3.9 supported in wallaby the next majory redhat openstack release will be based on wallaby and that is likely to be rhel9 based so having rhel 9 compatiablw will help all the major distos be able to ship wallaby when it is released > > Cheers > > James From fungi at yuggoth.org Thu Oct 15 12:56:34 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 15 Oct 2020 12:56:34 +0000 Subject: [requirements][oslo] Explicit requirement to setuptools. In-Reply-To: <21808778-ede3-bba2-36b4-d48a9bb07fa3@debian.org> References: <21808778-ede3-bba2-36b4-d48a9bb07fa3@debian.org> Message-ID: <20201015125633.tdb4awsurn4mn25k@yuggoth.org> On 2020-10-15 13:22:36 +0200 (+0200), Thomas Goirand wrote: [...] > IMO, this is a problem in downstream distributions, not in > OpenStack (unless this influences pip3). > > In Debian, I did hard-wire an explicit build-dependency on > python3-setuptools on each and every package. I don't understand > why you haven't done the same thing. [...] And even this isn't completely necessary... it's entirely possible for projects to only use PBR and setuptools for creating packages, but use other Python stdlib methods (e.g., importlib in Python 3.8) to access version information and related package metadata during runtime. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From zhengcharon at gmail.com Thu Oct 15 06:05:46 2020 From: zhengcharon at gmail.com (Charon Zheng) Date: Thu, 15 Oct 2020 14:05:46 +0800 Subject: Resource Requirement Message-ID: Hello! I’m a openstack beginner in China and I need your help. I am following the doc. on https://docs.openstack.org/sahara/pike/user/vanilla-plugin.html, but the image(http://sahara-files.mirantis.com/images/upstream) is 404. Could you please provide me with a new dowload link? It will be really helpful. I'm sorry if there is any bother. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mthode at mthode.org Thu Oct 15 13:37:34 2020 From: mthode at mthode.org (Matthew Thode) Date: Thu, 15 Oct 2020 08:37:34 -0500 Subject: [requirements][oslo] Explicit requirement to setuptools. In-Reply-To: References: <52e15126-b5fc-1a4d-b21e-de7f7a37abf3@redhat.com> Message-ID: <20201015133734.j5o74zeolquizus4@mthode.org> I don't think pbr uses constraints or obeys global-requirements. If it does, I don't think it should. On 20-10-15 10:15:55, Sebastien Boyron wrote: > Hi, > > Thanks Daniel for taking care of this point and contributing to it. > > Daniel already opened some reviews on this subject : > https://review.opendev.org/#/c/758028/ > > This can be tracked using topic "*setuptools-explicit*" ( > https://review.opendev.org/#/q/topic:setuptools-explicit) > > Hervé Beraud made a remark on review 758028: > > ~~~ > The rationale behind these changes LGTM. 
> > However I've some concerns related to pbr: > > pbr rely on setuptools [1] and still support python2.7 [2] > setuptools 50.3.0 only support python3 [3] > So I wonder if we should also define a version which support python2.7 to > avoid issues on with this context. setuptools dropped the support of python > 2 with 45.0.0 [4] so we could use the version 44.1.1 [5] for this use case. > > [1] https://opendev.org/openstack/pbr/src/branch/master/setup.py#L16 > [2] https://opendev.org/openstack/pbr/src/branch/master/setup.cfg#L25 > [3] https://pypi.org/project/setuptools/50.3.0/ > [4] https://setuptools.readthedocs.io/en/latest/history.html#v45-0-0 > [5] https://pypi.org/project/setuptools/44.1.1/ > ~~~ > > I think it could be worth defining the version or a rule (py2 vs py3) here > before performing a large series of patches. > > Cheers, > > *SEBASTIEN BOYRON* > Red Hat > > On Mon, Oct 5, 2020 at 10:31 AM Daniel Bengtsson wrote: > > > > > > > Le 02/10/2020 à 15:40, Sebastien Boyron a écrit : > > > I am opening the discussion and pointing to this right now, but I think > > > we should wait for the Wallaby release before doing anything on that > > > point to insert this modification > > > into the regular development cycle. On a release point of view all the > > > changes related to this proposal will be released through the classic > > > release process > > > and they will be landed with other projects changes, in other words it > > > will not require a range of specific releases for projects. > > It's a good idea. I agree explicit is better than implicit. I'm > > interesting to help on this subject. > > > > -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mthode at mthode.org Thu Oct 15 13:39:15 2020 From: mthode at mthode.org (Matthew Thode) Date: Thu, 15 Oct 2020 08:39:15 -0500 Subject: [requirements][oslo] Explicit requirement to setuptools. In-Reply-To: <20201015125633.tdb4awsurn4mn25k@yuggoth.org> References: <21808778-ede3-bba2-36b4-d48a9bb07fa3@debian.org> <20201015125633.tdb4awsurn4mn25k@yuggoth.org> Message-ID: <20201015133915.k73mqo23zncqges5@mthode.org> On 20-10-15 12:56:34, Jeremy Stanley wrote: > On 2020-10-15 13:22:36 +0200 (+0200), Thomas Goirand wrote: > [...] > > IMO, this is a problem in downstream distributions, not in > > OpenStack (unless this influences pip3). > > > > In Debian, I did hard-wire an explicit build-dependency on > > python3-setuptools on each and every package. I don't understand > > why you haven't done the same thing. > [...] > > And even this isn't completely necessary... it's entirely possible > for projects to only use PBR and setuptools for creating packages, > but use other Python stdlib methods (e.g., importlib in Python 3.8) > to access version information and related package metadata during > runtime. > -- > Jeremy Stanley For a large majority of packages, setuptools is required because they have entry-points. This mostly includes all the clients, but also hits some oslo packages as well. -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From ltoscano at redhat.com Thu Oct 15 13:53:07 2020 From: ltoscano at redhat.com (Luigi Toscano) Date: Thu, 15 Oct 2020 15:53:07 +0200 Subject: [sahara] Resource Requirement In-Reply-To: References: Message-ID: <6190999.G0QQBjFxQf@whitebase.usersys.redhat.com> On Thursday, 15 October 2020 08:05:46 CEST Charon Zheng wrote: > Hello! > > I’m a openstack beginner in China and I need your help. > > I am following the doc. on > https://docs.openstack.org/sahara/pike/user/vanilla-plugin.html, but the > image(http://sahara-files.mirantis.com/images/upstream) is 404. Could you > please provide me with a new dowload link? It will be really helpful. > Please note the Mirantis images were provided just for convenience, but you should try to rebuild your own images as explained in that page: https://docs.openstack.org/sahara/pike/user/vanilla-imagebuilder.html Please also note that pike is no more supported by the sahara team. -- Luigi From fungi at yuggoth.org Thu Oct 15 14:08:46 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 15 Oct 2020 14:08:46 +0000 Subject: [requirements][oslo] Explicit requirement to setuptools. In-Reply-To: <20201015133915.k73mqo23zncqges5@mthode.org> References: <21808778-ede3-bba2-36b4-d48a9bb07fa3@debian.org> <20201015125633.tdb4awsurn4mn25k@yuggoth.org> <20201015133915.k73mqo23zncqges5@mthode.org> Message-ID: <20201015140846.24dnruz2newnv6rj@yuggoth.org> On 2020-10-15 08:39:15 -0500 (-0500), Matthew Thode wrote: > On 20-10-15 12:56:34, Jeremy Stanley wrote: > > On 2020-10-15 13:22:36 +0200 (+0200), Thomas Goirand wrote: > > [...] > > > IMO, this is a problem in downstream distributions, not in > > > OpenStack (unless this influences pip3). > > > > > > In Debian, I did hard-wire an explicit build-dependency on > > > python3-setuptools on each and every package. I don't understand > > > why you haven't done the same thing. > > [...] > > > > And even this isn't completely necessary... it's entirely possible > > for projects to only use PBR and setuptools for creating packages, > > but use other Python stdlib methods (e.g., importlib in Python 3.8) > > to access version information and related package metadata during > > runtime. > > For a large majority of packages, setuptools is required because they > have entry-points. This mostly includes all the clients, but also hits > some oslo packages as well. https://docs.python.org/3/library/importlib.metadata.html#entry-points Maybe time to start thinking about a cycle goal around that? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From ildiko.vancsa at gmail.com Thu Oct 15 14:09:20 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Thu, 15 Oct 2020 16:09:20 +0200 Subject: [cyborg] Edge sync up at the PTG In-Reply-To: References: Message-ID: <0588042E-526F-4C57-8C4A-9EBBF706EFF5@gmail.com> Hi Yumeng, Sorry for being so impatient, forgot about the holidays. We could do 1400 UTC on Wednesday if the slot after the sync with Nova works? I was just wondering about that one as there might be some aspects of that session that’s interesting for the edge group. If that doesn’t work we can do Tuesday as you suggested. Which one do you prefer? Thanks, Ildikó > On Oct 15, 2020, at 11:39, yumeng bao wrote: > > Hi Ildikó, > > Sorry for late response, I was just back from holidays. 
I saw your maillist thread this afternoon and was about to response when I saw this mail. ^^ > > Wednesday (October 28) at 1300 UTC - 1400 UTC has been occupied by nova-cyborg integration discussion[1]. > Can we pick up another time? Like Tuesday (October 27) at 1300 UTC - 1400 UTC? > > Thanks, > Yumeng > > [1]https://etherpad.opendev.org/p/cyborg-wallaby-goals > > Regards, > Yumeng > >> On Oct 14, 2020, at 2:52 PM, Ildiko Vancsa wrote: >> >> Hi Yumeng, >> >> I hope you are doing good. >> >> I’m reaching out to you to check if you saw my email on the openstack-discuss mailing list about having a joint session with the edge wg again at the upcoming PTG? >> >> Thanks, >> Ildikó >> >> >>> Begin forwarded message: >>> >>> From: Ildiko Vancsa >>> Subject: [cyborg] Edge sync up at the PTG >>> Date: October 7, 2020 at 17:17:00 GMT+2 >>> To: OpenStack Discuss >>> >>> Hi Cyborg Team, >>> >>> I’m reaching out to check if you are available during the upcoming PTG to continue discussions we were having in June to see where Cyborg evolved since then and how we can continue collaborating. >>> >>> Like last time, the OSF Edge Computing Group is meeting on the first three days and we have a slot reserved to sync up with OpenStack projects such as Cyborg on Wednesday (October 28) at 1300 UTC - 1400 UTC. Would the team be available to join at that time? >>> >>> Our planning etherpad is here: https://etherpad.opendev.org/p/ecg-vptg-october-2020 >>> >>> Thanks, >>> Ildikó >>> >>> >> From sean.mcginnis at gmx.com Thu Oct 15 14:21:28 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 15 Oct 2020 09:21:28 -0500 Subject: Python 3.9 is here in Debian Sid In-Reply-To: References: <12e2eefa-21c5-4b2e-67a9-662e04bd38c7@debian.org> Message-ID: > > 3.9 is already available in Ubuntu 20.04 (albeit at an RC but it will > be updated) so that should not block enabling test gates. > > Cheers > > James It might be good to add a non-voting "openstack-tox-py39" job to the wallaby template: https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/zuul.d/project-templates.yaml#L488 All official projects should be running with that template now, so it would be an easy way to get jobs going and start to see what issues are uncovered. Sean -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Oct 15 14:40:32 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 15 Oct 2020 16:40:32 +0200 Subject: [neutron] Drivers meeting - 16.10.2020 Message-ID: <1923227.kyBMuW35D4@p1> Hi, Due to lack of agenda (and day off for me, and other Red Hatters :)) I'm cancelling tomorrow's drivers meeting. See You next week on the OpenInfra Summit :) -- Slawek Kaplonski Principal Software Engineer Red Hat From laurentfdumont at gmail.com Thu Oct 15 14:40:39 2020 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Thu, 15 Oct 2020 10:40:39 -0400 Subject: docs.openstack.org down? Message-ID: Hey everyone, Trying to find some obscure flavor references and it seems that the docs site is down? Any maintenance happening? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu Oct 15 15:19:11 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 15 Oct 2020 10:19:11 -0500 Subject: docs.openstack.org down? In-Reply-To: References: Message-ID: > Hey everyone, > > Trying to find some obscure flavor references and it seems that the > docs site is down? Any maintenance happening? > > Thanks! 
There appears to have been some sort of Apache web server issue. I believe it is now resolved. If you still see issues, troubleshooting is (was) taking place in the #opendev channel on IRC. From tonyliu0592 at hotmail.com Thu Oct 15 15:53:10 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Thu, 15 Oct 2020 15:53:10 +0000 Subject: [Kolla] why is Ceph package installed in OVN container? Message-ID: Hi, Regarding the Kolla OVN Ussuri container, for example [1], image layer #28: why is centos-release-ceph-nautilus installed in the OVN container? [1] https://hub.docker.com/layers/kolla/centos-binary-ovn-nb-db-server/ussuri/images/sha256-1101c068abe997ccd916290348f34d32d44c8818ff360abe8b4a50501c131896?context=explore Thanks! Tony From smooney at redhat.com Thu Oct 15 15:56:55 2020 From: smooney at redhat.com (Sean Mooney) Date: Thu, 15 Oct 2020 16:56:55 +0100 Subject: [Kolla] why is Ceph package installed in OVN container? In-Reply-To: References: Message-ID: <78d129007818b8f71f3441ef7f19d6fde4d9c0b5.camel@redhat.com> On Thu, 2020-10-15 at 15:53 +0000, Tony Liu wrote: > Hi, > > Regarding to Kolla OVN Ussuri container, for example [1], > image layer #28. > > Why is centos-release-ceph-nautilus installed in OVN container? > > [1] > https://hub.docker.com/layers/kolla/centos-binary-ovn-nb-db-server/ussuri/images/sha256-1101c068abe997ccd916290348f34d32d44c8818ff360abe8b4a50501c131896?context=explore probably because of this: https://github.com/openstack/kolla/blob/master/docker/base/Dockerfile.j2#L192-L196 > > > Thanks! > Tony > > From tonyliu0592 at hotmail.com Thu Oct 15 16:01:43 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Thu, 15 Oct 2020 16:01:43 +0000 Subject: [Kolla] why is Ceph package installed in OVN container? In-Reply-To: <78d129007818b8f71f3441ef7f19d6fde4d9c0b5.camel@redhat.com> References: <78d129007818b8f71f3441ef7f19d6fde4d9c0b5.camel@redhat.com> Message-ID: > -----Original Message----- > From: Sean Mooney > Sent: Thursday, October 15, 2020 8:57 AM > To: Tony Liu ; openstack- > discuss at lists.openstack.org > Subject: Re: [Kolla] why is Ceph package installed in OVN container? 
> > > > On Thu, 2020-10-15 at 15:53 +0000, Tony Liu wrote: > > > Hi, > > > > > > Regarding to Kolla OVN Ussuri container, for example [1], image layer > > > #28. > > > > > > Why is centos-release-ceph-nautilus installed in OVN container? > > > > > > [1] > > > https://hub.docker.com/layers/kolla/centos-binary-ovn-nb-db-server/uss > > > uri/images/sha256-1101c068abe997ccd916290348f34d32d44c8818ff360abe8b4a > > > 50501c131896?context=explore > > probaly because of this > > https://github.com/openstack/kolla/blob/master/docker/base/Dockerfile.j > > 2#L192-L196 > > Then, why is that service and release specific package required > in base image? It doesn't cause issues other than taking a bit extra > space, but shouldn't it be removed if it's not required? centos-release-ceph-nautilus is just a small package containing the yum repos for the CentOS storage SIG's Ceph Nautilus. Ceph itself should not be installed. > > > > > > > > > > Thanks! > > > Tony > > > > > > > > > From tonyliu0592 at hotmail.com Thu Oct 15 16:17:25 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Thu, 15 Oct 2020 16:17:25 +0000 Subject: [Kolla] container images for Victoria? Message-ID: Hi, I see the announcement of Victoria release. What's the plan to release Kolla container images for Victoria? Kolla Ussuri installs packages from [1]. For Victoria, [2] is available. But in [2], I don't find OpenvSwitch and OVN packages. Are they going to be provided later, or there is any new changes? [1] http://mirror.iad.rax.opendev.org/centos/8/cloud/x86_64/openstack-ussuri/ [2] http://mirror.iad.rax.opendev.org/centos/8/cloud/x86_64/openstack-victoria/ Thanks! Tony From smooney at redhat.com Thu Oct 15 16:32:53 2020 From: smooney at redhat.com (Sean Mooney) Date: Thu, 15 Oct 2020 17:32:53 +0100 Subject: [Kolla] why is Ceph package installed in OVN container? In-Reply-To: References: <78d129007818b8f71f3441ef7f19d6fde4d9c0b5.camel@redhat.com> Message-ID: On Thu, 2020-10-15 at 16:01 +0000, Tony Liu wrote: > > -----Original Message----- > > From: Sean Mooney > > Sent: Thursday, October 15, 2020 8:57 AM > > To: Tony Liu ; openstack- > > discuss at lists.openstack.org > > Subject: Re: [Kolla] why is Ceph package installed in OVN container? > > > > On Thu, 2020-10-15 at 15:53 +0000, Tony Liu wrote: > > > Hi, > > > > > > Regarding to Kolla OVN Ussuri container, for example [1], image layer > > > #28. > > > > > > Why is centos-release-ceph-nautilus installed in OVN container? > > > > > > [1] > > > https://hub.docker.com/layers/kolla/centos-binary-ovn-nb-db-server/uss > > > uri/images/sha256-1101c068abe997ccd916290348f34d32d44c8818ff360abe8b4a > > > 50501c131896?context=explore > > probaly because of this > > https://github.com/openstack/kolla/blob/master/docker/base/Dockerfile.j > > 2#L192-L196 > > Then, why is that service and release specific package required > in base image? It doesn't cause issues other than taking a bit extra > space, but shouldn't it be removed if it's not required? it shoudl be installed in nova-base cinder-base glance base or openstack-base but its looks like tis been installed in base for 4+ years i dont really know why but it proably can be moved > > > > > > > > > > Thanks! > > > Tony > > > > > > > > > From fungi at yuggoth.org Thu Oct 15 16:33:43 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 15 Oct 2020 16:33:43 +0000 Subject: [Kolla] container images for Victoria? 
In-Reply-To: References: Message-ID: <20201015163342.ufgxiez4brwsepiu@yuggoth.org> On 2020-10-15 16:17:25 +0000 (+0000), Tony Liu wrote: > I see the announcement of Victoria release. > What's the plan to release Kolla container images for Victoria? > > Kolla Ussuri installs packages from [1]. For Victoria, [2] is available. > But in [2], I don't find OpenvSwitch and OVN packages. Are they going > to be provided later, or there is any new changes? > > [1] http://mirror.iad.rax.opendev.org/centos/8/cloud/x86_64/openstack-ussuri/ > [2] http://mirror.iad.rax.opendev.org/centos/8/cloud/x86_64/openstack-victoria/ Hopefully production Kolla deployment doesn't obtain packages from those URLs. They're package caches for test systems, and are not guaranteed to exist long term (we could rename them at any moment or take them indefinitely offline for extended maintenance), nor are they necessarily guaranteed to be reachable outside the test regions of our CI system to which they're dedicated. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From tonyliu0592 at hotmail.com Thu Oct 15 16:48:11 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Thu, 15 Oct 2020 16:48:11 +0000 Subject: [Kolla] container images for Victoria? In-Reply-To: <20201015163342.ufgxiez4brwsepiu@yuggoth.org> References: <20201015163342.ufgxiez4brwsepiu@yuggoth.org> Message-ID: > -----Original Message----- > From: Jeremy Stanley > Sent: Thursday, October 15, 2020 9:34 AM > To: Tony Liu > Cc: openstack-discuss at lists.openstack.org > Subject: Re: [Kolla] container images for Victoria? > > On 2020-10-15 16:17:25 +0000 (+0000), Tony Liu wrote: > > I see the announcement of Victoria release. > > What's the plan to release Kolla container images for Victoria? > > > > Kolla Ussuri installs packages from [1]. For Victoria, [2] is > available. > > But in [2], I don't find OpenvSwitch and OVN packages. Are they going > > to be provided later, or there is any new changes? > > > > [1] > > http://mirror.iad.rax.opendev.org/centos/8/cloud/x86_64/openstack-ussu > > ri/ [2] > > http://mirror.iad.rax.opendev.org/centos/8/cloud/x86_64/openstack-vict > > oria/ > > Hopefully production Kolla deployment doesn't obtain packages from those > URLs. They're package caches for test systems, and are not guaranteed to > exist long term (we could rename them at any moment or take them > indefinitely offline for extended maintenance), nor are they necessarily > guaranteed to be reachable outside the test regions of our CI system to > which they're dedicated. Where is production Kolla deployment supposed to obtain packages? Thanks! Tony > -- > Jeremy Stanley From S.Kieske at mittwald.de Thu Oct 15 16:50:59 2020 From: S.Kieske at mittwald.de (Sven Kieske) Date: Thu, 15 Oct 2020 16:50:59 +0000 Subject: [Kolla Ansible] RabbitMQ Interface Configuration Message-ID: Hi, I got a question regarding a change which was made, quite some time ago in kolla-ansible. The change in question is: https://review.opendev.org/#/c/584427/ specifically the following diff, the file was moved to a new format and name, but the possibility to configure the used interface for rabbitmq/erlang was removed. May I ask if this was maybe by accident, or what the reason for the removal of these parameters was? I'm asking because I'm currently deploying Openstack and am in the process of hardening the configuration. 
It stood out to me, that the beam vm from rabbitmq listens on all interfaces[1], so I wanted to change that. If there is another way to change this via kolla-ansible, it would be very kind to let me know. Notice, I do not try to configure "ERL_EPMD_ADDRESS" (which we already do), but to control the TCP Port 25672, which, as far as I understood the rabbitmq docs, is controlled via the erlang/beam vm "inet_dist_use_interface" parameter, which was removed in this changeset. But I might be totally wrong, I find the RabbitMQ docs a little hard to parse at times. This is currently a deployment with 3 rabbitmq nodes, if that matters. Thank you very much for your time in advance! See here the relevant diff, for convenience: commit b163cb02d1486f8844ac52e619de7b62321e42b0 Author: Paul Bourke Date: Fri Jul 20 16:35:25 2018 +0100 Update rabbitmq to use new conf & clustering Depends-On: I75e00312b36e1678b90a42cf58d24652323eff27 Change-Id: Ia716fabffca41eff816e59bbf9f4cab79ee8b72f diff --git a/ansible/roles/rabbitmq/templates/rabbitmq.config.j2 b/ansible/roles/rabbitmq/templates/rabbitmq.config. j2 deleted file mode 100644 index 960f9fb8a..000000000 --- a/ansible/roles/rabbitmq/templates/rabbitmq.config.j2 +++ /dev/null @@ -1,24 +0,0 @@ -[ - {kernel, [ - {inet_dist_use_interface, {% raw %}{{% endraw %}{{ api_interface_address | regex_replace('\.', ',') }}}}, - {inet_dist_listen_min, {{ role_rabbitmq_cluster_port }}}, - {inet_dist_listen_max, {{ role_rabbitmq_cluster_port }}} [1]: ss -tulpn | awk '$5 ~ /0.0.0.0:|\[::\]:/ && /beam/' tcp LISTEN 0 128 0.0.0.0:25672 0.0.0.0:* users:(("beam.smp",pid=194345,fd=63)) -- Mit freundlichen Grüßen / Regards Sven Kieske Systementwickler Mittwald CM Service GmbH & Co. KG Königsberger Straße 4-6 32339 Espelkamp Tel.: 05772 / 293-900 Fax: 05772 / 293-333 https://www.mittwald.de Geschäftsführer: Robert Meyer, Florian Jürgens St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen Informationen zur Datenverarbeitung im Rahmen unserer Geschäftstätigkeit gemäß Art. 13-14 DSGVO sind unter www.mittwald.de/ds abrufbar. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From fungi at yuggoth.org Thu Oct 15 16:57:11 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 15 Oct 2020 16:57:11 +0000 Subject: [Kolla] container images for Victoria? In-Reply-To: References: <20201015163342.ufgxiez4brwsepiu@yuggoth.org> Message-ID: <20201015165711.af5bqfecdidnwgxz@yuggoth.org> On 2020-10-15 16:48:11 +0000 (+0000), Tony Liu wrote: [...] > > > [1] > > > http://mirror.iad.rax.opendev.org/centos/8/cloud/x86_64/openstack-ussu > > > ri/ [2] > > > http://mirror.iad.rax.opendev.org/centos/8/cloud/x86_64/openstack-vict > > > oria/ > > > > Hopefully production Kolla deployment doesn't obtain packages from those > > URLs. They're package caches for test systems, and are not guaranteed to > > exist long term (we could rename them at any moment or take them > > indefinitely offline for extended maintenance), nor are they necessarily > > guaranteed to be reachable outside the test regions of our CI system to > > which they're dedicated. > > Where is production Kolla deployment supposed to obtain packages? 
If you're building your own images, then they should ideally get pulled from somewhere like http://mirror.centos.org/centos/8/ or one of the specific official CentOS mirrors close to where you're doing your image builds (definitely not from one of opendev.org's random unofficial CI system mirrors). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From lbragstad at gmail.com Thu Oct 15 18:06:15 2020 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 15 Oct 2020 13:06:15 -0500 Subject: [Keystone] 'list' object has no attribute 'get' In-Reply-To: References: Message-ID: Hi Tony, You can submit bug reports against keystone using Launchpad [0]. Please include relevant information about how you produced the issue. [0] https://bugs.launchpad.net/keystone/+filebug On Thu, Sep 10, 2020 at 11:47 AM Tony Liu wrote: > Is this known issue with openstack-keystone-17.0.0-1.el8.noarch? > > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > [req-3bcdd315-1975-4d8a-969d-166dd3e8a3b6 113ee63a9ed0466794e24d069efc302c > 4c142a681d884010ab36a7ac687d910c - default default] 'list' object has no > attribute 'get': AttributeError: 'list' object has no attribute 'get' > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context Traceback > (most recent call last): > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py", > line 103, in _inner > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context return > method(self, request) > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py", > line 353, in process_request > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context resp = > super(AuthContextMiddleware, self).process_request(request) > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/keystonemiddleware/auth_token/__init__.py", > line 411, in process_request > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > allow_expired=allow_expired) > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/keystonemiddleware/auth_token/__init__.py", > line 445, in _do_fetch_token > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context data = > self.fetch_token(token, **kwargs) > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py", > line 248, in fetch_token > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context token, > access_rules_support=ACCESS_RULES_MIN_VERSION) > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > 
"/usr/lib/python3.6/site-packages/keystone/common/manager.py", line 115, in > wrapped > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > __ret_val = __f(*args, **kwargs) > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/keystone/token/provider.py", line 145, in > validate_token > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context token > = self._validate_token(token_id) > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "", line 2, > in _validate_token > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 1360, in > get_or_create_for_user_func > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context key, > user_func, timeout, should_cache_fn, (arg, kw) > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 962, in > get_or_create > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > async_creator, > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/dogpile/lock.py", line 187, in __enter__ > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context return > self._enter() > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/dogpile/lock.py", line 94, in _enter > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > generated = self._enter_create(value, createdtime) > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/dogpile/lock.py", line 180, in > _enter_create > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context return > self.creator() > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 916, in > gen_value > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > *creator_args[0], **creator_args[1] > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/keystone/token/provider.py", line 179, in > _validate_token > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > token.mint(token_id, issued_at) > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/keystone/models/token_model.py", line > 579, in mint > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > self._validate_token_resources() > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/keystone/models/token_model.py", line > 471, in 
_validate_token_resources > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context if > self.project and not self.project_domain.get('enabled'): > 2020-09-10 09:35:53.638 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > AttributeError: 'list' object has no attribute 'get' > > > Thanks! > Tony > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcin.juszkiewicz at linaro.org Thu Oct 15 18:15:24 2020 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Thu, 15 Oct 2020 20:15:24 +0200 Subject: [Kolla] why is Ceph package installed in OVN container? In-Reply-To: References: <78d129007818b8f71f3441ef7f19d6fde4d9c0b5.camel@redhat.com> Message-ID: W dniu 15.10.2020 o 18:32, Sean Mooney pisze: >> Then, why is that service and release specific package required >> in base image? It doesn't cause issues other than taking a bit extra >> space, but shouldn't it be removed if it's not required? > it shoudl be installed in nova-base cinder-base glance base > or openstack-base > but its looks like tis been installed in base for 4+ years > > i dont really know why but it proably can be moved We install all packages adding repositories in base image. And disable them in next step (same layer). Then images which need those repos have them enabled via 'dnf config-manager --enable REPONAME' so it can be used. This change was done quite a while ago and was improvement compared to previous situation when all repos were enabled by default. From tonyliu0592 at hotmail.com Thu Oct 15 18:49:58 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Thu, 15 Oct 2020 18:49:58 +0000 Subject: [Kolla] container images for Victoria? In-Reply-To: <20201015165711.af5bqfecdidnwgxz@yuggoth.org> References: <20201015163342.ufgxiez4brwsepiu@yuggoth.org> <20201015165711.af5bqfecdidnwgxz@yuggoth.org> Message-ID: > -----Original Message----- > From: Jeremy Stanley > Sent: Thursday, October 15, 2020 9:57 AM > To: openstack-discuss at lists.openstack.org > Subject: Re: [Kolla] container images for Victoria? > > On 2020-10-15 16:48:11 +0000 (+0000), Tony Liu wrote: > [...] > > > > [1] > > > > http://mirror.iad.rax.opendev.org/centos/8/cloud/x86_64/openstack- > > > > ussu > > > > ri/ [2] > > > > http://mirror.iad.rax.opendev.org/centos/8/cloud/x86_64/openstack- > > > > vict > > > > oria/ > > > > > > Hopefully production Kolla deployment doesn't obtain packages from > > > those URLs. They're package caches for test systems, and are not > > > guaranteed to exist long term (we could rename them at any moment or > > > take them indefinitely offline for extended maintenance), nor are > > > they necessarily guaranteed to be reachable outside the test regions > > > of our CI system to which they're dedicated. > > > > Where is production Kolla deployment supposed to obtain packages? > > If you're building your own images, then they should ideally get pulled > from somewhere like http://mirror.centos.org/centos/8/ or one of the > specific official CentOS mirrors close to where you're doing your image > builds (definitely not from one of opendev.org's random unofficial CI > system mirrors). I see ovn and openvswitch packages in [1], but not in [2]. Where are those packages built, RDO? Any way to know when ovn and openvswitch packages will be available? [1] http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/o/ [2] http://mirror.centos.org/centos/8/cloud/x86_64/openstack-victoria/Packages/o/ Thanks! 
Tony > -- > Jeremy Stanley From laurentfdumont at gmail.com Thu Oct 15 20:58:21 2020 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Thu, 15 Oct 2020 16:58:21 -0400 Subject: docs.openstack.org down? In-Reply-To: References: Message-ID: Thanks! Looks all clear now :) On Thu, Oct 15, 2020 at 11:27 AM Sean McGinnis wrote: > > > Hey everyone, > > > > Trying to find some obscure flavor references and it seems that the > > docs site is down? Any maintenance happening? > > > > Thanks! > There appears to have been some sort of Apache web server issue. I > believe it is now resolved. If you still see issues, troubleshooting is > (was) taking place in the #opendev channel on IRC. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Oct 15 21:06:19 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 15 Oct 2020 21:06:19 +0000 Subject: docs.openstack.org down? In-Reply-To: References: Message-ID: <20201015210618.dwatuvqfvfkjhukj@yuggoth.org> On 2020-10-15 16:58:21 -0400 (-0400), Laurent Dumont wrote: [...] > > There appears to have been some sort of Apache web server issue. > > I believe it is now resolved. If you still see issues, > > troubleshooting is (was) taking place in the #opendev channel on > > IRC. > > Thanks! Looks all clear now :) For those interested in following along, the more permanent mitigation is being engineered here: https://review.opendev.org/#/q/topic:ua-filter -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From laurentfdumont at gmail.com Fri Oct 16 00:00:51 2020 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Thu, 15 Oct 2020 20:00:51 -0400 Subject: High level benchmarking - Openstack Tenant/Provider networking. Message-ID: Hey everyone, We are currently working on some baseline user network benchmarking in our Openstack deployments. We are facing some strange issues where the actual real-life performance of VM are not what we expect. We have some large variations in terms of performance between hardware generations that we would like to confirm the nature of. While we are looking at deploying OPFNV / smaller tools to give us some data regarding our performance, are there any community resources that might list benchmarks previously done by other Openstack users? Our deployments specs are pretty standard some I'm surprised I couldn't find some baseline benchmarks online. Something like : 4vCPU VM 72 CPU Hypervisor Linux Bridge with iptables for security-groups implementation 1500 MTU / 9000 MTU with 10G ports You can expect : x PPS / x BW for VM to VM on vxlan x PPS / x BW for VM to VM on vlan Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From cwy5056 at gmail.com Fri Oct 16 00:33:01 2020 From: cwy5056 at gmail.com (fisheater chang) Date: Fri, 16 Oct 2020 00:33:01 +0000 Subject: [ceilometer] How to install old version of gnocchi Message-ID: I use Ubuntu 18.04 to install https://docs.openstack.org/ceilometer/stein/install/install-base-ubuntu.html (ceilometer) When I run service gnocchi-api restart, I get Failed to restart gnocchi-api.service: Unit gnocchi-api.service not found. I can't find any solution to address this issue. 
Someone said the new version doesn't include the gnocchi-api, so I am trying to install the previous version, but I don't know where I can find the old version of gnocchi-api https://stackoverflow.com/questions/47520779/service-gnocchi-api-not-found Can anyone help? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From gouthampravi at gmail.com Fri Oct 16 01:04:37 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Thu, 15 Oct 2020 18:04:37 -0700 Subject: [manila] No IRC Meetings on 22nd Oct, 29th Oct 2020 Message-ID: Hello Zorillas, As mentioned in the meeting today, we'll not be running our weekly IRC meetings on 22nd Oct and 29th Oct 2020 because of the Summit and the PTG. I hope to see you all at these events. For your convenience, here's a list of events we'd love to have you attend: Forum & Summit ============= OpenStack storage for Containers Monday, October 19, 1:15pm - 2:00pm UTC https://www.openstack.org/summit/2020/summit-schedule/events/24763/openstack-storage-for-containers Operational Concerns for Openstack Manila Tuesday, October 20, 1:15pm - 2:00pm UTC https://www.openstack.org/summit/2020/summit-schedule/events/24762/operational-concerns-for-openstack-manila Connecting ecosystems: How Cinder CSI, Ember CSI and Manila CSI leverage OpenStack bits in Kubernetes Tuesday, October 20, 5:15pm - 5:45pm UTC https://www.openstack.org/summit/2020/summit-schedule/events/24634/connecting-ecosystems-how-cinder-csi-ember-csi-and-manila-csi-leverage-openstack-bits-in-kubernetes PTG ==== Will share the final schedule over email next week. Topics/Plans: https://etherpad.opendev.org/p/wallaby-ptg-manila-planning Schedule: http://ptg.openstack.org/ptg.html Dates and Times: - Oct 26th 2020, Mon: 1500-1700 UTC - Oct 27th 2020, Tue: 1400-1700 UTC - Oct 28th 2020, Wed: 1400-1700 UTC - Oct 30th 2020, Fri: 1400-1600 UTC (Happy Hour: 1600-1700 UTC) Thanks, Goutham From iwienand at redhat.com Fri Oct 16 02:56:49 2020 From: iwienand at redhat.com (Ian Wienand) Date: Fri, 16 Oct 2020 13:56:49 +1100 Subject: [diskimage-builder][ironic-python-agent-builder][ci][focal][ironic] ipa-builder CI jobs can't migrate to ubuntu focal nodeset In-Reply-To: <20201008041835.GA1011725@fedora19.localdomain> References: <20201008041835.GA1011725@fedora19.localdomain> Message-ID: <20201016025649.GA1494388@fedora19.localdomain> On Thu, Oct 08, 2020 at 03:18:35PM +1100, Ian Wienand wrote: > - As clarkb has mentioned, probably the most promising alternative is > to use the upstream container images as the basis for the initial > chroot environments. I just got this working for ubuntu-focal with [1]. It needs cleanups and changes in a few different places, but I think it shows it works. I'll keep working on it, but I think it is a promising alternative. -i [1] https://review.opendev.org/#/c/722148/ From doug at doughellmann.com Fri Oct 16 03:06:58 2020 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 15 Oct 2020 23:06:58 -0400 Subject: [requirements][oslo] Explicit requirement to setuptools. In-Reply-To: <20201015140846.24dnruz2newnv6rj@yuggoth.org> References: <20201015140846.24dnruz2newnv6rj@yuggoth.org> Message-ID: <7AB8356D-6A54-4A1E-80B3-5BD5E257E8CA@doughellmann.com> > On Oct 15, 2020, at 10:19 AM, Jeremy Stanley wrote: > > On 2020-10-15 08:39:15 -0500 (-0500), Matthew Thode wrote: >>> On 20-10-15 12:56:34, Jeremy Stanley wrote: >>>> On 2020-10-15 13:22:36 +0200 (+0200), Thomas Goirand wrote: >>> [...] 
>>>> IMO, this is a problem in downstream distributions, not in >>>> OpenStack (unless this influences pip3). >>>> >>>> In Debian, I did hard-wire an explicit build-dependency on >>>> python3-setuptools on each and every package. I don't understand >>>> why you haven't done the same thing. >>> [...] >>> >>> And even this isn't completely necessary... it's entirely possible >>> for projects to only use PBR and setuptools for creating packages, >>> but use other Python stdlib methods (e.g., importlib in Python 3.8) >>> to access version information and related package metadata during >>> runtime. >> >> For a large majority of packages, setuptools is required because they >> have entry-points. This mostly includes all the clients, but also hits >> some oslo packages as well. > > https://docs.python.org/3/library/importlib.metadata.html#entry-points > > Maybe time to start thinking about a cycle goal around that? The latest version of stevedore already uses importlib.metadata instead of setuptools and quite a few of the libraries and services no longer need it either. https://review.opendev.org/#/q/topic:osc-performance+(status:open+OR+status:merged) Doug -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnasiadka at gmail.com Fri Oct 16 04:40:59 2020 From: mnasiadka at gmail.com (=?UTF-8?Q?Micha=C5=82_Nasiadka?=) Date: Fri, 16 Oct 2020 06:40:59 +0200 Subject: [Kolla] container images for Victoria? In-Reply-To: References: <20201015163342.ufgxiez4brwsepiu@yuggoth.org> <20201015165711.af5bqfecdidnwgxz@yuggoth.org> Message-ID: Hi Tony, RDO now (Victoria) depends on OVS/OVN packages from CentOS NFV SIG: http://mirror.centos.org/centos/8/nfv/x86_64/openvswitch-2/ Michal On Thu, 15 Oct 2020 at 20:52, Tony Liu wrote: > > -----Original Message----- > > From: Jeremy Stanley > > Sent: Thursday, October 15, 2020 9:57 AM > > To: openstack-discuss at lists.openstack.org > > Subject: Re: [Kolla] container images for Victoria? > > > > On 2020-10-15 16:48:11 +0000 (+0000), Tony Liu wrote: > > [...] > > > > > [1] > > > > > http://mirror.iad.rax.opendev.org/centos/8/cloud/x86_64/openstack- > > > > > ussu > > > > > ri/ [2] > > > > > http://mirror.iad.rax.opendev.org/centos/8/cloud/x86_64/openstack- > > > > > vict > > > > > oria/ > > > > > > > > Hopefully production Kolla deployment doesn't obtain packages from > > > > those URLs. They're package caches for test systems, and are not > > > > guaranteed to exist long term (we could rename them at any moment or > > > > take them indefinitely offline for extended maintenance), nor are > > > > they necessarily guaranteed to be reachable outside the test regions > > > > of our CI system to which they're dedicated. > > > > > > Where is production Kolla deployment supposed to obtain packages? > > > > If you're building your own images, then they should ideally get pulled > > from somewhere like http://mirror.centos.org/centos/8/ or one of the > > specific official CentOS mirrors close to where you're doing your image > > builds (definitely not from one of opendev.org's random unofficial CI > > system mirrors). > > I see ovn and openvswitch packages in [1], but not in [2]. > Where are those packages built, RDO? > Any way to know when ovn and openvswitch packages will be available? > > [1] > http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/o/ > [2] > http://mirror.centos.org/centos/8/cloud/x86_64/openstack-victoria/Packages/o/ > > Thanks! 
> Tony > > -- > > Jeremy Stanley > > -- Michał Nasiadka mnasiadka at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangerzonen at gmail.com Fri Oct 16 08:21:32 2020 From: dangerzonen at gmail.com (dangerzone ar) Date: Fri, 16 Oct 2020 16:21:32 +0800 Subject: [TripleO]Train overcloud deployment Message-ID: Hi, Has Anyone have deploy openstack train using tripleO? mind to share what template that you are using for overcloud deployment? Openstack guideline does not provide updated steps based on train..some of the template shown in openstack portal is no longer available in train thus I'm not sure which template that should I focus and add onto overcloud deployment. I would appreciate it if someone could share what template that you add onto overcloud deployment, thus I can focus as so many templates define. I plan to have few dpdk computes and 3 controllers. Thank you for your attention and help. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangerzonen at gmail.com Fri Oct 16 08:23:44 2020 From: dangerzonen at gmail.com (dangerzone ar) Date: Fri, 16 Oct 2020 16:23:44 +0800 Subject: [Openstack][Cinder] In-Reply-To: References: Message-ID: Noted and thank you. I'm running tripleo deployment. Thank you. On Wed, Oct 14, 2020 at 12:53 AM Alan Bishop wrote: > > > On Tue, Oct 13, 2020 at 9:50 AM Alan Bishop wrote: > >> >> >> On Sun, Oct 11, 2020 at 11:20 PM dangerzone ar >> wrote: >> >>> Hi Team, May I know if anyone has deployed Huawei oceanstor SAN storage >>> during overcloud deployment? >>> >> >> Hi, >> >> Your use of the term "overcloud" suggests you are using TripleO. My >> response assumes that is true, and you should probably ignore my response >> if you're not using TripleO. >> >> I have not deployed TripleO with a Huawei SAN for the cinder backend, but >> it should be possible. >> >> >>> (i) Do you need a specific driver to define or download in order to >>> deploy it? >>> >> >> See [1] for many details related to deploying the Huawei cinder driver. >> >> [1] >> https://docs.openstack.org/cinder/rocky/configuration/block-storage/drivers/huawei-storage-driver.html >> > > That's the Rocky documentation. I don't know if the documentation has > changed, but here's a link to the latest version: > > > https://docs.openstack.org/cinder/latest/configuration/block-storage/drivers/huawei-storage-driver.html > > >> >>> (ii) Can I deploy separately SAN storage after overcloud deployment, I >>> mean after a month of openstack deployment i want to add SAN storage to my >>> infrastructure. Is it possible? >>> >> >> The short answer is yes. TripleO has the ability to deploy additional >> cinder storage backends via what's known as a stack (i.e. the overcloud) >> update. The initial overcloud deployment can be done using another cinder >> backend X, and later you can add a Huawei backend so the overcloud has two >> backends (X + Huawei). >> >> >>> (iii) Please advise me how to deploy Huawei oceanstor SAN storage. >>> >>> >> TripleO does not have specific support for deploying a Huawei SAN, but >> you can still deploy one by following [2]. That doc describes the technique >> for how to deploy the Huawei SAN as a "custom" block storage device. The >> document provides an example for deploying two NetApp backends, but the >> concept will be the same for you to deploy a single Huawei backend. The key >> will be crafting the TripleO environment file to configure the Huawei >> settings described in [1]. 
>> >> [2] >> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/cinder_custom_backend.html >> >> One additional thing to note is that I see the Huawei driver requires >> access to an XML file that contains additional configuration settings. >> You'll need to get this file onto the overcloud node(s) where the >> cinder-volume service runs. And if your overcloud services run in >> containers (as modern TripleO releases do), then you'll need to provide a >> way for the containerized cinder-volume service to have access to the XML >> file. Fortunately this can be achieved using a TripleO parameter by >> including something like this in one of the overcloud deployment's env file: >> >> parameter_defaults: >> CinderVolumeOptVolumes: >> - >> /etc/cinder/cinder_huawei_conf.xml:/etc/cinder/cinder_huawei_conf.xml:ro >> >> That will allow the /etc/cinder/cinder_huawei_conf.xml file installed on >> the overcloud host to be visible to the cinder-volume service running >> inside a container. >> >> Alan >> >> >>> Please advise further. Thank you >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From yingjisun at vmware.com Fri Oct 16 08:31:13 2020 From: yingjisun at vmware.com (Yingji Sun) Date: Fri, 16 Oct 2020 08:31:13 +0000 Subject: How to change nova db in Train ? Message-ID: <3D3500DC-CDB7-4BE2-B00A-43430F9EEE95@vmware.com> Buddies, I have a question about updating nova db. I would like to change the length of internal_access_path in console_auth_tokens, starting from Train. I want to modify nova/db/sqlalchemy/models.py and the migration file under nova/db/sqlalchemy/migrate_repo/versions. However I did not find a placeholder file for Train. 402_add_resources.py is the last one in Train and 403_ is "Add reserved schema migrations for Ussuri". So how can I do it in Train ? Yingji. -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Fri Oct 16 09:13:55 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Fri, 16 Oct 2020 11:13:55 +0200 Subject: How to change nova db in Train ? In-Reply-To: <3D3500DC-CDB7-4BE2-B00A-43430F9EEE95@vmware.com> References: <3D3500DC-CDB7-4BE2-B00A-43430F9EEE95@vmware.com> Message-ID: <7ZEAIQ.5ELQJNS0VB6A3@est.tech> On Fri, Oct 16, 2020 at 08:31, Yingji Sun wrote: > Buddies, > > > > I have a question about updating nova db. > > I would like to change the length of internal_access_path in > console_auth_tokens, starting from Train. I want to modify > nova/db/sqlalchemy/models.py and the migration file under > nova/db/sqlalchemy/migrate_repo/versions. However I did not find a > placeholder file for Train. 402_add_resources.py is the last one in > Train and 403_ is "Add reserved schema migrations for Ussuri". > > > > So how can I do it in Train ? > I think technically you can use 403 for your Train backport as there was no newer migration ever merged. But I hope others more experience with DB migration backporting in Nova can confirm this. Cheers, gibi > > > Yingji. > From balazs.gibizer at est.tech Fri Oct 16 09:20:47 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Fri, 16 Oct 2020 11:20:47 +0200 Subject: [nova] Wallaby Message-ID: Hi, Thanks you all for your contribution to the Victoria release. Now nova master is fully open for Wallaby. The runway etherpad[1] is also available if you already have features or improvements ready for review. 
If you need discussion around your features then please add a topic to the PTG etherpad [2]. I will do some reorganization of the topics on [2] during next week to move the related items closer and assign some tentative timeslots for topics. Cheers, gibi [1] https://etherpad.opendev.org/p/nova-runways-wallaby [2] https://etherpad.opendev.org/p/nova-wallaby-ptg From radoslaw.piliszek at gmail.com Fri Oct 16 12:02:41 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 16 Oct 2020 14:02:41 +0200 Subject: [Kolla Ansible] RabbitMQ Interface Configuration In-Reply-To: References: Message-ID: Hi Sven, I replied to you on IRC. We would accept this functionality back. It looks like it was just an omission. As a workaround, you can firewall it away yourself. -yoctozepto On Thu, Oct 15, 2020 at 6:53 PM Sven Kieske wrote: > > Hi, > > I got a question regarding a change which was made, quite some time ago > in kolla-ansible. The change in question is: https://review.opendev.org/#/c/584427/ > > specifically the following diff, the file was moved to a new format and name, but the possibility > to configure the used interface for rabbitmq/erlang was removed. > > May I ask if this was maybe by accident, or what the reason for the removal of these parameters was? > > I'm asking because I'm currently deploying Openstack and am in the > process of hardening the configuration. > > It stood out to me, that the beam vm from rabbitmq listens on all > interfaces[1], so I wanted to change that. > > If there is another way to change this via kolla-ansible, it would > be very kind to let me know. > > Notice, I do not try to configure "ERL_EPMD_ADDRESS" (which we already do), but to control > the TCP Port 25672, which, as far as I understood the rabbitmq docs, is controlled > via the erlang/beam vm "inet_dist_use_interface" parameter, which was removed in this changeset. > > But I might be totally wrong, I find the RabbitMQ docs a little hard to parse at times. > > This is currently a deployment with 3 rabbitmq nodes, if that matters. > > Thank you very much for your time in advance! > > See here the relevant diff, for convenience: > > commit b163cb02d1486f8844ac52e619de7b62321e42b0 > Author: Paul Bourke > Date: Fri Jul 20 16:35:25 2018 +0100 > > Update rabbitmq to use new conf & clustering > > Depends-On: I75e00312b36e1678b90a42cf58d24652323eff27 > Change-Id: Ia716fabffca41eff816e59bbf9f4cab79ee8b72f > > diff --git a/ansible/roles/rabbitmq/templates/rabbitmq.config.j2 b/ansible/roles/rabbitmq/templates/rabbitmq.config. > j2 > deleted file mode 100644 > index 960f9fb8a..000000000 > --- a/ansible/roles/rabbitmq/templates/rabbitmq.config.j2 > +++ /dev/null > @@ -1,24 +0,0 @@ > -[ > - {kernel, [ > - {inet_dist_use_interface, {% raw %}{{% endraw %}{{ api_interface_address | regex_replace('\.', ',') }}}}, > - {inet_dist_listen_min, {{ role_rabbitmq_cluster_port }}}, > - {inet_dist_listen_max, {{ role_rabbitmq_cluster_port }}} > > > [1]: > ss -tulpn | awk '$5 ~ /0.0.0.0:|\[::\]:/ && /beam/' > tcp LISTEN 0 128 0.0.0.0:25672 0.0.0.0:* users:(("beam.smp",pid=194345,fd=63)) > > -- > Mit freundlichen Grüßen / Regards > > Sven Kieske > Systementwickler > > > Mittwald CM Service GmbH & Co. 
KG > Königsberger Straße 4-6 > 32339 Espelkamp > > Tel.: 05772 / 293-900 > Fax: 05772 / 293-333 > > https://www.mittwald.de > > Geschäftsführer: Robert Meyer, Florian Jürgens > > St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen > Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen > > Informationen zur Datenverarbeitung im Rahmen unserer Geschäftstätigkeit > gemäß Art. 13-14 DSGVO sind unter www.mittwald.de/ds abrufbar. > From radoslaw.piliszek at gmail.com Fri Oct 16 12:03:35 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 16 Oct 2020 14:03:35 +0200 Subject: [Kolla] container images for Victoria? In-Reply-To: References: <20201015163342.ufgxiez4brwsepiu@yuggoth.org> <20201015165711.af5bqfecdidnwgxz@yuggoth.org> Message-ID: As a final note, I'd like to say that Kolla Victoria is not ready yet. Please hold on until we release before rushing in production. -yoctozepto On Fri, Oct 16, 2020 at 6:49 AM Michał Nasiadka wrote: > > Hi Tony, > > RDO now (Victoria) depends on OVS/OVN packages from CentOS NFV SIG: > http://mirror.centos.org/centos/8/nfv/x86_64/openvswitch-2/ > > > Michal > > On Thu, 15 Oct 2020 at 20:52, Tony Liu wrote: >> >> > -----Original Message----- >> > From: Jeremy Stanley >> > Sent: Thursday, October 15, 2020 9:57 AM >> > To: openstack-discuss at lists.openstack.org >> > Subject: Re: [Kolla] container images for Victoria? >> > >> > On 2020-10-15 16:48:11 +0000 (+0000), Tony Liu wrote: >> > [...] >> > > > > [1] >> > > > > http://mirror.iad.rax.opendev.org/centos/8/cloud/x86_64/openstack- >> > > > > ussu >> > > > > ri/ [2] >> > > > > http://mirror.iad.rax.opendev.org/centos/8/cloud/x86_64/openstack- >> > > > > vict >> > > > > oria/ >> > > > >> > > > Hopefully production Kolla deployment doesn't obtain packages from >> > > > those URLs. They're package caches for test systems, and are not >> > > > guaranteed to exist long term (we could rename them at any moment or >> > > > take them indefinitely offline for extended maintenance), nor are >> > > > they necessarily guaranteed to be reachable outside the test regions >> > > > of our CI system to which they're dedicated. >> > > >> > > Where is production Kolla deployment supposed to obtain packages? >> > >> > If you're building your own images, then they should ideally get pulled >> > from somewhere like http://mirror.centos.org/centos/8/ or one of the >> > specific official CentOS mirrors close to where you're doing your image >> > builds (definitely not from one of opendev.org's random unofficial CI >> > system mirrors). >> >> I see ovn and openvswitch packages in [1], but not in [2]. >> Where are those packages built, RDO? >> Any way to know when ovn and openvswitch packages will be available? >> >> [1] http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/o/ >> [2] http://mirror.centos.org/centos/8/cloud/x86_64/openstack-victoria/Packages/o/ >> >> Thanks! 
>> Tony >> > -- >> > Jeremy Stanley >> > -- > Michał Nasiadka > mnasiadka at gmail.com From mrunge at matthias-runge.de Fri Oct 16 13:12:48 2020 From: mrunge at matthias-runge.de (Matthias Runge) Date: Fri, 16 Oct 2020 15:12:48 +0200 Subject: [ceilometer] How to install old version of gnocchi In-Reply-To: References: Message-ID: On 16/10/2020 02:33, fisheater chang wrote: > I use Ubuntu 18.04 to install > https://docs.openstack.org/ceilometer/stein/install/install-base-ubuntu.html > > (ceilometer) > When I run |service gnocchi-api restart|, I get |Failed to restart > gnocchi-api.service: Unit gnocchi-api.service not found.| > I can not speak for Ubuntu distributions here. The error message just states, there is no gnocchi-api.service. There are two possible solutions: either it's gone or it has been renamed. In CentOS/RHEL based deployments, the gnocchi-api service was moved to httpd. Maybe that's the case for Ubuntu as well? Matthias From thomas at goirand.fr Fri Oct 16 08:21:34 2020 From: thomas at goirand.fr (Thomas Goirand) Date: Fri, 16 Oct 2020 10:21:34 +0200 Subject: Python 3.9 is here in Debian Sid In-Reply-To: References: <12e2eefa-21c5-4b2e-67a9-662e04bd38c7@debian.org> Message-ID: <4fd89e88-d8ce-7faa-772f-0908a3387d5c@goirand.fr> On 10/15/20 4:21 PM, Sean McGinnis wrote: > >> >> 3.9 is already available in Ubuntu 20.04 (albeit at an RC but it will >> be updated) so that should not block enabling test gates. >> >> Cheers >> >> James > > It might be good to add a non-voting "openstack-tox-py39" job to the > wallaby template: > > https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/zuul.d/project-templates.yaml#L488 > > All official projects should be running with that template now, so it > would be an easy way to get jobs going and start to see what issues are > uncovered. > > Sean I also think it'd be nice to have non-voting jobs detecting deprecated stuff. For example, a quick grep shows that a lot of projects are still using collections.Mapping instead of collections.abc.Mapping (which is to be removed in Python 3.10, according to the 3.9 release notes). Would there be a way to get our CI report these issues earlier? Cheers, Thomas Goirand (zigo) From fungi at yuggoth.org Fri Oct 16 13:59:10 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 16 Oct 2020 13:59:10 +0000 Subject: Python 3.9 is here in Debian Sid In-Reply-To: <4fd89e88-d8ce-7faa-772f-0908a3387d5c@goirand.fr> References: <12e2eefa-21c5-4b2e-67a9-662e04bd38c7@debian.org> <4fd89e88-d8ce-7faa-772f-0908a3387d5c@goirand.fr> Message-ID: <20201016135910.paltyva5ut2x5qcc@yuggoth.org> On 2020-10-16 10:21:34 +0200 (+0200), Thomas Goirand wrote: [...] > I also think it'd be nice to have non-voting jobs detecting > deprecated stuff. For example, a quick grep shows that a lot of > projects are still using collections.Mapping instead of > collections.abc.Mapping (which is to be removed in Python 3.10, > according to the 3.9 release notes). Would there be a way to get > our CI report these issues earlier? They're going to all explode, at least until PBR gets some more changes merged and released to stop doing deprecated things. I've been slowly working my way through testing simple PBR-using projects with PYTHONWARNINGS=error (instead of =default::DeprecationWarning) and fixing or noting the issues I encounter. Up until recently, a number of its dependencies were also throwing deprecation warnings under 3.9, but now I think we're down to just a couple of remaining fixes pending. 
We didn't want to try to rush in a new PBR release until Victoria was wrapped up, but now I think we can finish this fairly soon. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From massimo.sgaravatto at gmail.com Fri Oct 16 14:06:43 2020 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Fri, 16 Oct 2020 16:06:43 +0200 Subject: [ops] [cinder] __DEFAULT__ volume type Message-ID: I have recently updated my Cloud from Rocky to Train (I am running Cinder v. 15.4.0) I have a question concerning the __DEFAULT__ volume type, that I don't remember to have seen before. Since: - I have no volumes using this volume type - I defined in in the [DEFAULT] section of cinder.conf the attribute "default_volume_type" to a value different than "__DEFAULT___" I assume that I can safely delete the __DEFAULT__ volume type Is this correct? Thanks, Massimo -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.Kieske at mittwald.de Fri Oct 16 14:11:35 2020 From: S.Kieske at mittwald.de (Sven Kieske) Date: Fri, 16 Oct 2020 14:11:35 +0000 Subject: [Kolla Ansible] RabbitMQ Interface Configuration In-Reply-To: References: Message-ID: <039262c68773e0a3df081a87a30961fc08725d86.camel@mittwald.de> On Fr, 2020-10-16 at 14:02 +0200, Radosław Piliszek wrote: > Hi Sven, > > I replied to you on IRC. > > We would accept this functionality back. > It looks like it was just an omission. > > As a workaround, you can firewall it away yourself. Hi, thanks for your help so far! I'm in the process of submitting the patch. -- Mit freundlichen Grüßen / Regards Sven Kieske Systementwickler Mittwald CM Service GmbH & Co. KG Königsberger Straße 4-6 32339 Espelkamp Tel.: 05772 / 293-900 Fax: 05772 / 293-333 https://www.mittwald.de Geschäftsführer: Robert Meyer, Florian Jürgens St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen Informationen zur Datenverarbeitung im Rahmen unserer Geschäftstätigkeit gemäß Art. 13-14 DSGVO sind unter www.mittwald.de/ds abrufbar. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From marios at redhat.com Fri Oct 16 14:28:18 2020 From: marios at redhat.com (Marios Andreou) Date: Fri, 16 Oct 2020 17:28:18 +0300 Subject: [tripleo] proposed schedule TripleO meetup @ Wallaby PTG Message-ID: Hello tripleOs o/ I would like to share a proposed schedule for the TripleO meetup at the coming PTG. The proposal is at [1]. First I'd like to give a huge thank you to everybody for taking the time to volunteer and lead those sessions. We are lucky to have folks bringing a wide array of topics including deployment, edge, upgrades, high-availability, ci, storage and networking. The schedule has 5 fourty minute sessions per day (we have 4 hours/day 1300-1700 UTC) with 10 mins breaks between them. If you would like to propose an alternative structure or have suggestions/improvements etc then let's discuss it here. If you want to make changes to your session time please reach out directly to the person running the time slot you wish to change with. If there are no objections then please go ahead and swap the sessions in the schedule (please let me know too). 
If you can't reach that person or there are any other problems please message me and we will work it out together. If you have added (or plan to add) a new discussion topic in [2] and you don't see it in [1] then please let me know so I can add it. If you are leading a session then please remember to prepare the session etherpad over the next few days and link it at [1]. Of course, make sure you are registered for the PTG :) at [3] thank you for reading and for sharing any thoughts/suggestions/improvement to the proposed schedule [1], regards (+ happy friday !) marios [1] https://etherpad.opendev.org/p/tripleo-ptg-wallaby [2] https://etherpad.opendev.org/p/tripleo-wallaby-topics [3] https://october2020ptg.eventbrite.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.Kieske at mittwald.de Fri Oct 16 14:31:10 2020 From: S.Kieske at mittwald.de (Sven Kieske) Date: Fri, 16 Oct 2020 14:31:10 +0000 Subject: [Kolla Ansible] RabbitMQ Interface Configuration In-Reply-To: <039262c68773e0a3df081a87a30961fc08725d86.camel@mittwald.de> References: <039262c68773e0a3df081a87a30961fc08725d86.camel@mittwald.de> Message-ID: Patch posted: https://review.opendev.org/#/c/758576/ -- Mit freundlichen Grüßen / Regards Sven Kieske Systementwickler Mittwald CM Service GmbH & Co. KG Königsberger Straße 4-6 32339 Espelkamp Tel.: 05772 / 293-900 Fax: 05772 / 293-333 https://www.mittwald.de Geschäftsführer: Robert Meyer, Florian Jürgens St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen Informationen zur Datenverarbeitung im Rahmen unserer Geschäftstätigkeit gemäß Art. 13-14 DSGVO sind unter www.mittwald.de/ds abrufbar. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From akekane at redhat.com Fri Oct 16 15:39:59 2020 From: akekane at redhat.com (Abhishek Kekane) Date: Fri, 16 Oct 2020 21:09:59 +0530 Subject: [glance] No IRC meetings next 2 weeks Message-ID: Hello All, As discussed in yesterday's weekly meeting, we will not be conducting our next 2 weekly IRC meetings on 22 October and 29 October because of the Summit and the PTG. See you all during PTG discussions, below are the details about glance schedule during PTG. Topics/Plans: https://etherpad.opendev.org/p/Glance-Wallaby-PTG-planning Schedule: http://ptg.openstack.org/ptg.html Dates and Times: * Oct 27th 2020: Tuesday 1400-1700 UTC * Oct 28th 2020: Wednesday 1400-1700 UTC * Oct 29th 2020: Thursday 1400-1700 UTC * Oct 30th 2020: Friday 1400-1700 UTC Thanks and Best Regards, Abhishek -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Fri Oct 16 16:55:46 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Fri, 16 Oct 2020 18:55:46 +0200 Subject: [OSSA-2020-007] Blazar: Remote code execution in blazar-dashboard (CVE-2020-26943) Message-ID: <20201016165546.GA84510@raider.home> ======================================================== OSSA-2020-007: Remote code execution in blazar-dashboard ======================================================== :Date: October 12, 2020 :CVE: CVE-2020-26943 Affects ~~~~~~~ - Blazar-dashboard: <1.3.1, ==2.0.0, ==3.0.0 Description ~~~~~~~~~~~ Lukas Euler (Positive Security) reported a vulnerability in blazar-dashboard. 
A user allowed to access the Blazar dashboard in Horizon may trigger code execution on the Horizon host as the user the Horizon service runs under. This may result in Horizon host unauthorized access and further compromise of the Horizon service. All setups using the Horizon dashboard with the blazar-dashboard plugin are affected. Patches ~~~~~~~ - https://review.opendev.org/755814 (Stein) - https://review.opendev.org/755813 (Train) - https://review.opendev.org/755812 (Ussuri) - https://review.opendev.org/756064 (Victoria) - https://review.opendev.org/755810 (Wallaby) Credits ~~~~~~~ - Lukas Euler from Positive Security (CVE-2020-26943) References ~~~~~~~~~~ - https://launchpad.net/bugs/1895688 - http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26943 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From lars at redhat.com Fri Oct 16 18:29:58 2020 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Fri, 16 Oct 2020 14:29:58 -0400 Subject: [Ironic] Teaching virtualbmc how to talk to Ironic? Message-ID: <20201016182958.dkp6nm56bwoz7i3n@redhat.com> In the work that we're doing with the Mass Open Cloud [1], we're looking at using Ironic (and the multi-tenant support we contributed) to manage access to a shared pool of hardware while still permitting people to use their own provisioning tools. We don't want to expose the hardware BMC directly to consumers; we want Ironic to act as the access control mechanism for all activities involving the hardware. The missing part of this scenario is that at the moment this would require provisioning tools to know how to talk to the Ironic API if they want to perform BMC actions on the host, such as controlling power. While talking with Mainn the other day, it occurred to me that maybe we could teach virtualbmc [2] how to talk to Ironic, so that we could provide a virtual IPMI interface to provisioning tools. There are some obvious questions here around credentials (I think we'd probably generate them randomly when assigning control of a piece of hardware to someone, but that's more of an implementation detail). I wanted to sanity check this idea: does this seem reasonable? Are there alternatives you would suggest? Thanks! [1] https://github.com/CCI-MOC/esi [2] https://github.com/openstack/virtualbmc -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | N1LKS From kendall at openstack.org Fri Oct 16 19:55:16 2020 From: kendall at openstack.org (Kendall Waters) Date: Fri, 16 Oct 2020 14:55:16 -0500 Subject: Your Virtual PTG Checklist Message-ID: <11577795-4D73-4BDF-BACE-00694C5D78C7@openstack.org> REGISTRATION If you haven't done so, please register for the PTG! This is how we will be able to provide you with the tooling information and passwords. Register here: https://october2020ptg.eventbrite.com FINAL SCHEDULE The final schedule [1] for the event is set here and in the PTGBot [2]. IRC The main form of synchronous communication between attendees during the PTG is on IRC. If you are not on IRC, learn how to get started here [3]. The main PTG IRC channel is #openstack-ptg on Freenode. It's used to interact with the PTGbot, and Foundation staff will be present to help answer questions. PTGBOT The PTGbot [3] is an open source tool that PTG track moderators use to surface what's currently happening at the event. 
Track moderators will send messages to the bot via IRC, and from that information, the bot publishes a webpage with several sections of information: - The discussion topics currently discussed in the room ("now") - An indicative set of discussion topics coming up next ("next") - The schedule for the day with available extra slots you can book Learn more about the ptgbot via the documentation here [4]. HELP DESK We are here to help! If you have any questions during the event week, we encourage you to join the #openstack-ptg IRC channel and ask them there. You can ping Kendall Waters (wendallkaters) and Kendall Nelson(diablo_rojo) directly on IRC. We will also have a dedicated Zoom room, but ONLY FOR MONDAY, October 26 where an OSF staff member will be available to answer your event related questions. You can also always reach someone at ptg at openstack.org if you are unable to connect to IRC. FEEDBACK We have preemptively created an etherpad[5] to collect all of your feedback throughout the event. Please add your thoughts as the week goes on. GAME TIME! Since we all know that coming together as a community to hang out and play games is part of what makes the PTG great, we are going to try something new this time around. This PTG, we are going to have two 'game nights'- obviously time zones are hard and it might not be night for everyone, but the times are as follows. - Thursday, October 29 at 19:00 UTC - Friday, October 30 at 10:00 UTC - Friday, October 30 at 23:00 UTC If you are interested in participating, write your name in the etherpad [6]. We will use it to coordinate games for each of the timeslots. [1] PTG schedule: https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/6e4619c416ff4bd19e1c087f27a43eea/www-assets-prod/Uploads/PTG2-Oct26-30-2020-Schedule-1.pdf [2] PTGbot: http://ptg.openstack.org/ptg.html [3] How to get started on IRC: https://docs.openstack.org/contributors/common/irc.html [4] PTGbot documentation: https://github.com/openstack/ptgbot/blob/master/README.rst [5] Feedback Etherpad: https://etherpad.opendev.org/p/October2020-PTG-Feedback [6] Game Time Etherpad: https://etherpad.opendev.org/p/October2020-PTG-Games -------------- next part -------------- An HTML attachment was scrubbed... URL: From emiller at genesishosting.com Fri Oct 16 22:49:40 2020 From: emiller at genesishosting.com (Eric K. Miller) Date: Fri, 16 Oct 2020 17:49:40 -0500 Subject: [nova] NUMA scheduling Message-ID: <046E9C0290DD9149B106B72FC9156BEA048149B4@gmsxchsvr01.thecreation.com> Hi, I'm at a loss for finding good information about how a VM's vCPUs and Memory are assigned to NUMA nodes within a scheduled physical host. I think Libvirt does this, and the Nova Scheduler simply finds the right physical host to run the VM, and thus Nova has no input on which NUMA node to choose. So this might be a Libvirt question. We are running Stein and have the issue where VMs launch on NUMA Node 0, and not on NUMA Node 1, in physical hosts with two processors, and are simply looking for a way to tell Libvirt to consider NUMA Node 1 when scheduling a VM, since there is nearly all of the memory available on NUMA Node 1. Our flavors are defined with hw:numa_nodes='1' since we want all vCPUs+Memory to land on a single NUMA Node, and so the guest OS has visibility that a single NUMA Node is being used. We are "not" looking for a way to pin a VM to a specific NUMA node (such as for SR-IOV purposes). Any suggestions where to look for the solution? Thanks! 
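A quick way to confirm where libvirt actually placed each guest -- assuming shell access to the compute host, the numactl package for numastat, and a libvirt domain name (instance-0000fda8 below is only an example) -- is roughly:

    lscpu | grep -i numa                  # which host CPUs belong to NUMA node 0 / node 1
    virsh list --name                     # find the libvirt domain name, e.g. instance-0000fda8
    virsh numatune instance-0000fda8      # memory mode and nodeset libvirt applied to that guest
    virsh vcpuinfo instance-0000fda8      # physical CPU each vCPU is currently running on
    numastat -c qemu-kvm                  # per-NUMA-node memory use of every qemu-kvm process

These commands only report where guests landed; they do not change placement.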
Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Sat Oct 17 03:19:59 2020 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Fri, 16 Oct 2020 23:19:59 -0400 Subject: [nova] NUMA scheduling In-Reply-To: <046E9C0290DD9149B106B72FC9156BEA048149B4@gmsxchsvr01.thecreation.com> References: <046E9C0290DD9149B106B72FC9156BEA048149B4@gmsxchsvr01.thecreation.com> Message-ID: As far as I know, numa_nodes=1 just means --> the resources for that VM should run on one NUMA node (so either NUMA0 or NUMA1). If there is space free on both, then it's probably going to pick one of the two? On Fri, Oct 16, 2020 at 6:56 PM Eric K. Miller wrote: > Hi, > > > > I'm at a loss for finding good information about how a VM's vCPUs and > Memory are assigned to NUMA nodes within a scheduled physical host. I > think Libvirt does this, and the Nova Scheduler simply finds the right > physical host to run the VM, and thus Nova has no input on which NUMA node > to choose. So this might be a Libvirt question. > > > > We are running Stein and have the issue where VMs launch on NUMA Node 0, > and not on NUMA Node 1, in physical hosts with two processors, and are > simply looking for a way to tell Libvirt to consider NUMA Node 1 when > scheduling a VM, since there is nearly all of the memory available on NUMA > Node 1. > > > > Our flavors are defined with hw:numa_nodes='1' since we want all > vCPUs+Memory to land on a single NUMA Node, and so the guest OS has > visibility that a single NUMA Node is being used. > > > > We are "not" looking for a way to pin a VM to a specific NUMA node (such > as for SR-IOV purposes). > > > > Any suggestions where to look for the solution? > > > > Thanks! > > > Eric > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emiller at genesishosting.com Sat Oct 17 03:47:19 2020 From: emiller at genesishosting.com (Eric K. Miller) Date: Fri, 16 Oct 2020 22:47:19 -0500 Subject: [nova] NUMA scheduling In-Reply-To: References: <046E9C0290DD9149B106B72FC9156BEA048149B4@gmsxchsvr01.thecreation.com> Message-ID: <046E9C0290DD9149B106B72FC9156BEA048149BA@gmsxchsvr01.thecreation.com> > As far as I know, numa_nodes=1 just means --> the resources for that VM should run on one NUMA node (so either NUMA0 or NUMA1). If there is space free on both, then it's probably going to pick one of the two? I thought the same, but it appears that VMs are never scheduled on NUMA1 even though NUMA0 is full (causing OOM to trigger and kill running VMs). I would have hoped that a NUMA node was treated like a host, and thus "VMs being balanced across nodes". The discussion on NUMA handling is long, so I was hoping that there might be information about the latest solution to the problem - or to be told that there isn't a good solution other than using huge pages. Eric From eandersson at blizzard.com Sat Oct 17 04:04:57 2020 From: eandersson at blizzard.com (Erik Olof Gunnar Andersson) Date: Sat, 17 Oct 2020 04:04:57 +0000 Subject: [nova] NUMA scheduling In-Reply-To: <046E9C0290DD9149B106B72FC9156BEA048149BA@gmsxchsvr01.thecreation.com> References: <046E9C0290DD9149B106B72FC9156BEA048149B4@gmsxchsvr01.thecreation.com> , <046E9C0290DD9149B106B72FC9156BEA048149BA@gmsxchsvr01.thecreation.com> Message-ID: We have been running with NUMA configured for a long time and don't believe I have seen this behavior. It's important that you configure the flavors / aggregates correct. 
I think this might be what you are looking for penstack flavor set m1.large --property hw:cpu_policy=dedicated https://docs.openstack.org/nova/pike/admin/cpu-topologies.html Pretty sure we also set this for any flavor that only requires a single NUMA zone openstack flavor set m1.large --property hw:numa_nodes=1 ________________________________ From: Eric K. Miller Sent: Friday, October 16, 2020 8:47 PM To: Laurent Dumont Cc: openstack-discuss Subject: RE: [nova] NUMA scheduling > As far as I know, numa_nodes=1 just means --> the resources for that VM should run on one NUMA node (so either NUMA0 or NUMA1). If there is space free on both, then it's probably going to pick one of the two? I thought the same, but it appears that VMs are never scheduled on NUMA1 even though NUMA0 is full (causing OOM to trigger and kill running VMs). I would have hoped that a NUMA node was treated like a host, and thus "VMs being balanced across nodes". The discussion on NUMA handling is long, so I was hoping that there might be information about the latest solution to the problem - or to be told that there isn't a good solution other than using huge pages. Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From emiller at genesishosting.com Sat Oct 17 05:13:14 2020 From: emiller at genesishosting.com (Eric K. Miller) Date: Sat, 17 Oct 2020 00:13:14 -0500 Subject: [nova] NUMA scheduling In-Reply-To: References: <046E9C0290DD9149B106B72FC9156BEA048149B4@gmsxchsvr01.thecreation.com> , <046E9C0290DD9149B106B72FC9156BEA048149BA@gmsxchsvr01.thecreation.com> Message-ID: <046E9C0290DD9149B106B72FC9156BEA048149BB@gmsxchsvr01.thecreation.com> >We have been running with NUMA configured for a long time and don't believe I have seen this behavior. It's important that you configure the flavors / aggregates correct. We are not looking for pinned CPUs - rather we want shared CPUs within a single NUMA node. Our flavor properties, for one particular flavor, are: hw:cpu_cores='4', hw:cpu_policy='shared', hw:cpu_sockets='1', hw:numa_nodes='1' We already have separate aggregates for dedicated and shared cpu_policy flavors. > Pretty sure we also set this for any flavor that only requires a single NUMA zone > openstack flavor set m1.large --property hw:numa_nodes=1 I thought so too, but it doesn't look like the above properties are allowing VMs to be provisioned on the second NUMA node. From amy at demarco.com Sat Oct 17 13:26:49 2020 From: amy at demarco.com (Amy Marrich) Date: Sat, 17 Oct 2020 08:26:49 -0500 Subject: Divisive Language stance final(?) draft and Forum session Message-ID: Hi everyone, Below is what we hope is the final draft of the OSF's stance on Divisive Language. If you have any comments please place them on the etherpad[0] and/or join us this Tuesday during the forum[1]. Please note that decisions related to what words should be replaced with are being left with the project's technical leadership with the knowledge that context is important to determine the best alternative. That said, the Diversity and Inclusion WG will be available to assist these groups. 
Thanks, Amy Marrich (spotz) 0 - https://etherpad.opendev.org/p/divisivelanguage 1- https://www.openstack.org/summit/2020/summit-schedule/events/24778/divisive-language-and-what-you-should-know 1a - https://zoom.us/j/94295083896?pwd=VmZCTFN3eERDK1ltRHRyWDl0eG1hZz09 1b - https://etherpad.opendev.org/p/vSummit2020__DivisiveLanguage ------------------ The OpenStack Foundation (OSF) Board of Directors supports removal of wording identified as oppressive, racist and sexist by members of our communities from the software and documentation they produce. While we know there will be challenges, both technical and non-technical, this is an action we feel is important. These efforts are the responsibility of the various technical leadership groups within our communities, and we trust them to make appropriate decisions about applicability, timelines, minimizing impact to users and operators, and choosing the changes that make the most sense for their projects. Contributors will take care to make changes to software in the least disruptive way possible. While standardized wording is a laudable goal, we recognize that different implementation contexts might require different solutions. In many cases the work is also complicated by external dependencies for these projects, and their capacity to make necessary changes before we can implement ours. Terminology to which special attention should be paid includes: 1. The use of "slave," or "master" in reference to slavery-oriented relationships, as is currently found in databases, the domain name system, etc. 1. The terms "blacklist" and "whitelist" in various contexts (which might require a variety of different replacements to make sense for those contexts) 1. The use of "master" in non-slavery-related contexts such as revision control branches and documentation builds (pending feedback from our community members) We shall continue to be vigilant for other language areas that cause challenges and work with the community to evolve this policy. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Sat Oct 17 14:00:50 2020 From: zigo at debian.org (Thomas Goirand) Date: Sat, 17 Oct 2020 16:00:50 +0200 Subject: Python 3.9 is here in Debian Sid In-Reply-To: <20201016135910.paltyva5ut2x5qcc@yuggoth.org> References: <12e2eefa-21c5-4b2e-67a9-662e04bd38c7@debian.org> <4fd89e88-d8ce-7faa-772f-0908a3387d5c@goirand.fr> <20201016135910.paltyva5ut2x5qcc@yuggoth.org> Message-ID: <79557a72-d6bd-db15-e295-ec7d207d6f56@debian.org> On 10/16/20 3:59 PM, Jeremy Stanley wrote: > On 2020-10-16 10:21:34 +0200 (+0200), Thomas Goirand wrote: > [...] >> I also think it'd be nice to have non-voting jobs detecting >> deprecated stuff. For example, a quick grep shows that a lot of >> projects are still using collections.Mapping instead of >> collections.abc.Mapping (which is to be removed in Python 3.10, >> according to the 3.9 release notes). Would there be a way to get >> our CI report these issues earlier? > > They're going to all explode, at least until PBR gets some more > changes merged and released to stop doing deprecated things. I've > been slowly working my way through testing simple PBR-using projects > with PYTHONWARNINGS=error (instead of =default::DeprecationWarning) > and fixing or noting the issues I encounter. Up until recently, a > number of its dependencies were also throwing deprecation warnings > under 3.9, but now I think we're down to just a couple of remaining > fixes pending. 
We didn't want to try to rush in a new PBR release > until Victoria was wrapped up, but now I think we can finish this > fairly soon. > Great, thanks for this work. Thomas From satish.txt at gmail.com Sat Oct 17 15:05:56 2020 From: satish.txt at gmail.com (Satish Patel) Date: Sat, 17 Oct 2020 11:05:56 -0400 Subject: [nova] NUMA scheduling In-Reply-To: <046E9C0290DD9149B106B72FC9156BEA048149BB@gmsxchsvr01.thecreation.com> References: <046E9C0290DD9149B106B72FC9156BEA048149B4@gmsxchsvr01.thecreation.com> <046E9C0290DD9149B106B72FC9156BEA048149BA@gmsxchsvr01.thecreation.com> <046E9C0290DD9149B106B72FC9156BEA048149BB@gmsxchsvr01.thecreation.com> Message-ID: This is very odd, I am running NUMA aware openstack cloud and my VMs are getting scheduled on both sides of NUMA zone. Following is my flavor settings. Also I am using huge pages for performance. (make sure you have NUMATopologyFilter filter configured). hw:cpu_policy='dedicated', hw:cpu_sockets='2', hw:cpu_threads='2', hw:mem_page_size='large' what if you remove hw:numa_nodes=1 ? ~S On Sat, Oct 17, 2020 at 1:21 AM Eric K. Miller wrote: > > >We have been running with NUMA configured for a long time and don't believe I have seen this behavior. It's important that you configure the flavors / aggregates correct. > > We are not looking for pinned CPUs - rather we want shared CPUs within a single NUMA node. > > Our flavor properties, for one particular flavor, are: > hw:cpu_cores='4', hw:cpu_policy='shared', hw:cpu_sockets='1', hw:numa_nodes='1' > > We already have separate aggregates for dedicated and shared cpu_policy flavors. > > > Pretty sure we also set this for any flavor that only requires a single NUMA zone > > openstack flavor set m1.large --property hw:numa_nodes=1 > > I thought so too, but it doesn't look like the above properties are allowing VMs to be provisioned on the second NUMA node. > > > From emiller at genesishosting.com Sat Oct 17 15:31:48 2020 From: emiller at genesishosting.com (Eric K. Miller) Date: Sat, 17 Oct 2020 10:31:48 -0500 Subject: [nova] NUMA scheduling In-Reply-To: References: <046E9C0290DD9149B106B72FC9156BEA048149B4@gmsxchsvr01.thecreation.com> <046E9C0290DD9149B106B72FC9156BEA048149BA@gmsxchsvr01.thecreation.com> <046E9C0290DD9149B106B72FC9156BEA048149BB@gmsxchsvr01.thecreation.com> Message-ID: <046E9C0290DD9149B106B72FC9156BEA048149BD@gmsxchsvr01.thecreation.com> Hi Satish, > This is very odd, I am running NUMA aware openstack cloud and my VMs > are getting scheduled on both sides of NUMA zone. Following is my > flavor settings. Also I am using huge pages for performance. (make > sure you have NUMATopologyFilter filter configured). > > hw:cpu_policy='dedicated', hw:cpu_sockets='2', hw:cpu_threads='2', > hw:mem_page_size='large' > > what if you remove hw:numa_nodes=1 ? Note that we are using a shared CPU policy (for various hosts). I don't know if this is causing our issue or not, but we definitely do not want to pin CPUs to VMs on these hosts. Without the hw:numa_nodes property, an individual VM is created with its vCPUs and Memory divided between the two NUMA nodes, which is not what we would prefer. We would prefer, instead, to have all vCPUs and Memory for the VM placed into a single NUMA node so all cores of the VM have access to this NUMA node's memory instead of having one core require cross-NUMA communications. With large core processors and large amounts of memory, it doesn't make much sense to have small VMs (such as 4 core VMs) span two NUMA nodes. 
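A rough sketch of the kind of flavor being described here -- shared (unpinned) CPUs with all vCPUs and memory confined to a single guest NUMA node -- would look something like the following; the flavor name and sizes are placeholders, not values from this thread:

    openstack flavor create --vcpus 4 --ram 16384 --disk 40 m1.numa-shared
    openstack flavor set m1.numa-shared \
      --property hw:cpu_policy=shared \
      --property hw:cpu_sockets=1 \
      --property hw:cpu_cores=4 \
      --property hw:numa_nodes=1

One caveat worth keeping in mind: the nova documentation expects the NUMATopologyFilter scheduler filter to be enabled whenever a flavor carries a guest NUMA topology such as hw:numa_nodes, independent of huge pages or CPU pinning; without it the scheduler has no view of per-NUMA-node capacity on each host.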
With our current settings, every VM is placed into a single NUMA node (as we wanted), but they always land in NUMA node 0 and never in NUMA node 1. It does, however, appear that QEMU's memory overhead and Linux' buffer/cache is landing in NUMA node 1. Native processes on the hosts are spread between NUMA nodes. We don't have huge pages enabled, so we have not enabled the NUMATopologyFilter. Eric From laurentfdumont at gmail.com Sat Oct 17 17:02:20 2020 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Sat, 17 Oct 2020 13:02:20 -0400 Subject: [nova] NUMA scheduling In-Reply-To: <046E9C0290DD9149B106B72FC9156BEA048149BD@gmsxchsvr01.thecreation.com> References: <046E9C0290DD9149B106B72FC9156BEA048149B4@gmsxchsvr01.thecreation.com> <046E9C0290DD9149B106B72FC9156BEA048149BA@gmsxchsvr01.thecreation.com> <046E9C0290DD9149B106B72FC9156BEA048149BB@gmsxchsvr01.thecreation.com> <046E9C0290DD9149B106B72FC9156BEA048149BD@gmsxchsvr01.thecreation.com> Message-ID: What is the error thrown by Openstack when NUMA0 is full? On Sat, Oct 17, 2020 at 11:40 AM Eric K. Miller wrote: > Hi Satish, > > > This is very odd, I am running NUMA aware openstack cloud and my VMs > > are getting scheduled on both sides of NUMA zone. Following is my > > flavor settings. Also I am using huge pages for performance. (make > > sure you have NUMATopologyFilter filter configured). > > > > hw:cpu_policy='dedicated', hw:cpu_sockets='2', hw:cpu_threads='2', > > hw:mem_page_size='large' > > > > what if you remove hw:numa_nodes=1 ? > > Note that we are using a shared CPU policy (for various hosts). I don't > know if this is causing our issue or not, but we definitely do not want to > pin CPUs to VMs on these hosts. > > Without the hw:numa_nodes property, an individual VM is created with its > vCPUs and Memory divided between the two NUMA nodes, which is not what we > would prefer. We would prefer, instead, to have all vCPUs and Memory for > the VM placed into a single NUMA node so all cores of the VM have access to > this NUMA node's memory instead of having one core require cross-NUMA > communications. > > With large core processors and large amounts of memory, it doesn't make > much sense to have small VMs (such as 4 core VMs) span two NUMA nodes. > > With our current settings, every VM is placed into a single NUMA node (as > we wanted), but they always land in NUMA node 0 and never in NUMA node 1. > It does, however, appear that QEMU's memory overhead and Linux' > buffer/cache is landing in NUMA node 1. Native processes on the hosts are > spread between NUMA nodes. > > We don't have huge pages enabled, so we have not enabled the > NUMATopologyFilter. > > Eric > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emiller at genesishosting.com Sat Oct 17 17:18:43 2020 From: emiller at genesishosting.com (Eric K. Miller) Date: Sat, 17 Oct 2020 12:18:43 -0500 Subject: [nova] NUMA scheduling In-Reply-To: References: <046E9C0290DD9149B106B72FC9156BEA048149B4@gmsxchsvr01.thecreation.com> <046E9C0290DD9149B106B72FC9156BEA048149BA@gmsxchsvr01.thecreation.com> <046E9C0290DD9149B106B72FC9156BEA048149BB@gmsxchsvr01.thecreation.com> <046E9C0290DD9149B106B72FC9156BEA048149BD@gmsxchsvr01.thecreation.com> Message-ID: <046E9C0290DD9149B106B72FC9156BEA048149BF@gmsxchsvr01.thecreation.com> > What is the error thrown by Openstack when NUMA0 is full? 
OOM is actually killing the QEMU process, which causes Nova to report: /var/log/kolla/nova/nova-compute.log.4:2020-08-25 12:31:19.812 6 WARNING nova.compute.manager [req-62bddc53-ca8b-4bdc-bf41-8690fc88076f - - - - -] [instance: 8d8a262a-6e60-4e8a-97f9-14462f09b9e5] Instance shutdown by itself. Calling the stop API. Current vm_state: active, current task_state: None, original DB power_state: 1, current VM power_state: 4 So, there isn't a NUMA or memory-specific error from Nova - Nova is simply scheduling a VM on a node that it thinks has enough memory, and Libvirt (or Nova?) is configuring the VM to use CPU cores on a full NUMA node. NUMA Node 1 had about 240GiB of free memory with about 100GiB of buffer/cache space used, so plenty of free memory, whereas NUMA Node 0 was pretty tight on free memory. These are some logs in /var/log/messages (not for the nova-compute.log entry above, but the same condition for a VM that was killed - logs were rolled, so I had to pick a different VM): Oct 10 15:17:01 kernel: CPU 0/KVM invoked oom-killer: gfp_mask=0x100dca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), order=0, oom_score_adj=0 Oct 10 15:17:01 kernel: CPU: 15 PID: 30468 Comm: CPU 0/KVM Not tainted 5.3.8-1.el7.elrepo.x86_64 #1 Oct 10 15:17:01 kernel: Hardware name: Oct 10 15:17:01 kernel: Call Trace: Oct 10 15:17:01 kernel: dump_stack+0x63/0x88 Oct 10 15:17:01 kernel: dump_header+0x51/0x210 Oct 10 15:17:01 kernel: oom_kill_process+0x105/0x130 Oct 10 15:17:01 kernel: out_of_memory+0x105/0x4c0 … … Oct 10 15:17:01 kernel: active_anon:108933472 inactive_anon:174036 isolated_anon:0#012 active_file:21875969 inactive_file:2418794 isolated_file:32#012 unevictable:88113 dirty:0 writeback:4 unstable:0#012 slab_reclaimable:3056118 slab_unreclaimable:432301#012 mapped:71768 shmem:570159 pagetables:258264 bounce:0#012 free:58924792 free_pcp:326 free_cma:0 Oct 10 15:17:01 kernel: Node 0 active_anon:382548916kB inactive_anon:173052kB active_file:0kB inactive_file:2272kB unevictable:289840kB isolated(anon):0kB isolated(file):128kB mapped:16696kB dirty:0kB writeback:0kB shmem:578812kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 286420992kB writeback_tmp:0kB unstable:0kB all_unreclaimable? 
no Oct 10 15:17:01 kernel: Node 0 DMA free:15880kB min:0kB low:12kB high:24kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15880kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB Oct 10 15:17:01 kernel: lowmem_reserve[]: 0 1589 385604 385604 385604 Oct 10 15:17:01 kernel: Node 0 DMA32 free:1535904kB min:180kB low:1780kB high:3380kB active_anon:90448kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:1717888kB managed:1627512kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:1008kB local_pcp:248kB free_cma:0kB Oct 10 15:17:01 kernel: lowmem_reserve[]: 0 0 384015 384015 384015 Oct 10 15:17:01 kernel: Node 0 Normal free:720756kB min:818928kB low:1212156kB high:1605384kB active_anon:382458300kB inactive_anon:173052kB active_file:0kB inactive_file:2272kB unevictable:289840kB writepending:0kB present:399507456kB managed:393231952kB mlocked:289840kB kernel_stack:58344kB pagetables:889796kB bounce:0kB free_pcp:296kB local_pcp:0kB free_cma:0kB Oct 10 15:17:01 kernel: lowmem_reserve[]: 0 0 0 0 0 Oct 10 15:17:01 kernel: Node 0 DMA: 0*4kB 1*8kB (U) 0*16kB 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15880kB Oct 10 15:17:01 kernel: Node 0 DMA32: 1*4kB (U) 1*8kB (M) 0*16kB 9*32kB (UM) 11*64kB (UM) 12*128kB (UM) 12*256kB (UM) 11*512kB (UM) 11*1024kB (M) 1*2048kB (U) 369*4096kB (M) = 1535980kB Oct 10 15:17:01 kernel: Node 0 Normal: 76633*4kB (UME) 30442*8kB (UME) 7998*16kB (UME) 1401*32kB (UE) 6*64kB (U) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 723252kB Oct 10 15:17:01 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB Oct 10 15:17:01 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB Oct 10 15:17:01 kernel: Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB Oct 10 15:17:01 kernel: Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB Oct 10 15:17:01 kernel: 24866489 total pagecache pages Oct 10 15:17:01 kernel: 0 pages in swap cache Oct 10 15:17:01 kernel: Swap cache stats: add 0, delete 0, find 0/0 Oct 10 15:17:01 kernel: Free swap = 0kB Oct 10 15:17:01 kernel: Total swap = 0kB Oct 10 15:17:01 kernel: 200973631 pages RAM Oct 10 15:17:01 kernel: 0 pages HighMem/MovableOnly Oct 10 15:17:01 kernel: 3165617 pages reserved Oct 10 15:17:01 kernel: 0 pages hwpoisoned Oct 10 15:17:01 kernel: Tasks state (memory values in pages): Oct 10 15:17:01 kernel: [ 2414] 0 2414 33478 20111 315392 0 0 systemd-journal Oct 10 15:17:01 kernel: [ 2438] 0 2438 31851 540 143360 0 0 lvmetad Oct 10 15:17:01 kernel: [ 2453] 0 2453 12284 1141 131072 0 -1000 systemd-udevd Oct 10 15:17:01 kernel: [ 4170] 0 4170 13885 446 131072 0 -1000 auditd Oct 10 15:17:01 kernel: [ 4393] 0 4393 5484 526 86016 0 0 irqbalance Oct 10 15:17:01 kernel: [ 4394] 0 4394 6623 624 102400 0 0 systemd-logind … … Oct 10 15:17:01 kernel: oom-kill:constraint=CONSTRAINT_MEMORY_POLICY,nodemask=0,cpuset=vcpu0,mems_allowed=0,global_oom,task_memcg=/machine.slice/machine-qemu\x2d237\x2dinstance\x2d0000fda8.scope,task=qemu-kvm,pid=25496,uid=42436 Oct 10 15:17:01 kernel: Out of memory: Killed process 25496 (qemu-kvm) total-vm:67989512kB, anon-rss:66780940kB, file-rss:11052kB, shmem-rss:4kB Oct 10 15:17:02 kernel: oom_reaper: reaped process 25496 (qemu-kvm), now anon-rss:0kB, file-rss:36kB, 
shmem-rss:4kB -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Sat Oct 17 17:41:46 2020 From: satish.txt at gmail.com (Satish Patel) Date: Sat, 17 Oct 2020 13:41:46 -0400 Subject: [nova] NUMA scheduling In-Reply-To: <046E9C0290DD9149B106B72FC9156BEA048149BF@gmsxchsvr01.thecreation.com> References: <046E9C0290DD9149B106B72FC9156BEA048149B4@gmsxchsvr01.thecreation.com> <046E9C0290DD9149B106B72FC9156BEA048149BA@gmsxchsvr01.thecreation.com> <046E9C0290DD9149B106B72FC9156BEA048149BB@gmsxchsvr01.thecreation.com> <046E9C0290DD9149B106B72FC9156BEA048149BD@gmsxchsvr01.thecreation.com> <046E9C0290DD9149B106B72FC9156BEA048149BF@gmsxchsvr01.thecreation.com> Message-ID: I would say try without "hw:numa_nodes=1" in flavor properties. ~S On Sat, Oct 17, 2020 at 1:28 PM Eric K. Miller wrote: > > > What is the error thrown by Openstack when NUMA0 is full? > > > > OOM is actually killing the QEMU process, which causes Nova to report: > > > > /var/log/kolla/nova/nova-compute.log.4:2020-08-25 12:31:19.812 6 WARNING nova.compute.manager [req-62bddc53-ca8b-4bdc-bf41-8690fc88076f - - - - -] [instance: 8d8a262a-6e60-4e8a-97f9-14462f09b9e5] Instance shutdown by itself. Calling the stop API. Current vm_state: active, current task_state: None, original DB power_state: 1, current VM power_state: 4 > > > > So, there isn't a NUMA or memory-specific error from Nova - Nova is simply scheduling a VM on a node that it thinks has enough memory, and Libvirt (or Nova?) is configuring the VM to use CPU cores on a full NUMA node. > > > > NUMA Node 1 had about 240GiB of free memory with about 100GiB of buffer/cache space used, so plenty of free memory, whereas NUMA Node 0 was pretty tight on free memory. > > > > These are some logs in /var/log/messages (not for the nova-compute.log entry above, but the same condition for a VM that was killed - logs were rolled, so I had to pick a different VM): > > > > Oct 10 15:17:01 kernel: CPU 0/KVM invoked oom-killer: gfp_mask=0x100dca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), order=0, oom_score_adj=0 > > Oct 10 15:17:01 kernel: CPU: 15 PID: 30468 Comm: CPU 0/KVM Not tainted 5.3.8-1.el7.elrepo.x86_64 #1 > > Oct 10 15:17:01 kernel: Hardware name: > > Oct 10 15:17:01 kernel: Call Trace: > > Oct 10 15:17:01 kernel: dump_stack+0x63/0x88 > > Oct 10 15:17:01 kernel: dump_header+0x51/0x210 > > Oct 10 15:17:01 kernel: oom_kill_process+0x105/0x130 > > Oct 10 15:17:01 kernel: out_of_memory+0x105/0x4c0 > > … > > … > > Oct 10 15:17:01 kernel: active_anon:108933472 inactive_anon:174036 isolated_anon:0#012 active_file:21875969 inactive_file:2418794 isolated_file:32#012 unevictable:88113 dirty:0 writeback:4 unstable:0#012 slab_reclaimable:3056118 slab_unreclaimable:432301#012 mapped:71768 shmem:570159 pagetables:258264 bounce:0#012 free:58924792 free_pcp:326 free_cma:0 > > Oct 10 15:17:01 kernel: Node 0 active_anon:382548916kB inactive_anon:173052kB active_file:0kB inactive_file:2272kB unevictable:289840kB isolated(anon):0kB isolated(file):128kB mapped:16696kB dirty:0kB writeback:0kB shmem:578812kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 286420992kB writeback_tmp:0kB unstable:0kB all_unreclaimable? 
no > > Oct 10 15:17:01 kernel: Node 0 DMA free:15880kB min:0kB low:12kB high:24kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15880kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB > > Oct 10 15:17:01 kernel: lowmem_reserve[]: 0 1589 385604 385604 385604 > > Oct 10 15:17:01 kernel: Node 0 DMA32 free:1535904kB min:180kB low:1780kB high:3380kB active_anon:90448kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:1717888kB managed:1627512kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:1008kB local_pcp:248kB free_cma:0kB > > Oct 10 15:17:01 kernel: lowmem_reserve[]: 0 0 384015 384015 384015 > > Oct 10 15:17:01 kernel: Node 0 Normal free:720756kB min:818928kB low:1212156kB high:1605384kB active_anon:382458300kB inactive_anon:173052kB active_file:0kB inactive_file:2272kB unevictable:289840kB writepending:0kB present:399507456kB managed:393231952kB mlocked:289840kB kernel_stack:58344kB pagetables:889796kB bounce:0kB free_pcp:296kB local_pcp:0kB free_cma:0kB > > Oct 10 15:17:01 kernel: lowmem_reserve[]: 0 0 0 0 0 > > Oct 10 15:17:01 kernel: Node 0 DMA: 0*4kB 1*8kB (U) 0*16kB 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15880kB > > Oct 10 15:17:01 kernel: Node 0 DMA32: 1*4kB (U) 1*8kB (M) 0*16kB 9*32kB (UM) 11*64kB (UM) 12*128kB (UM) 12*256kB (UM) 11*512kB (UM) 11*1024kB (M) 1*2048kB (U) 369*4096kB (M) = 1535980kB > > Oct 10 15:17:01 kernel: Node 0 Normal: 76633*4kB (UME) 30442*8kB (UME) 7998*16kB (UME) 1401*32kB (UE) 6*64kB (U) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 723252kB > > Oct 10 15:17:01 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB > > Oct 10 15:17:01 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB > > Oct 10 15:17:01 kernel: Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB > > Oct 10 15:17:01 kernel: Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB > > Oct 10 15:17:01 kernel: 24866489 total pagecache pages > > Oct 10 15:17:01 kernel: 0 pages in swap cache > > Oct 10 15:17:01 kernel: Swap cache stats: add 0, delete 0, find 0/0 > > Oct 10 15:17:01 kernel: Free swap = 0kB > > Oct 10 15:17:01 kernel: Total swap = 0kB > > Oct 10 15:17:01 kernel: 200973631 pages RAM > > Oct 10 15:17:01 kernel: 0 pages HighMem/MovableOnly > > Oct 10 15:17:01 kernel: 3165617 pages reserved > > Oct 10 15:17:01 kernel: 0 pages hwpoisoned > > Oct 10 15:17:01 kernel: Tasks state (memory values in pages): > > Oct 10 15:17:01 kernel: [ 2414] 0 2414 33478 20111 315392 0 0 systemd-journal > > Oct 10 15:17:01 kernel: [ 2438] 0 2438 31851 540 143360 0 0 lvmetad > > Oct 10 15:17:01 kernel: [ 2453] 0 2453 12284 1141 131072 0 -1000 systemd-udevd > > Oct 10 15:17:01 kernel: [ 4170] 0 4170 13885 446 131072 0 -1000 auditd > > Oct 10 15:17:01 kernel: [ 4393] 0 4393 5484 526 86016 0 0 irqbalance > > Oct 10 15:17:01 kernel: [ 4394] 0 4394 6623 624 102400 0 0 systemd-logind > > … > > … > > Oct 10 15:17:01 kernel: oom-kill:constraint=CONSTRAINT_MEMORY_POLICY,nodemask=0,cpuset=vcpu0,mems_allowed=0,global_oom,task_memcg=/machine.slice/machine-qemu\x2d237\x2dinstance\x2d0000fda8.scope,task=qemu-kvm,pid=25496,uid=42436 > > Oct 10 15:17:01 kernel: Out of memory: Killed process 25496 (qemu-kvm) total-vm:67989512kB, anon-rss:66780940kB, 
file-rss:11052kB, shmem-rss:4kB > > Oct 10 15:17:02 kernel: oom_reaper: reaped process 25496 (qemu-kvm), now anon-rss:0kB, file-rss:36kB, shmem-rss:4kB From emiller at genesishosting.com Sat Oct 17 17:44:03 2020 From: emiller at genesishosting.com (Eric K. Miller) Date: Sat, 17 Oct 2020 12:44:03 -0500 Subject: [nova] NUMA scheduling In-Reply-To: References: <046E9C0290DD9149B106B72FC9156BEA048149B4@gmsxchsvr01.thecreation.com> <046E9C0290DD9149B106B72FC9156BEA048149BA@gmsxchsvr01.thecreation.com> <046E9C0290DD9149B106B72FC9156BEA048149BB@gmsxchsvr01.thecreation.com> <046E9C0290DD9149B106B72FC9156BEA048149BD@gmsxchsvr01.thecreation.com> <046E9C0290DD9149B106B72FC9156BEA048149BF@gmsxchsvr01.thecreation.com> Message-ID: <046E9C0290DD9149B106B72FC9156BEA048149C0@gmsxchsvr01.thecreation.com> > I would say try without "hw:numa_nodes=1" in flavor properties. We already tested this long ago. I mentioned previously: Without the hw:numa_nodes property, an individual VM is created with its vCPUs and Memory divided between the two NUMA nodes, which is not what we would prefer. We would prefer, instead, to have all vCPUs and Memory for the VM placed into a single NUMA node so all cores of the VM have access to this NUMA node's memory instead of having one core require cross-NUMA communications. From satish.txt at gmail.com Sat Oct 17 17:44:14 2020 From: satish.txt at gmail.com (Satish Patel) Date: Sat, 17 Oct 2020 13:44:14 -0400 Subject: [nova] NUMA scheduling In-Reply-To: References: <046E9C0290DD9149B106B72FC9156BEA048149B4@gmsxchsvr01.thecreation.com> <046E9C0290DD9149B106B72FC9156BEA048149BA@gmsxchsvr01.thecreation.com> <046E9C0290DD9149B106B72FC9156BEA048149BB@gmsxchsvr01.thecreation.com> <046E9C0290DD9149B106B72FC9156BEA048149BD@gmsxchsvr01.thecreation.com> <046E9C0290DD9149B106B72FC9156BEA048149BF@gmsxchsvr01.thecreation.com> Message-ID: or "hw:numa_nodes=2" to see if vm vcpu spreads to both zones. On Sat, Oct 17, 2020 at 1:41 PM Satish Patel wrote: > > I would say try without "hw:numa_nodes=1" in flavor properties. > > ~S > > On Sat, Oct 17, 2020 at 1:28 PM Eric K. Miller > wrote: > > > > > What is the error thrown by Openstack when NUMA0 is full? > > > > > > > > OOM is actually killing the QEMU process, which causes Nova to report: > > > > > > > > /var/log/kolla/nova/nova-compute.log.4:2020-08-25 12:31:19.812 6 WARNING nova.compute.manager [req-62bddc53-ca8b-4bdc-bf41-8690fc88076f - - - - -] [instance: 8d8a262a-6e60-4e8a-97f9-14462f09b9e5] Instance shutdown by itself. Calling the stop API. Current vm_state: active, current task_state: None, original DB power_state: 1, current VM power_state: 4 > > > > > > > > So, there isn't a NUMA or memory-specific error from Nova - Nova is simply scheduling a VM on a node that it thinks has enough memory, and Libvirt (or Nova?) is configuring the VM to use CPU cores on a full NUMA node. > > > > > > > > NUMA Node 1 had about 240GiB of free memory with about 100GiB of buffer/cache space used, so plenty of free memory, whereas NUMA Node 0 was pretty tight on free memory. 
> > > > > > > > These are some logs in /var/log/messages (not for the nova-compute.log entry above, but the same condition for a VM that was killed - logs were rolled, so I had to pick a different VM): > > > > > > > > Oct 10 15:17:01 kernel: CPU 0/KVM invoked oom-killer: gfp_mask=0x100dca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), order=0, oom_score_adj=0 > > > > Oct 10 15:17:01 kernel: CPU: 15 PID: 30468 Comm: CPU 0/KVM Not tainted 5.3.8-1.el7.elrepo.x86_64 #1 > > > > Oct 10 15:17:01 kernel: Hardware name: > > > > Oct 10 15:17:01 kernel: Call Trace: > > > > Oct 10 15:17:01 kernel: dump_stack+0x63/0x88 > > > > Oct 10 15:17:01 kernel: dump_header+0x51/0x210 > > > > Oct 10 15:17:01 kernel: oom_kill_process+0x105/0x130 > > > > Oct 10 15:17:01 kernel: out_of_memory+0x105/0x4c0 > > > > … > > > > … > > > > Oct 10 15:17:01 kernel: active_anon:108933472 inactive_anon:174036 isolated_anon:0#012 active_file:21875969 inactive_file:2418794 isolated_file:32#012 unevictable:88113 dirty:0 writeback:4 unstable:0#012 slab_reclaimable:3056118 slab_unreclaimable:432301#012 mapped:71768 shmem:570159 pagetables:258264 bounce:0#012 free:58924792 free_pcp:326 free_cma:0 > > > > Oct 10 15:17:01 kernel: Node 0 active_anon:382548916kB inactive_anon:173052kB active_file:0kB inactive_file:2272kB unevictable:289840kB isolated(anon):0kB isolated(file):128kB mapped:16696kB dirty:0kB writeback:0kB shmem:578812kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 286420992kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no > > > > Oct 10 15:17:01 kernel: Node 0 DMA free:15880kB min:0kB low:12kB high:24kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15880kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB > > > > Oct 10 15:17:01 kernel: lowmem_reserve[]: 0 1589 385604 385604 385604 > > > > Oct 10 15:17:01 kernel: Node 0 DMA32 free:1535904kB min:180kB low:1780kB high:3380kB active_anon:90448kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:1717888kB managed:1627512kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:1008kB local_pcp:248kB free_cma:0kB > > > > Oct 10 15:17:01 kernel: lowmem_reserve[]: 0 0 384015 384015 384015 > > > > Oct 10 15:17:01 kernel: Node 0 Normal free:720756kB min:818928kB low:1212156kB high:1605384kB active_anon:382458300kB inactive_anon:173052kB active_file:0kB inactive_file:2272kB unevictable:289840kB writepending:0kB present:399507456kB managed:393231952kB mlocked:289840kB kernel_stack:58344kB pagetables:889796kB bounce:0kB free_pcp:296kB local_pcp:0kB free_cma:0kB > > > > Oct 10 15:17:01 kernel: lowmem_reserve[]: 0 0 0 0 0 > > > > Oct 10 15:17:01 kernel: Node 0 DMA: 0*4kB 1*8kB (U) 0*16kB 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15880kB > > > > Oct 10 15:17:01 kernel: Node 0 DMA32: 1*4kB (U) 1*8kB (M) 0*16kB 9*32kB (UM) 11*64kB (UM) 12*128kB (UM) 12*256kB (UM) 11*512kB (UM) 11*1024kB (M) 1*2048kB (U) 369*4096kB (M) = 1535980kB > > > > Oct 10 15:17:01 kernel: Node 0 Normal: 76633*4kB (UME) 30442*8kB (UME) 7998*16kB (UME) 1401*32kB (UE) 6*64kB (U) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 723252kB > > > > Oct 10 15:17:01 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB > > > > Oct 10 15:17:01 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB > > > > Oct 10 15:17:01 kernel: Node 1 
hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB > > > > Oct 10 15:17:01 kernel: Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB > > > > Oct 10 15:17:01 kernel: 24866489 total pagecache pages > > > > Oct 10 15:17:01 kernel: 0 pages in swap cache > > > > Oct 10 15:17:01 kernel: Swap cache stats: add 0, delete 0, find 0/0 > > > > Oct 10 15:17:01 kernel: Free swap = 0kB > > > > Oct 10 15:17:01 kernel: Total swap = 0kB > > > > Oct 10 15:17:01 kernel: 200973631 pages RAM > > > > Oct 10 15:17:01 kernel: 0 pages HighMem/MovableOnly > > > > Oct 10 15:17:01 kernel: 3165617 pages reserved > > > > Oct 10 15:17:01 kernel: 0 pages hwpoisoned > > > > Oct 10 15:17:01 kernel: Tasks state (memory values in pages): > > > > Oct 10 15:17:01 kernel: [ 2414] 0 2414 33478 20111 315392 0 0 systemd-journal > > > > Oct 10 15:17:01 kernel: [ 2438] 0 2438 31851 540 143360 0 0 lvmetad > > > > Oct 10 15:17:01 kernel: [ 2453] 0 2453 12284 1141 131072 0 -1000 systemd-udevd > > > > Oct 10 15:17:01 kernel: [ 4170] 0 4170 13885 446 131072 0 -1000 auditd > > > > Oct 10 15:17:01 kernel: [ 4393] 0 4393 5484 526 86016 0 0 irqbalance > > > > Oct 10 15:17:01 kernel: [ 4394] 0 4394 6623 624 102400 0 0 systemd-logind > > > > … > > > > … > > > > Oct 10 15:17:01 kernel: oom-kill:constraint=CONSTRAINT_MEMORY_POLICY,nodemask=0,cpuset=vcpu0,mems_allowed=0,global_oom,task_memcg=/machine.slice/machine-qemu\x2d237\x2dinstance\x2d0000fda8.scope,task=qemu-kvm,pid=25496,uid=42436 > > > > Oct 10 15:17:01 kernel: Out of memory: Killed process 25496 (qemu-kvm) total-vm:67989512kB, anon-rss:66780940kB, file-rss:11052kB, shmem-rss:4kB > > > > Oct 10 15:17:02 kernel: oom_reaper: reaped process 25496 (qemu-kvm), now anon-rss:0kB, file-rss:36kB, shmem-rss:4kB From 1732724715 at qq.com Sat Oct 17 22:02:00 2020 From: 1732724715 at qq.com (=?gb18030?B?SmFtZXM=?=) Date: Sun, 18 Oct 2020 06:02:00 +0800 Subject: [glance][devstack] Problem with stack.sh in devsatck install Message-ID: hi guys:       I met some problems in installing devstack . the details please refers to "Errors" listed as below.     The environment: 1) Win 10 ; 2) VMware workstation ;3) ubuntu-18.04.5-desktop-amd64;4)devstack          The installation processes refer to the devstack quick start (https://docs.openstack.org/devstack/latest/) Could you give me the solution to solve the problem. Error : Installing collected packages: oslo.rootwrap, retrying, os-win, oslo.privsep, os-brick, simplejson, python-cinderclient, glance-store Attempting uninstall: simplejson Found existing installation: simplejson 3.13.2 ERROR: Cannot uninstall 'simplejson'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall. +inc/python:pip_install:1 exit_trap +./stack.sh:exit_trap:489 local r=1 ++./stack.sh:exit_trap:490 jobs -p +./stack.sh:exit_trap:490 jobs= +./stack.sh:exit_trap:493 [[ -n '' ]] +./stack.sh:exit_trap:499 '[' -f '' ']' +./stack.sh:exit_trap:504 kill_spinner +./stack.sh:kill_spinner:399 '[' '!' -z '' ']' +./stack.sh:exit_trap:506 [[ 1 -ne 0 ]] +./stack.sh:exit_trap:507 echo 'Error on exit' Error on exit +./stack.sh:exit_trap:509 type -p generate-subunit +./stack.sh:exit_trap:510 generate-subunit 1589476087 780 fail +./stack.sh:exit_trap:512 [[ -z /opt/stack/logs ]] +./stack.sh:exit_trap ... -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Sat Oct 17 22:25:25 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 17 Oct 2020 22:25:25 +0000 Subject: [glance][devstack] Problem with stack.sh in devsatck install In-Reply-To: References: Message-ID: <20201017222525.pamn7sgorgualelf@yuggoth.org> On 2020-10-18 06:02:00 +0800 (+0800), James wrote: > I met some problems in installing devstack [...] > ubuntu-18.04.5-desktop-amd64 [...] For latest versions of DevStack you'll want Ubuntu 20.04 instead, it's not necessarily expected to work on older Ubuntu versions unless you also use an older DevStack. > Attempting uninstall:+simplejson Found existing installation: > simplejson 3.13.2 ERROR: Cannot uninstall 'simplejson'. It is a > distutils installed project and thus we cannot accurately > determine which files belong to it which would lead to only a > partial uninstall. [...] The error you're seeing is because DevStack wants to install simplejson with pip, but there's a distro package of a different version of simplejson already present which it cannot remove. If you continue to see this error on Ubuntu 20.04, try using apt to uninstall the python3-simplejson package before starting DevStack. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From eblock at nde.ag Sun Oct 18 07:18:02 2020 From: eblock at nde.ag (Eugen Block) Date: Sun, 18 Oct 2020 07:18:02 +0000 Subject: [ops] [cinder] __DEFAULT__ volume type In-Reply-To: Message-ID: <20201018071802.Horde.WbOpDseQpdEH5ab01v5mJqO@webmail.nde.ag> Hi, I also was confused about that during our upgrade to Train. I noticed that many volumes in our cloud didn't have any volume type defined at all, so I had to update the respective table before I could continue the upgrade process. I have also defined a different volume type in cinder.conf but I don't think you can just delete the __DEFAULT__ type, I'm not sure anymore but I think I also tried that. But it doesn't really hurt if you have one of your own types as default. Regards, Eugen Zitat von Massimo Sgaravatto : > I have recently updated my Cloud from Rocky to Train (I am running Cinder > v. 15.4.0) > I have a question concerning the __DEFAULT__ volume type, that I don't > remember to have seen before. > > Since: > - I have no volumes using this volume type > - I defined in in the [DEFAULT] section of cinder.conf the attribute > "default_volume_type" to a value different than "__DEFAULT___" > > I assume that I can safely delete the __DEFAULT__ volume type > > Is this correct? > > Thanks, Massimo From rdhasman at redhat.com Sun Oct 18 07:37:58 2020 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Sun, 18 Oct 2020 13:07:58 +0530 Subject: Fwd: [ops] [cinder] __DEFAULT__ volume type In-Reply-To: References: Message-ID: ---------- Forwarded message --------- From: Rajat Dhasmana Date: Sat, Oct 17, 2020 at 12:31 AM Subject: Re: [ops] [cinder] __DEFAULT__ volume type To: Massimo Sgaravatto Hi Massimo, On Fri, Oct 16, 2020 at 7:42 PM Massimo Sgaravatto < massimo.sgaravatto at gmail.com> wrote: > I have recently updated my Cloud from Rocky to Train (I am running Cinder > v. 15.4.0) > I have a question concerning the __DEFAULT__ volume type, that I don't > remember to have seen before. > > Since Train, cinder has decided to discourage untyped volumes (volumes with None volume type) as it doesn't add any value to the volume attributes. 
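In practice, the checks Massimo describes can be sketched along these lines; the ceph-ssd type name and the cinder.conf location are placeholders for whatever the deployment actually uses:

    openstack volume list --all-projects --long | grep __DEFAULT__   # anything still using the type?
    # /etc/cinder/cinder.conf
    #   [DEFAULT]
    #   default_volume_type = ceph-ssd
    openstack volume type delete __DEFAULT__                         # only after both checks pass

As noted later in the thread, deleting __DEFAULT__ also requires cinder Train 15.4.0 or newer.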
To achieve this, we created the __DEFAULT__ type and migrated all untyped volumes to have the __DEFAULT__ type. > Since: > - I have no volumes using this volume type > - I defined in in the [DEFAULT] section of cinder.conf the attribute > "default_volume_type" to a value different than "__DEFAULT___" > > I assume that I can safely delete the __DEFAULT__ volume type > > Is this correct? > > Yes, as long as there are no volumes using it and there is a valid volume type defined with ``default_volume_type`` config in cinder.conf, you can safely delete it. NOTE: ``default_volume_type`` has a default value of ``__DEFAULT__`` to not allow users to further create untyped volumes, if you are deleting this, make sure there is always a valid volume type defined with ``default_volume_type`` in cinder.conf > Thanks, Massimo > Thanks and Regards Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Sun Oct 18 07:38:32 2020 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Sun, 18 Oct 2020 13:08:32 +0530 Subject: Fwd: [ops] [cinder] __DEFAULT__ volume type In-Reply-To: References: <20201018071802.Horde.WbOpDseQpdEH5ab01v5mJqO@webmail.nde.ag> Message-ID: Sorry i forgot to include openstack discuss mailing list with my reply. ---------- Forwarded message --------- From: Rajat Dhasmana Date: Sun, Oct 18, 2020 at 1:06 PM Subject: Re: [ops] [cinder] __DEFAULT__ volume type To: Eugen Block Hi Eugen, On Sun, Oct 18, 2020 at 12:54 PM Eugen Block wrote: > Hi, > > I also was confused about that during our upgrade to Train. I noticed > that many volumes in our cloud didn't have any volume type defined at > all, so I had to update the respective table before I could continue > the upgrade process. What do you mean by "updating the respective table"? The optimal way to update a volume with a volume type is: 1) create a volume type pointing to the backend in which the volume currently is 2) retype the volume to that volume type > I have also defined a different volume type in > cinder.conf but I don't think you can just delete the __DEFAULT__ > type, The functionality to delete the __DEFAULT__ type is only available since cinder train >= 15.4.0 (See the releasenotes for more info regarding the __DEFAULT__ type[1]) Also you need to have a valid value defined with ``default_volume_type`` in cinder.conf Let me know if you have respective settings and still not able to delete the __DEFAULT__ type. > I'm not sure anymore but I think I also tried that. But it > doesn't really hurt if you have one of your own types as default. > > Regards, > Eugen > > [1] https://docs.openstack.org/releasenotes/cinder/train.html#relnotes-15-4-0-stable-train-bug-fixes Regards Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... URL: From ChenSa at radware.com Sun Oct 18 09:32:09 2020 From: ChenSa at radware.com (Chen Sagi) Date: Sun, 18 Oct 2020 09:32:09 +0000 Subject: approach to deploying rhosp 16.1 with octavia on standalone Message-ID: Hi, I am trying to figure out if I am able to deploy Red Hat Openstack platform with one physical server available, included with the Octavia solution. For now I have been trying to use the "openstack triplo deploy -standalone" command (with various environment files) and it Is unable to install Octavia without an undercloud available. Is it even possible? If so, I would love to hear some suggestions. Thanks, Chen Sagi. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dsneddon at redhat.com Sun