From whayutin at redhat.com Tue Sep 1 01:09:14 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 31 Aug 2020 19:09:14 -0600 Subject: [rdo-users] [TripleO][Ussuri] image prepare, cannot download containers image prepare recently In-Reply-To: References: Message-ID: On Mon, Aug 31, 2020 at 1:23 PM Ruslanas Gžibovskis wrote: > Hi all, > > I have noticed, that recently my undercloud is not able to download images > [0]. I have provided newly generated containers-prepare-parameter.yaml and > outputs from container image prepare > > providing --verbose and later beginning of --debug (in the end) [0] > > were there any changes? As "openstack tripleo container image prepare > default --output-env-file containers-prepare-parameter.yaml > --local-push-destination" have prepared a bit different file, compared > what was previously: NEW # namespace: docker.io/tripleou VS namespace: > docker.io/tripleomaster # OLD > > > [0] - http://paste.openstack.org/show/rBCNAQJBEe9y7CKyi9aG/ > -- > Ruslanas Gžibovskis > +370 6030 7030 > hrm.. seems to be there: https://hub.docker.com/r/tripleou/centos-binary-gnocchi-metricd/tags?page=1&name=current-tripleo I see current-tripleo tags Here are some example config and logs for your reference, although what you have seems sane to me. https://logserver.rdoproject.org/61/749161/1/openstack-check/tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001/f87b9d9/logs/undercloud/home/zuul/containers-prepare-parameter.yaml.txt.gz https://logserver.rdoproject.org/61/749161/1/openstack-check/tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001/f87b9d9/logs/undercloud/var/log/tripleo-container-image-prepare-ansible.log.txt.gz https://logserver.rdoproject.org/61/749161/1/openstack-check/tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001/f87b9d9/logs/undercloud/var/log/tripleo-container-image-prepare.log.txt.gz https://647ea51a7c76ba5f17c8-d499b2e7288499505ef93c98963c2e35.ssl.cf5.rackcdn.com/748668/1/gate/tripleo-ci-centos-8-standalone/0c706e1/logs/undercloud/home/zuul/containers-prepare-parameters.yaml > _______________________________________________ > users mailing list > users at lists.rdoproject.org > http://lists.rdoproject.org/mailman/listinfo/users > > To unsubscribe: users-unsubscribe at lists.rdoproject.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yamamoto at midokura.com Tue Sep 1 01:49:12 2020 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Tue, 1 Sep 2020 10:49:12 +0900 Subject: [neutron][networking-midonet] Maintainers needed In-Reply-To: <5E2F5826-559E-42E9-84C5-FA708E5A122A@gmail.com> References: <0AC5AC07-E97E-43CC-B344-A3E992B8CCA4@netways.de> <610412AF-AADF-44BD-ABA2-BA289B7C8F8A@redhat.com> <5E2F5826-559E-42E9-84C5-FA708E5A122A@gmail.com> Message-ID: Sebastian, Sam, thank you for speaking up. as Slawek said, the first (and probably the biggest) thing is to fix the ci. the major part for it is to make midonet itself to run on ubuntu version used by the ci. (18.04, or maybe directly to 20.04) https://midonet.atlassian.net/browse/MNA-1344 iirc, the remaining blockers are: * libreswan (used by vpnaas) * vpp (used by fip64) maybe it's the easiest to drop those features along with their required components, if it's acceptable for your use cases. alternatively you might want to make midonet run in a container. (so that you can run it with older ubuntu, or even a container trimmed for JVM) there were a few attempts to containerize midonet. 
i think this is the latest one: https://github.com/midonet/midonet-docker On Fri, Aug 28, 2020 at 7:10 AM Sam Morrison wrote: > > We (Nectar Research Cloud) use midonet heavily too, it works really well and we haven’t found another driver that works for us. We tried OVN but it just doesn’t scale to the size of environment we have. > > I’m happy to help too. > > Cheers, > Sam > > > > > On 31 Jul 2020, at 2:06 am, Slawek Kaplonski wrote: > > > > Hi, > > > > Thx Sebastian for stepping in to maintain the project. That is great news. > > I think that at the beginning You should do 2 things: > > - sync with Takashi Yamamoto (I added him to the loop) as he is probably most active current maintainer of this project, > > - focus on fixing networking-midonet ci which is currently broken - all scenario jobs aren’t working fine on Ubuntu 18.04 (and we are going to move to 20.04 in this cycle), migrate jobs to zuulv3 from the legacy ones and finally add them to the ci again, > > > > I can of course help You with ci jobs if You need any help. Feel free to ping me on IRC or email (can be off the list). > > > >> On 29 Jul 2020, at 15:24, Sebastian Saemann wrote: > >> > >> Hi Slawek, > >> > >> we at NETWAYS are running most of our neutron networking on top of midonet and wouldn't be too happy if it gets deprecated and removed. So we would like to take over the maintainer role for this part. > >> > >> Please let me know how to proceed and how we can be onboarded easily. > >> > >> Best regards, > >> > >> Sebastian > >> > >> -- > >> Sebastian Saemann > >> Head of Managed Services > >> > >> NETWAYS Managed Services GmbH | Deutschherrnstr. 15-19 | D-90429 Nuernberg > >> Tel: +49 911 92885-0 | Fax: +49 911 92885-77 > >> CEO: Julian Hein, Bernd Erk | AG Nuernberg HRB25207 > >> https://netways.de | sebastian.saemann at netways.de > >> > >> ** NETWAYS Web Services - https://nws.netways.de ** > > > > — > > Slawek Kaplonski > > Principal software engineer > > Red Hat > > > > > From sorrison at gmail.com Tue Sep 1 04:39:03 2020 From: sorrison at gmail.com (Sam Morrison) Date: Tue, 1 Sep 2020 14:39:03 +1000 Subject: [neutron][networking-midonet] Maintainers needed In-Reply-To: References: <0AC5AC07-E97E-43CC-B344-A3E992B8CCA4@netways.de> <610412AF-AADF-44BD-ABA2-BA289B7C8F8A@redhat.com> <5E2F5826-559E-42E9-84C5-FA708E5A122A@gmail.com> Message-ID: <43C4AF2B-C5C0-40EB-B621-AC6799471D01@gmail.com> > On 1 Sep 2020, at 11:49 am, Takashi Yamamoto wrote: > > Sebastian, Sam, > > thank you for speaking up. > > as Slawek said, the first (and probably the biggest) thing is to fix the ci. > the major part for it is to make midonet itself to run on ubuntu > version used by the ci. (18.04, or maybe directly to 20.04) > https://midonet.atlassian.net/browse/MNA-1344 > iirc, the remaining blockers are: > * libreswan (used by vpnaas) > * vpp (used by fip64) > maybe it's the easiest to drop those features along with their > required components, if it's acceptable for your use cases. We are running midonet-cluster and midolman on 18.04, we dropped those package dependencies from our ubuntu package to get it working. We currently have built our own and host in our internal repo but happy to help putting this upstream somehow. Can we upload them to the midonet apt repo, does it still exist? I’m keen to do the work but might need a bit of guidance to get started, Sam > > alternatively you might want to make midonet run in a container. 
(so > that you can run it with older ubuntu, or even a container trimmed for > JVM) > there were a few attempts to containerize midonet. > i think this is the latest one: https://github.com/midonet/midonet-docker > > On Fri, Aug 28, 2020 at 7:10 AM Sam Morrison wrote: >> >> We (Nectar Research Cloud) use midonet heavily too, it works really well and we haven’t found another driver that works for us. We tried OVN but it just doesn’t scale to the size of environment we have. >> >> I’m happy to help too. >> >> Cheers, >> Sam >> >> >> >>> On 31 Jul 2020, at 2:06 am, Slawek Kaplonski wrote: >>> >>> Hi, >>> >>> Thx Sebastian for stepping in to maintain the project. That is great news. >>> I think that at the beginning You should do 2 things: >>> - sync with Takashi Yamamoto (I added him to the loop) as he is probably most active current maintainer of this project, >>> - focus on fixing networking-midonet ci which is currently broken - all scenario jobs aren’t working fine on Ubuntu 18.04 (and we are going to move to 20.04 in this cycle), migrate jobs to zuulv3 from the legacy ones and finally add them to the ci again, >>> >>> I can of course help You with ci jobs if You need any help. Feel free to ping me on IRC or email (can be off the list). >>> >>>> On 29 Jul 2020, at 15:24, Sebastian Saemann wrote: >>>> >>>> Hi Slawek, >>>> >>>> we at NETWAYS are running most of our neutron networking on top of midonet and wouldn't be too happy if it gets deprecated and removed. So we would like to take over the maintainer role for this part. >>>> >>>> Please let me know how to proceed and how we can be onboarded easily. >>>> >>>> Best regards, >>>> >>>> Sebastian >>>> >>>> -- >>>> Sebastian Saemann >>>> Head of Managed Services >>>> >>>> NETWAYS Managed Services GmbH | Deutschherrnstr. 15-19 | D-90429 Nuernberg >>>> Tel: +49 911 92885-0 | Fax: +49 911 92885-77 >>>> CEO: Julian Hein, Bernd Erk | AG Nuernberg HRB25207 >>>> https://netways.de | sebastian.saemann at netways.de >>>> >>>> ** NETWAYS Web Services - https://nws.netways.de ** >>> >>> — >>> Slawek Kaplonski >>> Principal software engineer >>> Red Hat >>> >>> >> From yamamoto at midokura.com Tue Sep 1 04:59:39 2020 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Tue, 1 Sep 2020 13:59:39 +0900 Subject: [neutron][networking-midonet] Maintainers needed In-Reply-To: <43C4AF2B-C5C0-40EB-B621-AC6799471D01@gmail.com> References: <0AC5AC07-E97E-43CC-B344-A3E992B8CCA4@netways.de> <610412AF-AADF-44BD-ABA2-BA289B7C8F8A@redhat.com> <5E2F5826-559E-42E9-84C5-FA708E5A122A@gmail.com> <43C4AF2B-C5C0-40EB-B621-AC6799471D01@gmail.com> Message-ID: hi, On Tue, Sep 1, 2020 at 1:39 PM Sam Morrison wrote: > > > > > On 1 Sep 2020, at 11:49 am, Takashi Yamamoto wrote: > > > > Sebastian, Sam, > > > > thank you for speaking up. > > > > as Slawek said, the first (and probably the biggest) thing is to fix the ci. > > the major part for it is to make midonet itself to run on ubuntu > > version used by the ci. (18.04, or maybe directly to 20.04) > > https://midonet.atlassian.net/browse/MNA-1344 > > iirc, the remaining blockers are: > > * libreswan (used by vpnaas) > > * vpp (used by fip64) > > maybe it's the easiest to drop those features along with their > > required components, if it's acceptable for your use cases. > > We are running midonet-cluster and midolman on 18.04, we dropped those package dependencies from our ubuntu package to get it working. > > We currently have built our own and host in our internal repo but happy to help putting this upstream somehow. 
Can we upload them to the midonet apt repo, does it still exist? it still exists. but i don't think it's maintained well. let me find and ask someone in midokura who "owns" that part of infra. does it also involve some package-related modifications to midonet repo, right? > > I’m keen to do the work but might need a bit of guidance to get started, > > Sam > > > > > > > > > > alternatively you might want to make midonet run in a container. (so > > that you can run it with older ubuntu, or even a container trimmed for > > JVM) > > there were a few attempts to containerize midonet. > > i think this is the latest one: https://github.com/midonet/midonet-docker > > > > On Fri, Aug 28, 2020 at 7:10 AM Sam Morrison wrote: > >> > >> We (Nectar Research Cloud) use midonet heavily too, it works really well and we haven’t found another driver that works for us. We tried OVN but it just doesn’t scale to the size of environment we have. > >> > >> I’m happy to help too. > >> > >> Cheers, > >> Sam > >> > >> > >> > >>> On 31 Jul 2020, at 2:06 am, Slawek Kaplonski wrote: > >>> > >>> Hi, > >>> > >>> Thx Sebastian for stepping in to maintain the project. That is great news. > >>> I think that at the beginning You should do 2 things: > >>> - sync with Takashi Yamamoto (I added him to the loop) as he is probably most active current maintainer of this project, > >>> - focus on fixing networking-midonet ci which is currently broken - all scenario jobs aren’t working fine on Ubuntu 18.04 (and we are going to move to 20.04 in this cycle), migrate jobs to zuulv3 from the legacy ones and finally add them to the ci again, > >>> > >>> I can of course help You with ci jobs if You need any help. Feel free to ping me on IRC or email (can be off the list). > >>> > >>>> On 29 Jul 2020, at 15:24, Sebastian Saemann wrote: > >>>> > >>>> Hi Slawek, > >>>> > >>>> we at NETWAYS are running most of our neutron networking on top of midonet and wouldn't be too happy if it gets deprecated and removed. So we would like to take over the maintainer role for this part. > >>>> > >>>> Please let me know how to proceed and how we can be onboarded easily. > >>>> > >>>> Best regards, > >>>> > >>>> Sebastian > >>>> > >>>> -- > >>>> Sebastian Saemann > >>>> Head of Managed Services > >>>> > >>>> NETWAYS Managed Services GmbH | Deutschherrnstr. 15-19 | D-90429 Nuernberg > >>>> Tel: +49 911 92885-0 | Fax: +49 911 92885-77 > >>>> CEO: Julian Hein, Bernd Erk | AG Nuernberg HRB25207 > >>>> https://netways.de | sebastian.saemann at netways.de > >>>> > >>>> ** NETWAYS Web Services - https://nws.netways.de ** > >>> > >>> — > >>> Slawek Kaplonski > >>> Principal software engineer > >>> Red Hat > >>> > >>> > >> > From yasufum.o at gmail.com Tue Sep 1 05:23:56 2020 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Tue, 1 Sep 2020 14:23:56 +0900 Subject: [tacker] Wallaby vPTG Message-ID: Hi team, I booked our slots, 27-29 Oct 6am-8am UTC, for the next vPTG[1] as we agreed in previous irc meeting. I also prepared an etherpad [2], so please add your name and suggestions. [1] https://ethercalc.openstack.org/7xp2pcbh1ncb [2] https://etherpad.opendev.org/p/Tacker-PTG-Wallaby Thanks, Yasufumi From arnaud.morin at gmail.com Tue Sep 1 05:59:33 2020 From: arnaud.morin at gmail.com (Arnaud MORIN) Date: Tue, 1 Sep 2020 07:59:33 +0200 Subject: [ops] Restructuring OSOPS tools In-Reply-To: References: Message-ID: Hello, I also agree with the merging of everything into one repo. When I discovered those repos, I was surprised that it was split into several repos. 
Keeping a structure with contrib/ folder like Thierry example is the best imo. You said it, it will ease discovery of tools, but also maintenance. Cheers. Le ven. 28 août 2020 à 12:06, Thierry Carrez a écrit : > Sean McGinnis wrote: > > [...] > > Since these are now owned by an official SIG, we can move this content > > back under the openstack/ namespace. That should help increase > > visibility somewhat, and make things look a little more official. It > > will also allow contributors to tooling to get recognition for > > contributing to an import part of the OpenStack ecosystem. > > > > I do think it's can be a little more difficult to find things spread out > > over several repos though. For simplicity with finding tooling, as well > > as watching for reviews and helping with overall maintenance, I would > > like to move all of these under a common openstack/osops. Under that > > repo, we can then have a folder structure with tools/logging, > > tools/monitoring, etc. > > Also the original setup[1] called for moving things from one repo to > another as they get more mature, which loses history. So I agree a > single repository is better. > > However, one benefit of the original setup was that it made it really > low-friction to land half-baked code in the osops-tools-contrib > repository. The idea was to encourage tools sharing, rather than judge > quality or curate a set. I think it's critical for the success of OSops > that operator code can be brought in with very low friction, and > curation can happen later. > > If we opt for a theme-based directory structure, we could communicate > that a given tool is in "unmaintained/use-at-your-own-risk" status using > metadata. But thinking more about it, I would suggest we keep a > low-friction "contrib/" directory in the repo, which more clearly > communicates "use at your own risk" for anything within it. Then we > could move tools under the "tools/" directory structure if a community > forms within the SIG to support and improve a specific tool. That would > IMHO allow both low-friction landing *and* curation to happen. > > > [...] > > Please let me know if there are any objects to this plan. Otherwise, I > > will start cleaning things up and getting it staged in a new repo to be > > imported as an official repo owned by the SIG. > > I like it! > > [1] https://wiki.openstack.org/wiki/Osops > > -- > Thierry Carrez (ttx) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sorrison at gmail.com Tue Sep 1 06:03:55 2020 From: sorrison at gmail.com (Sam Morrison) Date: Tue, 1 Sep 2020 16:03:55 +1000 Subject: [neutron][networking-midonet] Maintainers needed In-Reply-To: References: <0AC5AC07-E97E-43CC-B344-A3E992B8CCA4@netways.de> <610412AF-AADF-44BD-ABA2-BA289B7C8F8A@redhat.com> <5E2F5826-559E-42E9-84C5-FA708E5A122A@gmail.com> <43C4AF2B-C5C0-40EB-B621-AC6799471D01@gmail.com> Message-ID: <92959221-0353-4D48-8726-8FE71AFEA652@gmail.com> > On 1 Sep 2020, at 2:59 pm, Takashi Yamamoto wrote: > > hi, > > On Tue, Sep 1, 2020 at 1:39 PM Sam Morrison wrote: >> >> >> >>> On 1 Sep 2020, at 11:49 am, Takashi Yamamoto wrote: >>> >>> Sebastian, Sam, >>> >>> thank you for speaking up. >>> >>> as Slawek said, the first (and probably the biggest) thing is to fix the ci. >>> the major part for it is to make midonet itself to run on ubuntu >>> version used by the ci. 
(18.04, or maybe directly to 20.04) >>> https://midonet.atlassian.net/browse/MNA-1344 >>> iirc, the remaining blockers are: >>> * libreswan (used by vpnaas) >>> * vpp (used by fip64) >>> maybe it's the easiest to drop those features along with their >>> required components, if it's acceptable for your use cases. >> >> We are running midonet-cluster and midolman on 18.04, we dropped those package dependencies from our ubuntu package to get it working. >> >> We currently have built our own and host in our internal repo but happy to help putting this upstream somehow. Can we upload them to the midonet apt repo, does it still exist? > > it still exists. but i don't think it's maintained well. > let me find and ask someone in midokura who "owns" that part of infra. > > does it also involve some package-related modifications to midonet repo, right? Yes a couple, I will send up as as pull requests to https://github.com/midonet/midonet today or tomorrow Sam > >> >> I’m keen to do the work but might need a bit of guidance to get started, >> >> Sam >> >> >> >> >> >> >>> >>> alternatively you might want to make midonet run in a container. (so >>> that you can run it with older ubuntu, or even a container trimmed for >>> JVM) >>> there were a few attempts to containerize midonet. >>> i think this is the latest one: https://github.com/midonet/midonet-docker >>> >>> On Fri, Aug 28, 2020 at 7:10 AM Sam Morrison wrote: >>>> >>>> We (Nectar Research Cloud) use midonet heavily too, it works really well and we haven’t found another driver that works for us. We tried OVN but it just doesn’t scale to the size of environment we have. >>>> >>>> I’m happy to help too. >>>> >>>> Cheers, >>>> Sam >>>> >>>> >>>> >>>>> On 31 Jul 2020, at 2:06 am, Slawek Kaplonski wrote: >>>>> >>>>> Hi, >>>>> >>>>> Thx Sebastian for stepping in to maintain the project. That is great news. >>>>> I think that at the beginning You should do 2 things: >>>>> - sync with Takashi Yamamoto (I added him to the loop) as he is probably most active current maintainer of this project, >>>>> - focus on fixing networking-midonet ci which is currently broken - all scenario jobs aren’t working fine on Ubuntu 18.04 (and we are going to move to 20.04 in this cycle), migrate jobs to zuulv3 from the legacy ones and finally add them to the ci again, >>>>> >>>>> I can of course help You with ci jobs if You need any help. Feel free to ping me on IRC or email (can be off the list). >>>>> >>>>>> On 29 Jul 2020, at 15:24, Sebastian Saemann wrote: >>>>>> >>>>>> Hi Slawek, >>>>>> >>>>>> we at NETWAYS are running most of our neutron networking on top of midonet and wouldn't be too happy if it gets deprecated and removed. So we would like to take over the maintainer role for this part. >>>>>> >>>>>> Please let me know how to proceed and how we can be onboarded easily. >>>>>> >>>>>> Best regards, >>>>>> >>>>>> Sebastian >>>>>> >>>>>> -- >>>>>> Sebastian Saemann >>>>>> Head of Managed Services >>>>>> >>>>>> NETWAYS Managed Services GmbH | Deutschherrnstr. 15-19 | D-90429 Nuernberg >>>>>> Tel: +49 911 92885-0 | Fax: +49 911 92885-77 >>>>>> CEO: Julian Hein, Bernd Erk | AG Nuernberg HRB25207 >>>>>> https://netways.de | sebastian.saemann at netways.de >>>>>> >>>>>> ** NETWAYS Web Services - https://nws.netways.de ** >>>>> >>>>> — >>>>> Slawek Kaplonski >>>>> Principal software engineer >>>>> Red Hat >>>>> >>>>> >>>> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralonsoh at redhat.com Tue Sep 1 06:57:55 2020 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Tue, 1 Sep 2020 08:57:55 +0200 Subject: [neutron] Bug deputy report August 24 - 30 Message-ID: Hello: This is the list of bugs in the period from August 24 to August 30. Critical: (none) High: - https://bugs.launchpad.net/neutron/+bug/1892861: [neutron-tempest-plugin] If paramiko SSH client connection fails because of authentication, cannot reconnect (unassigned) - https://bugs.launchpad.net/neutron/+bug/1893314: set neutron quota not take effect (ZhouHeng, https://review.opendev.org/#/c/748578/) Medium: - https://bugs.launchpad.net/neutron/+bug/1892496: 500 on SG deletion: Cannot delete or update a parent row (unassigned) - https://bugs.launchpad.net/neutron/+bug/1888878: Data not present in OVN IDL when requested (fix released) - https://bugs.launchpad.net/neutron/+bug/1892846: AttributeError in l3-agent changing HA router to non-HA (Slawek, https://review.opendev.org/#/c/747922/) Low: - https://bugs.launchpad.net/neutron/+bug/1892741: neutron-sriov-nic-agent cannot be disabled (unassigned) - https://bugs.launchpad.net/neutron/+bug/1893188: [tempest] Randomize subnet CIDR to avoid test clashes (Lajos, https://review.opendev.org/#/c/749041/) Wishlist: - https://bugs.launchpad.net/neutron/+bug/1893625: Implement router gateway IP QoS in OVN backend (zhanghao, https://review.opendev.org/#/c/749012/) Incomplete: - https://bugs.launchpad.net/neutron/+bug/1893015: ping with large package size fails Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyppe at gmail.com Tue Sep 1 09:22:32 2020 From: tonyppe at gmail.com (Tony Pearce) Date: Tue, 1 Sep 2020 17:22:32 +0800 Subject: [Magnum][Kayobe] Magnum Kubernetes clusters failing to be created (bugs?) Message-ID: Hi guys, I hope you are all keeping safe and well at the moment. I am trying to launch Kubernetes clusters into Openstack Train which has been deployed via Kayobe (Kayobe as I understand is a wrapper for kolla-ansible). There have been a few strange issues here and I've struggled to isolate them. These issues started recently after a fresh Openstack deployment some months ago (around February 2020) to give some context. This Openstack is not "live" as I've been trying to get to the bottom of the issues: Issue 1. When trying to launch a cluster we get error "Resource Create Failed: Forbidden: Resources.Kube Masters.Resources[0].Resources.Kube-Master: Only Volume-Backed Servers Are Allowed For Flavors With Zero Disk. " Issue 2. After successfully creating a cluster of a smaller node size, the "resize cluster" is failing (however update the cluster is working). Some background on this specific environment: Deployed via Kayobe, with these components: Cinder, Designate, iscsid, Magnum, Multipathd, neutron provider networks The Cinder component integrates with iSCSI SAN storage using the Nimble driver. This is the only storage. In order to prevent Openstack from allocating Compute node local HDD as instance storage, I have all flavours configured with root disk / ephemeral disk / swap disk = "0MB". This then results in all instance data being stored on the backend Cinder storage appliance. I was able to get a cluster deployed by first creating the template as needed, then when launching the cluster Horizon prompts you for items already there in the template such as number of nodes, node flavour and labels etc. 
I re-supplied all of the info (as to duplicate it) and then tried creating the cluster. After many many times trying over the course of a few weeks to a few months it was successful. I was then able to work around the issue #2 above to get it increased in size. When looking at the logs for issue #2, it looks like some content is missing in the API but I am not certain. I will include a link to the pastebin below [1]. When trying to resize the cluster, Horizon gives error: "Error: Unable to resize given cluster id: 99693dbf-160a-40e0-9ed4-93f3370367ee". I then searched the controller node /var/log directory for this ID and found "horizon.log [:error] [pid 25] Not Found: /api/container_infra/clusters/99693dbf-160a-40e0-9ed4-93f3370367ee/resize". Going to the Horizon menu "update cluster" allows you to increase the number of nodes and then save/apply the config which does indeed resize the cluster. Regarding issue #1, we've been unable to deploy a cluster in a new project and the error is hinting it relates to the flavours having 0MB disk specified, though this error is new and we've been successful previously with deploying clusters (albeit with the hit-and-miss experiences) using the flavour with 0MB disk as described above. Again I searched for the (stack) ID after the failure, in the logs on the controller and I obtained not much more than the error already seen with Horizon [2]. I was able to create new flavours with root disk = 15GB and then successfully deploy a cluster on the next immediate try. Update cluster from 3 nodes to 6 nodes was also immediately successful. However I see the compute nodes "used" disk space increasing after increasing the cluster size which is an issue as the compute node has very limited HDD capacity (32GB SD card). At this point I also checked 1) previously installed cluster using the 0MB disk flavour and 2) new instances using the 0MB disk flavour. I notice that the previous cluster is having host storage allocated but while the new instance is not having host storage allocated. So the cluster create success is using flavour with disk = 0MB while the result is compute HDD storage being consumed. So with the above, please may I clarify on the following? 1. It seems that 0MB disk flavours may not be supported with magnum now? Could the experts confirm? :) Is there another way that I should be configuring this so that compute node disk is not being consumed (because it is slow and has limited capacity). 2. The issue #1 looks like a bug to me, is it known? If not, is this mail enough to get it realised? Pastebin links as mentioned [1] http://paste.openstack.org/show/797316/ [2] http://paste.openstack.org/show/797318/ Many thanks, Regards, Tony Pearce -------------- next part -------------- An HTML attachment was scrubbed... URL: From e0ne at e0ne.info Tue Sep 1 10:21:31 2020 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Tue, 1 Sep 2020 13:21:31 +0300 Subject: [horizon] horizon-integration-tests job is broken In-Reply-To: References: Message-ID: Fix [3] is merged. [3] https://review.opendev.org/748881 Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Mon, Aug 31, 2020 at 3:51 PM Ivan Kolodyazhny wrote: > Hi team, > > Please, do not trigger recheck if horizon-integration-tests failed like > [1]: > ______________ TestDashboardHelp.test_dashboard_help_redirection > _______________ > 'NoneType' object is not iterable > > While I'm trying to figure out what is happening there [2], any help with > troubleshooting is welcome. 
> > > [1] > https://51bb980dc10c72928109-9873e0e5415ff38d9f1a5cc3b1681b19.ssl.cf1.rackcdn.com/744847/2/check/horizon-integration-tests/62ace86/job-output.txt > > [2] https://review.opendev.org/#/c/749011 > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Tue Sep 1 12:22:38 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Tue, 1 Sep 2020 14:22:38 +0200 Subject: [tripleo][ussuri][undercloud][tripleo_rabbitmq] do not start after reboot Message-ID: Hi all, I think this issue might be known to RHOSP16 [1], but it is same in RDO - ussuri with centos8. if anything is needed, I could try to help/test. my workaround is to run openstack undercloud install --force-stack-update [1] https://access.redhat.com/solutions/5269241 -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Tue Sep 1 12:51:34 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 1 Sep 2020 08:51:34 -0400 Subject: senlin auto scaling question In-Reply-To: References: Message-ID: Hi Satish, I'm interested by this, did you end up finding a solution for this? Thanks, Mohammed On Thu, Aug 27, 2020 at 1:54 PM Satish Patel wrote: > > Folks, > > I have created very simple cluster using following command > > openstack cluster create --profile myserver --desired-capacity 2 > --min-size 2 --max-size 3 --strict my-asg > > It spun up 2 vm immediately now because the desired capacity is 2 so I > am assuming if any node dies in the cluster it should spin up node to > make count 2 right? > > so i killed one of node with "nove delete " but > senlin didn't create node automatically to make desired capacity 2 (In > AWS when you kill node in ASG it will create new node so is this > senlin different then AWS?) > -- Mohammed Naser VEXXHOST, Inc. From fungi at yuggoth.org Tue Sep 1 13:17:53 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 1 Sep 2020 13:17:53 +0000 Subject: [neutron][security-sig] Bug deputy report August 24 - 30 In-Reply-To: References: Message-ID: <20200901131752.pdd7vs3aihz6xmyu@yuggoth.org> Hopefully it's not a lot of additional work, but the VMT would be thrilled if projects would also keep Public Security vulnerability reports in mind and try to wrap up any they can. For example, the Neutron project on Launchpad has 9 currently unresolved, some opened more than 3 years ago: I'm willing to bet at least a few are either fixed now, related to deprecated/removed functionality, or simply unreproducible. And if they're still real bugs but don't represent an actual exploitable vulnerability, that's good to know too (in which case we'd just switch them to regular Public bugs so the VMT no longer needs to keep track of those). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From zigo at debian.org Tue Sep 1 14:35:45 2020 From: zigo at debian.org (Thomas Goirand) Date: Tue, 1 Sep 2020 16:35:45 +0200 Subject: [nova] providing a local disk to a Nova instance Message-ID: Hi Nova team! tl;dr: we would like to contribute giving instances access to physical block devices directly on the compute hosts. Would this be accepted? 
Longer version: About 3 or 4 years ago, someone wrote a spec, so we'd be able to provide a local disk of a compute, directly to a VM to use. This was then rejected, because at the time, Cinder had the blockdevice drive, which was more or less achieving the same thing. Unfortunately, because nobody was maintaining the blockdevice driver in Cinder, and because there was no CI that could test it, the driver got removed. We've investigated how we could otherwise implement it, and one solution would be to use Cinder, but then we'd be going through an iSCSI export, which would drastically reduce performances. Another solution would be to manage KVM instances by hand, not touching anything to libvirt and/or OpenVSwitch, but then we would loose the ease of using the Nova API, so we would prefer to avoid this direction. So we (ie: employees in my company) need to ask the Nova team: would you consider a spec to do what was rejected before, since there's now no other good enough alternative? Our current goal is to be able to provide a disk directly to a VM, so that we could build Ceph clusters with an hyper-converged model (ie: storage hosted on the compute nodes). In this model, we wouldn't need live-migration of a VM with an attached physical block device (though the feature could be added on a later stage). Before we start investigating how this can be done, I need to know if this has at least some chances to be accepted or not. If there is, then we'll probably start an experimental patch locally, then write a spec to properly start this project. So please let us know. Cheers, Thomas Goirand (zigo) From sosogh at 126.com Tue Sep 1 07:23:51 2020 From: sosogh at 126.com (sosogh) Date: Tue, 1 Sep 2020 15:23:51 +0800 (CST) Subject: [kolla] questions when using external mysql Message-ID: <5c644eb6.3cda.174488cf629.Coremail.sosogh@126.com> Hi list: I want to use kolla-ansible to deploy openstack , but using external mysql. I am following these docs: https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html https://docs.openstack.org/kolla-ansible/latest/reference/databases/external-mariadb-guide.html. I have some questions: ################ ## Question 1 ## ################ According to the offical doc , if setting it in inventory file(multinode), kolla-ansible -i ./multinode deploy will throw out error: I guest when kolla-ansible running the playbook against myexternalmariadbloadbalancer.com , the """register: find_custom_fluentd_inputs""" in """TASK [common : Find custom fluentd input config files]""" maybe null . ################ ## Question 2 ## ################ According to the offical doc , If the MariaDB username is not root, set database_username in /etc/kolla/globals.yml file: But in kolla-ansible/ansible/roles/xxxxxx/tasks/bootstrap.yml , they use ''' login_user: "{{ database_user }}" ''' , for example : So at last , I took the following steps: 1. """not""" setting [mariadb] in inventory file(multinode) 2. set "database_user: openstack" for "privillegeduser" PS: My idea is that if using an external ready-to-use mysql (cluster), it is enough to tell kolla-ansible only the address/user/password of the external DB. i.e. setting them in the file /etc/kolla/globals.yml and passwords.yml , no need to add it into inventory file(multinode) Finally , it is successful to deploy openstack via kolla-ansible . So far I have not found any problems. Are the steps what I took good ( enough ) ? Thank you ! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 2938 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 95768 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 2508 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 22016 bytes Desc: not available URL: From jimmy at openstack.org Tue Sep 1 15:00:14 2020 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 1 Sep 2020 10:00:14 -0500 Subject: 2020 Virtual Summit: Forum Submissions Now Accepted Message-ID: <591437EB-0E99-4F7E-9ED9-482CCEB77C82@getmailspring.com> Hello Everyone! We are now accepting Forum [1] submissions for the 2020 Virtual Open Infrastructure Summit [2]. Please submit your ideas through the Summit CFP tool [3] through September 14th. Don't forget to put your brainstorming etherpad up on the Shanghai Forum page [4]. This is not a classic conference track with speakers and presentations. OSF community members (participants in development teams, operators, working groups, SIGs, and other interested individuals) discuss the topics they want to cover and get alignment on and we welcome your participation. The Forum is your opportunity to help shape the development of future project releases. More information about the Forum [1]. The timeline for submissions is as follows: Aug 31st | Formal topic submission tool opens: https://cfp.openstack.org. Sep 14th | Deadline for proposing Forum topics. Scheduling committee meeting to make draft agenda. Sep 21st | Draft Forum schedule published. Crowd sourced session conflict detection. Forum promotion begins. Sept 28th | Forum schedule final Oct 19th | Forum begins! If you have questions or concerns, please reach out to speakersupport at openstack.org. Cheers, Jimmy [1] https://wiki.openstack.org/wiki/Forum [2] https://www.openstack.org/summit/2020/ [3] https://cfp.openstack.org [4]https://wiki.openstack.org/wiki/Forum/Virtual2020 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Tue Sep 1 15:13:31 2020 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 1 Sep 2020 10:13:31 -0500 Subject: 2020 Virtual Summit: Forum Submissions Now Accepted In-Reply-To: <591437EB-0E99-4F7E-9ED9-482CCEB77C82@getmailspring.com> References: <591437EB-0E99-4F7E-9ED9-482CCEB77C82@getmailspring.com> Message-ID: <5CFEF7CB-7732-4255-B3FE-CF1105FB994D@getmailspring.com> Please replace the word "Shanghai" with the word "Virtual" :) Cheers, Jimmy "Bad at Email" McArthur On Sep 1 2020, at 10:00 am, Jimmy McArthur wrote: > Hello Everyone! > > We are now accepting Forum [1] submissions for the 2020 Virtual Open Infrastructure Summit [2]. Please submit your ideas through the Summit CFP tool [3] through September 14th. Don't forget to put your brainstorming etherpad up on the Shanghai Forum page [4]. > > This is not a classic conference track with speakers and presentations. OSF community members (participants in development teams, operators, working groups, SIGs, and other interested individuals) discuss the topics they want to cover and get alignment on and we welcome your participation. The Forum is your opportunity to help shape the development of future project releases. 
More information about the Forum [1]. > > The timeline for submissions is as follows: > > Aug 31st | Formal topic submission tool opens: https://cfp.openstack.org. > Sep 14th | Deadline for proposing Forum topics. Scheduling committee meeting to make draft agenda. > Sep 21st | Draft Forum schedule published. Crowd sourced session conflict detection. Forum promotion begins. > Sept 28th | Forum schedule final > Oct 19th | Forum begins! > > If you have questions or concerns, please reach out to speakersupport at openstack.org. > > Cheers, > Jimmy > > [1] https://wiki.openstack.org/wiki/Forum > [2] https://www.openstack.org/summit/2020/ > [3] https://cfp.openstack.org > [4]https://wiki.openstack.org/wiki/Forum/Virtual2020 -------------- next part -------------- An HTML attachment was scrubbed... URL: From alifshit at redhat.com Tue Sep 1 15:15:53 2020 From: alifshit at redhat.com (Artom Lifshitz) Date: Tue, 1 Sep 2020 11:15:53 -0400 Subject: [nova] providing a local disk to a Nova instance In-Reply-To: References: Message-ID: IIUC one of our (Red Hat's) customer-facing folks brought us a similar question recently. In their case they wanted to use PCI passthrough to pass an NVMe disk to an instance. This is technically possible, but there would be major privacy concerns in a multi-tenant cloud, as Nova currently has no way of cleaning up a disk after a VM has left it, so either the guest OS would have to do it itself, or any subsequent VM using that disk would have access to all of the previous VM's data (this could be mitigated by full-disk encryption, though). Cleaning up disks after VM's would probably fall more within Cybor's scope... There's also the question of instance move operations like live and cold migrations - what happens to the passed-through disk in those cases? Does Nova have to copy it to the destination? I think those would be fairly easily addressable though (there are no major technical or political challenges, it's just a matter of someone writing the code and reviewing). The disk cleanup thing is going to be harder, I suspect - more politically than technically. It's a bit of a chicken and egg problem with Nova and Cyborg, at the moment. Nova can refuse features as being out of scope and punt them to Cyborg, but I'm not sure how production-ready Cyborg is... On Tue, Sep 1, 2020 at 10:41 AM Thomas Goirand wrote: > > Hi Nova team! > > tl;dr: we would like to contribute giving instances access to physical > block devices directly on the compute hosts. Would this be accepted? > > Longer version: > > About 3 or 4 years ago, someone wrote a spec, so we'd be able to provide > a local disk of a compute, directly to a VM to use. This was then > rejected, because at the time, Cinder had the blockdevice drive, which > was more or less achieving the same thing. Unfortunately, because nobody > was maintaining the blockdevice driver in Cinder, and because there was > no CI that could test it, the driver got removed. > > We've investigated how we could otherwise implement it, and one solution > would be to use Cinder, but then we'd be going through an iSCSI export, > which would drastically reduce performances. > > Another solution would be to manage KVM instances by hand, not touching > anything to libvirt and/or OpenVSwitch, but then we would loose the ease > of using the Nova API, so we would prefer to avoid this direction. 
> > So we (ie: employees in my company) need to ask the Nova team: would you > consider a spec to do what was rejected before, since there's now no > other good enough alternative? > > Our current goal is to be able to provide a disk directly to a VM, so > that we could build Ceph clusters with an hyper-converged model (ie: > storage hosted on the compute nodes). In this model, we wouldn't need > live-migration of a VM with an attached physical block device (though > the feature could be added on a later stage). > > Before we start investigating how this can be done, I need to know if > this has at least some chances to be accepted or not. If there is, then > we'll probably start an experimental patch locally, then write a spec to > properly start this project. So please let us know. > > Cheers, > > Thomas Goirand (zigo) > From skaplons at redhat.com Tue Sep 1 16:44:36 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 1 Sep 2020 18:44:36 +0200 Subject: [neutron][security-sig] Bug deputy report August 24 - 30 In-Reply-To: <20200901131752.pdd7vs3aihz6xmyu@yuggoth.org> References: <20200901131752.pdd7vs3aihz6xmyu@yuggoth.org> Message-ID: <20200901164436.uxshmwdjteudvedo@skaplons-mac> Hi, On Tue, Sep 01, 2020 at 01:17:53PM +0000, Jeremy Stanley wrote: > Hopefully it's not a lot of additional work, but the VMT would be > thrilled if projects would also keep Public Security vulnerability > reports in mind and try to wrap up any they can. For example, the > Neutron project on Launchpad has 9 currently unresolved, some opened > more than 3 years ago: > > Thx for the link Jeremy. I will check those bugs. > > I'm willing to bet at least a few are either fixed now, related to > deprecated/removed functionality, or simply unreproducible. And if > they're still real bugs but don't represent an actual exploitable > vulnerability, that's good to know too (in which case we'd just > switch them to regular Public bugs so the VMT no longer needs to > keep track of those). > -- > Jeremy Stanley -- Slawek Kaplonski Principal software engineer Red Hat From kennelson11 at gmail.com Tue Sep 1 17:02:27 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 1 Sep 2020 10:02:27 -0700 Subject: Mentoring Boston University Students Message-ID: Hello! As you may or may not know, the last two years various community members have mentored students from North Dakota State University for a semester to work on projects in OpenStack. Recently, I learned of a similar program at Boston University and they are still looking for mentors interested for the upcoming semester. Essentially you would have 5 to 7 students for 13 weeks to mentor and work on some feature or effort in your project. The time to apply is running out however as the deadline is Sept 3rd. If you are interested, please let me know ASAP! I am happy to help get the students up to speed with the community and getting their workspaces set up, but the actual work they would do is more up to you :) -Kendall (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From tim.bell at cern.ch Tue Sep 1 17:47:28 2020 From: tim.bell at cern.ch (Tim Bell) Date: Tue, 1 Sep 2020 19:47:28 +0200 Subject: [nova] providing a local disk to a Nova instance In-Reply-To: References: Message-ID: > On 1 Sep 2020, at 17:15, Artom Lifshitz wrote: > > IIUC one of our (Red Hat's) customer-facing folks brought us a similar > question recently. In their case they wanted to use PCI passthrough to > pass an NVMe disk to an instance. 
This is technically possible, but > there would be major privacy concerns in a multi-tenant cloud, as Nova > currently has no way of cleaning up a disk after a VM has left it, so > either the guest OS would have to do it itself, or any subsequent VM > using that disk would have access to all of the previous VM's data > (this could be mitigated by full-disk encryption, though). Cleaning up > disks after VM's would probably fall more within Cybor's scope... > > There's also the question of instance move operations like live and > cold migrations - what happens to the passed-through disk in those > cases? Does Nova have to copy it to the destination? I think those > would be fairly easily addressable though (there are no major > technical or political challenges, it's just a matter of someone > writing the code and reviewing). > > The disk cleanup thing is going to be harder, I suspect - more > politically than technically. It's a bit of a chicken and egg problem > with Nova and Cyborg, at the moment. Nova can refuse features as being > out of scope and punt them to Cyborg, but I'm not sure how > production-ready Cyborg is... > Does the LVM pass through option help for direct attach of a local disk ? https://cloudnull.io/2017/12/nova-lvm-an-iop-love-story/ Tim > On Tue, Sep 1, 2020 at 10:41 AM Thomas Goirand wrote: >> >> Hi Nova team! >> >> tl;dr: we would like to contribute giving instances access to physical >> block devices directly on the compute hosts. Would this be accepted? >> >> Longer version: >> >> About 3 or 4 years ago, someone wrote a spec, so we'd be able to provide >> a local disk of a compute, directly to a VM to use. This was then >> rejected, because at the time, Cinder had the blockdevice drive, which >> was more or less achieving the same thing. Unfortunately, because nobody >> was maintaining the blockdevice driver in Cinder, and because there was >> no CI that could test it, the driver got removed. >> >> We've investigated how we could otherwise implement it, and one solution >> would be to use Cinder, but then we'd be going through an iSCSI export, >> which would drastically reduce performances. >> >> Another solution would be to manage KVM instances by hand, not touching >> anything to libvirt and/or OpenVSwitch, but then we would loose the ease >> of using the Nova API, so we would prefer to avoid this direction. >> >> So we (ie: employees in my company) need to ask the Nova team: would you >> consider a spec to do what was rejected before, since there's now no >> other good enough alternative? >> >> Our current goal is to be able to provide a disk directly to a VM, so >> that we could build Ceph clusters with an hyper-converged model (ie: >> storage hosted on the compute nodes). In this model, we wouldn't need >> live-migration of a VM with an attached physical block device (though >> the feature could be added on a later stage). >> >> Before we start investigating how this can be done, I need to know if >> this has at least some chances to be accepted or not. If there is, then >> we'll probably start an experimental patch locally, then write a spec to >> properly start this project. So please let us know. >> >> Cheers, >> >> Thomas Goirand (zigo) >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smooney at redhat.com Tue Sep 1 18:30:20 2020 From: smooney at redhat.com (Sean Mooney) Date: Tue, 01 Sep 2020 19:30:20 +0100 Subject: [nova] providing a local disk to a Nova instance In-Reply-To: References: Message-ID: On Tue, 2020-09-01 at 19:47 +0200, Tim Bell wrote: > > On 1 Sep 2020, at 17:15, Artom Lifshitz wrote: > > > > IIUC one of our (Red Hat's) customer-facing folks brought us a similar > > question recently. In their case they wanted to use PCI passthrough to > > pass an NVMe disk to an instance. This is technically possible, but > > there would be major privacy concerns in a multi-tenant cloud, as Nova > > currently has no way of cleaning up a disk after a VM has left it, so > > either the guest OS would have to do it itself, or any subsequent VM > > using that disk would have access to all of the previous VM's data > > (this could be mitigated by full-disk encryption, though). Cleaning up > > disks after VM's would probably fall more within Cybor's scope... > > > > There's also the question of instance move operations like live and > > cold migrations - what happens to the passed-through disk in those > > cases? Does Nova have to copy it to the destination? I think those > > would be fairly easily addressable though (there are no major > > technical or political challenges, it's just a matter of someone > > writing the code and reviewing). > > > > The disk cleanup thing is going to be harder, I suspect - more > > politically than technically. It's a bit of a chicken and egg problem > > with Nova and Cyborg, at the moment. Nova can refuse features as being > > out of scope and punt them to Cyborg, but I'm not sure how > > production-ready Cyborg is... on the cyborg front i suggested adding a lvm driver to cyborg a few years ago for this usecase. i wanted to use it for testing in the gate since it would require no hardware and also support "programming" by copying a glance image to the volmen and cleaning by erasing the vloumne when the vm is deleted. it would solve the local attach disk usecase too in a way. 1 by providing lvm volumes as disks to the guest and two by extending nova to support host block devices form cybrog enabling other driver that for example just alloacted an entire blockdevci instead of a volume to be written without needing nova changes. but the production readyness was the catch 22 there. because it was not considered production ready i caned the idea of an lvm driver because there was no path to deliverin gthis to customer downstream so i did not spend time working on the idea. i still think its the right way to go but currently that is not aligned with the feature im working on. > > > > Does the LVM pass through option help for direct attach of a local disk ? > > https://cloudnull.io/2017/12/nova-lvm-an-iop-love-story/ not really since the lvm image backend for nova just use lvm volumns instead of the root/ephmeral disks in the flavor its not actully providing a way to attach a local disk or partion so is not really the same usecase. at least form a down stream peresective its also not supported in the redhat product. from an upstream perfective it is not tested in the gate as far as i am aware so the maintance of that backend is questionably although if we get bug reports wew will fix them but novas lvm backend is not really a solution here i suspect. that said it did in the past out perform the default flat/qcow backedn for write intensive workloads. not sure if that has changed over time but there were pros and cons to it. 
> > Tim > > On Tue, Sep 1, 2020 at 10:41 AM Thomas Goirand wrote: > > > > > > Hi Nova team! > > > > > > tl;dr: we would like to contribute giving instances access to physical > > > block devices directly on the compute hosts. Would this be accepted? > > > > > > Longer version: > > > > > > About 3 or 4 years ago, someone wrote a spec, so we'd be able to provide > > > a local disk of a compute, directly to a VM to use. This was then > > > rejected, because at the time, Cinder had the blockdevice drive, which > > > was more or less achieving the same thing. Unfortunately, because nobody > > > was maintaining the blockdevice driver in Cinder, and because there was > > > no CI that could test it, the driver got removed. > > > > > > We've investigated how we could otherwise implement it, and one solution > > > would be to use Cinder, but then we'd be going through an iSCSI export, > > > which would drastically reduce performances. > > > > > > Another solution would be to manage KVM instances by hand, not touching > > > anything to libvirt and/or OpenVSwitch, but then we would loose the ease > > > of using the Nova API, so we would prefer to avoid this direction. > > > > > > So we (ie: employees in my company) need to ask the Nova team: would you > > > consider a spec to do what was rejected before, since there's now no > > > other good enough alternative? > > > > > > Our current goal is to be able to provide a disk directly to a VM, so > > > that we could build Ceph clusters with an hyper-converged model (ie: > > > storage hosted on the compute nodes). In this model, we wouldn't need > > > live-migration of a VM with an attached physical block device (though > > > the feature could be added on a later stage). > > > > > > Before we start investigating how this can be done, I need to know if > > > this has at least some chances to be accepted or not. If there is, then > > > we'll probably start an experimental patch locally, then write a spec to > > > properly start this project. So please let us know. > > > > > > Cheers, > > > > > > Thomas Goirand (zigo) > > > > > > > > > From stig at stackhpc.com Tue Sep 1 20:20:44 2020 From: stig at stackhpc.com (Stig Telfer) Date: Tue, 1 Sep 2020 21:20:44 +0100 Subject: [scientific-sig] SIG Meeting: Ironic kexec, CANOPIE-HPC and the PTG Message-ID: Hi All - We have a Scientific SIG meeting on IRC channel #openstack-meeting at 2100 UTC (about 45 minutes). Everyone is welcome. Agenda is here: https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_September_1st_2020 Today we'd like to talk about supporting the development of kexec support for Ironic deployments. Also some up-coming CFPs and PTG participation. Cheers, Stig From kennelson11 at gmail.com Tue Sep 1 21:34:37 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 1 Sep 2020 14:34:37 -0700 Subject: [all][PTL][release] Victoria Cycle Highlights Message-ID: Hello Everyone! It's time to start thinking about calling out 'cycle-highlights' in your deliverables! As PTLs, you probably get many pings and emails from various parties (marketing, management, journalists, etc) asking for highlights of what is new and what significant changes are coming in the new release. By putting them all in the same place it makes them easy to reference because they get compiled into a pretty website like this from the last few releases: Rocky[1], Stein[2], Train[3], Ussuri[4]. As usual, we don't need a fully fledged marketing message, just a few highlights (3-4 ideally), from each project team. 
Looking through your release notes is a good place to start. *The deadline for cycle highlights is the end of the R-5 week [5] on Sept 11th.* How To Reminder: ------------------------- Simply add them to the deliverables/train/$PROJECT.yaml in the openstack/releases repo like this: cycle-highlights: - Introduced new service to use unused host to mine bitcoin. The formatting options for this tag are the same as what you are probably used to with Reno release notes. Also, you can check on the formatting of the output by either running locally: tox -e docs And then checking the resulting doc/build/html/train/highlights.html file or the output of the build-openstack-sphinx-docs job under html/train/ highlights.html. Can't wait to see what you've accomplished! -Kendall Nelson (diablo_rojo) [1] https://releases.openstack.org/rocky/highlights.html [2] https://releases.openstack.org/stein/highlights.html [3] https://releases.openstack.org/train/highlights.html [4] https://releases.openstack.org/ussuri/highlights.html [5] htt https://releases.openstack.org/victoria/schedule.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From gouthampravi at gmail.com Wed Sep 2 01:56:28 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Tue, 1 Sep 2020 17:56:28 -0800 Subject: Mentoring Boston University Students In-Reply-To: References: Message-ID: Hi Kendall, We'd like to help and have help in the manila team. We have a few projects [1] where the on-ramp may be relatively easy - I can work with you and define them. How do we apply? Thanks, Goutham [1] https://etherpad.opendev.org/p/manila-todos On Tue, Sep 1, 2020 at 9:08 AM Kendall Nelson wrote: > Hello! > > As you may or may not know, the last two years various community members > have mentored students from North Dakota State University for a semester to > work on projects in OpenStack. Recently, I learned of a similar program at > Boston University and they are still looking for mentors interested for the > upcoming semester. > > Essentially you would have 5 to 7 students for 13 weeks to mentor and work > on some feature or effort in your project. > > The time to apply is running out however as the deadline is Sept 3rd. If > you are interested, please let me know ASAP! I am happy to help get the > students up to speed with the community and getting their workspaces set > up, but the actual work they would do is more up to you :) > > -Kendall (diablo_rojo) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anilj.mailing at gmail.com Wed Sep 2 07:12:33 2020 From: anilj.mailing at gmail.com (Anil Jangam) Date: Wed, 2 Sep 2020 00:12:33 -0700 Subject: Graceful stopping of RabbitMQ AMQP notification listener Message-ID: Hi, I have coded OpenStack AMQP listener following the example and it is working fine. https://github.com/gibizer/nova-notification-demo/blob/master/ws_forwarder.py The related code snippets of the NotificationHandler class are shown as follows. 
# Initialize the AMQP listener def init(self, cluster_ip, user, password, port): cfg.CONF() cluster_url = "rabbit://" + user + ":" + password + "@" + cluster_ip + ":" + port + "/" transport = oslo_messaging.get_notification_transport(cfg.CONF, url=cluster_url) targets = [ oslo_messaging.Target(topic='versioned_notifications'), ] endpoints = [self.__endpoint] # Initialize the notification listener try: self.__amqp_listener = oslo_messaging.get_notification_listener(transport, targets, endpoints, executor='threading') except NotImplementedError as err: LOGGER.error("Failed to initialize the notification listener {}".format(err)) return False LOGGER.debug("Initialized the notification listener {}".format(cluster_url)) return True # Arm the compute event listeners def start_amqp_event_listener(self): # Start the notification handler LOGGER.debug("Started the OpenStack notification handler") self.__amqp_listener.start() # Disarm the compute event listeners def stop_amqp_event_listener(self): LOGGER.debug("Stopping the OpenStack notification handler") if self.__amqp_listener is not None: self.__amqp_listener.stop() I am using this interface from a new process handler function, however, when I invoke the stop_amqp_eent_listener() method, my process hangs. It does not terminate naturally. I verified that the self.__amqp_listener.stop() function is not returning. Is there anything missing in this code? Is there any specific consideration when calling the listener from a new process? Can someone provide a clue? # Stop the worker def stop(self): # Stop the AMQP notification handler self.__amqp_handler.stop_amqp_event_listener() LOGGER.debug("Stopped the worker for {}".format(self.__ops_conn_info.cluster_ip)) /anil. -------------- next part -------------- An HTML attachment was scrubbed... URL: From yongiman at gmail.com Wed Sep 2 07:23:03 2020 From: yongiman at gmail.com (=?UTF-8?B?7ZWc7Iq57KeE?=) Date: Wed, 2 Sep 2020 16:23:03 +0900 Subject: [ovn][neutron][ussuri] How to configure for overlay in DPDK with OVN ? Message-ID: Hi, I’m trying to implement dpdk-ovs with ovn for our openstack environment. I create a bond interface(tunbound) with ovs and tun0 and tun1 are slave interfaces of the bond interface named tunbond. Bridge br-int fail_mode: secure datapath_type: netdev Port tunbond Interface tun0 type: dpdk options: {dpdk-devargs="0000:1a:00.1", n_rxq="2"} Interface tun1 type: dpdk options: {dpdk-devargs="0000:d8:00.1", n_rxq="2"} Port br-int Interface br-int type: internal Now, I need to configure remote-ovn options to the ovs. *I am confused how can I define the IP address of the bonded interface with ovs?* Theses are commands for connect remote ovn services. ovs-vsctl set open . external-ids:ovn-remote=tcp:{controller ip}:6642 ovs-vsctl set open . external-ids:ovn-encap-type=geneve *1* This below command make it me confused. ovs-vsctl set open . external-ids:ovn-encap-ip={local ip} How should I resolve this issue? Regards, John Haan -------------- next part -------------- An HTML attachment was scrubbed... URL: From yumeng_bao at yahoo.com Wed Sep 2 07:30:10 2020 From: yumeng_bao at yahoo.com (yumeng bao) Date: Wed, 2 Sep 2020 15:30:10 +0800 Subject: [ptg][cyborg] Cyborg Wallaby PTG schedule and resources References: <3F9B2BCB-2C30-4FB0-BF15-F03FAA039827.ref@yahoo.com> Message-ID: <3F9B2BCB-2C30-4FB0-BF15-F03FAA039827@yahoo.com> Hi team, The next virtual Wallaby PTG will be held next month from Oct 26 to Oct 30. I have booked the same time slots in the ethercalc[0] as our last PTG did. 
However, these are not fixed, if the booked time does not suit your time, Please don't hesitate to add your availability to the doodle[1]. BTW, PTG Topics are open to collect and you can add to the etherpad[2]. [0]https://ethercalc.openstack.org/7xp2pcbh1ncb. [1]https://doodle.com/poll/mrudbrn87vh48ayq [2]https://etherpad.opendev.org/p/cyborg-wallaby-goals Regards, Yumeng From feilong at catalyst.net.nz Wed Sep 2 07:41:07 2020 From: feilong at catalyst.net.nz (feilong) Date: Wed, 2 Sep 2020 19:41:07 +1200 Subject: Issue with heat and magnum In-Reply-To: <08439410328b4d1ab7ca684d5af2c7c7@stfc.ac.uk> References: <08439410328b4d1ab7ca684d5af2c7c7@stfc.ac.uk> Message-ID: <241e821b-4072-e4b8-0ef3-fae4169977a4@catalyst.net.nz> Hi Alexander, Firstly, could you please help understand your Heat version and Magnum version? Secondly, I don't really think it's related to Magnum. As long as Heat API get the request from Magnum, Magnum conductor just query Heat API to sync the status. On 14/08/20 10:49 pm, Alexander Dibbo - UKRI STFC wrote: > > Hi, > >   > > I am having an issue with magnum creating clusters when I have > multiple active heat-engine daemons running. > >   > > I get the following error in the heat engine logs: > > 2020-08-14 10:36:30.237 598383 INFO heat.engine.resource > [req-a2c862eb-370c-4e91-a2c6-dca32c7872ce - - - - -] signal > SoftwareDeployment "master_config_deployment" [67ba9ce2-aba5-4c15-a7ea > -6b774659a0e2] Stack > "kubernetes-test-26-3uzjqqob47fh-kube_masters-mhctjio2b4gh-0-pbhumflm5mn5" > [dc66e4d9-0c9b-4b18-a2c6-dd9724fa51a9] : Authentication cannot be > scoped to multiple target > s. Pick one of: project, domain, trust or unscoped > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource Traceback > (most recent call last): > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource   File > "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 2462, > in _handle_signal > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource     > signal_result = self.handle_signal(details) > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource   File > "/usr/lib/python2.7/site-packages/heat/engine/resources/openstack/heat/software_deployment.py", > line 514, in handle_signal > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource     > timeutils.utcnow().isoformat()) > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource   File > "/usr/lib/python2.7/site-packages/heat/rpc/client.py", line 788, in > signal_software_deployment > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource >     version='1.6') > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource   File > "/usr/lib/python2.7/site-packages/heat/rpc/client.py", line 89, in call > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource     return > client.call(ctxt, method, **kwargs) > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource   File > "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line > 165, in call > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource     msg_ctxt > = self.serializer.serialize_context(ctxt) > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource   File > "/usr/lib/python2.7/site-packages/heat/common/messaging.py", line 46, > in serialize_context > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource     _context > = ctxt.to_dict() > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource   File > "/usr/lib/python2.7/site-packages/heat/common/context.py", line 185, > in to_dict > 2020-08-14 10:36:30.237 598383 ERROR 
heat.engine.resource     'roles': > self.roles, > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource   File > "/usr/lib/python2.7/site-packages/heat/common/context.py", line 315, > in roles > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource     > self._load_keystone_data() > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource   File > "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 292, in > wrapped_f > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource     return > self.call(f, *args, **kw) > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource   File > "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 358, in call > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource     do = > self.iter(retry_state=retry_state) > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource   File > "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 319, in iter > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource     return > fut.result() > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource   File > "/usr/lib/python2.7/site-packages/concurrent/futures/_base.py", line > 422, in result > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource     return > self.__get_result() > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource   File > "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 361, in call > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource     result = > fn(*args, **kwargs) > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource   File > "/usr/lib/python2.7/site-packages/heat/common/context.py", line 306, > in _load_keystone_data > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource     auth_ref > = self.auth_plugin.get_access(self.keystone_session) > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource   File > "/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py", > line 134, in get_access > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource     > self.auth_ref = self.get_auth_ref(session) > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource   File > "/usr/lib/python2.7/site-packages/keystoneauth1/identity/generic/base.py", > line 208, in get_auth_ref > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource     return > self._plugin.get_auth_ref(session, **kwargs) > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource   File > "/usr/lib/python2.7/site-packages/keystoneauth1/identity/v3/base.py", > line 144, in get_auth_ref > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource     > message='Authentication cannot be scoped to multiple' > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource > AuthorizationFailure: Authentication cannot be scoped to multiple > targets. Pick one of: project, domain, trust or unscoped > 2020-08-14 10:36:30.237 598383 ERROR heat.engine.resource > 2020-08-14 10:36:30.890 598383 ERROR heat.engine.service > [req-a2c862eb-370c-4e91-a2c6-dca32c7872ce - - - - -] Unhandled error > in asynchronous task: ResourceFailure: AuthorizationFailure: > resources.master_config_deployment: Authentication cannot be scoped to > multiple targets. 
Pick one of: project, domain, trust or unscoped > 2020-08-14 10:36:30.890 598383 ERROR heat.engine.service Traceback > (most recent call last): > 2020-08-14 10:36:30.890 598383 ERROR heat.engine.service   File > "/usr/lib/python2.7/site-packages/heat/engine/service.py", line 132, > in log_exceptions > 2020-08-14 10:36:30.890 598383 ERROR heat.engine.service     gt.wait() > 2020-08-14 10:36:30.890 598383 ERROR heat.engine.service   File > "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 181, > in wait > 2020-08-14 10:36:30.890 598383 ERROR heat.engine.service     return > self._exit_event.wait() > 2020-08-14 10:36:30.890 598383 ERROR heat.engine.service   File > "/usr/lib/python2.7/site-packages/eventlet/event.py", line 132, in wait > 2020-08-14 10:36:30.890 598383 ERROR heat.engine.service     > current.throw(*self._exc) > 2020-08-14 10:36:30.890 598383 ERROR heat.engine.service   File > "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 221, > in main > 2020-08-14 10:36:30.890 598383 ERROR heat.engine.service     result = > function(*args, **kwargs) > 2020-08-14 10:36:30.890 598383 ERROR heat.engine.service   File > "/usr/lib/python2.7/site-packages/heat/engine/service.py", line 123, > in _start_with_trace > 2020-08-14 10:36:30.890 598383 ERROR heat.engine.service     return > func(*args, **kwargs) > 2020-08-14 10:36:30.890 598383 ERROR heat.engine.service   File > "/usr/lib/python2.7/site-packages/heat/engine/service.py", line 1871, > in _resource_signal > 2020-08-14 10:36:30.890 598383 ERROR heat.engine.service     > needs_metadata_updates = rsrc.signal(details, need_check) > 2020-08-14 10:36:30.890 598383 ERROR heat.engine.service   File > "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 2500, > in signal > 2020-08-14 10:36:30.890 598383 ERROR heat.engine.service     > self._handle_signal(details) > 2020-08-14 10:36:30.890 598383 ERROR heat.engine.service   File > "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 2480, > in _handle_signal > 2020-08-14 10:36:30.890 598383 ERROR heat.engine.service     raise failure > 2020-08-14 10:36:30.890 598383 ERROR heat.engine.service > ResourceFailure: AuthorizationFailure: > resources.master_config_deployment: Authentication cannot be scoped to > multiple targets. Pi > ck one of: project, domain, trust or unscoped > 2020-08-14 10:36:30.890 598383 ERROR heat.engine.service > >   > > Each of the individual heat-engine daemons create magnum clusters > correctly when they are the only ones online. > >   > > Attached are the heat and magnum config files. > >   > > Any ideas where to look would be appreciated? > >   > >   > > Regards > >   > > Alexander Dibbo – Cloud Architect / Cloud Operations Group Leader > > For STFC Cloud Documentation visit > https://stfc-cloud-docs.readthedocs.io > > > To raise a support ticket with the cloud team please email > cloud-support at gridpp.rl.ac.uk > > To receive notifications about the service please subscribe to our > mailing list at: https://www.jiscmail.ac.uk/cgi-bin/webadmin?A0=STFC-CLOUD > > To receive fast notifications or to discuss usage of the cloud please > join our Slack: https://stfc-cloud.slack.com/ > >   > > This email and any attachments are intended solely for the use of the > named recipients. If you are not the intended recipient you must not > use, disclose, copy or distribute this email or any of its attachments > and should notify the sender immediately and delete this email from > your system. 
UK Research and Innovation (UKRI) has taken every > reasonable precaution to minimise risk of this email or any > attachments containing viruses or malware but the recipient should > carry out its own virus and malware checks before opening the > attachments. UKRI does not accept any liability for any losses or > damages which the recipient may sustain due to presence of any > viruses. Opinions, conclusions or other information in this message > and attachments that are not related directly to UKRI business are > solely those of the author and do not represent the views of UKRI. > -- Cheers & Best regards, Feilong Wang (王飞龙) ------------------------------------------------------ Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From feilong at catalyst.net.nz Wed Sep 2 07:44:01 2020 From: feilong at catalyst.net.nz (feilong) Date: Wed, 2 Sep 2020 19:44:01 +1200 Subject: [Magnum][Kayobe] Magnum Kubernetes clusters failing to be created (bugs?) In-Reply-To: References: Message-ID: <2b199738-2fe4-631b-6f4e-4050d040abed@catalyst.net.nz> Hi Tony, My comments about your two issues: 1. I'm not sure it's a Magnum issue. Did you try to draft a simple Heat template to use that flavor and same image to create instance? Does it work? 2. When you say "resize cluster" failed, what's the error you got from magnum conductor log? On 1/09/20 9:22 pm, Tony Pearce wrote: > Hi guys, I hope you are all keeping safe and well at the moment.  > > I am trying to launch Kubernetes clusters into Openstack Train which > has been deployed via Kayobe (Kayobe as I understand is a wrapper for > kolla-ansible). There have been a few strange issues here and I've > struggled to isolate them. These issues started recently after a fresh > Openstack deployment some months ago (around February 2020) to give > some context. This Openstack is not "live" as I've been trying to get > to the bottom of the issues: > > Issue 1. When trying to launch a cluster we get error "Resource Create > Failed: Forbidden: Resources.Kube > Masters.Resources[0].Resources.Kube-Master: Only Volume-Backed Servers > Are Allowed For Flavors With Zero Disk. " > > Issue 2. After successfully creating a cluster of a smaller node size, > the "resize cluster" is failing (however update the cluster is working).  > > Some background on this specific environment:  > Deployed via Kayobe, with these components:  > Cinder, Designate, iscsid, Magnum, Multipathd, neutron provider networks > > The Cinder component integrates with iSCSI SAN storage using the > Nimble driver. This is the only storage. In order to prevent Openstack > from allocating Compute node local HDD as instance storage, I have all > flavours configured with root disk / ephemeral disk / swap disk = > "0MB". This then results in all instance data being stored on the > backend Cinder storage appliance.  > > I was able to get a cluster deployed by first creating the template as > needed, then when launching the cluster Horizon prompts you for items > already there in the template such as number of nodes, node flavour > and labels etc. I re-supplied all of the info (as to duplicate it) and > then tried creating the cluster. After many many times trying over the > course of a few weeks to a few months it was successful. 
I was then > able to work around the issue #2 above to get it increased in size.  > > When looking at the logs for issue #2, it looks like some content is > missing in the API but I am not certain. I will include a link to the > pastebin below [1].  > When trying to resize the cluster, Horizon gives error: "Error: Unable > to resize given cluster id: 99693dbf-160a-40e0-9ed4-93f3370367ee". I > then searched the controller node /var/log directory for this ID and > found "horizon.log  [:error] [pid 25] Not Found: > /api/container_infra/clusters/99693dbf-160a-40e0-9ed4-93f3370367ee/resize".  > Going to the Horizon menu "update cluster" allows you to increase the > number of nodes and then save/apply the config which does indeed > resize the cluster.  > > > > Regarding issue #1, we've been unable to deploy a cluster in a new > project and the error is hinting it relates to the flavours having 0MB > disk specified, though this error is new and we've been successful > previously with deploying clusters (albeit with the hit-and-miss > experiences) using the flavour with 0MB disk as described above. Again > I searched for the (stack) ID after the failure, in the logs on the > controller and I obtained not much more than the error already seen > with Horizon [2].  > > I was able to create new flavours with root disk = 15GB and then > successfully deploy a cluster on the next immediate try. Update > cluster from 3 nodes to 6 nodes was also immediately successful. > However I see the compute nodes "used" disk space increasing after > increasing the cluster size which is an issue as the compute node has > very limited HDD capacity (32GB SD card).  > > At this point I also checked 1) previously installed cluster using the > 0MB disk flavour and 2) new instances using the 0MB disk flavour. I > notice that the previous cluster is having host storage allocated but > while the new instance is not having host storage allocated. So the > cluster create success is using flavour with disk = 0MB while the > result is compute HDD storage being consumed.   > > So with the above, please may I clarify on the following?  > 1. It seems that 0MB disk flavours may not be supported with magnum > now? Could the experts confirm? :) Is there another way that I should > be configuring this so that compute node disk is not being consumed > (because it is slow and has limited capacity).  > 2. The issue #1 looks like a bug to me, is it known? If not, is this > mail enough to get it realised?  > > Pastebin links as mentioned  > [1] http://paste.openstack.org/show/797316/ > > [2] http://paste.openstack.org/show/797318/ > > > Many thanks, > > Regards, > > > Tony Pearce > -- Cheers & Best regards, Feilong Wang (王飞龙) ------------------------------------------------------ Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyppe at gmail.com Wed Sep 2 08:12:05 2020 From: tonyppe at gmail.com (Tony Pearce) Date: Wed, 2 Sep 2020 16:12:05 +0800 Subject: [Magnum][Kayobe] Magnum Kubernetes clusters failing to be created (bugs?) In-Reply-To: <2b199738-2fe4-631b-6f4e-4050d040abed@catalyst.net.nz> References: <2b199738-2fe4-631b-6f4e-4050d040abed@catalyst.net.nz> Message-ID: Hi Feilong, thank you for replying to my message. "1. I'm not sure it's a Magnum issue. 
Did you try to draft a simple Heat template to use that flavor and same image to create instance? Does it work?" No I didn't try it and dont think that I know enough about it to try. I am using Magnum which in turn signals Heat but I've never used Heat directly. When I use the new flavour with root disk = 15GB then I dont have any issue with launching the cluster. But I have a future issue of consuming all available disk space on the compute node. "2. When you say "resize cluster" failed, what's the error you got from magnum conductor log?" I did not see any error in conductor log. Only the Magnum API and Horizon log as mentioned. It looks like horizon was calling bad URLs so maybe this is the reason why there was no conductor log? Just to mention again though, that "update cluster" option is working fine to increase the size of the cluster. However my main issue here is with regards to the flavour being used. Can you or anyone confirm about the root disk = 0MB? OR can you or anyone share any information about how to utilise Magnum/Kubernetes without consuming Compute node HDD storage? I've been unable to achieve this and the docs do not give any information about this specifically (unless of course I have missed it?). The documentation says I can use any flavour [1]. [1] https://docs.openstack.org/magnum/latest/user/ Regards, Tony Pearce On Wed, 2 Sep 2020 at 15:44, feilong wrote: > Hi Tony, > > My comments about your two issues: > > 1. I'm not sure it's a Magnum issue. Did you try to draft a simple Heat > template to use that flavor and same image to create instance? Does it work? > > 2. When you say "resize cluster" failed, what's the error you got from > magnum conductor log? > > > On 1/09/20 9:22 pm, Tony Pearce wrote: > > Hi guys, I hope you are all keeping safe and well at the moment. > > I am trying to launch Kubernetes clusters into Openstack Train which has > been deployed via Kayobe (Kayobe as I understand is a wrapper for > kolla-ansible). There have been a few strange issues here and I've > struggled to isolate them. These issues started recently after a fresh > Openstack deployment some months ago (around February 2020) to give some > context. This Openstack is not "live" as I've been trying to get to the > bottom of the issues: > > Issue 1. When trying to launch a cluster we get error "Resource Create > Failed: Forbidden: Resources.Kube > Masters.Resources[0].Resources.Kube-Master: Only Volume-Backed Servers Are > Allowed For Flavors With Zero Disk. " > > Issue 2. After successfully creating a cluster of a smaller node size, the > "resize cluster" is failing (however update the cluster is working). > > Some background on this specific environment: > Deployed via Kayobe, with these components: > Cinder, Designate, iscsid, Magnum, Multipathd, neutron provider networks > > The Cinder component integrates with iSCSI SAN storage using the Nimble > driver. This is the only storage. In order to prevent Openstack from > allocating Compute node local HDD as instance storage, I have all flavours > configured with root disk / ephemeral disk / swap disk = "0MB". This then > results in all instance data being stored on the backend Cinder storage > appliance. > > I was able to get a cluster deployed by first creating the template as > needed, then when launching the cluster Horizon prompts you for items > already there in the template such as number of nodes, node flavour and > labels etc. I re-supplied all of the info (as to duplicate it) and then > tried creating the cluster. 
After many many times trying over the course of > a few weeks to a few months it was successful. I was then able to work > around the issue #2 above to get it increased in size. > > When looking at the logs for issue #2, it looks like some content is > missing in the API but I am not certain. I will include a link to the > pastebin below [1]. > When trying to resize the cluster, Horizon gives error: "Error: Unable to > resize given cluster id: 99693dbf-160a-40e0-9ed4-93f3370367ee". I then > searched the controller node /var/log directory for this ID and found > "horizon.log [:error] [pid 25] Not Found: > /api/container_infra/clusters/99693dbf-160a-40e0-9ed4-93f3370367ee/resize". > Going to the Horizon menu "update cluster" allows you to increase the > number of nodes and then save/apply the config which does indeed resize the > cluster. > > > > Regarding issue #1, we've been unable to deploy a cluster in a new project > and the error is hinting it relates to the flavours having 0MB disk > specified, though this error is new and we've been successful previously > with deploying clusters (albeit with the hit-and-miss experiences) using > the flavour with 0MB disk as described above. Again I searched for the > (stack) ID after the failure, in the logs on the controller and I obtained > not much more than the error already seen with Horizon [2]. > > I was able to create new flavours with root disk = 15GB and then > successfully deploy a cluster on the next immediate try. Update cluster > from 3 nodes to 6 nodes was also immediately successful. However I see the > compute nodes "used" disk space increasing after increasing the cluster > size which is an issue as the compute node has very limited HDD capacity > (32GB SD card). > > At this point I also checked 1) previously installed cluster using the 0MB > disk flavour and 2) new instances using the 0MB disk flavour. I notice that > the previous cluster is having host storage allocated but while the new > instance is not having host storage allocated. So the cluster create > success is using flavour with disk = 0MB while the result is compute HDD > storage being consumed. > > So with the above, please may I clarify on the following? > 1. It seems that 0MB disk flavours may not be supported with magnum now? > Could the experts confirm? :) Is there another way that I should be > configuring this so that compute node disk is not being consumed (because > it is slow and has limited capacity). > 2. The issue #1 looks like a bug to me, is it known? If not, is this mail > enough to get it realised? > > Pastebin links as mentioned > [1] http://paste.openstack.org/show/797316/ > > [2] http://paste.openstack.org/show/797318/ > > > Many thanks, > > Regards, > > > Tony Pearce > > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > ------------------------------------------------------ > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > ------------------------------------------------------ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Wed Sep 2 08:12:21 2020 From: akekane at redhat.com (Abhishek Kekane) Date: Wed, 2 Sep 2020 13:42:21 +0530 Subject: [ptg][glance] Wallaby PTG planning etherpad Message-ID: Hi All, The Wallaby PTG will be held next month from Oct 26 to Oct 30. 
I have booked some time slots in the ethercalc[0], however these are not fixed, if the booked time does not suit your time, please let me know. I have also created a planning etherpad [1] where you can add your topics for discussion during PTG. [0]https://ethercalc.openstack.org/7xp2pcbh1ncb. [1] https://etherpad.opendev.org/p/Glance-Wallaby-PTG-planning Thank you, Abhishek Kekane -------------- next part -------------- An HTML attachment was scrubbed... URL: From feilong at catalyst.net.nz Wed Sep 2 08:53:59 2020 From: feilong at catalyst.net.nz (feilong) Date: Wed, 2 Sep 2020 20:53:59 +1200 Subject: [Magnum][Kayobe] Magnum Kubernetes clusters failing to be created (bugs?) In-Reply-To: References: <2b199738-2fe4-631b-6f4e-4050d040abed@catalyst.net.nz> Message-ID: Hi Tony, Let me answer #2 first. Did you try to use CLI? Please make sure using the latest python-magnumclient version. It should work. As for the dashboard issue, please try to use the latest version of magnum-ui. I encourage using resize because the node update is not recommended to use. As for #1, I probably missed something. If the root disk=0MB, where the will operating system be installed? It would be nice if you can share your original requirement to help me understand the issue. e.g why do you have concern the node disk being used? On 2/09/20 8:12 pm, Tony Pearce wrote: > Hi Feilong, thank you for replying to my message. > > "1. I'm not sure it's a Magnum issue. Did you try to draft a simple > Heat template to use that flavor and same image to create instance? > Does it work?" > > No I didn't try it and dont think that I know enough about it to try. > I am using Magnum which in turn signals Heat but I've never used Heat > directly. When I use the new flavour with root disk = 15GB then I dont > have any issue with launching the cluster. But I have a future issue > of consuming all available disk space on the compute node. > > "2. When you say "resize cluster" failed, what's the error you got > from magnum conductor log?" > > I did not see any error in conductor log. Only the Magnum API and > Horizon log as mentioned. It looks like horizon was calling bad URLs > so maybe this is the reason why there was no conductor log? Just to > mention again though, that "update cluster" option is working fine to > increase the size of the cluster. > > However my main issue here is with regards to the flavour being used. > Can you or anyone confirm about the root disk = 0MB?  > OR can you or anyone share any information about how to utilise > Magnum/Kubernetes without consuming Compute node HDD storage? I've > been unable to achieve this and the docs do not give any information > about this specifically (unless of course I have missed it?). The > documentation says I can use any flavour [1]. > > [1] https://docs.openstack.org/magnum/latest/user/   > > Regards,  > > Tony Pearce > > > On Wed, 2 Sep 2020 at 15:44, feilong > wrote: > > Hi Tony, > > My comments about your two issues: > > 1. I'm not sure it's a Magnum issue. Did you try to draft a simple > Heat template to use that flavor and same image to create > instance? Does it work? > > 2. When you say "resize cluster" failed, what's the error you got > from magnum conductor log? > > > On 1/09/20 9:22 pm, Tony Pearce wrote: >> Hi guys, I hope you are all keeping safe and well at the moment.  >> >> I am trying to launch Kubernetes clusters into Openstack Train >> which has been deployed via Kayobe (Kayobe as I understand is a >> wrapper for kolla-ansible). 
There have been a few strange issues >> here and I've struggled to isolate them. These issues started >> recently after a fresh Openstack deployment some months ago >> (around February 2020) to give some context. This Openstack is >> not "live" as I've been trying to get to the bottom of the issues: >> >> Issue 1. When trying to launch a cluster we get error "Resource >> Create Failed: Forbidden: Resources.Kube >> Masters.Resources[0].Resources.Kube-Master: Only Volume-Backed >> Servers Are Allowed For Flavors With Zero Disk. " >> >> Issue 2. After successfully creating a cluster of a smaller node >> size, the "resize cluster" is failing (however update the cluster >> is working).  >> >> Some background on this specific environment:  >> Deployed via Kayobe, with these components:  >> Cinder, Designate, iscsid, Magnum, Multipathd, neutron provider >> networks >> >> The Cinder component integrates with iSCSI SAN storage using the >> Nimble driver. This is the only storage. In order to prevent >> Openstack from allocating Compute node local HDD as instance >> storage, I have all flavours configured with root disk / >> ephemeral disk / swap disk = "0MB". This then results in all >> instance data being stored on the backend Cinder storage appliance.  >> >> I was able to get a cluster deployed by first creating the >> template as needed, then when launching the cluster Horizon >> prompts you for items already there in the template such as >> number of nodes, node flavour and labels etc. I re-supplied all >> of the info (as to duplicate it) and then tried creating the >> cluster. After many many times trying over the course of a few >> weeks to a few months it was successful. I was then able to work >> around the issue #2 above to get it increased in size.  >> >> When looking at the logs for issue #2, it looks like some content >> is missing in the API but I am not certain. I will include a link >> to the pastebin below [1].  >> When trying to resize the cluster, Horizon gives error: "Error: >> Unable to resize given cluster id: >> 99693dbf-160a-40e0-9ed4-93f3370367ee". I then searched the >> controller node /var/log directory for this ID and found >> "horizon.log  [:error] [pid 25] Not Found: >> /api/container_infra/clusters/99693dbf-160a-40e0-9ed4-93f3370367ee/resize".  >> Going to the Horizon menu "update cluster" allows you to increase >> the number of nodes and then save/apply the config which does >> indeed resize the cluster.  >> >> >> >> Regarding issue #1, we've been unable to deploy a cluster in a >> new project and the error is hinting it relates to the flavours >> having 0MB disk specified, though this error is new and we've >> been successful previously with deploying clusters (albeit with >> the hit-and-miss experiences) using the flavour with 0MB disk as >> described above. Again I searched for the (stack) ID after the >> failure, in the logs on the controller and I obtained not much >> more than the error already seen with Horizon [2].  >> >> I was able to create new flavours with root disk = 15GB and then >> successfully deploy a cluster on the next immediate try. Update >> cluster from 3 nodes to 6 nodes was also immediately successful. >> However I see the compute nodes "used" disk space increasing >> after increasing the cluster size which is an issue as the >> compute node has very limited HDD capacity (32GB SD card).  >> >> At this point I also checked 1) previously installed cluster >> using the 0MB disk flavour and 2) new instances using the 0MB >> disk flavour. 
I notice that the previous cluster is having host >> storage allocated but while the new instance is not having host >> storage allocated. So the cluster create success is using flavour >> with disk = 0MB while the result is compute HDD storage being >> consumed.   >> >> So with the above, please may I clarify on the following?  >> 1. It seems that 0MB disk flavours may not be supported with >> magnum now? Could the experts confirm? :) Is there another way >> that I should be configuring this so that compute node disk is >> not being consumed (because it is slow and has limited capacity).  >> 2. The issue #1 looks like a bug to me, is it known? If not, is >> this mail enough to get it realised?  >> >> Pastebin links as mentioned  >> [1] http://paste.openstack.org/show/797316/ >> >> [2] http://paste.openstack.org/show/797318/ >> >> >> Many thanks, >> >> Regards, >> >> >> Tony Pearce >> > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > ------------------------------------------------------ > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > ------------------------------------------------------ > -- Cheers & Best regards, Feilong Wang (王飞龙) ------------------------------------------------------ Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyppe at gmail.com Wed Sep 2 09:17:06 2020 From: tonyppe at gmail.com (Tony Pearce) Date: Wed, 2 Sep 2020 17:17:06 +0800 Subject: [Magnum][Kayobe] Magnum Kubernetes clusters failing to be created (bugs?) In-Reply-To: References: <2b199738-2fe4-631b-6f4e-4050d040abed@catalyst.net.nz> Message-ID: Hi Feilong, "Let me answer #2 first. Did you try to use CLI? Please make sure using the latest python-magnumclient version. It should work. As for the dashboard issue, please try to use the latest version of magnum-ui. I encourage using resize because the node update is not recommended to use." I did not attempt the resize via CLI but I can try it. Thank you for your guidance on this :) "As for #1, I probably missed something. If the root disk=0MB, where the will operating system be installed? t would be nice if you can share your original requirement to help me understand the issue. e.g why do you have concern the node disk being used?" Sure, although I'd like to understand why you have no concern that the node disk is being used :) I may be missing something here... In this environment I have this setup: controller node compute node network storage appliance, integrated with Cinder iscsi. All VM/Instance data needs to be on the network storage appliance for the reasons; - it's faster than node storage (Flash storage backed array of disks, provides write-cache and read-cache) - resilience built into the array - has much higher storage capacity - is designed for multi-access (ie many connections from hosts) There are other reasons as well, such as deploying compute nodes as disposable services. Example, a compute node dies resulting in a new node being deployed. Instances are not locked to any node and can be started again on other nodes. 
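To keep instance data off the compute nodes I create every flavour with zero root, ephemeral and swap disk, roughly like this (a sketch; the flavour name and the RAM/vCPU sizes are only examples, the relevant part is the zero disk values):

    openstack flavor create k8s.medium.novol \
      --vcpus 4 --ram 8192 \
      --disk 0 --ephemeral 0 --swap 0
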
Going back to 2016 when I deployed Openstack Pike, when running post-deployment tests I noticed that the node storage was being consumed even though I have this network storage array. I done some research online and came to the understanding that the reason was the flavors having "root disk" (and swap) having some positive value other than 0MB. So since 2016 I have been using all flavors with disk = 0MB to force the network storage to be used for instance disks and storage. This is working since 2016 Pike, Queens and Train for launching instances (but not Magnum). The requirement is to utilise network storage (not node storage) - is there some other way that this is achieved today? I dont understand the point of shared storage options in Openstack if node storage is being consumed for instances. Could you help me understand if this specific environment is just not considered by the Openstack devs? Or some other reason unknown to me? For example, in my (limited) experience with other Virtualisation systems (vmware, ovirt for example) they avoid consuming compute storage for a number of similar reasons to mine. So to summarise on this one, I'm not stating that "I am right" here but I am politely asking for more info on the same so I can better understand what I am possibly doing wrong with this deployment or other reasons. Lastly thank you again for your time to reply to me, I really appreciate this. Regards, Tony Pearce On Wed, 2 Sep 2020 at 16:54, feilong wrote: > Hi Tony, > > Let me answer #2 first. Did you try to use CLI? Please make sure using the > latest python-magnumclient version. It should work. As for the dashboard > issue, please try to use the latest version of magnum-ui. I encourage using > resize because the node update is not recommended to use. > > As for #1, I probably missed something. If the root disk=0MB, where the > will operating system be installed? It would be nice if you can share your > original requirement to help me understand the issue. e.g why do you have > concern the node disk being used? > > > On 2/09/20 8:12 pm, Tony Pearce wrote: > > Hi Feilong, thank you for replying to my message. > > "1. I'm not sure it's a Magnum issue. Did you try to draft a simple Heat > template to use that flavor and same image to create instance? Does it > work?" > > No I didn't try it and dont think that I know enough about it to try. I am > using Magnum which in turn signals Heat but I've never used Heat directly. > When I use the new flavour with root disk = 15GB then I dont have any issue > with launching the cluster. But I have a future issue of consuming all > available disk space on the compute node. > > "2. When you say "resize cluster" failed, what's the error you got from > magnum conductor log?" > > I did not see any error in conductor log. Only the Magnum API and Horizon > log as mentioned. It looks like horizon was calling bad URLs so maybe this > is the reason why there was no conductor log? Just to mention again though, > that "update cluster" option is working fine to increase the size of the > cluster. > > However my main issue here is with regards to the flavour being used. Can > you or anyone confirm about the root disk = 0MB? > OR can you or anyone share any information about how to utilise > Magnum/Kubernetes without consuming Compute node HDD storage? I've been > unable to achieve this and the docs do not give any information about this > specifically (unless of course I have missed it?). The documentation says > I can use any flavour [1]. 
> > [1] https://docs.openstack.org/magnum/latest/user/ > > Regards, > > Tony Pearce > > > On Wed, 2 Sep 2020 at 15:44, feilong wrote: > >> Hi Tony, >> >> My comments about your two issues: >> >> 1. I'm not sure it's a Magnum issue. Did you try to draft a simple Heat >> template to use that flavor and same image to create instance? Does it work? >> >> 2. When you say "resize cluster" failed, what's the error you got from >> magnum conductor log? >> >> >> On 1/09/20 9:22 pm, Tony Pearce wrote: >> >> Hi guys, I hope you are all keeping safe and well at the moment. >> >> I am trying to launch Kubernetes clusters into Openstack Train which has >> been deployed via Kayobe (Kayobe as I understand is a wrapper for >> kolla-ansible). There have been a few strange issues here and I've >> struggled to isolate them. These issues started recently after a fresh >> Openstack deployment some months ago (around February 2020) to give some >> context. This Openstack is not "live" as I've been trying to get to the >> bottom of the issues: >> >> Issue 1. When trying to launch a cluster we get error "Resource Create >> Failed: Forbidden: Resources.Kube >> Masters.Resources[0].Resources.Kube-Master: Only Volume-Backed Servers Are >> Allowed For Flavors With Zero Disk. " >> >> Issue 2. After successfully creating a cluster of a smaller node size, >> the "resize cluster" is failing (however update the cluster is working). >> >> Some background on this specific environment: >> Deployed via Kayobe, with these components: >> Cinder, Designate, iscsid, Magnum, Multipathd, neutron provider networks >> >> The Cinder component integrates with iSCSI SAN storage using the Nimble >> driver. This is the only storage. In order to prevent Openstack from >> allocating Compute node local HDD as instance storage, I have all flavours >> configured with root disk / ephemeral disk / swap disk = "0MB". This then >> results in all instance data being stored on the backend Cinder storage >> appliance. >> >> I was able to get a cluster deployed by first creating the template as >> needed, then when launching the cluster Horizon prompts you for items >> already there in the template such as number of nodes, node flavour and >> labels etc. I re-supplied all of the info (as to duplicate it) and then >> tried creating the cluster. After many many times trying over the course of >> a few weeks to a few months it was successful. I was then able to work >> around the issue #2 above to get it increased in size. >> >> When looking at the logs for issue #2, it looks like some content is >> missing in the API but I am not certain. I will include a link to the >> pastebin below [1]. >> When trying to resize the cluster, Horizon gives error: "Error: Unable to >> resize given cluster id: 99693dbf-160a-40e0-9ed4-93f3370367ee". I then >> searched the controller node /var/log directory for this ID and found >> "horizon.log [:error] [pid 25] Not Found: >> /api/container_infra/clusters/99693dbf-160a-40e0-9ed4-93f3370367ee/resize". >> Going to the Horizon menu "update cluster" allows you to increase the >> number of nodes and then save/apply the config which does indeed resize the >> cluster. >> >> >> >> Regarding issue #1, we've been unable to deploy a cluster in a new >> project and the error is hinting it relates to the flavours having 0MB disk >> specified, though this error is new and we've been successful previously >> with deploying clusters (albeit with the hit-and-miss experiences) using >> the flavour with 0MB disk as described above. 
Again I searched for the >> (stack) ID after the failure, in the logs on the controller and I obtained >> not much more than the error already seen with Horizon [2]. >> >> I was able to create new flavours with root disk = 15GB and then >> successfully deploy a cluster on the next immediate try. Update cluster >> from 3 nodes to 6 nodes was also immediately successful. However I see the >> compute nodes "used" disk space increasing after increasing the cluster >> size which is an issue as the compute node has very limited HDD capacity >> (32GB SD card). >> >> At this point I also checked 1) previously installed cluster using the >> 0MB disk flavour and 2) new instances using the 0MB disk flavour. I notice >> that the previous cluster is having host storage allocated but while the >> new instance is not having host storage allocated. So the cluster create >> success is using flavour with disk = 0MB while the result is compute HDD >> storage being consumed. >> >> So with the above, please may I clarify on the following? >> 1. It seems that 0MB disk flavours may not be supported with magnum now? >> Could the experts confirm? :) Is there another way that I should be >> configuring this so that compute node disk is not being consumed (because >> it is slow and has limited capacity). >> 2. The issue #1 looks like a bug to me, is it known? If not, is this mail >> enough to get it realised? >> >> Pastebin links as mentioned >> [1] http://paste.openstack.org/show/797316/ >> >> [2] http://paste.openstack.org/show/797318/ >> >> >> Many thanks, >> >> Regards, >> >> >> Tony Pearce >> >> -- >> Cheers & Best regards, >> Feilong Wang (王飞龙) >> ------------------------------------------------------ >> Senior Cloud Software Engineer >> Tel: +64-48032246 >> Email: flwang at catalyst.net.nz >> Catalyst IT Limited >> Level 6, Catalyst House, 150 Willis Street, Wellington >> ------------------------------------------------------ >> >> -- > Cheers & Best regards, > Feilong Wang (王飞龙) > ------------------------------------------------------ > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > ------------------------------------------------------ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Wed Sep 2 09:40:25 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 2 Sep 2020 11:40:25 +0200 Subject: [ironic] Announcing deprecation of the iSCSI deploy interface Message-ID: Hi all, Following up to the previous mailing list [1] and virtual meetup [2] discussions, I would like to announce the plans to deprecate the 'iscsi' deploy interface. This is the updated plan discussed on the virtual meetup: 1) In the Victoria cycle (i.e. right now): - Fill in the detected feature gaps [3]. - Switch off the iscsi deploy interface by default. - Change [agent]image_dowload_source to HTTP by default. - Give the direct deploy a higher priority, so that it's used by default unless disabled. - Mark it as deprecated in the code (causing warnings when enabled). - Release a major version of ironic to highlight the defaults changes. 2) In the W cycle: - Keep the iscsi deploy deprecated. - Listen to operators' feedback. 3) In the X cycle - Remove the iscsi deploy completely from ironic and IPA. - Remove support code from ironic-lib with a major version bump. Please let us know if you have any questions or concerns. 
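For operators who would like to move ahead of the new defaults, the switch is roughly the following (a sketch only; the options live in ironic.conf and the node name is just an example):

    [DEFAULT]
    enabled_deploy_interfaces = direct,iscsi
    default_deploy_interface = direct

    [agent]
    image_download_source = http

and, per node:

    openstack baremetal node set --deploy-interface direct node-0
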
Dmitry [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-August/016681.html [2] https://etherpad.opendev.org/p/Ironic-Victoria-midcycle [3] https://storyboard.openstack.org/#!/story/2008075 -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From anilj.mailing at gmail.com Wed Sep 2 09:47:27 2020 From: anilj.mailing at gmail.com (Anil Jangam) Date: Wed, 2 Sep 2020 02:47:27 -0700 Subject: Graceful stopping of RabbitMQ AMQP notification listener In-Reply-To: References: Message-ID: Hi, I found the issue and it was the way I was starting and stopping the listener. I got some more logs below that helped me understand this deadlock behavior. I have fixed issue in my app. OPS_WORKER_0--DEBUG-2020-09-02 02:39:30,309-amqpdriver.py-322 - AMQPListener connection consume OPS_WORKER_0--DEBUG-2020-09-02 02:39:30,310-connection.py-712 - heartbeat_tick : for connection b616c1205a61466a94e4ae2e79e6ba84 OPS_WORKER_0--DEBUG-2020-09-02 02:39:30,310-connection.py-734 - heartbeat_tick : Prev sent/recv: 8/8, now - 8/8, monotonic - 32.014996083, last_heartbeat_sent - 0.97555832, heartbeat int. - 60 for connection b616c1205a61466a94e4ae2e79e6ba84 MainProcess--WARNING-2020-09-02 02:39:34,108-server.py-127 - Possible hang: stop is waiting for start to complete MainProcess--DEBUG-2020-09-02 02:39:34,117-server.py-128 - File "/testprogs/python/oslo_notif/main.py", line 33, in main() File "/testprogs/python/oslo_notif/main.py", line 29, in main worker.stop() File "/testprogs/python/oslo_notif/oslo_worker.py", line 85, in stop self.__amqp_handler.stop_amqp_event_listener() File "/testprogs/python/oslo_notif/oslo_notif_handler.py", line 184, in stop_amqp_event_listener self.__amqp_listener.stop() File "/pyvenv37/lib/python3.7/site-packages/oslo_messaging/server.py", line 264, in wrapper log_after, timeout_timer) File "/pyvenv37/lib/python3.7/site-packages/oslo_messaging/server.py", line 163, in wait_for_completion msg, log_after, timeout_timer) File "/pyvenv37/lib/python3.7/site-packages/oslo_messaging/server.py", line 128, in _wait /anil. On Wed, Sep 2, 2020 at 12:12 AM Anil Jangam wrote: > Hi, > > I have coded OpenStack AMQP listener following the example and it is > working fine. > > https://github.com/gibizer/nova-notification-demo/blob/master/ws_forwarder.py > > The related code snippets of the NotificationHandler class are shown as > follows. 
> > # Initialize the AMQP listener > def init(self, cluster_ip, user, password, port): > cfg.CONF() > cluster_url = "rabbit://" + user + ":" + password + "@" + cluster_ip + ":" + port + "/" > transport = oslo_messaging.get_notification_transport(cfg.CONF, url=cluster_url) > targets = [ > oslo_messaging.Target(topic='versioned_notifications'), > ] > endpoints = [self.__endpoint] > > # Initialize the notification listener > try: > self.__amqp_listener = oslo_messaging.get_notification_listener(transport, > targets, > endpoints, > executor='threading') > except NotImplementedError as err: > LOGGER.error("Failed to initialize the notification listener {}".format(err)) > return False > > LOGGER.debug("Initialized the notification listener {}".format(cluster_url)) > return True > > # Arm the compute event listeners > def start_amqp_event_listener(self): > # Start the notification handler > LOGGER.debug("Started the OpenStack notification handler") > self.__amqp_listener.start() > > # Disarm the compute event listeners > def stop_amqp_event_listener(self): > LOGGER.debug("Stopping the OpenStack notification handler") > if self.__amqp_listener is not None: > self.__amqp_listener.stop() > > I am using this interface from a new process handler function, however, > when I invoke the stop_amqp_eent_listener() method, my process hangs. It > does not terminate naturally. > I verified that the self.__amqp_listener.stop() function is not > returning. Is there anything missing in this code? Is there any specific > consideration when calling the listener from a new process? > > Can someone provide a clue? > > # Stop the worker > def stop(self): > # Stop the AMQP notification handler > self.__amqp_handler.stop_amqp_event_listener() > LOGGER.debug("Stopped the worker for {}".format(self.__ops_conn_info.cluster_ip)) > > > /anil. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anlin.kong at gmail.com Wed Sep 2 10:05:46 2020 From: anlin.kong at gmail.com (Lingxian Kong) Date: Wed, 2 Sep 2020 22:05:46 +1200 Subject: Trove images for Cluster testing. In-Reply-To: References: Message-ID: Hi Arunkumar, For how to join IRC channel, please see https://docs.opendev.org/opendev/infra-manual/latest/developers.html#irc-account Currently there is no trove team meeting because we don't have any other people interested (for some historical reasons), but if needed, we can schedule a time suitable for both. I'm in UTC+12 btw. --- Lingxian Kong Senior Software Engineer Catalyst Cloud www.catalystcloud.nz On Wed, Sep 2, 2020 at 2:56 AM ARUNKUMAR PALANISAMY < arunkumar.palanisamy at tcs.com> wrote: > Hi Lingxian, > > > > Hope you are doing Good. > > > > Thank you for your mail and detailed information. > > > > We would like to join #openstack-trove IRC channel for discussions. Could > you please advise us the process to join IRC channel. > > > > We came to know that currently there is no IRC channel meeting happening > for Trove, if there is any meeting scheduled and happening. we would like > to join and understand the works and progress towards Trove and contribute > further. > > > > Regards, > > Arunkumar Palanisamy > > > > *From:* Lingxian Kong > *Sent:* Friday, August 28, 2020 12:09 AM > *To:* ARUNKUMAR PALANISAMY > *Cc:* openstack-discuss at lists.openstack.org; Pravin Mohan < > pravin.mohan at tcs.com> > *Subject:* Re: Trove images for Cluster testing. > > > "External email. 
Open with Caution" > > Hi Arunkumar, > > > > Unfortunately, for now Trove only supports MySQL and MariaDB, I'm working > on adding PostgreSQL support. All other datastores are unmaintained right > now. > > > > Since this(Victoria) dev cycle, docker container was introduced in Trove > guest agent in order to remove the maintenance overhead for multiple Trove > guest images. We only need to maintain one single guest image but could > support different datastores. We have to do that as such a small Trove team > in the community. > > > > If supporting Redis, Cassandra, MongoDB or Couchbase is in your feature > request, you are welcome to contribute to Trove. > > > > Please let me know if you have any other questions. You are also welcome > to join #openstack-trove IRC channel for discussion. > > > > --- > > Lingxian Kong > > Senior Software Engineer > > Catalyst Cloud > > www.catalystcloud.nz > > > > > > On Fri, Aug 28, 2020 at 6:45 AM ARUNKUMAR PALANISAMY < > arunkumar.palanisamy at tcs.com> wrote: > > Hello Team, > > > > My name is ARUNKUMAR PALANISAMY, > > > > As part of our project requirement, we are evaluating trove components and > need your support for experimental datastore Image for testing cluster. > (Redis, Cassandra, MongoDB, Couchbase) > > > > 1.) We are running devstack enviorment with Victoria Openstack release > and with this image (trove-master-guest-ubuntu-bionic-dev.qcow2 > ), > we are able to deploy mysql instance and and getting below error while > creating mongoDB instances. > > > > *“ModuleNotFoundError: No module named > 'trove.guestagent.datastore.experimental' “* > > > > 2.) While tried creating mongoDB image with diskimage-builder > tool, but we are > getting “Block device ” element error. > > > > > > Regards, > > Arunkumar Palanisamy > > Cell: +49 172 6972490 > > > > =====-----=====-----===== > Notice: The information contained in this e-mail > message and/or attachments to it may contain > confidential or privileged information. If you are > not the intended recipient, any dissemination, use, > review, distribution, printing or copying of the > information contained in this e-mail message > and/or attachments to it are strictly prohibited. If > you have received this communication in error, > please notify us by reply e-mail or telephone and > immediately and permanently delete the message > and any attachments. Thank you > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Wed Sep 2 11:54:29 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 2 Sep 2020 05:54:29 -0600 Subject: [tripleo] docker.io rate limiting Message-ID: Greetings, Some of you have contacted me regarding the recent news regarding docker.io's new policy with regards to container pull rate limiting [1]. I wanted to take the opportunity to further socialize our plan that will completely remove docker.io from our upstream workflows and avoid any rate limiting issues. We will continue to upload containers to docker.io for some time so that individuals and the community can access the containers. We will also start exploring other registries like quay and newly announced github container registry. These other public registries will NOT be used in our upstream jobs and will only serve the communities individual contributors. Our test jobs have been successful and patches are starting to merge to convert our upstream jobs and remove docker.io from our upstream workflow. [2]. Standalone and multinode jobs are working quite well. 
We are doing some design work around branchful, update/upgrade jobs at this time. Thanks 0/ [1] https://hackmd.io/ermQSlQ-Q-mDtZkNN2oihQ [2] https://review.opendev.org/#/q/topic:new-ci-job+(status:open+OR+status:merged) -------------- next part -------------- An HTML attachment was scrubbed... URL: From viroel at gmail.com Wed Sep 2 12:16:22 2020 From: viroel at gmail.com (Douglas) Date: Wed, 2 Sep 2020 09:16:22 -0300 Subject: [manila] Victoria Collab Review next Tuesday (Sep 1st) In-Reply-To: References: Message-ID: Hi all, The recording of our collaborative review meeting is available in the OpenStack/Manila channel[1], and the meetings notes are available at the meeting etherpad[2]. Thank you everybody that was able to join and participate. [1] https://youtu.be/CCIhHVKPTx4 [2] https://etherpad.opendev.org/p/manila-victoria-collab-review On Thu, Aug 27, 2020 at 6:49 PM Douglas wrote: > Hi everybody > > We will have a new edition of our collaborative review next Tuesday, > September 1st, where we'll go through the code and review the proposed > feature Share Server Migration[1][2]. > This meeting is scheduled for two hours, starting at 5:00PM UTC. Meeting > notes and videoconference links will be available here[3]. > Feel free to attend if you are interested and available. > > Hoping to see you there, > > - dviroel > > [1] > https://opendev.org/openstack/manila-specs/src/branch/master/specs/victoria/share-server-migration.rst > [2] > https://review.opendev.org/#/q/topic:bp/share-server-migration+(status:open) > [3] https://etherpad.opendev.org/p/manila-victoria-collab-review > -- Douglas Viroel (dviroel) -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Wed Sep 2 08:35:24 2020 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 2 Sep 2020 09:35:24 +0100 Subject: [kolla] questions when using external mysql In-Reply-To: <5c644eb6.3cda.174488cf629.Coremail.sosogh@126.com> References: <5c644eb6.3cda.174488cf629.Coremail.sosogh@126.com> Message-ID: On Tue, 1 Sep 2020 at 15:48, sosogh wrote: > Hi list: > > I want to use kolla-ansible to deploy openstack , but using external > mysql. > I am following these docs: > > https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html > > > https://docs.openstack.org/kolla-ansible/latest/reference/databases/external-mariadb-guide.html > . > Hi, could you start by telling us which version or branch of Kolla Ansible you are using? > > I have some questions: > > ################ > ## Question 1 ## > ################ > > According to the offical doc , if setting it in inventory > file(multinode), > > > kolla-ansible -i ./multinode deploy will throw out error: > > > I guest when kolla-ansible running the playbook against > myexternalmariadbloadbalancer.com , > > the """register: find_custom_fluentd_inputs""" in """TASK [common : Find > custom fluentd input config files]""" maybe null . > I think this could be an issue with a recent change to the common role, where the condition for the 'Find custom fluentd input config files' task changed slightly. I have proposed a potential fix for this, could you try it out and report back? 
https://review.opendev.org/749463 > > ################ > ## Question 2 ## > ################ > > According to the offical doc , If the MariaDB username is not root, set > database_username in /etc/kolla/globals.yml file: > > > But in kolla-ansible/ansible/roles/xxxxxx/tasks/bootstrap.yml , they use > ''' login_user: "{{ database_user }}" ''' , for example : > > You are correct, this is an issue in the documentation. I have proposed a fix here: https://review.opendev.org/749464 > So at last , I took the following steps: > 1. """not""" setting [mariadb] in inventory file(multinode) > 2. set "database_user: openstack" for "privillegeduser" > > PS: > My idea is that if using an external ready-to-use mysql (cluster), > it is enough to tell kolla-ansible only the address/user/password of the > external DB. > i.e. setting them in the file /etc/kolla/globals.yml and passwords.yml , > no need to add it into inventory file(multinode) > I agree, I did not expect to need to change the inventory for this use case. > > Finally , it is successful to deploy openstack via kolla-ansible . > So far I have not found any problems. > Are the steps what I took good ( enough ) ? > Thank you ! > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 2938 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 95768 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 2508 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 22016 bytes Desc: not available URL: From sosogh at 126.com Wed Sep 2 10:00:24 2020 From: sosogh at 126.com (sosogh) Date: Wed, 2 Sep 2020 18:00:24 +0800 (CST) Subject: Reply:Re: [kolla] questions when using external mysql In-Reply-To: References: <5c644eb6.3cda.174488cf629.Coremail.sosogh@126.com> Message-ID: Hi Mark: >> Hi, could you start by telling us which version or branch of Kolla Ansible you are using? root at ubt-1804:~# pip show kolla-ansible Name: kolla-ansible Version: 10.1.0.dev260 I download them via "git clone" , it is master branch by default . git clone https://github.com/openstack/kolla git clone https://github.com/openstack/kolla-ansible >> could you try it out and report back? https://review.opendev.org/749463 I will try it later. 在 2020-09-02 16:35:24,"Mark Goddard" 写道: On Tue, 1 Sep 2020 at 15:48, sosogh wrote: Hi list: I want to use kolla-ansible to deploy openstack , but using external mysql. I am following these docs: https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html https://docs.openstack.org/kolla-ansible/latest/reference/databases/external-mariadb-guide.html. Hi, could you start by telling us which version or branch of Kolla Ansible you are using? I have some questions: ################ ## Question 1 ## ################ According to the offical doc , if setting it in inventory file(multinode), kolla-ansible -i ./multinode deploy will throw out error: I guest when kolla-ansible running the playbook against myexternalmariadbloadbalancer.com , the """register: find_custom_fluentd_inputs""" in """TASK [common : Find custom fluentd input config files]""" maybe null . 
I think this could be an issue with a recent change to the common role, where the condition for the 'Find custom fluentd input config files' task changed slightly. I have proposed a potential fix for this, could you try it out and report back? https://review.opendev.org/749463 ################ ## Question 2 ## ################ According to the offical doc , If the MariaDB username is not root, set database_username in /etc/kolla/globals.yml file: But in kolla-ansible/ansible/roles/xxxxxx/tasks/bootstrap.yml , they use ''' login_user: "{{ database_user }}" ''' , for example : You are correct, this is an issue in the documentation. I have proposed a fix here: https://review.opendev.org/749464 So at last , I took the following steps: 1. """not""" setting [mariadb] in inventory file(multinode) 2. set "database_user: openstack" for "privillegeduser" PS: My idea is that if using an external ready-to-use mysql (cluster), it is enough to tell kolla-ansible only the address/user/password of the external DB. i.e. setting them in the file /etc/kolla/globals.yml and passwords.yml , no need to add it into inventory file(multinode) I agree, I did not expect to need to change the inventory for this use case. Finally , it is successful to deploy openstack via kolla-ansible . So far I have not found any problems. Are the steps what I took good ( enough ) ? Thank you ! -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 2938 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 95768 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 2508 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 22016 bytes Desc: not available URL: From smooney at redhat.com Wed Sep 2 12:58:55 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 02 Sep 2020 13:58:55 +0100 Subject: [ovn][neutron][ussuri] How to configure for overlay in DPDK with OVN ? In-Reply-To: References: Message-ID: On Wed, 2020-09-02 at 16:23 +0900, 한승진 wrote: > Hi, > > > I’m trying to implement dpdk-ovs with ovn for our openstack environment. > > > I create a bond interface(tunbound) with ovs and tun0 and tun1 are slave > interfaces of the bond interface named tunbond. > > > Bridge br-int > > fail_mode: secure > > datapath_type: netdev > > Port tunbond > > Interface tun0 > > type: dpdk > > options: {dpdk-devargs="0000:1a:00.1", n_rxq="2"} > > Interface tun1 > > type: dpdk > > options: {dpdk-devargs="0000:d8:00.1", n_rxq="2"} > > Port br-int > > Interface br-int > > type: internal > > Now, I need to configure remote-ovn options to the ovs. > > > *I am confused how can I define the IP address of the bonded interface with > ovs?* you dont you set the tunnel local ip on the ovs bridge containing the bond. when a inteface is managed by ovs (with the excption of the bridge port or type internal ports) asinging an ip to the interface will do nothing as ovs hooks the packets beofre they reach the ip layer of the kernel networking stack. so to make sure that the kernel routes the network packets vi the ovs bridge you need to assign the tunnel local ip to the ovs bridge with the bound. 
so br-int in this case this will cause that bridge to respond to arps form the tunnel local ip and that will cause the back leaning table in ovs to be populated correctly with the remote mac address for the remote tunnel endpoints via the bound. if you use ovs-appctl and look at the dataplane(not oepnflow) flows you will see that the tunnel endcap flow is followed by an out_port action instead of an output action which renques the packet as if it arrived on the port so when the packet is encaped with the geneve header it will be reporced as if it came form the bridge local port and then mac learing/the normal action woudl forward it to the bond. at least that is how it would work for ml2/ovs if we do not have the normal action for that case we would need explcit openflow rules to send the packet to the bound. if you dont have the local tunnel endpoint ip on the brige however the tunnel traffic will not be dpdk acclerated so that is important to have set correctly. > > > Theses are commands for connect remote ovn services. > > > ovs-vsctl set open . external-ids:ovn-remote=tcp:{controller ip}:6642 > > ovs-vsctl set open . external-ids:ovn-encap-type=geneve > > *1* > > > This below command make it me confused. > > > ovs-vsctl set open . external-ids:ovn-encap-ip={local ip} > > > How should I resolve this issue? > > > Regards, > > > John Haan From CAPSEY at augusta.edu Wed Sep 2 13:29:48 2020 From: CAPSEY at augusta.edu (Apsey, Christopher) Date: Wed, 2 Sep 2020 13:29:48 +0000 Subject: [neutron][ovn] OVN Performance Message-ID: All, Just wanted to loop back here and give an update. For reference, [1] (blue means successful action, red means failed action) is the result we got when booting 5000 instances in rally [2] before the Red Hat OVN devs poked around inside our environment, and [3] is the result after. The differences are obviously pretty significant. I think the biggest change was setting metadata_workers = 2 in neutron_ovn_metadata_agent.ini on the compute nodes per https://bugs.launchpad.net/neutron/+bug/1893656. We have 64C/128T on all compute nodes, so the default neutron calculation of scaling metadata workers based on available cores created 900+ connections to the southbound db at idle; after the control plane got loaded up it just quit around 2500 instances (my guess is it hit the open file limit, although I don’t think increasing it would have made it better for much longer since the number of connections were increasing exponentially). Capping the number of metadata workers decreased open southbound connections by 90%. Even more telling was that rally was able to successfully clean up after itself after we made that change, whereas previously it wasn’t even able to successfully tear down any of the instances that were made, indicating that the control plane was completely toast. Note that the choppiness towards the end of [3] had nothing to do with OVN – our compute nodes had a loadavg approaching 1000 at that point, so they were just starved for cpu cycles. This would have scaled even better with additional compute nodes. The other piece was RAFT. Currently, RDO is shipping with ovs 2.12, but 2.13 has a bunch of RAFT fixes in it that improve stability and knock out some bugs. We were having issues with chassis registration on 2.12, but after using the 2.13 package from cbs, all those issues went away. Big thanks to the great people at Red Hat on the cc line for volunteering their valuable time to take a look. 
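For anyone who wants to apply the same cap, here is a minimal sketch of the change (the option name and file are the ones mentioned above; the [DEFAULT] section is my assumption based on the stock agent config, so double-check it against your deployment before rolling it out):

# neutron_ovn_metadata_agent.ini on each compute node
[DEFAULT]
# Each metadata worker holds its own connection to the OVN southbound DB,
# so cap the count instead of letting it scale with the (large) host core count.
metadata_workers = 2

Restart the metadata agent on the computes afterwards so the new worker count takes effect.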
I’m now significantly more comfortable with defaulting to OVN as the backend of choice as the performance delta is now gone. That said, should the community consider dropping linuxbridge as the backend in the official upstream docs and jump straight to OVN rather than ml2/OVS? I think that would increase the test base and help shine light on other issues as time goes on. My org can devote some time to doing this work if the community agrees that it’s the right action to take. Hope that’s helpful! [1] https://ibb.co/GTjZP2y [2] https://pastebin.com/5pEDZ7dY [3] https://ibb.co/pfB9KTV Chris Apsey GEORGIA CYBER CENTER From: Apsey, Christopher Sent: Thursday, August 27, 2020 11:33 AM To: Assaf Muller Cc: openstack-discuss at lists.openstack.org; Lucas Alvares Gomes Martins ; Jakub Libosvar ; Daniel Alvarez Sanchez Subject: RE: [EXTERNAL] Re: [neutron][ovn] OVN Performance Assaf, We can absolutely support engineering poking around in our environment (and possibly an even larger one at my previous employer that was experiencing similar issues during testing). We can take this offline so we don’t spam the mailing list. Just let me know how to proceed, Thanks! Chris Apsey GEORGIA CYBER CENTER From: Assaf Muller > Sent: Thursday, August 27, 2020 11:18 AM To: Apsey, Christopher > Cc: openstack-discuss at lists.openstack.org; Lucas Alvares Gomes Martins >; Jakub Libosvar >; Daniel Alvarez Sanchez > Subject: [EXTERNAL] Re: [neutron][ovn] OVN Performance CAUTION: EXTERNAL SENDER This email originated from an external source. Please exercise caution before opening attachments, clicking links, replying, or providing information to the sender. If you believe it to be fraudulent, contact the AU Cybersecurity Hotline at 72-CYBER (2-9237 / 706-722-9237) or 72CYBER at augusta.edu The most efficient way about this is to give one or more of the Engineers working on OpenStack OVN upstream (I've added a few to this thread) temporary access to an environment that can reproduce issues you're seeing, we could then document the issues and work towards solutions. If that's not possible, if you could provide reproducer scripts, or alternatively sharpen the reproduction method, we'll take a look. What you've described is not something that's 'acceptable', OVN should definitely not scale worse than Neutron with the Linux Bridge agent. It's possible that the particular issues you ran in to is something that we've already seen internally at Red Hat, or with our customers, and we're already working on fixes in future versions of OVN - I can't tell you until you elaborate on the details of the issues you're seeing. In any case, the upstream community is committed to improving OVN scale and fixing scale issues as they pop up. Coincidentally, Red Hat scale engineers just published an article [1] about work they've done to scale RH-OSP 16.1 (== OpenStack Train on CentOS 8, with OVN 2.13 and TripleO) to 700 compute nodes. [1] https://www.redhat.com/en/blog/scaling-red-hat-openstack-platform-161-more-700-nodes?source=bloglisting On Thu, Aug 27, 2020 at 10:44 AM Apsey, Christopher > wrote: > > All, > > > > I know that OVN is going to become the default neutron backend at some point and displace linuxbridge as the default configuration option in the docs, but we have noticed a pretty significant performance disparity between OVN and linuxbridge on identical hardware over the past year or so in a few different environments[1]. 
I know that example is unscientific, but similar results have been borne out in many different scenarios from what we have observed. There are three main problems from what we see: > > > > 1. OVN does not handle large concurrent requests as well as linuxbridge. Additionally, linuxbridge concurrent capacity grows (not linearly, but grows nonetheless) by adding additional neutron API endpoints and RPC agents. OVN does not really horizontally scale by adding additional API endpoints, from what we have observed. > > 2. OVN gets significantly slower as load on the system grows. We have observed a soft cap of about 2000-2500 instances in a given deployment before ovn-backed neutron stops responding altogether to nova requests (even for booting a single instance). We have observed linuxbridge get to 5000+ instances before it starts to struggle on the same hardware (and we think that linuxbridge can go further with improved provider network design in that particular case). > > 3. Once the southbound database process hits 100% CPU usage on the leader in the ovn cluster, it’s game over (probably causes 1+2) > > > > It's entirely possible that we just don’t understand OVN well enough to tune it [2][3][4], but then the question becomes how do we get that tuning knowledge into the docs so people don’t scratch their heads when their cool new OVN deployment scales 40% as well as their ancient linuxbridge-based one? > > > > If it is ‘known’ that OVN has some scaling challenges, is there a plan to fix it, and what is the best way to contribute to doing so? > > > > We have observed similar results on Ubuntu 18.04/20.04 and CentOS 7/8 on Stein, Train, and Ussuri. > > > > [1] https://pastebin.com/kyyURTJm > > [2] https://github.com/GeorgiaCyber/kinetic/tree/master/formulas/ovsdb > > [3] https://github.com/GeorgiaCyber/kinetic/tree/master/formulas/neutron > > [4] https://github.com/GeorgiaCyber/kinetic/tree/master/formulas/compute > > > > Chris Apsey > > GEORGIA CYBER CENTER > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dalvarez at redhat.com Wed Sep 2 13:42:34 2020 From: dalvarez at redhat.com (Daniel Alvarez Sanchez) Date: Wed, 2 Sep 2020 15:42:34 +0200 Subject: [neutron][ovn] OVN Performance In-Reply-To: References: Message-ID: Hey Chris, thanks for sharing this :) On Wed, Sep 2, 2020 at 3:30 PM Apsey, Christopher wrote: > All, > > > > Just wanted to loop back here and give an update. > > > > For reference, [1] (blue means successful action, red means failed action) > is the result we got when booting 5000 instances in rally [2] before the > Red Hat OVN devs poked around inside our environment, and [3] is the result > after. The differences are obviously pretty significant. I think the > biggest change was setting metadata_workers = 2 in > neutron_ovn_metadata_agent.ini on the compute nodes per > https://bugs.launchpad.net/neutron/+bug/1893656. We have 64C/128T on all > compute nodes, so the default neutron calculation of scaling metadata > workers based on available cores created 900+ connections to the southbound > db at idle; after the control plane got loaded up it just quit around 2500 > instances (my guess is it hit the open file limit, although I don’t think > increasing it would have made it better for much longer since the number of > connections were increasing exponentially). Capping the number of metadata > workers decreased open southbound connections by 90%. 
Even more telling > was that rally was able to successfully clean up after itself after we made > that change, whereas previously it wasn’t even able to successfully tear > down any of the instances that were made, indicating that the control plane > was completely toast. > > > > Note that the choppiness towards the end of [3] had nothing to do with OVN > – our compute nodes had a loadavg approaching 1000 at that point, so they > were just starved for cpu cycles. This would have scaled even better with > additional compute nodes. > > > > The other piece was RAFT. Currently, RDO is shipping with ovs 2.12, but > 2.13 has a bunch of RAFT fixes in it that improve stability and knock out > some bugs. We were having issues with chassis registration on 2.12, but > after using the 2.13 package from cbs, all those issues went away. > > > > Big thanks to the great people at Red Hat on the cc line for volunteering > their valuable time to take a look. > Happy to help, it was fun :) Thanks to you for all the details that made it easier to debug > > > I’m now significantly more comfortable with defaulting to OVN as the > backend of choice as the performance delta is now gone. That said, should > the community consider dropping linuxbridge as the backend in the official > upstream docs and jump straight to OVN rather than ml2/OVS? I think that > would increase the test base and help shine light on other issues as time > goes on. My org can devote some time to doing this work if the community > agrees that it’s the right action to take. > ++!! > > > Hope that’s helpful! > > > > [1] https://ibb.co/GTjZP2y > > [2] https://pastebin.com/5pEDZ7dY > > [3] https://ibb.co/pfB9KTV > Do you have some baseline to compare against? Also I'm curious to see if you pulled results with and without raft :) Thanks once again! > > > *Chris Apsey* > > *GEORGIA CYBER CENTER* > > > > *From:* Apsey, Christopher > *Sent:* Thursday, August 27, 2020 11:33 AM > *To:* Assaf Muller > *Cc:* openstack-discuss at lists.openstack.org; Lucas Alvares Gomes Martins < > lmartins at redhat.com>; Jakub Libosvar ; Daniel > Alvarez Sanchez > *Subject:* RE: [EXTERNAL] Re: [neutron][ovn] OVN Performance > > > > Assaf, > > > > We can absolutely support engineering poking around in our environment > (and possibly an even larger one at my previous employer that was > experiencing similar issues during testing). We can take this offline so > we don’t spam the mailing list. > > > > Just let me know how to proceed, > > > > Thanks! > > > > *Chris Apsey* > > *GEORGIA CYBER CENTER* > > > > *From:* Assaf Muller > *Sent:* Thursday, August 27, 2020 11:18 AM > *To:* Apsey, Christopher > *Cc:* openstack-discuss at lists.openstack.org; Lucas Alvares Gomes Martins < > lmartins at redhat.com>; Jakub Libosvar ; Daniel > Alvarez Sanchez > *Subject:* [EXTERNAL] Re: [neutron][ovn] OVN Performance > > > > CAUTION: EXTERNAL SENDER This email originated from an external source. > Please exercise caution before opening attachments, clicking links, > replying, or providing information to the sender. If you believe it to be > fraudulent, contact the AU Cybersecurity Hotline at 72-CYBER (2-9237 / > 706-722-9237) or 72CYBER at augusta.edu > > The most efficient way about this is to give one or more of the > Engineers working on OpenStack OVN upstream (I've added a few to this > thread) temporary access to an environment that can reproduce issues > you're seeing, we could then document the issues and work towards > solutions. 
If that's not possible, if you could provide reproducer > scripts, or alternatively sharpen the reproduction method, we'll take > a look. What you've described is not something that's 'acceptable', > OVN should definitely not scale worse than Neutron with the Linux > Bridge agent. It's possible that the particular issues you ran in to > is something that we've already seen internally at Red Hat, or with > our customers, and we're already working on fixes in future versions > of OVN - I can't tell you until you elaborate on the details of the > issues you're seeing. In any case, the upstream community is committed > to improving OVN scale and fixing scale issues as they pop up. > Coincidentally, Red Hat scale engineers just published an article [1] > about work they've done to scale RH-OSP 16.1 (== OpenStack Train on > CentOS 8, with OVN 2.13 and TripleO) to 700 compute nodes. > > [1] > https://www.redhat.com/en/blog/scaling-red-hat-openstack-platform-161-more-700-nodes?source=bloglisting > > On Thu, Aug 27, 2020 at 10:44 AM Apsey, Christopher > wrote: > > > > All, > > > > > > > > I know that OVN is going to become the default neutron backend at some > point and displace linuxbridge as the default configuration option in the > docs, but we have noticed a pretty significant performance disparity > between OVN and linuxbridge on identical hardware over the past year or so > in a few different environments[1]. I know that example is unscientific, > but similar results have been borne out in many different scenarios from > what we have observed. There are three main problems from what we see: > > > > > > > > 1. OVN does not handle large concurrent requests as well as linuxbridge. > Additionally, linuxbridge concurrent capacity grows (not linearly, but > grows nonetheless) by adding additional neutron API endpoints and RPC > agents. OVN does not really horizontally scale by adding additional API > endpoints, from what we have observed. > > > > 2. OVN gets significantly slower as load on the system grows. We have > observed a soft cap of about 2000-2500 instances in a given deployment > before ovn-backed neutron stops responding altogether to nova requests > (even for booting a single instance). We have observed linuxbridge get to > 5000+ instances before it starts to struggle on the same hardware (and we > think that linuxbridge can go further with improved provider network design > in that particular case). > > > > 3. Once the southbound database process hits 100% CPU usage on the > leader in the ovn cluster, it’s game over (probably causes 1+2) > > > > > > > > It's entirely possible that we just don’t understand OVN well enough to > tune it [2][3][4], but then the question becomes how do we get that tuning > knowledge into the docs so people don’t scratch their heads when their cool > new OVN deployment scales 40% as well as their ancient linuxbridge-based > one? > > > > > > > > If it is ‘known’ that OVN has some scaling challenges, is there a plan > to fix it, and what is the best way to contribute to doing so? > > > > > > > > We have observed similar results on Ubuntu 18.04/20.04 and CentOS 7/8 on > Stein, Train, and Ussuri. 
> > > > > > > > [1] https://pastebin.com/kyyURTJm > > > > [2] https://github.com/GeorgiaCyber/kinetic/tree/master/formulas/ovsdb > > > > [3] https://github.com/GeorgiaCyber/kinetic/tree/master/formulas/neutron > > > > [4] https://github.com/GeorgiaCyber/kinetic/tree/master/formulas/compute > > > > > > > > Chris Apsey > > > > GEORGIA CYBER CENTER > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Wed Sep 2 14:17:49 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Wed, 2 Sep 2020 16:17:49 +0200 Subject: [tripleo] docker.io rate limiting In-Reply-To: References: Message-ID: Sorry for the stupid question, but maybe there is some parameter for tripleo deployment not to generate and download images from docker io each time? since I already have it downloaded and working? Or, as I understand, I should be able to create my own snapshot of images and specify it as a source? On Wed, 2 Sep 2020 at 13:58, Wesley Hayutin wrote: > Greetings, > > Some of you have contacted me regarding the recent news regarding > docker.io's new policy with regards to container pull rate limiting [1]. > I wanted to take the opportunity to further socialize our plan that will > completely remove docker.io from our upstream workflows and avoid any > rate limiting issues. > > We will continue to upload containers to docker.io for some time so that > individuals and the community can access the containers. We will also > start exploring other registries like quay and newly announced github > container registry. These other public registries will NOT be used in our > upstream jobs and will only serve the communities individual contributors. > > Our test jobs have been successful and patches are starting to merge to > convert our upstream jobs and remove docker.io from our upstream > workflow. [2]. > > Standalone and multinode jobs are working quite well. We are doing some > design work around branchful, update/upgrade jobs at this time. > > Thanks 0/ > > > [1] https://hackmd.io/ermQSlQ-Q-mDtZkNN2oihQ > [2] > https://review.opendev.org/#/q/topic:new-ci-job+(status:open+OR+status:merged) > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Wed Sep 2 16:38:19 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 2 Sep 2020 09:38:19 -0700 Subject: Mentoring Boston University Students In-Reply-To: References: Message-ID: Hey Goutham! Here is the form: https://docs.google.com/forms/d/e/1FAIpQLSdehzBYqJeJ8x4RlPvQjTZpJ-LXs2A9vPrmRUPZNdawn1LgMg/viewform -Kendall (diablo_rojo) On Tue, Sep 1, 2020 at 6:56 PM Goutham Pacha Ravi wrote: > Hi Kendall, > > We'd like to help and have help in the manila team. We have a few projects > [1] where the on-ramp may be relatively easy - I can work with you and > define them. How do we apply? > > Thanks, > Goutham > > > [1] https://etherpad.opendev.org/p/manila-todos > > > > > > On Tue, Sep 1, 2020 at 9:08 AM Kendall Nelson > wrote: > >> Hello! >> >> As you may or may not know, the last two years various community members >> have mentored students from North Dakota State University for a semester to >> work on projects in OpenStack. Recently, I learned of a similar program at >> Boston University and they are still looking for mentors interested for the >> upcoming semester. >> >> Essentially you would have 5 to 7 students for 13 weeks to mentor and >> work on some feature or effort in your project. 
>> >> The time to apply is running out however as the deadline is Sept 3rd. If >> you are interested, please let me know ASAP! I am happy to help get the >> students up to speed with the community and getting their workspaces set >> up, but the actual work they would do is more up to you :) >> >> -Kendall (diablo_rojo) >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Wed Sep 2 17:19:24 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 2 Sep 2020 10:19:24 -0700 Subject: [tacker] Wallaby vPTG In-Reply-To: References: Message-ID: Hello Yasufumi! Can I get you to fill out the survey for Tacker as well? https://openstackfoundation.formstack.com/forms/oct2020_vptg_survey Thanks! -Kendall (diablo_rojo) On Mon, Aug 31, 2020 at 10:25 PM Yasufumi Ogawa wrote: > Hi team, > > I booked our slots, 27-29 Oct 6am-8am UTC, for the next vPTG[1] as we > agreed in previous irc meeting. I also prepared an etherpad [2], so > please add your name and suggestions. > > [1] https://ethercalc.openstack.org/7xp2pcbh1ncb > [2] https://etherpad.opendev.org/p/Tacker-PTG-Wallaby > > Thanks, > Yasufumi > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zaitcev at redhat.com Wed Sep 2 18:10:14 2020 From: zaitcev at redhat.com (Pete Zaitcev) Date: Wed, 2 Sep 2020 13:10:14 -0500 Subject: [tripleo, ironic] Error: Could not retrieve ... pxelinux.0 In-Reply-To: References: <20200828144844.7787707d@suzdal.zaitcev.lan> Message-ID: <20200902131014.59f2553d@suzdal.zaitcev.lan> Dear Alex, thanks for the reply. In my case, the undercloud is running on CenOS 8: [stack at undercloud ~]$ cat /etc/redhat-release CentOS Linux release 8.2.2004 (Core) What I'm trying to install is "upstream TripleO", if such a thing even exists. The packages are built for RHEL 8/CentOS 8, at least: [stack at undercloud ~]$ rpm -qf /usr/bin/openstack python3-openstackclient-4.0.1-0.20200817052906.bff556c.el8.noarch [stack at undercloud ~]$ rpm -qf /usr/share/python-tripleoclient/undercloud.conf.sample python3-tripleoclient-12.3.2-0.20200820055917.c15d0d0.el8.noarch I'm most curious about what a TripleO developer would do in this case. Surely there's a way to have some local repository of some yet unknown component, which I can then instrument or modify, and which is responsible for dealing with PXE. But what is it? Yours, -- Pete On Fri, 28 Aug 2020 14:00:11 -0600 Alex Schultz wrote: > I've seen this in the past if there is a mismatch between the host OS > and the Containers. Centos7 host with centos8 containers or vice > versa. Ussuri should be CentOS8 host OS and make sure you're pulling > the correct containers. The Ironic containers have some pathing > mismatches when the configuration gets generated around this. It used > to be compatible but we broke it at some point when switching some of > the tftp location bits. > > Thanks, > -Alex > > On Fri, Aug 28, 2020 at 1:55 PM Pete Zaitcev wrote: > > > > Hello: > > > > I wanted to give the TripleO a try, so started follow our > > installation guide for Ussuri, and eventually made it to > > "openstack undercloud install". 
It fails with something like this: > > > > Aug 28 10:10:53 undercloud puppet-user[48657]: Error: /Stage[main]/Ironic::Pxe/File[/var/lib/ironic/tftpboot/ipxe.efi]: Could not evaluate: Could not retrieve information from environment production source(s) file:/usr/share/ipxe/ipxe-x86_64.efi > > Aug 27 20:05:42 undercloud puppet-user[37048]: Error: /Stage[main]/Ironic::Pxe/Ironic::Pxe::Tftpboot_file[pxelinux.0]/File[/var/lib/ironic/tftpboot/pxelinux.0]: Could not evaluate: Could not retrieve information from environment production source(s) file:/tftpboot/pxelinux.0 > > > > Does anyone have an idea what it wants? > > > > I added a couple of packages on the host system that provided > > the files mentioned in the message, but it made no difference. > > Ussuri is conteinerized anyway. > > > > Since I'm very new to this, I have no clue where to look at all. > > The nearest task is a wrapper of some kind, so the install-undercloud.log > > looks like this: > > > > 2020-08-28 14:11:31.397 60599 WARNING tripleoclient.v1.tripleo_deploy.Deploy [ ] TASK [Run container-puppet tasks (generate config) during step 1 with paunch] *** > > 2020-08-28 14:11:31.397 60599 WARNING tripleoclient.v1.tripleo_deploy.Deploy [ ] Friday 28 August 2020 14:11:31 -0400 (0:00:00.302) 0:06:28.734 ********* > > 2020-08-28 14:11:32.223 60599 WARNING tripleoclient.v1.tripleo_deploy.Deploy [ ] changed: [undercloud] > > 2020-08-28 14:11:32.325 60599 WARNING tripleoclient.v1.tripleo_deploy.Deploy [ ] > > 2020-08-28 14:11:32.326 60599 WARNING tripleoclient.v1.tripleo_deploy.Deploy [ ] TASK [Wait for container-puppet tasks (generate config) to finish] ************* > > 2020-08-28 14:11:32.326 60599 WARNING tripleoclient.v1.tripleo_deploy.Deploy [ ] Friday 28 August 2020 14:11:32 -0400 (0:00:00.928) 0:06:29.663 ********* > > 2020-08-28 14:11:32.948 60599 WARNING tripleoclient.v1.tripleo_deploy.Deploy [ ] WAITING FOR COMPLETION: Wait for container-puppet tasks (generate config) to finish (1200 retries left). > > . . . > > > > If anyone could tell roughly what is supposed to be going on here, > > it would be great. I may be able figure out the rest. > > > > Greetings, > > -- Pete From aschultz at redhat.com Wed Sep 2 18:28:26 2020 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 2 Sep 2020 12:28:26 -0600 Subject: [tripleo, ironic] Error: Could not retrieve ... pxelinux.0 In-Reply-To: <20200902131014.59f2553d@suzdal.zaitcev.lan> References: <20200828144844.7787707d@suzdal.zaitcev.lan> <20200902131014.59f2553d@suzdal.zaitcev.lan> Message-ID: On Wed, Sep 2, 2020 at 12:10 PM Pete Zaitcev wrote: > > Dear Alex, > > thanks for the reply. In my case, the undercloud is running on CenOS 8: > > [stack at undercloud ~]$ cat /etc/redhat-release > CentOS Linux release 8.2.2004 (Core) > > What I'm trying to install is "upstream TripleO", if such a thing > even exists. The packages are built for RHEL 8/CentOS 8, at least: > > [stack at undercloud ~]$ rpm -qf /usr/bin/openstack > python3-openstackclient-4.0.1-0.20200817052906.bff556c.el8.noarch > [stack at undercloud ~]$ rpm -qf /usr/share/python-tripleoclient/undercloud.conf.sample > python3-tripleoclient-12.3.2-0.20200820055917.c15d0d0.el8.noarch > > I'm most curious about what a TripleO developer would do in this case. > Surely there's a way to have some local repository of some yet unknown > component, which I can then instrument or modify, and which is > responsible for dealing with PXE. But what is it? 
> Since this is a puppet error, looking in puppet ironic for what to fix/address would be the first place to do this. Since we mount the puppet modules from the host system, you can debug by modifying the local modules under /usr/share/openstack-puppet/modules. In this particular error, it's trying to copy the ipxe file from /usr/share/ipxe/ in the container to a different folder. It seems like either the ipxe location has changed or the package is not installed in the container. You could launch an instance to inspect the contents of the container to troubleshoot. That being said, since we're not seeing this in any of the CI jobs, I wonder what your configuration(s) looks like and if you are pulling the correct containers/repos/etc. > Yours, > -- Pete > > On Fri, 28 Aug 2020 14:00:11 -0600 > Alex Schultz wrote: > > > I've seen this in the past if there is a mismatch between the host OS > > and the Containers. Centos7 host with centos8 containers or vice > > versa. Ussuri should be CentOS8 host OS and make sure you're pulling > > the correct containers. The Ironic containers have some pathing > > mismatches when the configuration gets generated around this. It used > > to be compatible but we broke it at some point when switching some of > > the tftp location bits. > > > > Thanks, > > -Alex > > > > On Fri, Aug 28, 2020 at 1:55 PM Pete Zaitcev wrote: > > > > > > Hello: > > > > > > I wanted to give the TripleO a try, so started follow our > > > installation guide for Ussuri, and eventually made it to > > > "openstack undercloud install". It fails with something like this: > > > > > > Aug 28 10:10:53 undercloud puppet-user[48657]: Error: /Stage[main]/Ironic::Pxe/File[/var/lib/ironic/tftpboot/ipxe.efi]: Could not evaluate: Could not retrieve information from environment production source(s) file:/usr/share/ipxe/ipxe-x86_64.efi > > > Aug 27 20:05:42 undercloud puppet-user[37048]: Error: /Stage[main]/Ironic::Pxe/Ironic::Pxe::Tftpboot_file[pxelinux.0]/File[/var/lib/ironic/tftpboot/pxelinux.0]: Could not evaluate: Could not retrieve information from environment production source(s) file:/tftpboot/pxelinux.0 > > > > > > Does anyone have an idea what it wants? > > > > > > I added a couple of packages on the host system that provided > > > the files mentioned in the message, but it made no difference. > > > Ussuri is conteinerized anyway. > > > > > > Since I'm very new to this, I have no clue where to look at all. 
> > > The nearest task is a wrapper of some kind, so the install-undercloud.log > > > looks like this: > > > > > > 2020-08-28 14:11:31.397 60599 WARNING tripleoclient.v1.tripleo_deploy.Deploy [ ] TASK [Run container-puppet tasks (generate config) during step 1 with paunch] *** > > > 2020-08-28 14:11:31.397 60599 WARNING tripleoclient.v1.tripleo_deploy.Deploy [ ] Friday 28 August 2020 14:11:31 -0400 (0:00:00.302) 0:06:28.734 ********* > > > 2020-08-28 14:11:32.223 60599 WARNING tripleoclient.v1.tripleo_deploy.Deploy [ ] changed: [undercloud] > > > 2020-08-28 14:11:32.325 60599 WARNING tripleoclient.v1.tripleo_deploy.Deploy [ ] > > > 2020-08-28 14:11:32.326 60599 WARNING tripleoclient.v1.tripleo_deploy.Deploy [ ] TASK [Wait for container-puppet tasks (generate config) to finish] ************* > > > 2020-08-28 14:11:32.326 60599 WARNING tripleoclient.v1.tripleo_deploy.Deploy [ ] Friday 28 August 2020 14:11:32 -0400 (0:00:00.928) 0:06:29.663 ********* > > > 2020-08-28 14:11:32.948 60599 WARNING tripleoclient.v1.tripleo_deploy.Deploy [ ] WAITING FOR COMPLETION: Wait for container-puppet tasks (generate config) to finish (1200 retries left). > > > . . . > > > > > > If anyone could tell roughly what is supposed to be going on here, > > > it would be great. I may be able figure out the rest. > > > > > > Greetings, > > > -- Pete > From mnaser at vexxhost.com Wed Sep 2 19:06:42 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 2 Sep 2020 15:06:42 -0400 Subject: [tc] monthly meeting Message-ID: Hi everyone, Here’s the agenda for our monthly TC meeting. It will happen tomorrow (Thursday the 3rd) at 1400 UTC in #openstack-tc and I will be your chair. If you can’t attend, please put your name in the “Apologies for Absence” section. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting ## ACTIVE INITIATIVES * Follow up on past action items * OpenStack User-facing APIs and CLIs (belmoreira) * W cycle goal selection start * Completion of retirement cleanup (gmann): https://etherpad.opendev.org/p/tc-retirement-cleanup * Audit and clean-up tags (gmann) + Remove tc:approved-release tag https://review.opendev.org/#/c/749363 Thank you, Mohammed -- Mohammed Naser VEXXHOST, Inc. From whayutin at redhat.com Wed Sep 2 19:33:03 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 2 Sep 2020 13:33:03 -0600 Subject: [tripleo] docker.io rate limiting In-Reply-To: References: Message-ID: On Wed, Sep 2, 2020 at 8:18 AM Ruslanas Gžibovskis wrote: > Sorry for the stupid question, but maybe there is some parameter for > tripleo deployment not to generate and download images from docker io each > time? since I already have it downloaded and working? > > Or, as I understand, I should be able to create my own snapshot of images > and specify it as a source? > Yes, as a user you can download the images and push into your own local registry and then specify your custom registry in the container-prepare-parameters.yaml file. > > On Wed, 2 Sep 2020 at 13:58, Wesley Hayutin wrote: > >> Greetings, >> >> Some of you have contacted me regarding the recent news regarding >> docker.io's new policy with regards to container pull rate limiting >> [1]. I wanted to take the opportunity to further socialize our plan that >> will completely remove docker.io from our upstream workflows and avoid >> any rate limiting issues. >> >> We will continue to upload containers to docker.io for some time so that >> individuals and the community can access the containers. 
We will also >> start exploring other registries like quay and newly announced github >> container registry. These other public registries will NOT be used in our >> upstream jobs and will only serve the communities individual contributors. >> >> Our test jobs have been successful and patches are starting to merge to >> convert our upstream jobs and remove docker.io from our upstream >> workflow. [2]. >> >> Standalone and multinode jobs are working quite well. We are doing some >> design work around branchful, update/upgrade jobs at this time. >> >> Thanks 0/ >> >> >> [1] https://hackmd.io/ermQSlQ-Q-mDtZkNN2oihQ >> [2] >> https://review.opendev.org/#/q/topic:new-ci-job+(status:open+OR+status:merged) >> > > > -- > Ruslanas Gžibovskis > +370 6030 7030 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yongiman at gmail.com Thu Sep 3 00:18:28 2020 From: yongiman at gmail.com (=?UTF-8?B?7ZWc7Iq57KeE?=) Date: Thu, 3 Sep 2020 09:18:28 +0900 Subject: [ovn][neutron][ussuri] How to configure for overlay in DPDK with OVN ? In-Reply-To: References: Message-ID: Hi, Sean Thank you for your reply. So I should assign an ip address to the br-int bridge, if I am understanding correctly. Thanks, John Haan. 2020년 9월 2일 (수) 오후 9:59, Sean Mooney 님이 작성: > On Wed, 2020-09-02 at 16:23 +0900, 한승진 wrote: > > Hi, > > > > > > I’m trying to implement dpdk-ovs with ovn for our openstack environment. > > > > > > I create a bond interface(tunbound) with ovs and tun0 and tun1 are slave > > interfaces of the bond interface named tunbond. > > > > > > Bridge br-int > > > > fail_mode: secure > > > > datapath_type: netdev > > > > Port tunbond > > > > Interface tun0 > > > > type: dpdk > > > > options: {dpdk-devargs="0000:1a:00.1", n_rxq="2"} > > > > Interface tun1 > > > > type: dpdk > > > > options: {dpdk-devargs="0000:d8:00.1", n_rxq="2"} > > > > Port br-int > > > > Interface br-int > > > > type: internal > > > > Now, I need to configure remote-ovn options to the ovs. > > > > > > *I am confused how can I define the IP address of the bonded interface > with > > ovs?* > you dont you set the tunnel local ip on the ovs bridge containing the bond. > when a inteface is managed by ovs (with the excption of the bridge port or > type internal ports) > asinging an ip to the interface will do nothing as ovs hooks the packets > beofre they reach the ip layer of the kernel > networking stack. > > so to make sure that the kernel routes the network packets vi the ovs > bridge you need to assign the tunnel local ip to > the ovs bridge with the bound. so br-int in this case > > this will cause that bridge to respond to arps form the tunnel local ip > and that will cause the back leaning table in > ovs to be populated correctly with the remote mac address for the remote > tunnel endpoints via the bound. > > if you use ovs-appctl and look at the dataplane(not oepnflow) flows you > will see that the tunnel endcap flow is followed > by an out_port action instead of an output action which renques the packet > as if it arrived on the port > so when the packet is encaped with the geneve header it will be reporced > as if it came form the bridge local port and > then mac learing/the normal action woudl forward it to the bond. at least > that is how it would work for ml2/ovs > > if we do not have the normal action for that case we would need explcit > openflow rules to send the packet to the bound. 
> > if you dont have the local tunnel endpoint ip on the brige however the > tunnel traffic will not be dpdk acclerated so > that is important to have set correctly. > > > > > > Theses are commands for connect remote ovn services. > > > > > > ovs-vsctl set open . external-ids:ovn-remote=tcp:{controller ip}:6642 > > > > ovs-vsctl set open . external-ids:ovn-encap-type=geneve > > > > *1* > > > > > > This below command make it me confused. > > > > > > ovs-vsctl set open . external-ids:ovn-encap-ip={local ip} > > > > > > How should I resolve this issue? > > > > > > Regards, > > > > > > John Haan > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Thu Sep 3 04:30:51 2020 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 3 Sep 2020 00:30:51 -0400 Subject: senlin auto scaling question In-Reply-To: References: Message-ID: Mohammed, Dis-regard my earlier emails. i found senlin does auto-healing. you need to create a health policy and attach it to your cluster. Here is my policy I created to monitor nodes' heath and if for some reason it dies or crashes, senlin will auto create that instance to fulfill the need. type: senlin.policy.health version: 1.1 description: A policy for maintaining node health from a cluster. properties: detection: # Number of seconds between two adjacent checking interval: 60 detection_modes: # Type for health checking, valid values include: # NODE_STATUS_POLLING, NODE_STATUS_POLL_URL, LIFECYCLE_EVENTS - type: NODE_STATUS_POLLING recovery: # Action that can be retried on a failed node, will improve to # support multiple actions in the future. Valid values include: # REBOOT, REBUILD, RECREATE actions: - name: RECREATE ** Here is the POC [root at os-infra-1-utility-container-e139058e ~]# nova list +--------------------------------------+---------------+--------+------------+-------------+-------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+---------------+--------+------------+-------------+-------------------+ | 38ba7f7c-2f5f-4502-a5d0-6c4841d6d145 | cirros_server | ACTIVE | - | Running | net1=192.168.1.26 | | ba55deb6-9488-4455-a472-a0a957cb388a | cirros_server | ACTIVE | - | Running | net1=192.168.1.14 | +--------------------------------------+---------------+--------+------------+-------------+-------------------+ ** Lets delete one of the nodes. [root at os-infra-1-utility-container-e139058e ~]# nova delete ba55deb6-9488-4455-a472-a0a957cb388a Request to delete server ba55deb6-9488-4455-a472-a0a957cb388a has been accepted. ** After a few min i can see RECOVERING nodes. 
[root at os-infra-1-utility-container-e139058e ~]# openstack cluster node list +----------+---------------+-------+------------+------------+-------------+--------------+----------------------+----------------------+---------+ | id | name | index | status | cluster_id | physical_id | profile_name | created_at | updated_at | tainted | +----------+---------------+-------+------------+------------+-------------+--------------+----------------------+----------------------+---------+ | d4a8f219 | node-YPsjB6bV | 6 | RECOVERING | 091fbd52 | ba55deb6 | myserver | 2020-09-02T21:01:47Z | 2020-09-03T04:01:58Z | False | | bc50c0b9 | node-hoiHkRcS | 7 | ACTIVE | 091fbd52 | 38ba7f7c | myserver | 2020-09-03T03:40:29Z | 2020-09-03T03:57:58Z | False | +----------+---------------+-------+------------+------------+-------------+--------------+----------------------+----------------------+---------+ ** Finally it's up and running with a new ip address. [root at os-infra-1-utility-container-e139058e ~]# nova list +--------------------------------------+---------------+--------+------------+-------------+-------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+---------------+--------+------------+-------------+-------------------+ | 38ba7f7c-2f5f-4502-a5d0-6c4841d6d145 | cirros_server | ACTIVE | - | Running | net1=192.168.1.26 | | 73a658cd-c40a-45d8-9b57-cc9e6c2b4dc1 | cirros_server | ACTIVE | - | Running | net1=192.168.1.17 | +--------------------------------------+---------------+--------+------------+-------------+-------------------+ On Tue, Sep 1, 2020 at 8:51 AM Mohammed Naser wrote: > > Hi Satish, > > I'm interested by this, did you end up finding a solution for this? > > Thanks, > Mohammed > > On Thu, Aug 27, 2020 at 1:54 PM Satish Patel wrote: > > > > Folks, > > > > I have created very simple cluster using following command > > > > openstack cluster create --profile myserver --desired-capacity 2 > > --min-size 2 --max-size 3 --strict my-asg > > > > It spun up 2 vm immediately now because the desired capacity is 2 so I > > am assuming if any node dies in the cluster it should spin up node to > > make count 2 right? > > > > so i killed one of node with "nove delete " but > > senlin didn't create node automatically to make desired capacity 2 (In > > AWS when you kill node in ASG it will create new node so is this > > senlin different then AWS?) > > > > > -- > Mohammed Naser > VEXXHOST, Inc. From duc.openstack at gmail.com Thu Sep 3 05:51:37 2020 From: duc.openstack at gmail.com (Duc Truong) Date: Wed, 2 Sep 2020 22:51:37 -0700 Subject: senlin auto scaling question In-Reply-To: References: Message-ID: Satish, I’m glad you were able to find the answer. Just to clarify, your original email mentioned auto scaling in its title. Auto scaling means creating or deleting nodes as load goes up or down. Senlin supports scaling clusters but requires another service to perform the decision making and triggering of the scaling (i.e. the auto in auto scaling). But as you correctly pointed out auto healing is fully supported by Senlin on its own with its health policy. Duc Truong On Wed, Sep 2, 2020 at 9:31 PM Satish Patel wrote: > Mohammed, > > Dis-regard my earlier emails. i found senlin does auto-healing. you > need to create a health policy and attach it to your cluster. > > Here is my policy I created to monitor nodes' heath and if for some > reason it dies or crashes, senlin will auto create that instance to > fulfill the need. 
> > type: senlin.policy.health > version: 1.1 > description: A policy for maintaining node health from a cluster. > properties: > detection: > # Number of seconds between two adjacent checking > interval: 60 > > detection_modes: > # Type for health checking, valid values include: > # NODE_STATUS_POLLING, NODE_STATUS_POLL_URL, LIFECYCLE_EVENTS > - type: NODE_STATUS_POLLING > > recovery: > # Action that can be retried on a failed node, will improve to > # support multiple actions in the future. Valid values include: > # REBOOT, REBUILD, RECREATE > actions: > - name: RECREATE > > > ** Here is the POC > > [root at os-infra-1-utility-container-e139058e ~]# nova list > > +--------------------------------------+---------------+--------+------------+-------------+-------------------+ > | ID | Name | Status | Task > State | Power State | Networks | > > +--------------------------------------+---------------+--------+------------+-------------+-------------------+ > | 38ba7f7c-2f5f-4502-a5d0-6c4841d6d145 | cirros_server | ACTIVE | - > | Running | net1=192.168.1.26 | > | ba55deb6-9488-4455-a472-a0a957cb388a | cirros_server | ACTIVE | - > | Running | net1=192.168.1.14 | > > +--------------------------------------+---------------+--------+------------+-------------+-------------------+ > > ** Lets delete one of the nodes. > > [root at os-infra-1-utility-container-e139058e ~]# nova delete > ba55deb6-9488-4455-a472-a0a957cb388a > Request to delete server ba55deb6-9488-4455-a472-a0a957cb388a has been > accepted. > > ** After a few min i can see RECOVERING nodes. > > [root at os-infra-1-utility-container-e139058e ~]# openstack cluster node > list > > +----------+---------------+-------+------------+------------+-------------+--------------+----------------------+----------------------+---------+ > | id | name | index | status | cluster_id | > physical_id | profile_name | created_at | updated_at > | tainted | > > +----------+---------------+-------+------------+------------+-------------+--------------+----------------------+----------------------+---------+ > | d4a8f219 | node-YPsjB6bV | 6 | RECOVERING | 091fbd52 | > ba55deb6 | myserver | 2020-09-02T21:01:47Z | > 2020-09-03T04:01:58Z | False | > | bc50c0b9 | node-hoiHkRcS | 7 | ACTIVE | 091fbd52 | > 38ba7f7c | myserver | 2020-09-03T03:40:29Z | > 2020-09-03T03:57:58Z | False | > > +----------+---------------+-------+------------+------------+-------------+--------------+----------------------+----------------------+---------+ > > ** Finally it's up and running with a new ip address. > > [root at os-infra-1-utility-container-e139058e ~]# nova list > > +--------------------------------------+---------------+--------+------------+-------------+-------------------+ > | ID | Name | Status | Task > State | Power State | Networks | > > +--------------------------------------+---------------+--------+------------+-------------+-------------------+ > | 38ba7f7c-2f5f-4502-a5d0-6c4841d6d145 | cirros_server | ACTIVE | - > | Running | net1=192.168.1.26 | > | 73a658cd-c40a-45d8-9b57-cc9e6c2b4dc1 | cirros_server | ACTIVE | - > | Running | net1=192.168.1.17 | > > +--------------------------------------+---------------+--------+------------+-------------+-------------------+ > > On Tue, Sep 1, 2020 at 8:51 AM Mohammed Naser wrote: > > > > Hi Satish, > > > > I'm interested by this, did you end up finding a solution for this? 
> > > > Thanks, > > Mohammed > > > > On Thu, Aug 27, 2020 at 1:54 PM Satish Patel > wrote: > > > > > > Folks, > > > > > > I have created very simple cluster using following command > > > > > > openstack cluster create --profile myserver --desired-capacity 2 > > > --min-size 2 --max-size 3 --strict my-asg > > > > > > It spun up 2 vm immediately now because the desired capacity is 2 so I > > > am assuming if any node dies in the cluster it should spin up node to > > > make count 2 right? > > > > > > so i killed one of node with "nove delete " but > > > senlin didn't create node automatically to make desired capacity 2 (In > > > AWS when you kill node in ASG it will create new node so is this > > > senlin different then AWS?) > > > > > > > > > -- > > Mohammed Naser > > VEXXHOST, Inc. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjeanner at redhat.com Thu Sep 3 05:54:22 2020 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Thu, 3 Sep 2020 07:54:22 +0200 Subject: [tripleo] docker.io rate limiting In-Reply-To: References: Message-ID: <01b6ff63-1929-599f-a492-4837edc44312@redhat.com> On 9/2/20 9:33 PM, Wesley Hayutin wrote: > > > On Wed, Sep 2, 2020 at 8:18 AM Ruslanas Gžibovskis > wrote: > > Sorry for the stupid question, but maybe there is some parameter for > tripleo deployment not to generate and download images from docker > io each time? since I already have it downloaded and working? > > Or, as I understand, I should be able to create my own snapshot of > images and specify it as a source? > > > Yes, as a user you can download the images and push into your own local > registry and then specify your custom registry in the > container-prepare-parameters.yaml file.  that's basically what I'm doing at home, in order to avoid the network overhead when deploying N times. Now, there's a new thing with github that could also be leveraged at some point: https://github.blog/2020-09-01-introducing-github-container-registry/ Though the solution proposed by Wes and his Team will be more efficient imho - fresh build of containers within CI makes perfectly sense. And using TCIB[1] for that task also provides a new layer of CI for this central tool, which is just about perfect! Cheers, C. [1] https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/3rd_party.html#building-new-containers-with-tripleo-container-image-build > >   > > > On Wed, 2 Sep 2020 at 13:58, Wesley Hayutin > wrote: > > Greetings, > > Some of you have contacted me regarding the recent news > regarding docker.io 's new policy with regards > to container pull rate limiting [1].  I wanted to take the > opportunity to further socialize our plan that will completely > remove docker.io from our upstream workflows > and avoid any rate limiting issues. > > We will continue to upload containers to docker.io > for some time so that individuals and the > community can access the containers.  We will also start > exploring other registries like quay and newly announced github > container registry. These other public registries will NOT be > used in our upstream jobs and will only serve the communities > individual contributors. > > Our test jobs have been successful and patches are starting to > merge to convert our upstream jobs and remove docker.io > from our upstream workflow.  [2]. > > Standalone and multinode jobs are working quite well.  We are > doing some design work around branchful, update/upgrade jobs at > this time. 
> > Thanks 0/ > > > [1] https://hackmd.io/ermQSlQ-Q-mDtZkNN2oihQ > [2] https://review.opendev.org/#/q/topic:new-ci-job+(status:open+OR+status:merged) > > > > -- > Ruslanas Gžibovskis > +370 6030 7030 > -- Cédric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From yasufum.o at gmail.com Thu Sep 3 06:02:05 2020 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Thu, 3 Sep 2020 15:02:05 +0900 Subject: [tacker] Wallaby vPTG In-Reply-To: References: Message-ID: Hi Kendall, Thanks for the notice! I submitted my survey. Yasufumi On 2020/09/03 2:19, Kendall Nelson wrote: > Hello Yasufumi! > > Can I get you to fill out the survey for Tacker as well? > > https://openstackfoundation.formstack.com/forms/oct2020_vptg_survey > > Thanks! > > -Kendall (diablo_rojo) > > On Mon, Aug 31, 2020 at 10:25 PM Yasufumi Ogawa > wrote: > > Hi team, > > I booked our slots, 27-29 Oct 6am-8am UTC, for the next vPTG[1] as we > agreed in previous irc meeting. I also prepared an etherpad [2], so > please add your name and suggestions. > > [1] https://ethercalc.openstack.org/7xp2pcbh1ncb > [2] https://etherpad.opendev.org/p/Tacker-PTG-Wallaby > > Thanks, > Yasufumi > From satish.txt at gmail.com Thu Sep 3 06:09:29 2020 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 3 Sep 2020 02:09:29 -0400 Subject: senlin auto scaling question In-Reply-To: References: Message-ID: <4CD8DEAE-CF2D-42CB-BCAF-D8BB08B0DD54@gmail.com> Sorry about email subject title if that got you confused. But look like now we are on same page :) Sent from my iPhone > On Sep 3, 2020, at 1:51 AM, Duc Truong wrote: > >  > Satish, > > I’m glad you were able to find the answer. Just to clarify, your original email mentioned auto scaling in its title. Auto scaling means creating or deleting nodes as load goes up or down. Senlin supports scaling clusters but requires another service to perform the decision making and triggering of the scaling (i.e. the auto in auto scaling). > > But as you correctly pointed out auto healing is fully supported by Senlin on its own with its health policy. > > Duc Truong > > >> On Wed, Sep 2, 2020 at 9:31 PM Satish Patel wrote: >> Mohammed, >> >> Dis-regard my earlier emails. i found senlin does auto-healing. you >> need to create a health policy and attach it to your cluster. >> >> Here is my policy I created to monitor nodes' heath and if for some >> reason it dies or crashes, senlin will auto create that instance to >> fulfill the need. >> >> type: senlin.policy.health >> version: 1.1 >> description: A policy for maintaining node health from a cluster. >> properties: >> detection: >> # Number of seconds between two adjacent checking >> interval: 60 >> >> detection_modes: >> # Type for health checking, valid values include: >> # NODE_STATUS_POLLING, NODE_STATUS_POLL_URL, LIFECYCLE_EVENTS >> - type: NODE_STATUS_POLLING >> >> recovery: >> # Action that can be retried on a failed node, will improve to >> # support multiple actions in the future. 
Valid values include: >> # REBOOT, REBUILD, RECREATE >> actions: >> - name: RECREATE >> >> >> ** Here is the POC >> >> [root at os-infra-1-utility-container-e139058e ~]# nova list >> +--------------------------------------+---------------+--------+------------+-------------+-------------------+ >> | ID | Name | Status | Task >> State | Power State | Networks | >> +--------------------------------------+---------------+--------+------------+-------------+-------------------+ >> | 38ba7f7c-2f5f-4502-a5d0-6c4841d6d145 | cirros_server | ACTIVE | - >> | Running | net1=192.168.1.26 | >> | ba55deb6-9488-4455-a472-a0a957cb388a | cirros_server | ACTIVE | - >> | Running | net1=192.168.1.14 | >> +--------------------------------------+---------------+--------+------------+-------------+-------------------+ >> >> ** Lets delete one of the nodes. >> >> [root at os-infra-1-utility-container-e139058e ~]# nova delete >> ba55deb6-9488-4455-a472-a0a957cb388a >> Request to delete server ba55deb6-9488-4455-a472-a0a957cb388a has been accepted. >> >> ** After a few min i can see RECOVERING nodes. >> >> [root at os-infra-1-utility-container-e139058e ~]# openstack cluster node list >> +----------+---------------+-------+------------+------------+-------------+--------------+----------------------+----------------------+---------+ >> | id | name | index | status | cluster_id | >> physical_id | profile_name | created_at | updated_at >> | tainted | >> +----------+---------------+-------+------------+------------+-------------+--------------+----------------------+----------------------+---------+ >> | d4a8f219 | node-YPsjB6bV | 6 | RECOVERING | 091fbd52 | >> ba55deb6 | myserver | 2020-09-02T21:01:47Z | >> 2020-09-03T04:01:58Z | False | >> | bc50c0b9 | node-hoiHkRcS | 7 | ACTIVE | 091fbd52 | >> 38ba7f7c | myserver | 2020-09-03T03:40:29Z | >> 2020-09-03T03:57:58Z | False | >> +----------+---------------+-------+------------+------------+-------------+--------------+----------------------+----------------------+---------+ >> >> ** Finally it's up and running with a new ip address. >> >> [root at os-infra-1-utility-container-e139058e ~]# nova list >> +--------------------------------------+---------------+--------+------------+-------------+-------------------+ >> | ID | Name | Status | Task >> State | Power State | Networks | >> +--------------------------------------+---------------+--------+------------+-------------+-------------------+ >> | 38ba7f7c-2f5f-4502-a5d0-6c4841d6d145 | cirros_server | ACTIVE | - >> | Running | net1=192.168.1.26 | >> | 73a658cd-c40a-45d8-9b57-cc9e6c2b4dc1 | cirros_server | ACTIVE | - >> | Running | net1=192.168.1.17 | >> +--------------------------------------+---------------+--------+------------+-------------+-------------------+ >> >> On Tue, Sep 1, 2020 at 8:51 AM Mohammed Naser wrote: >> > >> > Hi Satish, >> > >> > I'm interested by this, did you end up finding a solution for this? >> > >> > Thanks, >> > Mohammed >> > >> > On Thu, Aug 27, 2020 at 1:54 PM Satish Patel wrote: >> > > >> > > Folks, >> > > >> > > I have created very simple cluster using following command >> > > >> > > openstack cluster create --profile myserver --desired-capacity 2 >> > > --min-size 2 --max-size 3 --strict my-asg >> > > >> > > It spun up 2 vm immediately now because the desired capacity is 2 so I >> > > am assuming if any node dies in the cluster it should spin up node to >> > > make count 2 right? 
>> > > >> > > so i killed one of node with "nove delete " but >> > > senlin didn't create node automatically to make desired capacity 2 (In >> > > AWS when you kill node in ASG it will create new node so is this >> > > senlin different then AWS?) >> > > >> > >> > >> > -- >> > Mohammed Naser >> > VEXXHOST, Inc. >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Thu Sep 3 06:10:43 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Thu, 3 Sep 2020 09:10:43 +0300 Subject: [tripleo] docker.io rate limiting In-Reply-To: <01b6ff63-1929-599f-a492-4837edc44312@redhat.com> References: <01b6ff63-1929-599f-a492-4837edc44312@redhat.com> Message-ID: I am a complete noob in containers, and especially in images. is there a small "howto" get all OSP related images and (upload to local storage is podman pull docker.io/tripleou/*:current-tripelo ), but how to get a full list? Cause when I specify undercloud itself :) it do not have ceilometer-compute, but I have ceilometer disabled, so I believe this is why it did not download that image. but in general, as I understood, it checks all and then selects what it needs? On Thu, 3 Sep 2020 at 08:57, Cédric Jeanneret wrote: > > > On 9/2/20 9:33 PM, Wesley Hayutin wrote: > > > > > > On Wed, Sep 2, 2020 at 8:18 AM Ruslanas Gžibovskis > > wrote: > > > > Sorry for the stupid question, but maybe there is some parameter for > > tripleo deployment not to generate and download images from docker > > io each time? since I already have it downloaded and working? > > > > Or, as I understand, I should be able to create my own snapshot of > > images and specify it as a source? > > > > > > Yes, as a user you can download the images and push into your own local > > registry and then specify your custom registry in the > > container-prepare-parameters.yaml file. > > that's basically what I'm doing at home, in order to avoid the network > overhead when deploying N times. > > Now, there's a new thing with github that could also be leveraged at > some point: > https://github.blog/2020-09-01-introducing-github-container-registry/ > > Though the solution proposed by Wes and his Team will be more efficient > imho - fresh build of containers within CI makes perfectly sense. And > using TCIB[1] for that task also provides a new layer of CI for this > central tool, which is just about perfect! > > Cheers, > > C. > > [1] > > https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/3rd_party.html#building-new-containers-with-tripleo-container-image-build > > > > > > > > > > > On Wed, 2 Sep 2020 at 13:58, Wesley Hayutin > > wrote: > > > > Greetings, > > > > Some of you have contacted me regarding the recent news > > regarding docker.io 's new policy with regards > > to container pull rate limiting [1]. I wanted to take the > > opportunity to further socialize our plan that will completely > > remove docker.io from our upstream workflows > > and avoid any rate limiting issues. > > > > We will continue to upload containers to docker.io > > for some time so that individuals and the > > community can access the containers. We will also start > > exploring other registries like quay and newly announced github > > container registry. These other public registries will NOT be > > used in our upstream jobs and will only serve the communities > > individual contributors. 
> > > > Our test jobs have been successful and patches are starting to > > merge to convert our upstream jobs and remove docker.io > > from our upstream workflow. [2]. > > > > Standalone and multinode jobs are working quite well. We are > > doing some design work around branchful, update/upgrade jobs at > > this time. > > > > Thanks 0/ > > > > > > [1] https://hackmd.io/ermQSlQ-Q-mDtZkNN2oihQ > > [2] > https://review.opendev.org/#/q/topic:new-ci-job+(status:open+OR+status:merged) > > > > > > > > -- > > Ruslanas Gžibovskis > > +370 6030 7030 > > > > -- > Cédric Jeanneret (He/Him/His) > Sr. Software Engineer - OpenStack Platform > Deployment Framework TC > Red Hat EMEA > https://www.redhat.com/ > > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjeanner at redhat.com Thu Sep 3 09:47:58 2020 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Thu, 3 Sep 2020 11:47:58 +0200 Subject: [tripleo] docker.io rate limiting In-Reply-To: References: Message-ID: <38d87625-d81a-9fb7-f43c-bb75a14e2984@redhat.com> Hey Wes, stupid question: what about the molecule tests? Since they are running within containers (centos-8, centos-7, maybe/probably ubi8 soon), we might hit some limitations there.... Unless we're NOT using docker.io already? Cheers, C. On 9/2/20 1:54 PM, Wesley Hayutin wrote: > Greetings, > > Some of you have contacted me regarding the recent news regarding > docker.io 's new policy with regards to container pull > rate limiting [1].  I wanted to take the opportunity to further > socialize our plan that will completely remove docker.io > from our upstream workflows and avoid any rate > limiting issues. > > We will continue to upload containers to docker.io > for some time so that individuals and the community can access the > containers.  We will also start exploring other registries like quay and > newly announced github container registry. These other public registries > will NOT be used in our upstream jobs and will only serve the > communities individual contributors. > > Our test jobs have been successful and patches are starting to merge to > convert our upstream jobs and remove docker.io from > our upstream workflow.  [2]. > > Standalone and multinode jobs are working quite well.  We are doing some > design work around branchful, update/upgrade jobs at this time. > > Thanks 0/ > > > [1] https://hackmd.io/ermQSlQ-Q-mDtZkNN2oihQ > [2] https://review.opendev.org/#/q/topic:new-ci-job+(status:open+OR+status:merged) -- Cédric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From grant at civo.com Thu Sep 3 09:59:25 2020 From: grant at civo.com (Grant Morley) Date: Thu, 3 Sep 2020 10:59:25 +0100 Subject: Issue with libvirt unable to kill processes Message-ID: <2151e1e5-ffeb-52b2-bd8f-68d2f8d93d36@civo.com> Hi All, I was wondering if anyone has come across an issue with libvirt seemingly having an issue with instances all of a sudden "locking up" with the following error: Failed to terminate process 2263874 with SIGKILL: Device or resource busy In the nova logs I am seeing: 2020-09-03 09:13:43.208 2659995 INFO nova.compute.manager [req-7ffb9b7c-799b-40dc-be04-598bfda2e2fc 6ddb647baf9343b09d7f8f7a32b0b43d 8cce5e1532e6435a90b168077664bbdf - default default] [instance: f3a8c916-28f5-432d-9b8e-c3056d2dee5a] Rebooting instance 2020-09-03 09:15:52.429 2659995 WARNING nova.virt.libvirt.driver [req-7ffb9b7c-799b-40dc-be04-598bfda2e2fc 6ddb647baf9343b09d7f8f7a32b0b43d 8cce5e1532e6435a90b168077664bbdf - default default] [instance: f3a8c916-28f5-432d-9b8e-c3056d2dee5a] Failed to soft reboot instance. Trying hard reboot. 2020-09-03 09:16:32.450 2659995 WARNING nova.virt.libvirt.driver [req-7ffb9b7c-799b-40dc-be04-598bfda2e2fc 6ddb647baf9343b09d7f8f7a32b0b43d 8cce5e1532e6435a90b168077664bbdf - default default] [instance: f3a8c916-28f5-432d-9b8e-c3056d2dee5a] Error from libvirt during destroy. Code=38 Error=Failed to terminate process 2263874 with SIGKILL: Device or resource busy; attempt 1 of 3: libvirtError: Failed to terminate process 2263874 with SIGKILL: Device or resource busy 2020-09-03 09:17:12.484 2659995 WARNING nova.virt.libvirt.driver [req-7ffb9b7c-799b-40dc-be04-598bfda2e2fc 6ddb647baf9343b09d7f8f7a32b0b43d 8cce5e1532e6435a90b168077664bbdf - default default] [instance: f3a8c916-28f5-432d-9b8e-c3056d2dee5a] Error from libvirt during destroy. Code=38 Error=Failed to terminate process 2263874 with SIGKILL: Device or resource busy; attempt 2 of 3: libvirtError: Failed to terminate process 2263874 with SIGKILL: Device or resource busy 2020-09-03 09:17:52.516 2659995 WARNING nova.virt.libvirt.driver [req-7ffb9b7c-799b-40dc-be04-598bfda2e2fc 6ddb647baf9343b09d7f8f7a32b0b43d 8cce5e1532e6435a90b168077664bbdf - default default] [instance: f3a8c916-28f5-432d-9b8e-c3056d2dee5a] Error from libvirt during destroy. Code=38 Error=Failed to terminate process 2263874 with SIGKILL: Device or resource busy; attempt 3 of 3: libvirtError: Failed to terminate process 2263874 with SIGKILL: Device or resource busy 2020-09-03 09:17:52.526 2659995 ERROR nova.compute.manager [req-7ffb9b7c-799b-40dc-be04-598bfda2e2fc 6ddb647baf9343b09d7f8f7a32b0b43d 8cce5e1532e6435a90b168077664bbdf - default default] [instance: f3a8c916-28f5-432d-9b8e-c3056d2dee5a] Cannot reboot instance: Failed to terminate process 2263874 with SIGKILL: Device or resource busy: libvirtError: Failed to terminate process 2263874 with SIGKILL: Device or resource busy 2020-09-03 09:17:53.026 2659995 INFO nova.compute.manager [req-7ffb9b7c-799b-40dc-be04-598bfda2e2fc 6ddb647baf9343b09d7f8f7a32b0b43d 8cce5e1532e6435a90b168077664bbdf - default default] [instance: f3a8c916-28f5-432d-9b8e-c3056d2dee5a] Successfully reverted task state from reboot_started on failure for instance. It seems to be caused when a reboot happens to an instance. If you reset the state and try again, the same error occurs.  You also seemingly cannot kill off any libvirt process that is attached to that instance. 
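For reference, a quick way to confirm what state the stuck process is in (commands are illustrative only; the PID is the one from the log above). A qemu process stuck in uninterruptible sleep ("D" state) cannot be killed from userspace, which would point at the kernel or storage layer rather than libvirt itself:

```
# state and kernel wait channel of the stuck qemu process
ps -o pid,stat,wchan:32,cmd -p 2263874

# file descriptors it still holds open (network volumes, sockets, ...)
ls -l /proc/2263874/fd | head

# libvirt's own view of the domain
virsh list --all
```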
To me it looks like it could be a kernel issue with libvirt but I could be wrong? Does anyone know of a workaround for this other than maybe restarting a compute host? Many thanks, From mnaser at vexxhost.com Thu Sep 3 10:23:12 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 3 Sep 2020 06:23:12 -0400 Subject: senlin auto scaling question In-Reply-To: References: Message-ID: That’s awesome. Thank you. On Thu, Sep 3, 2020 at 12:31 AM Satish Patel wrote: > Mohammed, > > > > Dis-regard my earlier emails. i found senlin does auto-healing. you > > need to create a health policy and attach it to your cluster. > > > > Here is my policy I created to monitor nodes' heath and if for some > > reason it dies or crashes, senlin will auto create that instance to > > fulfill the need. > > > > type: senlin.policy.health > > version: 1.1 > > description: A policy for maintaining node health from a cluster. > > properties: > > detection: > > # Number of seconds between two adjacent checking > > interval: 60 > > > > detection_modes: > > # Type for health checking, valid values include: > > # NODE_STATUS_POLLING, NODE_STATUS_POLL_URL, LIFECYCLE_EVENTS > > - type: NODE_STATUS_POLLING > > > > recovery: > > # Action that can be retried on a failed node, will improve to > > # support multiple actions in the future. Valid values include: > > # REBOOT, REBUILD, RECREATE > > actions: > > - name: RECREATE > > > > > > ** Here is the POC > > > > [root at os-infra-1-utility-container-e139058e ~]# nova list > > > +--------------------------------------+---------------+--------+------------+-------------+-------------------+ > > | ID | Name | Status | Task > > State | Power State | Networks | > > > +--------------------------------------+---------------+--------+------------+-------------+-------------------+ > > | 38ba7f7c-2f5f-4502-a5d0-6c4841d6d145 | cirros_server | ACTIVE | - > > | Running | net1=192.168.1.26 | > > | ba55deb6-9488-4455-a472-a0a957cb388a | cirros_server | ACTIVE | - > > | Running | net1=192.168.1.14 | > > > +--------------------------------------+---------------+--------+------------+-------------+-------------------+ > > > > ** Lets delete one of the nodes. > > > > [root at os-infra-1-utility-container-e139058e ~]# nova delete > > ba55deb6-9488-4455-a472-a0a957cb388a > > Request to delete server ba55deb6-9488-4455-a472-a0a957cb388a has been > accepted. > > > > ** After a few min i can see RECOVERING nodes. > > > > [root at os-infra-1-utility-container-e139058e ~]# openstack cluster node > list > > > +----------+---------------+-------+------------+------------+-------------+--------------+----------------------+----------------------+---------+ > > | id | name | index | status | cluster_id | > > physical_id | profile_name | created_at | updated_at > > | tainted | > > > +----------+---------------+-------+------------+------------+-------------+--------------+----------------------+----------------------+---------+ > > | d4a8f219 | node-YPsjB6bV | 6 | RECOVERING | 091fbd52 | > > ba55deb6 | myserver | 2020-09-02T21:01:47Z | > > 2020-09-03T04:01:58Z | False | > > | bc50c0b9 | node-hoiHkRcS | 7 | ACTIVE | 091fbd52 | > > 38ba7f7c | myserver | 2020-09-03T03:40:29Z | > > 2020-09-03T03:57:58Z | False | > > > +----------+---------------+-------+------------+------------+-------------+--------------+----------------------+----------------------+---------+ > > > > ** Finally it's up and running with a new ip address. 
> > > > [root at os-infra-1-utility-container-e139058e ~]# nova list > > > +--------------------------------------+---------------+--------+------------+-------------+-------------------+ > > | ID | Name | Status | Task > > State | Power State | Networks | > > > +--------------------------------------+---------------+--------+------------+-------------+-------------------+ > > | 38ba7f7c-2f5f-4502-a5d0-6c4841d6d145 | cirros_server | ACTIVE | - > > | Running | net1=192.168.1.26 | > > | 73a658cd-c40a-45d8-9b57-cc9e6c2b4dc1 | cirros_server | ACTIVE | - > > | Running | net1=192.168.1.17 | > > > +--------------------------------------+---------------+--------+------------+-------------+-------------------+ > > > > On Tue, Sep 1, 2020 at 8:51 AM Mohammed Naser wrote: > > > > > > Hi Satish, > > > > > > I'm interested by this, did you end up finding a solution for this? > > > > > > Thanks, > > > Mohammed > > > > > > On Thu, Aug 27, 2020 at 1:54 PM Satish Patel > wrote: > > > > > > > > Folks, > > > > > > > > I have created very simple cluster using following command > > > > > > > > openstack cluster create --profile myserver --desired-capacity 2 > > > > --min-size 2 --max-size 3 --strict my-asg > > > > > > > > It spun up 2 vm immediately now because the desired capacity is 2 so I > > > > am assuming if any node dies in the cluster it should spin up node to > > > > make count 2 right? > > > > > > > > so i killed one of node with "nove delete " but > > > > senlin didn't create node automatically to make desired capacity 2 (In > > > > AWS when you kill node in ASG it will create new node so is this > > > > senlin different then AWS?) > > > > > > > > > > > > > -- > > > Mohammed Naser > > > VEXXHOST, Inc. > > -- Mohammed Naser VEXXHOST, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Thu Sep 3 10:30:33 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 3 Sep 2020 12:30:33 +0200 Subject: [ironic] [stable] Bifrost stable/stein is broken (by eventlet?): help needed Message-ID: Hi folks, I'm trying to revive the Bifrost stable/stein CI, and after fixing a bunch of issues in https://review.opendev.org/749014 I've hit a wall with what seems an eventlet problem: ironic-inspector fails to start with: Exception AttributeError: "'_SocketDuckForFd' object has no attribute '_closed'" in ignored I've managed to find similar issues, but they should have been resolved in the eventlet version in stein (0.24.1). Any ideas? If we cannot fix it, we'll have to EOL stein and earlier on bifrost. Dmitry -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Thu Sep 3 11:55:16 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 3 Sep 2020 05:55:16 -0600 Subject: [tripleo] centos-binary -> openstack- Message-ID: Greetings, The container names in master have changed from centos-binary* to openstack*. https://opendev.org/openstack/tripleo-common/src/branch/master/container-images/tripleo_containers.yaml https://opendev.org/openstack/tripleo-common/commit/90f6de7a7fab15e9161c1f03acecaf98726298f1 If your patches are failing to pull https://registry-1.docker.io/v2/tripleomaster/centos-binary* it's not going to be fixed in a recheck. 
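A quick local sanity check (illustrative only, run from a tripleo-common checkout) to confirm a rebased change already carries the rename:

```
cd tripleo-common
git log --oneline -1 -- container-images/tripleo_containers.yaml
grep -c 'centos-binary' container-images/tripleo_containers.yaml   # expect 0 on current master
```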
Check that your patches are rebased and your dependent patches are rebased. Thanks 0/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From moreira.belmiro.email.lists at gmail.com Thu Sep 3 12:59:53 2020 From: moreira.belmiro.email.lists at gmail.com (Belmiro Moreira) Date: Thu, 3 Sep 2020 14:59:53 +0200 Subject: [all][TC] OpenStack Client (OSC) vs python-*clients In-Reply-To: References: <1668118.VLH7GnMWUR@whitebase.usersys.redhat.com> <9cbf9d69a9beb30d03af71e42a3e2446a516292a.camel@redhat.com> <20200813164131.bdmhankpd2qxycux@yuggoth.org> <2956d6bd-320e-34ea-64a0-1001e102d75c@gmail.com> Message-ID: Hi everyone, thank you for all your comments. However, I don't think we have reached any conclusion. It would be great if the SDK/openstackclient team and the different projects that raised some concerns can collaborate and move forward. Personally, I believe that the current situation is a very bad user experience. Let us know how the TC can help. cheers, Belmiro On Fri, Aug 14, 2020 at 3:08 PM Sean McGinnis wrote: > > > And if it's to provide one CLI that rules them all, the individual > > projects (well, Cinder, anyway) can't stop adding functionality to > > cinderclient CLI until the openstackclient CLI has feature parity. At > > least now, you can use one CLI to do all cinder-related stuff. If we > > stop cinderclient CLI development, then you'll need to use > > openstackclient for some things (old features + the latest features) > > and the cinderclient for all the in between features, which doesn't > > seem like progress to me. > And in reality, I don't think Cinder can even drop cinderclient even if > we get feature parity. We have python-brick-cinderclient-ext that is > used in conjunction with python-cinderclient for some standalone use cases. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alifshit at redhat.com Thu Sep 3 15:10:01 2020 From: alifshit at redhat.com (Artom Lifshitz) Date: Thu, 3 Sep 2020 11:10:01 -0400 Subject: [all][TC] OpenStack Client (OSC) vs python-*clients In-Reply-To: References: <1668118.VLH7GnMWUR@whitebase.usersys.redhat.com> <9cbf9d69a9beb30d03af71e42a3e2446a516292a.camel@redhat.com> <20200813164131.bdmhankpd2qxycux@yuggoth.org> <2956d6bd-320e-34ea-64a0-1001e102d75c@gmail.com> Message-ID: On Thu, Sep 3, 2020 at 9:05 AM Belmiro Moreira wrote: > > Hi everyone, > thank you for all your comments. > However, I don't think we have reached any conclusion. > > It would be great if the SDK/openstackclient team and the different projects that raised some concerns can collaborate and move forward. > Personally, I believe that the current situation is a very bad user experience. > > Let us know how the TC can help. Can we start by agreeing (or can the TC just top-down mandate?) an end state we want to get to? The way I've understood it and see it, what we're aiming for is: A. osc is user-facing CLI shell around sdk B. sdk is the only official client library for interacting with the OpenStack REST APIs I've been working with those assumptions for [1] and addressing point B, leaving point A to the osc team. If we take B to be true, patches like [2] would get blocked and redirected to the SDK for the API logic, with only the CLI parts in the osc. That doesn't seem to be the case, so I don't know what to think anymore. 
[1] https://review.opendev.org/#/q/status:open+project:openstack/openstacksdk+branch:master+topic:story/2007929 [2] https://review.opendev.org/#/c/675304/ > > cheers, > Belmiro > > On Fri, Aug 14, 2020 at 3:08 PM Sean McGinnis wrote: >> >> >> > And if it's to provide one CLI that rules them all, the individual >> > projects (well, Cinder, anyway) can't stop adding functionality to >> > cinderclient CLI until the openstackclient CLI has feature parity. At >> > least now, you can use one CLI to do all cinder-related stuff. If we >> > stop cinderclient CLI development, then you'll need to use >> > openstackclient for some things (old features + the latest features) >> > and the cinderclient for all the in between features, which doesn't >> > seem like progress to me. >> And in reality, I don't think Cinder can even drop cinderclient even if >> we get feature parity. We have python-brick-cinderclient-ext that is >> used in conjunction with python-cinderclient for some standalone use cases. >> From oliver.wenz at dhbw-mannheim.de Thu Sep 3 15:18:20 2020 From: oliver.wenz at dhbw-mannheim.de (Oliver Wenz) Date: Thu, 3 Sep 2020 17:18:20 +0200 Subject: [openstack-ansible] OpenStack Ansible deployment fails due to lxc containers not having network connection Message-ID: <152bb1fa-a218-d6b3-5920-1f6867b75726@dhbw-mannheim.de> I'm trying to deploy OpenStack Ansible. When running the first playbook ```openstack-ansible setup-hosts.yml```, there are errors for all containers during the task ```[openstack_hosts : Remove the blacklisted packages]``` (see below) and the playbook fails. ``` fatal: [infra1_repo_container-1f1565cd]: FAILED! => {"changed": false, "cmd": "apt-get update", "msg": "E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic Release' no longer has a Release file. E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-updates Release' no longer has a Release file. E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-backports Release' no longer has a Release file. E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-security Release' no longer has a Release file.", "rc": 100, "stderr": "E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic Release' no longer has a Release file. E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-updates Release' no longer has a Release file. E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-backports Release' no longer has a Release file. E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-security Release' no longer has a Release file. ", "stderr_lines": ["E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic Release' no longer has a Release file.", "E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-updates Release' no longer has a Release file.", "E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-backports Release' no longer has a Release file.", "E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-security Release' no longer has a Release file."], "stdout": "Ign:1 http://ubuntu.mirror.lrz.de/ubuntu bionic InRelease Ign:2 http://ubuntu.mirror.lrz.de/ubuntu bionic-updates InRelease Ign:3 http://ubuntu.mirror.lrz.de/ubuntu bionic-backports InRelease Ign:4 http://ubuntu.mirror.lrz.de/ubuntu bionic-security InRelease Err:5 http://ubuntu.mirror.lrz.de/ubuntu bionic Release   Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). 
- connect (101: Network is unreachable) Err:6 http://ubuntu.mirror.lrz.de/ubuntu bionic-updates Release   Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect (101: Network is unreachable) Err:7 http://ubuntu.mirror.lrz.de/ubuntu bionic-backports Release   Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect (101: Network is unreachable) Err:8 http://ubuntu.mirror.lrz.de/ubuntu bionic-security Release   Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect (101: Network is unreachable) Reading package lists... ", "stdout_lines": ["Ign:1 http://ubuntu.mirror.lrz.de/ubuntu bionic InRelease", "Ign:2 http://ubuntu.mirror.lrz.de/ubuntu bionic-updates InRelease", "Ign:3 http://ubuntu.mirror.lrz.de/ubuntu bionic-backports InRelease", "Ign:4 http://ubuntu.mirror.lrz.de/ubuntu bionic-security InRelease", "Err:5 http://ubuntu.mirror.lrz.de/ubuntu bionic Release", "  Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect (101: Network is unreachable)", "Err:6 http://ubuntu.mirror.lrz.de/ubuntu bionic-updates Release", "  Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect (101: Network is unreachable)", "Err:7 http://ubuntu.mirror.lrz.de/ubuntu bionic-backports Release", "  Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect (101: Network is unreachable)", "Err:8 http://ubuntu.mirror.lrz.de/ubuntu bionic-security Release", "  Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect (101: Network is unreachable)", "Reading package lists..."]} ``` When I attach to any container and run ```ping 192.168.100.6``` (local DNS), I get the same error (```connect: Network is unreachable```). However, when I specify an interface by running ```ping -I eth1 192.168.100.6``` there is a successful connection. Running ```ip r``` on the infra_cinder container yields: ``` 10.0.3.0/24 dev eth2 proto kernel scope link src 10.0.3.5 192.168.110.0/24 dev eth1 proto kernel scope link src 192.168.110.232 ``` so there seems to be no default route which is why the connection fails (similar for the other infra containers). Shouldn't OSA automatically configure this? I didn't find anything regarding a default route on containers in the Docs. Here's my openstack_user_config.yml: ``` cidr_networks:   container: 192.168.110.0/24   tunnel: 192.168.32.0/24   storage: 10.0.3.0/24 used_ips:   - "192.168.110.1,192.168.110.2"   - "192.168.110.111"   - "192.168.110.115"   - "192.168.110.117,192.168.110.118"   - "192.168.110.131,192.168.110.140"   - "192.168.110.201,192.168.110.207"   - "192.168.32.1,192.168.32.2"   - "192.168.32.201,192.168.32.207"   - "10.0.3.1"   - "10.0.3.11,10.0.3.14"   - "10.0.3.21,10.0.3.24"   - "10.0.3.31,10.0.3.42"   - "10.0.3.201,10.0.3.207" global_overrides:   # The internal and external VIP should be different IPs, however they   # do not need to be on separate networks.   
external_lb_vip_address: 192.168.100.168   internal_lb_vip_address: 192.168.110.201   management_bridge: "br-mgmt"   provider_networks:     - network:         container_bridge: "br-mgmt"         container_type: "veth"         container_interface: "eth1"         ip_from_q: "container"         type: "raw"         group_binds:           - all_containers           - hosts         is_container_address: true     - network:         container_bridge: "br-vxlan"         container_type: "veth"         container_interface: "eth10"         ip_from_q: "tunnel"         type: "vxlan"         range: "1:1000"         net_name: "vxlan"         group_binds:           - neutron_linuxbridge_agent     - network:         container_bridge: "br-ext1"         container_type: "veth"         container_interface: "eth12"         host_bind_override: "eth12"         type: "flat"         net_name: "ext_net"         group_binds:           - neutron_linuxbridge_agent     - network:         container_bridge: "br-storage"         container_type: "veth"         container_interface: "eth2"         ip_from_q: "storage"         type: "raw"         group_binds:           - glance_api           - cinder_api           - cinder_volume           - nova_compute           - swift-proxy ### ### Infrastructure ### # galera, memcache, rabbitmq, utility shared-infra_hosts:   infra1:     ip: 192.168.110.201 # repository (apt cache, python packages, etc) repo-infra_hosts:   infra1:     ip: 192.168.110.201 # load balancer haproxy_hosts:   infra1:     ip: 192.168.110.201 ### ### OpenStack ### os-infra_hosts:    infra1:      ip: 192.168.110.201 identity_hosts:    infra1:      ip: 192.168.110.201 network_hosts:    infra1:      ip: 192.168.110.201 compute_hosts:    compute1:      ip: 192.168.110.204    compute2:      ip: 192.168.110.205    compute3:      ip: 192.168.110.206    compute4:      ip: 192.168.110.207 storage-infra_hosts:    infra1:      ip: 192.168.110.201 storage_hosts:    lvm-storage1:      ip: 192.168.110.202      container_vars:        cinder_backends:          lvm:            volume_backend_name: LVM_iSCSI            volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver            volume_group: cinder_volumes            iscsi_ip_address: "{{ cinder_storage_address }}"          limit_container_types: cinder_volume ``` I also asked this question on the server fault stackexchange: https://serverfault.com/questions/1032573/openstack-ansible-deployment-fails-due-to-lxc-containers-not-having-network-conn Kind regards, Oliver From elmiko at redhat.com Thu Sep 3 15:41:00 2020 From: elmiko at redhat.com (Michael McCune) Date: Thu, 3 Sep 2020 11:41:00 -0400 Subject: [API SIG] Ending office hours for the API-SIG Message-ID: hello all, the API-SIG has held meetings and office hours for several years now. since our migration from a weekly meeting to an office hour we have monitored a continual decrease in attendance for these hours, aside from the core membership. after discussion over the course of the last month or two, we have agreed to end the API-SIG office hours. we will hold office hours this week as our final meeting. if you need to contact the SIG in the future, we are still available in the #openstack-sdks channel on freenode as well as here on the mailing list. in the future we would like to see continued migration of the API-SIG closer to the SDKs group. these two groups will essentially become a singular place to discuss all things related to SDKs and APIs in OpenStack. 
thank you all for the years of participation and discussion that helped us as a community to define a set of best practices and guidelines for API work in OpenStack. peace o/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From artem.goncharov at gmail.com Thu Sep 3 15:45:00 2020 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Thu, 3 Sep 2020 17:45:00 +0200 Subject: [all][TC] OpenStack Client (OSC) vs python-*clients In-Reply-To: References: <1668118.VLH7GnMWUR@whitebase.usersys.redhat.com> <9cbf9d69a9beb30d03af71e42a3e2446a516292a.camel@redhat.com> <20200813164131.bdmhankpd2qxycux@yuggoth.org> <2956d6bd-320e-34ea-64a0-1001e102d75c@gmail.com> Message-ID: > On 3. Sep 2020, at 17:10, Artom Lifshitz wrote: > > On Thu, Sep 3, 2020 at 9:05 AM Belmiro Moreira > wrote: >> >> Hi everyone, >> thank you for all your comments. >> However, I don't think we have reached any conclusion. >> >> It would be great if the SDK/openstackclient team and the different projects that raised some concerns can collaborate and move forward. >> Personally, I believe that the current situation is a very bad user experience. >> >> Let us know how the TC can help. > > Can we start by agreeing (or can the TC just top-down mandate?) an end > state we want to get to? The way I've understood it and see it, what > we're aiming for is: > > A. osc is user-facing CLI shell around sdk > B. sdk is the only official client library for interacting with the > OpenStack REST APIs > > I've been working with those assumptions for [1] and addressing point > B, leaving point A to the osc team. > > If we take B to be true, patches like [2] would get blocked and > redirected to the SDK for the API logic, with only the CLI parts in > the osc. That doesn't seem to be the case, so I don't know what to > think anymore. > From all the discussions we held over the time, yes, A is definitely our target (while there might be still special cases) As SDK/OSC team we can say: B is true in a perfect world, but it is not a valid statement today. Somebody need to invest pretty huge effort in making this happen (I know this since I already invested in switching image part). During this time all the changes to OSC for things not based on SDK would need to be blocked. Amount of active people deep in the code of SDK/CLI is too small currently to handle this fast. On the other side, if nova team commits to do their patches to SDK first (what I see you guys are definitely doing, a great Thanks!) - we would be able to switch CLI for nova to SDK much easier. The more teams would be doing that, the easier would it be to clean OSC up. Very unfortunately since last PTG there were only very minor activities in SDK/OSC team, but I would like to change this now (unfortunately there are still just 24 hours in the day). Let me see where I can find another few hours a day for repairing things and set a personal (or hopefully SDK team) target to move at least few nova resources onto CLI at SDK until next PTG. 
;-) Regards, Another Artem From jonathan.rosser at rd.bbc.co.uk Thu Sep 3 15:51:51 2020 From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser) Date: Thu, 3 Sep 2020 16:51:51 +0100 Subject: [openstack-ansible] OpenStack Ansible deployment fails due to lxc containers not having network connection In-Reply-To: <152bb1fa-a218-d6b3-5920-1f6867b75726@dhbw-mannheim.de> References: <152bb1fa-a218-d6b3-5920-1f6867b75726@dhbw-mannheim.de> Message-ID: Hi Oliver, The default route would normally be via eth0 in the container, which I suspect has some issue. This is given an address by dnsmasq/dhcp on the host and attached to lxcbr0. This is where I would start to look. I am straight seeing that the default address range used for eth0 is in conflict with your storage network, so perhaps this is also something to look at. See https://github.com/openstack/openstack-ansible-lxc_hosts/blob/master/defaults/main.yml#L104 You join us on irc at #openstack-ansible for some 'real-time' assistance if necessary. Regards, Jonathan. On 03/09/2020 16:18, Oliver Wenz wrote: > I'm trying to deploy OpenStack Ansible. When running the first playbook > ```openstack-ansible setup-hosts.yml```, there are errors for all > containers during the task ```[openstack_hosts : Remove the blacklisted > packages]``` (see below) and the playbook fails. > > ``` > fatal: [infra1_repo_container-1f1565cd]: FAILED! => {"changed": false, > "cmd": "apt-get update", "msg": "E: The repository > 'http://ubuntu.mirror.lrz.de/ubuntu bionic Release' no longer has a > Release file. > E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-updates > Release' no longer has a Release file. > E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-backports > Release' no longer has a Release file. > E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-security > Release' no longer has a Release file.", "rc": 100, "stderr": "E: The > repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic Release' no longer > has a Release file. > E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-updates > Release' no longer has a Release file. > E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-backports > Release' no longer has a Release file. > E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-security > Release' no longer has a Release file. > ", "stderr_lines": ["E: The repository > 'http://ubuntu.mirror.lrz.de/ubuntu bionic Release' no longer has a > Release file.", "E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu > bionic-updates Release' no longer has a Release file.", "E: The > repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-backports Release' > no longer has a Release file.", "E: The repository > 'http://ubuntu.mirror.lrz.de/ubuntu bionic-security Release' no longer > has a Release file."], "stdout": "Ign:1 > http://ubuntu.mirror.lrz.de/ubuntu bionic InRelease > Ign:2 http://ubuntu.mirror.lrz.de/ubuntu bionic-updates InRelease > Ign:3 http://ubuntu.mirror.lrz.de/ubuntu bionic-backports InRelease > Ign:4 http://ubuntu.mirror.lrz.de/ubuntu bionic-security InRelease > Err:5 http://ubuntu.mirror.lrz.de/ubuntu bionic Release >   Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). > - connect (101: Network is unreachable) > Err:6 http://ubuntu.mirror.lrz.de/ubuntu bionic-updates Release >   Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). 
> - connect (101: Network is unreachable) > Err:7 http://ubuntu.mirror.lrz.de/ubuntu bionic-backports Release >   Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). > - connect (101: Network is unreachable) > Err:8 http://ubuntu.mirror.lrz.de/ubuntu bionic-security Release >   Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). > - connect (101: Network is unreachable) > Reading package lists... > ", "stdout_lines": ["Ign:1 http://ubuntu.mirror.lrz.de/ubuntu bionic > InRelease", "Ign:2 http://ubuntu.mirror.lrz.de/ubuntu bionic-updates > InRelease", "Ign:3 http://ubuntu.mirror.lrz.de/ubuntu bionic-backports > InRelease", "Ign:4 http://ubuntu.mirror.lrz.de/ubuntu bionic-security > InRelease", "Err:5 http://ubuntu.mirror.lrz.de/ubuntu bionic Release", > "  Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). > - connect (101: Network is unreachable)", "Err:6 > http://ubuntu.mirror.lrz.de/ubuntu bionic-updates Release", "  Cannot > initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect > (101: Network is unreachable)", "Err:7 > http://ubuntu.mirror.lrz.de/ubuntu bionic-backports Release", "  Cannot > initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect > (101: Network is unreachable)", "Err:8 > http://ubuntu.mirror.lrz.de/ubuntu bionic-security Release", "  Cannot > initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect > (101: Network is unreachable)", "Reading package lists..."]} > > ``` > > When I attach to any container and run ```ping 192.168.100.6``` (local > DNS), I get the same error (```connect: Network is unreachable```). > However, when I specify an interface by running ```ping -I eth1 > 192.168.100.6``` there is a successful connection. > Running ```ip r``` on the infra_cinder container yields: > ``` > 10.0.3.0/24 dev eth2 proto kernel scope link src 10.0.3.5 > 192.168.110.0/24 dev eth1 proto kernel scope link src 192.168.110.232 > ``` > so there seems to be no default route which is why the connection fails > (similar for the other infra containers). Shouldn't OSA automatically > configure this? I didn't find anything regarding a default route on > containers in the Docs. > > Here's my openstack_user_config.yml: > > ``` > cidr_networks: >   container: 192.168.110.0/24 >   tunnel: 192.168.32.0/24 >   storage: 10.0.3.0/24 > > used_ips: >   - "192.168.110.1,192.168.110.2" >   - "192.168.110.111" >   - "192.168.110.115" >   - "192.168.110.117,192.168.110.118" >   - "192.168.110.131,192.168.110.140" >   - "192.168.110.201,192.168.110.207" >   - "192.168.32.1,192.168.32.2" >   - "192.168.32.201,192.168.32.207" >   - "10.0.3.1" >   - "10.0.3.11,10.0.3.14" >   - "10.0.3.21,10.0.3.24" >   - "10.0.3.31,10.0.3.42" >   - "10.0.3.201,10.0.3.207" > > global_overrides: >   # The internal and external VIP should be different IPs, however they >   # do not need to be on separate networks. 
>   external_lb_vip_address: 192.168.100.168 >   internal_lb_vip_address: 192.168.110.201 >   management_bridge: "br-mgmt" >   provider_networks: >     - network: >         container_bridge: "br-mgmt" >         container_type: "veth" >         container_interface: "eth1" >         ip_from_q: "container" >         type: "raw" >         group_binds: >           - all_containers >           - hosts >         is_container_address: true >     - network: >         container_bridge: "br-vxlan" >         container_type: "veth" >         container_interface: "eth10" >         ip_from_q: "tunnel" >         type: "vxlan" >         range: "1:1000" >         net_name: "vxlan" >         group_binds: >           - neutron_linuxbridge_agent >     - network: >         container_bridge: "br-ext1" >         container_type: "veth" >         container_interface: "eth12" >         host_bind_override: "eth12" >         type: "flat" >         net_name: "ext_net" >         group_binds: >           - neutron_linuxbridge_agent >     - network: >         container_bridge: "br-storage" >         container_type: "veth" >         container_interface: "eth2" >         ip_from_q: "storage" >         type: "raw" >         group_binds: >           - glance_api >           - cinder_api >           - cinder_volume >           - nova_compute >           - swift-proxy > > ### > ### Infrastructure > ### > > # galera, memcache, rabbitmq, utility > shared-infra_hosts: >   infra1: >     ip: 192.168.110.201 > > # repository (apt cache, python packages, etc) > repo-infra_hosts: >   infra1: >     ip: 192.168.110.201 > > # load balancer > haproxy_hosts: >   infra1: >     ip: 192.168.110.201 > > ### > ### OpenStack > ### > > os-infra_hosts: >    infra1: >      ip: 192.168.110.201 > > identity_hosts: >    infra1: >      ip: 192.168.110.201 > > network_hosts: >    infra1: >      ip: 192.168.110.201 > > compute_hosts: >    compute1: >      ip: 192.168.110.204 >    compute2: >      ip: 192.168.110.205 >    compute3: >      ip: 192.168.110.206 >    compute4: >      ip: 192.168.110.207 > > storage-infra_hosts: >    infra1: >      ip: 192.168.110.201 > > storage_hosts: >    lvm-storage1: >      ip: 192.168.110.202 >      container_vars: >        cinder_backends: >          lvm: >            volume_backend_name: LVM_iSCSI >            volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver >            volume_group: cinder_volumes >            iscsi_ip_address: "{{ cinder_storage_address }}" >          limit_container_types: cinder_volume > > ``` > > I also asked this question on the server fault stackexchange: > https://serverfault.com/questions/1032573/openstack-ansible-deployment-fails-due-to-lxc-containers-not-having-network-conn > > > Kind regards, > Oliver > > > From sosogh at 126.com Thu Sep 3 04:03:47 2020 From: sosogh at 126.com (sosogh) Date: Thu, 3 Sep 2020 12:03:47 +0800 (CST) Subject: Reply:Re: [kolla] questions when using external mysql In-Reply-To: References: <5c644eb6.3cda.174488cf629.Coremail.sosogh@126.com> Message-ID: <64f5b918.2516.17452228292.Coremail.sosogh@126.com> Hi Mark : I have read further on external-mariadb-guide.html . if setting "use_preconfigured_databases = yes" and and "use_common_mariadb_user = yes " , people has to create the " new fresh " DB (nova , glance ,keystone etc) manually . the more openstack serivces enabled , the more databases have to be created . so why people will want to create the " new fresh " DB manually rather than letting kolla-ansible to do it ? what would be this case ? 
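For reference, the variant I am describing (letting kolla-ansible create the databases itself, using one privileged account on the external MariaDB) would look roughly like this — a sketch only, the exact variable names should be double-checked against the guide and globals.yml of your release:

```
# /etc/kolla/globals.yml (sketch)
enable_mariadb: "no"                # do not deploy kolla's own mariadb
database_address: "myexternalmariadbloadbalancer.com"
database_user: "openstack"          # privileged account on the external DB
use_preconfigured_databases: "no"   # kolla-ansible creates nova/glance/keystone/... itself

# /etc/kolla/passwords.yml (sketch)
database_password: "SuperSecret"    # placeholder
```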
IMHO, the blueprint (https://blueprints.launchpad.net/kolla-ansible/+spec/external-mariadb-support) maybe more likely to be: people create ready-to-use mysql account , and apply the mysql address/account to kolla-ansible , then kolla-ansible use it as a common user across databases 在 2020-09-02 16:35:24,"Mark Goddard" 写道: On Tue, 1 Sep 2020 at 15:48, sosogh wrote: Hi list: I want to use kolla-ansible to deploy openstack , but using external mysql. I am following these docs: https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html https://docs.openstack.org/kolla-ansible/latest/reference/databases/external-mariadb-guide.html. Hi, could you start by telling us which version or branch of Kolla Ansible you are using? I have some questions: ################ ## Question 1 ## ################ According to the offical doc , if setting it in inventory file(multinode), kolla-ansible -i ./multinode deploy will throw out error: I guest when kolla-ansible running the playbook against myexternalmariadbloadbalancer.com , the """register: find_custom_fluentd_inputs""" in """TASK [common : Find custom fluentd input config files]""" maybe null . I think this could be an issue with a recent change to the common role, where the condition for the 'Find custom fluentd input config files' task changed slightly. I have proposed a potential fix for this, could you try it out and report back? https://review.opendev.org/749463 ################ ## Question 2 ## ################ According to the offical doc , If the MariaDB username is not root, set database_username in /etc/kolla/globals.yml file: But in kolla-ansible/ansible/roles/xxxxxx/tasks/bootstrap.yml , they use ''' login_user: "{{ database_user }}" ''' , for example : You are correct, this is an issue in the documentation. I have proposed a fix here: https://review.opendev.org/749464 So at last , I took the following steps: 1. """not""" setting [mariadb] in inventory file(multinode) 2. set "database_user: openstack" for "privillegeduser" PS: My idea is that if using an external ready-to-use mysql (cluster), it is enough to tell kolla-ansible only the address/user/password of the external DB. i.e. setting them in the file /etc/kolla/globals.yml and passwords.yml , no need to add it into inventory file(multinode) I agree, I did not expect to need to change the inventory for this use case. Finally , it is successful to deploy openstack via kolla-ansible . So far I have not found any problems. Are the steps what I took good ( enough ) ? Thank you ! -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 2938 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 95768 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 2508 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 22016 bytes Desc: not available URL: From mark at stackhpc.com Thu Sep 3 07:33:56 2020 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 3 Sep 2020 08:33:56 +0100 Subject: Reply:Re: [kolla] questions when using external mysql In-Reply-To: <64f5b918.2516.17452228292.Coremail.sosogh@126.com> References: <5c644eb6.3cda.174488cf629.Coremail.sosogh@126.com> <64f5b918.2516.17452228292.Coremail.sosogh@126.com> Message-ID: On Thu, 3 Sep 2020 at 05:04, sosogh wrote: > Hi Mark : > > I have read further on external-mariadb-guide.html . > > if setting "use_preconfigured_databases = yes" and and > "use_common_mariadb_user = yes " , > people has to create the " new fresh " DB (nova , glance ,keystone etc) > manually . > the more openstack serivces enabled , the more databases have to be > created . > so why people will want to create the " new fresh " DB manually rather > than letting kolla-ansible to do it ? > what would be this case ? > The use case is where the DBMS is entirely managed outside of Kolla Ansible, and the DB user does not have privileges required to create a database. If you set use_preconfigured_databases to no (the default), Kolla Ansible will create the necessary databases. > > IMHO, the blueprint ( > https://blueprints.launchpad.net/kolla-ansible/+spec/external-mariadb-support) > maybe more likely to be: > people create ready-to-use mysql account , and apply the mysql > address/account to kolla-ansible , > then kolla-ansible use it as a common user across databases > > > > 在 2020-09-02 16:35:24,"Mark Goddard" 写道: > > > > On Tue, 1 Sep 2020 at 15:48, sosogh wrote: > >> Hi list: >> >> I want to use kolla-ansible to deploy openstack , but using external >> mysql. >> I am following these docs: >> >> https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html >> >> >> https://docs.openstack.org/kolla-ansible/latest/reference/databases/external-mariadb-guide.html >> . >> > > Hi, could you start by telling us which version or branch of Kolla Ansible > you are using? > >> >> I have some questions: >> >> ################ >> ## Question 1 ## >> ################ >> >> According to the offical doc , if setting it in inventory >> file(multinode), >> >> >> kolla-ansible -i ./multinode deploy will throw out error: >> >> >> I guest when kolla-ansible running the playbook against >> myexternalmariadbloadbalancer.com , >> >> the """register: find_custom_fluentd_inputs""" in """TASK [common : Find >> custom fluentd input config files]""" maybe null . >> > > I think this could be an issue with a recent change to the common role, > where the condition for the 'Find custom fluentd input config files' task > changed slightly. I have proposed a potential fix for this, could you try > it out and report back? https://review.opendev.org/749463 > >> >> ################ >> ## Question 2 ## >> ################ >> >> According to the offical doc , If the MariaDB username is not root, set >> database_username in /etc/kolla/globals.yml file: >> >> >> But in kolla-ansible/ansible/roles/xxxxxx/tasks/bootstrap.yml , they >> use ''' login_user: "{{ database_user }}" ''' , for example : >> >> You are correct, this is an issue in the documentation. I have proposed a > fix here: https://review.opendev.org/749464 > > >> So at last , I took the following steps: >> 1. """not""" setting [mariadb] in inventory file(multinode) >> 2. 
set "database_user: openstack" for "privillegeduser" >> >> PS: >> My idea is that if using an external ready-to-use mysql (cluster), >> it is enough to tell kolla-ansible only the address/user/password of the >> external DB. >> i.e. setting them in the file /etc/kolla/globals.yml and passwords.yml , >> no need to add it into inventory file(multinode) >> > > I agree, I did not expect to need to change the inventory for this use > case. > >> >> Finally , it is successful to deploy openstack via kolla-ansible . >> So far I have not found any problems. >> Are the steps what I took good ( enough ) ? >> Thank you ! >> >> >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 2938 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 95768 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 2508 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 22016 bytes Desc: not available URL: From mahdi.abbasi.2013 at gmail.com Thu Sep 3 16:10:42 2020 From: mahdi.abbasi.2013 at gmail.com (mahdi abbasi) Date: Thu, 3 Sep 2020 20:40:42 +0430 Subject: Kuryr openstack Message-ID: Hi development team, Do i need a special configuration to use kuryr when using openvswitch? I get the following error when starting Docker container in zun. Please help me. Best regards Mahdi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mahdi.abbasi.2013 at gmail.com Thu Sep 3 16:13:48 2020 From: mahdi.abbasi.2013 at gmail.com (mahdi abbasi) Date: Thu, 3 Sep 2020 20:43:48 +0430 Subject: Kuryr openstack Message-ID: Hi development team, Do i need a special configuration to use kuryr when using openvswitch? I get the following error when starting Docker container in zun: Unable to create the network.No tenant network is availble for allocation. Please help me. Best regards Mahdi -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.morin at gmail.com Thu Sep 3 17:00:52 2020 From: arnaud.morin at gmail.com (Arnaud Morin) Date: Thu, 3 Sep 2020 17:00:52 +0000 Subject: Issue with libvirt unable to kill processes In-Reply-To: <2151e1e5-ffeb-52b2-bd8f-68d2f8d93d36@civo.com> References: <2151e1e5-ffeb-52b2-bd8f-68d2f8d93d36@civo.com> Message-ID: <20200903170052.GE31915@sync> Hello, Did you check /var/log/libvirt/libvirtd.log? 
Cheers, -- Arnaud Morin On 03.09.20 - 10:59, Grant Morley wrote: > Hi All, > > I was wondering if anyone has come across an issue with libvirt seemingly > having an issue with instances all of a sudden "locking up" with the > following error: > > Failed to terminate process 2263874 with SIGKILL: Device or resource busy > > In the nova logs I am seeing: > > 2020-09-03 09:13:43.208 2659995 INFO nova.compute.manager > [req-7ffb9b7c-799b-40dc-be04-598bfda2e2fc 6ddb647baf9343b09d7f8f7a32b0b43d > 8cce5e1532e6435a90b168077664bbdf - default default] [instance: > f3a8c916-28f5-432d-9b8e-c3056d2dee5a] Rebooting instance > 2020-09-03 09:15:52.429 2659995 WARNING nova.virt.libvirt.driver > [req-7ffb9b7c-799b-40dc-be04-598bfda2e2fc 6ddb647baf9343b09d7f8f7a32b0b43d > 8cce5e1532e6435a90b168077664bbdf - default default] [instance: > f3a8c916-28f5-432d-9b8e-c3056d2dee5a] Failed to soft reboot instance. Trying > hard reboot. > 2020-09-03 09:16:32.450 2659995 WARNING nova.virt.libvirt.driver > [req-7ffb9b7c-799b-40dc-be04-598bfda2e2fc 6ddb647baf9343b09d7f8f7a32b0b43d > 8cce5e1532e6435a90b168077664bbdf - default default] [instance: > f3a8c916-28f5-432d-9b8e-c3056d2dee5a] Error from libvirt during destroy. > Code=38 Error=Failed to terminate process 2263874 with SIGKILL: Device or > resource busy; attempt 1 of 3: libvirtError: Failed to terminate process > 2263874 with SIGKILL: Device or resource busy > 2020-09-03 09:17:12.484 2659995 WARNING nova.virt.libvirt.driver > [req-7ffb9b7c-799b-40dc-be04-598bfda2e2fc 6ddb647baf9343b09d7f8f7a32b0b43d > 8cce5e1532e6435a90b168077664bbdf - default default] [instance: > f3a8c916-28f5-432d-9b8e-c3056d2dee5a] Error from libvirt during destroy. > Code=38 Error=Failed to terminate process 2263874 with SIGKILL: Device or > resource busy; attempt 2 of 3: libvirtError: Failed to terminate process > 2263874 with SIGKILL: Device or resource busy > 2020-09-03 09:17:52.516 2659995 WARNING nova.virt.libvirt.driver > [req-7ffb9b7c-799b-40dc-be04-598bfda2e2fc 6ddb647baf9343b09d7f8f7a32b0b43d > 8cce5e1532e6435a90b168077664bbdf - default default] [instance: > f3a8c916-28f5-432d-9b8e-c3056d2dee5a] Error from libvirt during destroy. > Code=38 Error=Failed to terminate process 2263874 with SIGKILL: Device or > resource busy; attempt 3 of 3: libvirtError: Failed to terminate process > 2263874 with SIGKILL: Device or resource busy > 2020-09-03 09:17:52.526 2659995 ERROR nova.compute.manager > [req-7ffb9b7c-799b-40dc-be04-598bfda2e2fc 6ddb647baf9343b09d7f8f7a32b0b43d > 8cce5e1532e6435a90b168077664bbdf - default default] [instance: > f3a8c916-28f5-432d-9b8e-c3056d2dee5a] Cannot reboot instance: Failed to > terminate process 2263874 with SIGKILL: Device or resource busy: > libvirtError: Failed to terminate process 2263874 with SIGKILL: Device or > resource busy > 2020-09-03 09:17:53.026 2659995 INFO nova.compute.manager > [req-7ffb9b7c-799b-40dc-be04-598bfda2e2fc 6ddb647baf9343b09d7f8f7a32b0b43d > 8cce5e1532e6435a90b168077664bbdf - default default] [instance: > f3a8c916-28f5-432d-9b8e-c3056d2dee5a] Successfully reverted task state from > reboot_started on failure for instance. > > It seems to be caused when a reboot happens to an instance. > > If you reset the state and try again, the same error occurs.  You also > seemingly cannot kill off any libvirt process that is attached to that > instance. > > To me it looks like it could be a kernel issue with libvirt but I could be > wrong? > > Does anyone know of a workaround for this other than maybe restarting a > compute host? 
> > Many thanks, > > > From CAPSEY at augusta.edu Thu Sep 3 19:11:03 2020 From: CAPSEY at augusta.edu (Apsey, Christopher) Date: Thu, 3 Sep 2020 19:11:03 +0000 Subject: [nova] Session Recording for Feedback Message-ID: Nova supports using the websockify -record functionality to capture frames sent over arbitrary console sessions. Currently, the record = path/to/file option in nova.conf saves these sessions on nova-(spice|vnc)proxy endpoints at the specified path in the format of /path/to/file.(session number). These session numbers appear to be incrementally created starting from 1 and don't really have an association with their respective instance from what I can tell. It would be useful to have these files saved using the instance UUID instead. Ignoring that minor problem, playing back these files is a challenge. I've tried using noVNCs vnc_playback.html function from both the master branch and stable/v0.6 with no luck - it looks like the libraries and the utilities haven't been maintained at the same cadence and there are missing functions that the player is expecting. My goal here is to be able to capture the activities of students as they progress through various exercises and then have instructors be able to go over their work with them after the fact if they have questions. Has anyone been able to do this (or something like this) successfully, or are we just better off trying to do this in-band per instance? Chris Apsey GEORGIA CYBER CENTER -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Sep 3 20:02:36 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 3 Sep 2020 22:02:36 +0200 Subject: [neutron] Drivers meeting - 04.09.2020 Message-ID: <20200903200236.efjy5jii3w3rtt4b@skaplons-mac> Hi, As there is no agenda for tomorrow's meeting, I will cancel it. Lets focus on reviewing patches related to the RFE scheduled for Victoria-3: https://launchpad.net/neutron/+milestone/victoria-3 See You all online and have a great weekend o/ -- Slawek Kaplonski Principal software engineer Red Hat From openstack at nemebean.com Thu Sep 3 21:04:39 2020 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 3 Sep 2020 16:04:39 -0500 Subject: [oslo] Feature Freeze Message-ID: This is just a heads up that Oslo is now in feature freeze. Any changes that would require a feature release of an Oslo library will need to request an FFE if they want to make the Victoria release. Please send an email tagged "[oslo][ffe]" to the list to make such a request. I expect we'll branch the Oslo repos in the near future, so some feature development can continue then. However, we generally prefer not to merge invasive changes between feature freeze and release just in case a last-minute bugfix backport is needed. Disentangling a bug fix from a major feature could be a chore. Thanks. -Ben From whayutin at redhat.com Thu Sep 3 23:40:14 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 3 Sep 2020 17:40:14 -0600 Subject: [tripleo] docker.io rate limiting In-Reply-To: <38d87625-d81a-9fb7-f43c-bb75a14e2984@redhat.com> References: <38d87625-d81a-9fb7-f43c-bb75a14e2984@redhat.com> Message-ID: On Thu, Sep 3, 2020 at 3:48 AM Cédric Jeanneret wrote: > Hey Wes, > > stupid question: what about the molecule tests? Since they are running > within containers (centos-8, centos-7, maybe/probably ubi8 soon), we > might hit some limitations there.... Unless we're NOT using docker.io > already? > > Cheers, > OK.. so easy answer 1. 
we're still going to push to docker.io 2. any content CI uses from docker.io will be mirrored in quay and the RDO registry, including base images. So I would switch the molecule / tox config to use quay as soon as we have images there. I'm searching around for that code in tripleo-ansible and validations and it's not where I thought it was. Do you have pointers to where docker.io is configured? Thanks > > C. > On 9/2/20 1:54 PM, Wesley Hayutin wrote: > > Greetings, > > > > Some of you have contacted me regarding the recent news regarding > > docker.io 's new policy with regards to container pull > > rate limiting [1]. I wanted to take the opportunity to further > > socialize our plan that will completely remove docker.io > > from our upstream workflows and avoid any rate > > limiting issues. > > > > We will continue to upload containers to docker.io > > for some time so that individuals and the community can access the > > containers. We will also start exploring other registries like quay and > > newly announced github container registry. These other public registries > > will NOT be used in our upstream jobs and will only serve the > > communities individual contributors. > > > > Our test jobs have been successful and patches are starting to merge to > > convert our upstream jobs and remove docker.io from > > our upstream workflow. [2]. > > > > Standalone and multinode jobs are working quite well. We are doing some > > design work around branchful, update/upgrade jobs at this time. > > > > Thanks 0/ > > > > > > [1] https://hackmd.io/ermQSlQ-Q-mDtZkNN2oihQ > > [2] > https://review.opendev.org/#/q/topic:new-ci-job+(status:open+OR+status:merged) > > -- > Cédric Jeanneret (He/Him/His) > Sr. Software Engineer - OpenStack Platform > Deployment Framework TC > Red Hat EMEA > https://www.redhat.com/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pramchan at yahoo.com Fri Sep 4 01:12:41 2020 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Fri, 4 Sep 2020 01:12:41 +0000 (UTC) Subject: [InteropWG] Brainstorming and looking for Ideas from cross project teams References: <1687801759.3070448.1599181961844.ref@mail.yahoo.com> Message-ID: <1687801759.3070448.1599181961844@mail.yahoo.com> Hi all, The Interop WG is seeking cross-platform participation to enable Baremetal- and Kubernetes-ready modules in OpenStack & Open Infrastructure. Please add what you would like to get help with from the Interop Group and how, in turn, you can help the Interop WG goals of enabling Multi-Cloud & Hybrid Cloud Logo Programs to be initiated in CY Q4 2019. We would like to discuss and get any new ideas vetted and supported by the community. Please add and join us in Forum, PTG and Summit. https://etherpad.opendev.org/p/2020-Wallaby-interop-brainstorming We look forward to your projects' valued participation in Interop branding efforts. Our next Interop Call is scheduled for Sept 11th 10 AM PDT; please add everything you can bring to the table before then. For Interop WG Prakash Ramchandran, Indi Member BoD -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sorrison at gmail.com Fri Sep 4 01:28:23 2020 From: sorrison at gmail.com (Sam Morrison) Date: Fri, 4 Sep 2020 11:28:23 +1000 Subject: [neutron][networking-midonet] Maintainers needed In-Reply-To: <92959221-0353-4D48-8726-8FE71AFEA652@gmail.com> References: <0AC5AC07-E97E-43CC-B344-A3E992B8CCA4@netways.de> <610412AF-AADF-44BD-ABA2-BA289B7C8F8A@redhat.com> <5E2F5826-559E-42E9-84C5-FA708E5A122A@gmail.com> <43C4AF2B-C5C0-40EB-B621-AC6799471D01@gmail.com> <92959221-0353-4D48-8726-8FE71AFEA652@gmail.com> Message-ID: <4D778DBF-F505-462F-B85D-0B372085FA72@gmail.com> OK have been making some good progress, I now have devstack on bionic working with midonet ml2, it should also work with 20.04 but need to confirm Needed a couple of patches to networking-midonet to get it working [1] I also need to point it to our git repo for now until we get the upstream deb packages upgraded MIDONET_DEB_URI=http://download.rc.nectar.org.au/nectar-ubuntu MIDONET_DEB_SUITE=bionic The changes we have on midonet to work with bionic are now in a pull request [2] Once that’s merged need to find a way to build a new package and upload that to the midonet repo, Yamamoto is that something you can help with? I’m not sure why the pep8 job is failing, it is complaining about pecan which makes me think this is an issue with neutron itself? Kinda stuck on this one, it’s probably something silly. For the py3 unit tests they are now failing due to db migration errors in tap-as-a-service, l2-gateway and vpnaas issues I think caused by neutron getting rid of the liberty alembic branch and so we need to squash these on these projects too. I can now start to look into the devstack zuul jobs. Cheers, Sam [1] https://github.com/NeCTAR-RC/networking-midonet/commits/devstack [2] https://github.com/midonet/midonet/pull/9 > On 1 Sep 2020, at 4:03 pm, Sam Morrison wrote: > > > >> On 1 Sep 2020, at 2:59 pm, Takashi Yamamoto wrote: >> >> hi, >> >> On Tue, Sep 1, 2020 at 1:39 PM Sam Morrison wrote: >>> >>> >>> >>>> On 1 Sep 2020, at 11:49 am, Takashi Yamamoto wrote: >>>> >>>> Sebastian, Sam, >>>> >>>> thank you for speaking up. >>>> >>>> as Slawek said, the first (and probably the biggest) thing is to fix the ci. >>>> the major part for it is to make midonet itself to run on ubuntu >>>> version used by the ci. (18.04, or maybe directly to 20.04) >>>> https://midonet.atlassian.net/browse/MNA-1344 >>>> iirc, the remaining blockers are: >>>> * libreswan (used by vpnaas) >>>> * vpp (used by fip64) >>>> maybe it's the easiest to drop those features along with their >>>> required components, if it's acceptable for your use cases. >>> >>> We are running midonet-cluster and midolman on 18.04, we dropped those package dependencies from our ubuntu package to get it working. >>> >>> We currently have built our own and host in our internal repo but happy to help putting this upstream somehow. Can we upload them to the midonet apt repo, does it still exist? >> >> it still exists. but i don't think it's maintained well. >> let me find and ask someone in midokura who "owns" that part of infra. >> >> does it also involve some package-related modifications to midonet repo, right? > > > Yes a couple, I will send up as as pull requests to https://github.com/midonet/midonet today or tomorrow > > Sam > > > >> >>> >>> I’m keen to do the work but might need a bit of guidance to get started, >>> >>> Sam >>> >>> >>> >>> >>> >>> >>>> >>>> alternatively you might want to make midonet run in a container. 
(so >>>> that you can run it with older ubuntu, or even a container trimmed for >>>> JVM) >>>> there were a few attempts to containerize midonet. >>>> i think this is the latest one: https://github.com/midonet/midonet-docker >>>> >>>> On Fri, Aug 28, 2020 at 7:10 AM Sam Morrison wrote: >>>>> >>>>> We (Nectar Research Cloud) use midonet heavily too, it works really well and we haven’t found another driver that works for us. We tried OVN but it just doesn’t scale to the size of environment we have. >>>>> >>>>> I’m happy to help too. >>>>> >>>>> Cheers, >>>>> Sam >>>>> >>>>> >>>>> >>>>>> On 31 Jul 2020, at 2:06 am, Slawek Kaplonski wrote: >>>>>> >>>>>> Hi, >>>>>> >>>>>> Thx Sebastian for stepping in to maintain the project. That is great news. >>>>>> I think that at the beginning You should do 2 things: >>>>>> - sync with Takashi Yamamoto (I added him to the loop) as he is probably most active current maintainer of this project, >>>>>> - focus on fixing networking-midonet ci which is currently broken - all scenario jobs aren’t working fine on Ubuntu 18.04 (and we are going to move to 20.04 in this cycle), migrate jobs to zuulv3 from the legacy ones and finally add them to the ci again, >>>>>> >>>>>> I can of course help You with ci jobs if You need any help. Feel free to ping me on IRC or email (can be off the list). >>>>>> >>>>>>> On 29 Jul 2020, at 15:24, Sebastian Saemann wrote: >>>>>>> >>>>>>> Hi Slawek, >>>>>>> >>>>>>> we at NETWAYS are running most of our neutron networking on top of midonet and wouldn't be too happy if it gets deprecated and removed. So we would like to take over the maintainer role for this part. >>>>>>> >>>>>>> Please let me know how to proceed and how we can be onboarded easily. >>>>>>> >>>>>>> Best regards, >>>>>>> >>>>>>> Sebastian >>>>>>> >>>>>>> -- >>>>>>> Sebastian Saemann >>>>>>> Head of Managed Services >>>>>>> >>>>>>> NETWAYS Managed Services GmbH | Deutschherrnstr. 15-19 | D-90429 Nuernberg >>>>>>> Tel: +49 911 92885-0 | Fax: +49 911 92885-77 >>>>>>> CEO: Julian Hein, Bernd Erk | AG Nuernberg HRB25207 >>>>>>> https://netways.de | sebastian.saemann at netways.de >>>>>>> >>>>>>> ** NETWAYS Web Services - https://nws.netways.de ** >>>>>> >>>>>> — >>>>>> Slawek Kaplonski >>>>>> Principal software engineer >>>>>> Red Hat From ltomasbo at redhat.com Fri Sep 4 06:38:53 2020 From: ltomasbo at redhat.com (Luis Tomas Bolivar) Date: Fri, 4 Sep 2020 08:38:53 +0200 Subject: Kuryr openstack In-Reply-To: References: Message-ID: Hi Mahdi, On Thu, Sep 3, 2020 at 6:46 PM mahdi abbasi wrote: > Hi development team, > > Do i need a special configuration to use kuryr when using openvswitch? > Are you using kuryr-kubernetes or kuryr-libnetwork? And yes, it works find with openvswitch. Only caveat is that for nested environments (running kuryr-kubernetes inside OpenStack VMs), the firewall driver must be set to openvswitch (instead of iptables_hybrid) > I get the following error when starting Docker container in zun: > > Unable to create the network.No tenant network is availble for allocation. > Seems like you are using the namespace subnet driver (1 neutron subnet per k8s namespace) and you have not set up a neutron subnet pool to use. Cheers, Luis > > Please help me. > > Best regards > Mahdi > -- LUIS TOMÁS BOLÍVAR Senior Software Engineer Red Hat Madrid, Spain ltomasbo at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mark at stackhpc.com Fri Sep 4 07:26:14 2020 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 4 Sep 2020 08:26:14 +0100 Subject: [ironic] [stable] Bifrost stable/stein is broken (by eventlet?): help needed In-Reply-To: References: Message-ID: On Thu, 3 Sep 2020 at 11:31, Dmitry Tantsur wrote: > > Hi folks, > > I'm trying to revive the Bifrost stable/stein CI, and after fixing a bunch of issues in https://review.opendev.org/749014 I've hit a wall with what seems an eventlet problem: ironic-inspector fails to start with: > > Exception AttributeError: "'_SocketDuckForFd' object has no attribute '_closed'" in ignored > > I've managed to find similar issues, but they should have been resolved in the eventlet version in stein (0.24.1). Any ideas? > > If we cannot fix it, we'll have to EOL stein and earlier on bifrost. Strange. Do you know why this affects only bifrost and not ironic inspector CI? > > Dmitry > > -- > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill From mark at stackhpc.com Fri Sep 4 07:31:38 2020 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 4 Sep 2020 08:31:38 +0100 Subject: [all][TC] OpenStack Client (OSC) vs python-*clients In-Reply-To: References: <1668118.VLH7GnMWUR@whitebase.usersys.redhat.com> <9cbf9d69a9beb30d03af71e42a3e2446a516292a.camel@redhat.com> <20200813164131.bdmhankpd2qxycux@yuggoth.org> <2956d6bd-320e-34ea-64a0-1001e102d75c@gmail.com> Message-ID: On Thu, 3 Sep 2020 at 16:45, Artem Goncharov wrote: > > > > > On 3. Sep 2020, at 17:10, Artom Lifshitz wrote: > > > > On Thu, Sep 3, 2020 at 9:05 AM Belmiro Moreira > > wrote: > >> > >> Hi everyone, > >> thank you for all your comments. > >> However, I don't think we have reached any conclusion. > >> > >> It would be great if the SDK/openstackclient team and the different projects that raised some concerns can collaborate and move forward. > >> Personally, I believe that the current situation is a very bad user experience. > >> > >> Let us know how the TC can help. > > > > Can we start by agreeing (or can the TC just top-down mandate?) an end > > state we want to get to? The way I've understood it and see it, what > > we're aiming for is: > > > > A. osc is user-facing CLI shell around sdk > > B. sdk is the only official client library for interacting with the > > OpenStack REST APIs > > > > I've been working with those assumptions for [1] and addressing point > > B, leaving point A to the osc team. > > > > If we take B to be true, patches like [2] would get blocked and > > redirected to the SDK for the API logic, with only the CLI parts in > > the osc. That doesn't seem to be the case, so I don't know what to > > think anymore. > > > > From all the discussions we held over the time, yes, A is definitely our target (while there might be still special cases) > > As SDK/OSC team we can say: B is true in a perfect world, but it is not a valid statement today. Somebody need to invest pretty huge effort in making this happen (I know this since I already invested in switching image part). During this time all the changes to OSC for things not based on SDK would need to be blocked. Amount of active people deep in the code of SDK/CLI is too small currently to handle this fast. On the other side, if nova team commits to do their patches to SDK first (what I see you guys are definitely doing, a great Thanks!) 
- we would be able to switch CLI for nova to SDK much easier. > The more teams would be doing that, the easier would it be to clean OSC up. > > Very unfortunately since last PTG there were only very minor activities in SDK/OSC team, but I would like to change this now (unfortunately there are still just 24 hours in the day). Let me see where I can find another few hours a day for repairing things and set a personal (or hopefully SDK team) target to move at least few nova resources onto CLI at SDK until next PTG. ;-) I'm not working on this, and haven't been following it, but honestly given the current level of activity in OpenStack this sounds unlikely to happen. IMO from a user perspective, focussing on feature parity for OSC for all clients should be the priority, especially when teams like Glance say they would need multiple cycles with both clients maintaining parity in order to deprecate and drop their legacy client. > > Regards, > Another Artem From mark at stackhpc.com Fri Sep 4 07:32:45 2020 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 4 Sep 2020 08:32:45 +0100 Subject: [tripleo] centos-binary -> openstack- In-Reply-To: References: Message-ID: On Thu, 3 Sep 2020 at 12:56, Wesley Hayutin wrote: > > Greetings, > > The container names in master have changed from centos-binary* to openstack*. > https://opendev.org/openstack/tripleo-common/src/branch/master/container-images/tripleo_containers.yaml > > https://opendev.org/openstack/tripleo-common/commit/90f6de7a7fab15e9161c1f03acecaf98726298f1 > > If your patches are failing to pull https://registry-1.docker.io/v2/tripleomaster/centos-binary* it's not going to be fixed in a recheck. Check that your patches are rebased and your dependent patches are rebased. Hi Wes, Can we infer from this that Tripleo is no longer using Kolla on master? > > Thanks 0/ From hberaud at redhat.com Fri Sep 4 07:44:19 2020 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 4 Sep 2020 09:44:19 +0200 Subject: [oslo][ffe] request for oslo Message-ID: Hey Oslofolk, I request an FFE for these oslo.messaging changes [1]. The goal of these changes is to run rabbitmq heartbeat in python thread by default. Also these changes deprecating this option to prepare future removal and force to always run heartbeat in a python thread whatever the context. Land these changes during the victoria cycle can help us to prime the option removal during the next cycle. Thanks for your time, [1] https://review.opendev.org/#/c/747395/ -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zhangbailin at inspur.com Fri Sep 4 08:41:21 2020 From: zhangbailin at inspur.com (=?gb2312?B?QnJpbiBaaGFuZyjVxbDZwdYp?=) Date: Fri, 4 Sep 2020 08:41:21 +0000 Subject: [mq][mariadb] What kind of reasons are considered when choosing the mq/mariadb version Message-ID: Hi all We are using kolla to build the OpenStack, use kolla-ansible to deploy rabbitmq is v3.7.10, mariadb is v10.1.x when building Rocky release rabbitmq is v3.8.5, mariadb is v10.3.x when building Ussuri release what kind of reasons are considered as rabbitmq version is changed from v3.7.10 to v3.8.5 ? what kind of reasons are considered as mariadb version is changed from v10.1.x to v10.3.x ? If you can provide an explanation, or key explanation, we would be very grateful. Thanks. brinzhang -------------- next part -------------- An HTML attachment was scrubbed... URL: From yamamoto at midokura.com Fri Sep 4 08:47:04 2020 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Fri, 4 Sep 2020 17:47:04 +0900 Subject: [neutron][networking-midonet] Maintainers needed In-Reply-To: <4D778DBF-F505-462F-B85D-0B372085FA72@gmail.com> References: <0AC5AC07-E97E-43CC-B344-A3E992B8CCA4@netways.de> <610412AF-AADF-44BD-ABA2-BA289B7C8F8A@redhat.com> <5E2F5826-559E-42E9-84C5-FA708E5A122A@gmail.com> <43C4AF2B-C5C0-40EB-B621-AC6799471D01@gmail.com> <92959221-0353-4D48-8726-8FE71AFEA652@gmail.com> <4D778DBF-F505-462F-B85D-0B372085FA72@gmail.com> Message-ID: On Fri, Sep 4, 2020 at 10:28 AM Sam Morrison wrote: > > OK have been making some good progress, I now have devstack on bionic working with midonet ml2, it should also work with 20.04 but need to confirm nice work. thank you. > > Needed a couple of patches to networking-midonet to get it working [1] > > I also need to point it to our git repo for now until we get the upstream deb packages upgraded > > MIDONET_DEB_URI=http://download.rc.nectar.org.au/nectar-ubuntu > MIDONET_DEB_SUITE=bionic > > The changes we have on midonet to work with bionic are now in a pull request [2] > Once that’s merged need to find a way to build a new package and upload that to the midonet repo, Yamamoto is that something you can help with? i'm talking to our infra folks but it might take longer than i hoped. if you or someone else can provide a public repo, it might be faster. (i have looked at launchpad PPA while ago. but it didn't seem straightforward given the complex build machinary in midonet.) > > I’m not sure why the pep8 job is failing, it is complaining about pecan which makes me think this is an issue with neutron itself? Kinda stuck on this one, it’s probably something silly. probably. > > For the py3 unit tests they are now failing due to db migration errors in tap-as-a-service, l2-gateway and vpnaas issues I think caused by neutron getting rid of the liberty alembic branch and so we need to squash these on these projects too. this thing? https://review.opendev.org/#/c/749866/ > > > > I can now start to look into the devstack zuul jobs. > > Cheers, > Sam > > > [1] https://github.com/NeCTAR-RC/networking-midonet/commits/devstack > [2] https://github.com/midonet/midonet/pull/9 > > > > > > On 1 Sep 2020, at 4:03 pm, Sam Morrison wrote: > > > > > > > >> On 1 Sep 2020, at 2:59 pm, Takashi Yamamoto wrote: > >> > >> hi, > >> > >> On Tue, Sep 1, 2020 at 1:39 PM Sam Morrison wrote: > >>> > >>> > >>> > >>>> On 1 Sep 2020, at 11:49 am, Takashi Yamamoto wrote: > >>>> > >>>> Sebastian, Sam, > >>>> > >>>> thank you for speaking up. 
> >>>> > >>>> as Slawek said, the first (and probably the biggest) thing is to fix the ci. > >>>> the major part for it is to make midonet itself to run on ubuntu > >>>> version used by the ci. (18.04, or maybe directly to 20.04) > >>>> https://midonet.atlassian.net/browse/MNA-1344 > >>>> iirc, the remaining blockers are: > >>>> * libreswan (used by vpnaas) > >>>> * vpp (used by fip64) > >>>> maybe it's the easiest to drop those features along with their > >>>> required components, if it's acceptable for your use cases. > >>> > >>> We are running midonet-cluster and midolman on 18.04, we dropped those package dependencies from our ubuntu package to get it working. > >>> > >>> We currently have built our own and host in our internal repo but happy to help putting this upstream somehow. Can we upload them to the midonet apt repo, does it still exist? > >> > >> it still exists. but i don't think it's maintained well. > >> let me find and ask someone in midokura who "owns" that part of infra. > >> > >> does it also involve some package-related modifications to midonet repo, right? > > > > > > Yes a couple, I will send up as as pull requests to https://github.com/midonet/midonet today or tomorrow > > > > Sam > > > > > > > >> > >>> > >>> I’m keen to do the work but might need a bit of guidance to get started, > >>> > >>> Sam > >>> > >>> > >>> > >>> > >>> > >>> > >>>> > >>>> alternatively you might want to make midonet run in a container. (so > >>>> that you can run it with older ubuntu, or even a container trimmed for > >>>> JVM) > >>>> there were a few attempts to containerize midonet. > >>>> i think this is the latest one: https://github.com/midonet/midonet-docker > >>>> > >>>> On Fri, Aug 28, 2020 at 7:10 AM Sam Morrison wrote: > >>>>> > >>>>> We (Nectar Research Cloud) use midonet heavily too, it works really well and we haven’t found another driver that works for us. We tried OVN but it just doesn’t scale to the size of environment we have. > >>>>> > >>>>> I’m happy to help too. > >>>>> > >>>>> Cheers, > >>>>> Sam > >>>>> > >>>>> > >>>>> > >>>>>> On 31 Jul 2020, at 2:06 am, Slawek Kaplonski wrote: > >>>>>> > >>>>>> Hi, > >>>>>> > >>>>>> Thx Sebastian for stepping in to maintain the project. That is great news. > >>>>>> I think that at the beginning You should do 2 things: > >>>>>> - sync with Takashi Yamamoto (I added him to the loop) as he is probably most active current maintainer of this project, > >>>>>> - focus on fixing networking-midonet ci which is currently broken - all scenario jobs aren’t working fine on Ubuntu 18.04 (and we are going to move to 20.04 in this cycle), migrate jobs to zuulv3 from the legacy ones and finally add them to the ci again, > >>>>>> > >>>>>> I can of course help You with ci jobs if You need any help. Feel free to ping me on IRC or email (can be off the list). > >>>>>> > >>>>>>> On 29 Jul 2020, at 15:24, Sebastian Saemann wrote: > >>>>>>> > >>>>>>> Hi Slawek, > >>>>>>> > >>>>>>> we at NETWAYS are running most of our neutron networking on top of midonet and wouldn't be too happy if it gets deprecated and removed. So we would like to take over the maintainer role for this part. > >>>>>>> > >>>>>>> Please let me know how to proceed and how we can be onboarded easily. > >>>>>>> > >>>>>>> Best regards, > >>>>>>> > >>>>>>> Sebastian > >>>>>>> > >>>>>>> -- > >>>>>>> Sebastian Saemann > >>>>>>> Head of Managed Services > >>>>>>> > >>>>>>> NETWAYS Managed Services GmbH | Deutschherrnstr. 
15-19 | D-90429 Nuernberg > >>>>>>> Tel: +49 911 92885-0 | Fax: +49 911 92885-77 > >>>>>>> CEO: Julian Hein, Bernd Erk | AG Nuernberg HRB25207 > >>>>>>> https://netways.de | sebastian.saemann at netways.de > >>>>>>> > >>>>>>> ** NETWAYS Web Services - https://nws.netways.de ** > >>>>>> > >>>>>> — > >>>>>> Slawek Kaplonski > >>>>>> Principal software engineer > >>>>>> Red Hat >
From radoslaw.piliszek at gmail.com Fri Sep 4 10:16:29 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 4 Sep 2020 12:16:29 +0200 Subject: [mq][mariadb] What kind of reasons are considered when choosing the mq/mariadb version In-Reply-To: References: Message-ID: Hi Brin, most of the time the distribution dictates the new version. However, sometimes it is forced due to issues with new versions of OpenStack and old versions of these dependencies (for example MariaDB 10.1 would fail to work with newer releases due to lack of MySQL compatibility). Newer versions usually mean new features that are useful for end users. In this case RabbitMQ 3.8 shines with its new Prometheus exporter. We generally try to avoid needless updates but still stay current enough to receive proper support from upstreams and satisfy the users. -yoctozepto On Fri, Sep 4, 2020 at 10:54 AM Brin Zhang(张百林) wrote: > > Hi all > > > > We are using kolla to build the OpenStack, use kolla-ansible to deploy > > rabbitmq is v3.7.10, mariadb is v10.1.x when building Rocky release > > rabbitmq is v3.8.5, mariadb is v10.3.x when building Ussuri release > > > > what kind of reasons are considered as rabbitmq version is changed from v3.7.10 to v3.8.5 ? > > what kind of reasons are considered as mariadb version is changed from v10.1.x to v10.3.x ? > > > > If you can provide an explanation, or key explanation, we would be very grateful. > > Thanks. > > > > brinzhang > >
From tom.v.black at gmail.com Fri Sep 4 10:29:49 2020 From: tom.v.black at gmail.com (Tom Black) Date: Fri, 4 Sep 2020 18:29:49 +0800 Subject: [mq][mariadb] What kind of reasons are considered when choosing the mq/mariadb version In-Reply-To: References: Message-ID: Also, RabbitMQ 3.8 has a lot of performance improvements over the older versions. regards. Radosław Piliszek wrote: > Newer versions usually mean new features that are useful for end > users. In this case RabbitMQ 3.8 shines with its new Prometheus > exporter.
From smooney at redhat.com Fri Sep 4 11:02:54 2020 From: smooney at redhat.com (Sean Mooney) Date: Fri, 04 Sep 2020 12:02:54 +0100 Subject: [nova] Session Recording for Feedback In-Reply-To: References: Message-ID: <8ba2f8e59877f796558158a950be3acbd10eb0f9.camel@redhat.com> On Thu, 2020-09-03 at 19:11 +0000, Apsey, Christopher wrote: > Nova supports using the websockify -record functionality to capture frames sent over arbitrary console sessions. Hmm, that is an overstatement of nova's capability. You may be able to use -record, but that is not a deliberate feature that we support as far as I am aware. We do not have a REST API for this functionality, so it's not consumable by non-admins that don't have access to the host. The capability is really an implementation detail that is not well documented and not part of the public API. So I would not frame it as "nova supports this use case"; rather, the config option was just added when nova-novncproxy was brought back into nova in https://github.com/openstack/nova/commit/13871ad4f39361531dff1abd7f9257369862cccc It is probably something that never should have existed in the nova tree.
I suspect it was added by the novnc folks when the nova-novncproxy code was in the novnc repo, and it was included in our repo to have parity when we imported it. So it's an option that still exists, but, like me, I'm sure it's news to others, and I don't think it's something we would want to continue to support in its current form. > Currently, the record = path/to/file option in nova.conf saves these sessions on nova-(spice|vnc)proxy endpoints at > the specified path in the format of /path/to/file.(session number). These session numbers appear to be incrementally > created starting from 1 and don't really have an association with their respective instance from what I can tell. It > would be useful to have these files saved using the instance UUID instead. You are referring to these options: https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.record https://github.com/openstack/nova/blob/master/nova/conf/novnc.py#L19-L24 I assume that you would prefer that, instead of recording the info via session id, it used the instance uuid or a combination of both for simple correlation. I'm more of the mind that, now that we have observed this still exists, we should probably deprecate it and remove it rather than enhance it. > > Ignoring that minor problem, playing back these files is a challenge. I've tried using noVNCs vnc_playback.html > function from both the master branch and stable/v0.6 with no luck - it looks like the libraries and the utilities > haven't been maintained at the same cadence and there are missing functions that the player is expecting. > > My goal here is to be able to capture the activities of students as they progress through various exercises and then > have instructors be able to go over their work with them after the fact if they have questions. Has anyone been able > to do this (or something like this) successfully, or are we just better off trying to do this in-band per instance? I would suggest doing it in band. It is an interesting use case, I'll admit, but I'm not thrilled by the idea of having files recorded to the novncproxy host that are never cleaned up. This is off by default so it's not a security risk, but otherwise it could be a vector to fill the disk of the controller or novnc proxy host. If we were to properly support this in nova, I would want to see a way to enable or disable this per instance at the API level and retrieve the files, possibly storing them to swift or something periodically at a given size or time interval. I certainly would want to see size limits and session limits per instance before considering it supported for production use. > > Chris Apsey > GEORGIA CYBER CENTER >
From emilien at redhat.com Fri Sep 4 12:11:06 2020 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 4 Sep 2020 08:11:06 -0400 Subject: [tripleo] centos-binary -> openstack- In-Reply-To: References: Message-ID: On Fri, Sep 4, 2020 at 3:40 AM Mark Goddard wrote: > On Thu, 3 Sep 2020 at 12:56, Wesley Hayutin wrote: > > > > Greetings, > > > > The container names in master have changed from centos-binary* to > openstack*. > > > https://opendev.org/openstack/tripleo-common/src/branch/master/container-images/tripleo_containers.yaml > > > > > https://opendev.org/openstack/tripleo-common/commit/90f6de7a7fab15e9161c1f03acecaf98726298f1 > > > > If your patches are failing to pull > https://registry-1.docker.io/v2/tripleomaster/centos-binary* it's not > going to be fixed in a recheck. Check that your patches are rebased and > your dependent patches are rebased.
> > Hi Wes, > > Can we infer from this that Tripleo is no longer using Kolla on master? > Mark, true, we no longer rely on Kolla on master, and removed our overrides. We backported all the work to Ussuri and Train but for backward compatibility will keep Kolla support. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Fri Sep 4 12:37:57 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 4 Sep 2020 14:37:57 +0200 Subject: [neutron][networking-midonet] Maintainers needed In-Reply-To: References: <0AC5AC07-E97E-43CC-B344-A3E992B8CCA4@netways.de> <610412AF-AADF-44BD-ABA2-BA289B7C8F8A@redhat.com> <5E2F5826-559E-42E9-84C5-FA708E5A122A@gmail.com> <43C4AF2B-C5C0-40EB-B621-AC6799471D01@gmail.com> <92959221-0353-4D48-8726-8FE71AFEA652@gmail.com> <4D778DBF-F505-462F-B85D-0B372085FA72@gmail.com> Message-ID: <20200904123757.d6mpyjcyxlgk63od@skaplons-mac> Hi, On Fri, Sep 04, 2020 at 05:47:04PM +0900, Takashi Yamamoto wrote: > On Fri, Sep 4, 2020 at 10:28 AM Sam Morrison wrote: > > > > OK have been making some good progress, I now have devstack on bionic working with midonet ml2, it should also work with 20.04 but need to confirm > > nice work. thank you. > > > > > Needed a couple of patches to networking-midonet to get it working [1] > > > > I also need to point it to our git repo for now until we get the upstream deb packages upgraded > > > > MIDONET_DEB_URI=http://download.rc.nectar.org.au/nectar-ubuntu > > MIDONET_DEB_SUITE=bionic > > > > The changes we have on midonet to work with bionic are now in a pull request [2] > > Once that’s merged need to find a way to build a new package and upload that to the midonet repo, Yamamoto is that something you can help with? > > i'm talking to our infra folks but it might take longer than i hoped. > if you or someone else can provide a public repo, it might be faster. > (i have looked at launchpad PPA while ago. but it didn't seem > straightforward given the complex build machinary in midonet.) > > > > > I’m not sure why the pep8 job is failing, it is complaining about pecan which makes me think this is an issue with neutron itself? Kinda stuck on this one, it’s probably something silly. > > probably. > > > > > For the py3 unit tests they are now failing due to db migration errors in tap-as-a-service, l2-gateway and vpnaas issues I think caused by neutron getting rid of the liberty alembic branch and so we need to squash these on these projects too. > > this thing? https://review.opendev.org/#/c/749866/ Yes, this revert should solve problem with db migration of some stadium projects. > > > > > > > > > I can now start to look into the devstack zuul jobs. > > > > Cheers, > > Sam > > > > > > [1] https://github.com/NeCTAR-RC/networking-midonet/commits/devstack > > [2] https://github.com/midonet/midonet/pull/9 > > > > > > > > > > > On 1 Sep 2020, at 4:03 pm, Sam Morrison wrote: > > > > > > > > > > > >> On 1 Sep 2020, at 2:59 pm, Takashi Yamamoto wrote: > > >> > > >> hi, > > >> > > >> On Tue, Sep 1, 2020 at 1:39 PM Sam Morrison wrote: > > >>> > > >>> > > >>> > > >>>> On 1 Sep 2020, at 11:49 am, Takashi Yamamoto wrote: > > >>>> > > >>>> Sebastian, Sam, > > >>>> > > >>>> thank you for speaking up. > > >>>> > > >>>> as Slawek said, the first (and probably the biggest) thing is to fix the ci. > > >>>> the major part for it is to make midonet itself to run on ubuntu > > >>>> version used by the ci. 
(18.04, or maybe directly to 20.04) > > >>>> https://midonet.atlassian.net/browse/MNA-1344 > > >>>> iirc, the remaining blockers are: > > >>>> * libreswan (used by vpnaas) > > >>>> * vpp (used by fip64) > > >>>> maybe it's the easiest to drop those features along with their > > >>>> required components, if it's acceptable for your use cases. > > >>> > > >>> We are running midonet-cluster and midolman on 18.04, we dropped those package dependencies from our ubuntu package to get it working. > > >>> > > >>> We currently have built our own and host in our internal repo but happy to help putting this upstream somehow. Can we upload them to the midonet apt repo, does it still exist? > > >> > > >> it still exists. but i don't think it's maintained well. > > >> let me find and ask someone in midokura who "owns" that part of infra. > > >> > > >> does it also involve some package-related modifications to midonet repo, right? > > > > > > > > > Yes a couple, I will send up as as pull requests to https://github.com/midonet/midonet today or tomorrow > > > > > > Sam > > > > > > > > > > > >> > > >>> > > >>> I’m keen to do the work but might need a bit of guidance to get started, > > >>> > > >>> Sam > > >>> > > >>> > > >>> > > >>> > > >>> > > >>> > > >>>> > > >>>> alternatively you might want to make midonet run in a container. (so > > >>>> that you can run it with older ubuntu, or even a container trimmed for > > >>>> JVM) > > >>>> there were a few attempts to containerize midonet. > > >>>> i think this is the latest one: https://github.com/midonet/midonet-docker > > >>>> > > >>>> On Fri, Aug 28, 2020 at 7:10 AM Sam Morrison wrote: > > >>>>> > > >>>>> We (Nectar Research Cloud) use midonet heavily too, it works really well and we haven’t found another driver that works for us. We tried OVN but it just doesn’t scale to the size of environment we have. > > >>>>> > > >>>>> I’m happy to help too. > > >>>>> > > >>>>> Cheers, > > >>>>> Sam > > >>>>> > > >>>>> > > >>>>> > > >>>>>> On 31 Jul 2020, at 2:06 am, Slawek Kaplonski wrote: > > >>>>>> > > >>>>>> Hi, > > >>>>>> > > >>>>>> Thx Sebastian for stepping in to maintain the project. That is great news. > > >>>>>> I think that at the beginning You should do 2 things: > > >>>>>> - sync with Takashi Yamamoto (I added him to the loop) as he is probably most active current maintainer of this project, > > >>>>>> - focus on fixing networking-midonet ci which is currently broken - all scenario jobs aren’t working fine on Ubuntu 18.04 (and we are going to move to 20.04 in this cycle), migrate jobs to zuulv3 from the legacy ones and finally add them to the ci again, > > >>>>>> > > >>>>>> I can of course help You with ci jobs if You need any help. Feel free to ping me on IRC or email (can be off the list). > > >>>>>> > > >>>>>>> On 29 Jul 2020, at 15:24, Sebastian Saemann wrote: > > >>>>>>> > > >>>>>>> Hi Slawek, > > >>>>>>> > > >>>>>>> we at NETWAYS are running most of our neutron networking on top of midonet and wouldn't be too happy if it gets deprecated and removed. So we would like to take over the maintainer role for this part. > > >>>>>>> > > >>>>>>> Please let me know how to proceed and how we can be onboarded easily. > > >>>>>>> > > >>>>>>> Best regards, > > >>>>>>> > > >>>>>>> Sebastian > > >>>>>>> > > >>>>>>> -- > > >>>>>>> Sebastian Saemann > > >>>>>>> Head of Managed Services > > >>>>>>> > > >>>>>>> NETWAYS Managed Services GmbH | Deutschherrnstr. 
15-19 | D-90429 Nuernberg > > >>>>>>> Tel: +49 911 92885-0 | Fax: +49 911 92885-77 > > >>>>>>> CEO: Julian Hein, Bernd Erk | AG Nuernberg HRB25207 > > >>>>>>> https://netways.de | sebastian.saemann at netways.de > > >>>>>>> > > >>>>>>> ** NETWAYS Web Services - https://nws.netways.de ** > > >>>>>> > > >>>>>> — > > >>>>>> Slawek Kaplonski > > >>>>>> Principal software engineer > > >>>>>> Red Hat > > > -- Slawek Kaplonski Principal software engineer Red Hat From sean.mcginnis at gmx.com Fri Sep 4 12:46:59 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 4 Sep 2020 07:46:59 -0500 Subject: [oslo][ffe] request for oslo In-Reply-To: References: Message-ID: On 9/4/20 2:44 AM, Herve Beraud wrote: > Hey Oslofolk, > > I request an FFE for these oslo.messaging changes [1]. > > The goal of these changes is to run rabbitmq heartbeat in python > thread by default. > > Also these changes deprecating this option to prepare future removal > and force to always run heartbeat in a python thread whatever the context. > > Land these changes during the victoria cycle can help us to prime the > option removal during the next cycle. > > Thanks for your time, > > [1] https://review.opendev.org/#/c/747395/ > With the overall non-client library freeze yesterday, this isn't just an Oslo feature freeze request. It is also a release and requirements exception as well. The code change itself is very minor. But this flips a default for a behavior that hasn't been given very wide usage and runtime. I would be very cautious about making a change like that right as we are locking down things and trying to stabilize for the final release. It's a nice change, but personally I would feel more comfortable giving it all of the wallaby development cycle running as the new default to make sure there are no unintended side effects. I am interested in hearing Ben's opinion though. Sean From gfidente at redhat.com Fri Sep 4 13:22:51 2020 From: gfidente at redhat.com (Giulio Fidente) Date: Fri, 4 Sep 2020 15:22:51 +0200 Subject: [tripleo] docker.io rate limiting In-Reply-To: References: Message-ID: <9f9606a3-d8e8-bc66-3440-8cc5ae080d64@redhat.com> On 9/2/20 1:54 PM, Wesley Hayutin wrote: > Greetings, > > Some of you have contacted me regarding the recent news regarding > docker.io 's new policy with regards to container pull > rate limiting [1].  I wanted to take the opportunity to further > socialize our plan that will completely remove docker.io > from our upstream workflows and avoid any rate > limiting issues. thanks; I guess this will be a problem for the ceph containers as well > We will continue to upload containers to docker.io > for some time so that individuals and the community can access the > containers.  We will also start exploring other registries like quay and > newly announced github container registry. These other public registries > will NOT be used in our upstream jobs and will only serve the > communities individual contributors. 
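for what it's worth, on the tripleo side the ceph container source is just a handful of parameters in the ContainerImagePrepare defaults, so repointing jobs at another registry is mostly a question of where the images actually get published; a rough sketch, assuming I remember the parameter names right (the values below are only illustrative):

  openstack tripleo container image prepare default \
    --output-env-file containers-prepare-parameter.yaml
  # then override the ceph source in the generated file, e.g.
  #   ceph_namespace: quay.io/ceph
  #   ceph_image: daemon
  #   ceph_tag: whatever tag ends up being published there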
I don't think ceph found alternatives yet, but Guillaume or Dimitri might know more about it -- Giulio Fidente GPG KEY: 08D733BA From dtroyer at gmail.com Fri Sep 4 14:15:48 2020 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 4 Sep 2020 09:15:48 -0500 Subject: [all][TC] OpenStack Client (OSC) vs python-*clients In-Reply-To: References: <1668118.VLH7GnMWUR@whitebase.usersys.redhat.com> <9cbf9d69a9beb30d03af71e42a3e2446a516292a.camel@redhat.com> <20200813164131.bdmhankpd2qxycux@yuggoth.org> <2956d6bd-320e-34ea-64a0-1001e102d75c@gmail.com> Message-ID: On Thu, Sep 3, 2020 at 8:07 AM Belmiro Moreira wrote: > However, I don't think we have reached any conclusion. You have reached the same conclusion that gets reached whenever this comes up. Lots of desire and hope, no resources. > It would be great if the SDK/openstackclient team and the different projects that raised some concerns can collaborate and move forward. I am going to be blunt. Until someone steps up to dedicate resources to both projects the overall situation will only continue to deteriorate. Foundation member companies do not see a CLI as a priority.[0] And it is much more than a single person can carry... The SDK is clearly in better shape than OSC, Monty and team are making great progress there. For OSC, Keystone was completed as early as it was because of the priority they put on getting out of the CLI business and the enormous amount of work Keystone devs did for OSC..[1] dt [0] I was told once by one of the Platinum CEOs that clients (not just CLIs) were unimportant. History seems to be more on his side than ours. [1] In lieu of forgetting someone I'll just say that said stevemar's contributions to OSC as a whole are still unequalled. Thanks Steve! -- Dean Troyer dtroyer at gmail.com From Lukasluedke at web.de Fri Sep 4 14:46:43 2020 From: Lukasluedke at web.de (Lukasluedke at web.de) Date: Fri, 4 Sep 2020 16:46:43 +0200 Subject: [Kolla Ansible] Error in "heat" stage during deployment - Followed Quick Start Guide (ubuntu 18.04.5, virtualbox) Message-ID: Hi everyone,   I am new to openstack and just followed the "Quick Start" guide for kolla-ansible [https://docs.openstack.org/kolla-ansible/ussuri/user/quickstart.html] on ussuri release, but now having some problems during deployment. I used virtualbox to create a new vm with ubuntu 18.04.5 [https://releases.ubuntu.com/18.04/ubuntu-18.04.5-live-server-amd64.iso.torrent] and attached one interface (enp0s3, 10.0.2.15) with Nat-Network and the second interface as for the neutron interface (enp0s8, no ip) an internal network and named it "neutron". Then I followed the guide to create a new virtual python environment (logged in over ssh using "user" user). -- sudo apt-get update sudo apt-get install python3-dev libffi-dev gcc libssl-dev sudo apt-get install python3-venv python3 -m venv $(pwd) source $(pwd)/bin/activate pip install -U pip pip install 'ansible<2.10' -- Everything seems to work as mentioned with no problems so far. (Beside the missing "python-venv" package in the docs) -- pip install kolla-ansible sudo mkdir -p /etc/kolla sudo chown $USER:$USER /etc/kolla cp -r $(pwd)/share/kolla-ansible/etc_examples/kolla/* /etc/kolla cp $(pwd)/share/kolla-ansible/ansible/inventory/* . mkdir /etc/ansible sudo nano /etc/ansible/ansible.cfg ### content from /etc/ansible/ansible.cfg [defaults] host_key_checking=False pipelining=True forks=100 ### -- I then modified the "multinode" file to only deploy on localhost (replaced all control01, etc. 
with localhost) as I first wanted to try openstack on one node. The ping worked, changed the globals.yml file according to the documentation, then bootstrap and the prechecks worked so far with no issue. -- ansible -i multinode all -m ping kolla-genpwd sudo nano /etc/kolla/globals.yml ### additional content of /etc/kolla/globals.yml kolla_base_distro: "ubuntu" kolla_install_type: "source" network_interface: "enp0s3" neutron_external_interface: "enp0s8" kolla_internal_vip_address: "10.0.2.10" ### kolla-ansible -i ./multinode bootstrap-servers kolla-ansible -i ./multinode prechecks -- But on the deploy stage I am stuck right now: -- kolla-ansible -i ./multinode deploy -- The mentioned error is telling me, that the keystone service is not available. I attached the error log file, because the error is longer. As a side node, now when I try to login using tty, the vm is nearly frozen and only prompts very, very slowly (about 10 minutes for prompt). ssh in putty is also frozen and the connection broke after about 30 minutes. Does anyone know what is wrong or has some hints/insight on how to correctly deploy openstack using kolla-ansible? Best Regards, Lukas Lüdke -------------- next part -------------- A non-text attachment was scrubbed... Name: openstack_kolla_error_heat.log Type: application/octet-stream Size: 10818 bytes Desc: not available URL: From mahdi.abbasi.2013 at gmail.com Fri Sep 4 06:44:33 2020 From: mahdi.abbasi.2013 at gmail.com (mahdi abbasi) Date: Fri, 4 Sep 2020 11:14:33 +0430 Subject: Kuryr openstack In-Reply-To: References: Message-ID: Thanks a alot This issue has been resolved On Fri, 4 Sep 2020, 11:09 Luis Tomas Bolivar, wrote: > Hi Mahdi, > > > On Thu, Sep 3, 2020 at 6:46 PM mahdi abbasi > wrote: > >> Hi development team, >> >> Do i need a special configuration to use kuryr when using openvswitch? >> > > Are you using kuryr-kubernetes or kuryr-libnetwork? > > And yes, it works find with openvswitch. Only caveat is that for nested > environments (running kuryr-kubernetes inside OpenStack VMs), the firewall > driver must be set to openvswitch (instead of iptables_hybrid) > >> I get the following error when starting Docker container in zun: >> >> Unable to create the network.No tenant network is availble for allocation. >> > > Seems like you are using the namespace subnet driver (1 neutron subnet per > k8s namespace) and you have not set up a neutron subnet pool to use. > > Cheers, > Luis > >> >> Please help me. >> >> Best regards >> Mahdi >> > > > -- > LUIS TOMÁS BOLÍVAR > Senior Software Engineer > Red Hat > Madrid, Spain > ltomasbo at redhat.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mahdi.abbasi.2013 at gmail.com Fri Sep 4 06:45:30 2020 From: mahdi.abbasi.2013 at gmail.com (mahdi abbasi) Date: Fri, 4 Sep 2020 11:15:30 +0430 Subject: Kuryr openstack In-Reply-To: References: Message-ID: Thanks a lot This issue has been resolved On Thu, 3 Sep 2020, 23:28 Radosław Piliszek, wrote: > Hi Mahdi, > > have you created any networks (as in Neutron resource) in that project? > You need to have at least one network for Zun to use. > > -yoctozepto > > On Thu, Sep 3, 2020 at 6:50 PM mahdi abbasi > wrote: > > > > Hi development team, > > > > Do i need a special configuration to use kuryr when using openvswitch? > > I get the following error when starting Docker container in zun: > > > > Unable to create the network.No tenant network is availble for > allocation. > > > > Please help me. 
> > > > Best regards > > Mahdi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From huyp at inspur.com Fri Sep 4 08:29:12 2020 From: huyp at inspur.com (=?gb2312?B?U2hlbGRvbiBIdSi6+tPxxfQp?=) Date: Fri, 4 Sep 2020 08:29:12 +0000 Subject: [mq][mariadb] what kind of reasons are considered when choosing the mq/mariadb version Message-ID: Hi all I use kolla to build the openstack version, use kolla-ansible to deploy rabbitmq is v3.7.10, mariadb is v10.1.x when building rocky version rabbitmq is v3.8.5, mariadb is v10.3.x when building ussuri version what kind of reasons are considered as rabbitmq version is changed from v3.7.10 to v3.8.5 ? what kind of reasons are considered as mariadb version is changed from v10.1.x to v10.3.x ? many thanks. Sheldon Hu | 胡玉鹏 CBRD | 云计算与大数据研发部 T: 18663761785 E: huyp at inspur.com 浪潮电子信息产业股份有限公司 Inspur Electronic Information Industry Co.,Ltd. 山东省济南市历城区东八区企业公馆A7-1 Building A7-1, Dongbaqu Office Block, Licheng District, Jinan,Shandong Province, PRC 浪潮云海 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 7250 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 3667 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4081 bytes Desc: not available URL: From pierre at stackhpc.com Fri Sep 4 15:46:46 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Fri, 4 Sep 2020 17:46:46 +0200 Subject: [cloudkitty] Virtual PTG planning Message-ID: Hi, You may know that the next PTG will be held virtually during the week of October 26-30, 2020. I will very likely *not* be available during that time, so I would like to hear from the CloudKitty community: - if you would like to meet and for how long (a half day may be enough depending on the agenda) - what day and time is preferred (see list in https://ethercalc.openstack.org/7xp2pcbh1ncb) - if anyone is willing to chair the discussions (I can help you prepare an agenda before the event) Thanks in advance, Pierre Riteau (priteau) -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Fri Sep 4 16:12:49 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 4 Sep 2020 10:12:49 -0600 Subject: [tripleo] docker.io rate limiting In-Reply-To: <9f9606a3-d8e8-bc66-3440-8cc5ae080d64@redhat.com> References: <9f9606a3-d8e8-bc66-3440-8cc5ae080d64@redhat.com> Message-ID: On Fri, Sep 4, 2020 at 7:23 AM Giulio Fidente wrote: > On 9/2/20 1:54 PM, Wesley Hayutin wrote: > > Greetings, > > > > Some of you have contacted me regarding the recent news regarding > > docker.io 's new policy with regards to container pull > > rate limiting [1]. I wanted to take the opportunity to further > > socialize our plan that will completely remove docker.io > > from our upstream workflows and avoid any rate > > limiting issues. > > thanks; I guess this will be a problem for the ceph containers as well > > > We will continue to upload containers to docker.io > > for some time so that individuals and the community can access the > > containers. We will also start exploring other registries like quay and > > newly announced github container registry. 
These other public registries > > will NOT be used in our upstream jobs and will only serve the > > communities individual contributors. > > I don't think ceph found alternatives yet, but Guillaume or Dimitri > might know more about it > -- > talk to Fulton.. I think we'll have ceph covered from a tripleo perspective. Not sure about anything else. > Giulio Fidente > GPG KEY: 08D733BA > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Fri Sep 4 16:19:02 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 4 Sep 2020 18:19:02 +0200 Subject: [Kolla Ansible] Error in "heat" stage during deployment - Followed Quick Start Guide (ubuntu 18.04.5, virtualbox) In-Reply-To: References: Message-ID: Hello Lukas, It sounds like you did it well overally. Did you check the load on that machine? It could be that it turned out to be too weak to handle the desired set of services. It sounds like it started to swap crazy. Try something like 4G for starters (it would probably work with 2 but 4 is safe for sure!). Another reason could be that networking went wrong with that VIP address. But it would likely have occurred earlier than the heat deployment step. -yoctozepto On Fri, Sep 4, 2020 at 5:01 PM Lukasluedke at web.de wrote: > > Hi everyone, > > I am new to openstack and just followed > the "Quick Start" guide for kolla-ansible > [https://docs.openstack.org/kolla-ansible/ussuri/user/quickstart.html] > on ussuri release, but now having some problems during deployment. > I used virtualbox to create a new vm with ubuntu 18.04.5 > [https://releases.ubuntu.com/18.04/ubuntu-18.04.5-live-server-amd64.iso.torrent] > and attached one interface (enp0s3, 10.0.2.15) with Nat-Network and > the second interface as for the neutron interface (enp0s8, no ip) > an internal network and named it "neutron". > Then I followed the guide to create a new virtual python environment > (logged in over ssh using "user" user). > -- > sudo apt-get update > sudo apt-get install python3-dev libffi-dev gcc libssl-dev > sudo apt-get install python3-venv > > python3 -m venv $(pwd) > source $(pwd)/bin/activate > pip install -U pip > pip install 'ansible<2.10' > -- > > Everything seems to work as mentioned with no problems so far. > (Beside the missing "python-venv" package in the docs) > -- > pip install kolla-ansible > sudo mkdir -p /etc/kolla > sudo chown $USER:$USER /etc/kolla > cp -r $(pwd)/share/kolla-ansible/etc_examples/kolla/* /etc/kolla > cp $(pwd)/share/kolla-ansible/ansible/inventory/* . > > mkdir /etc/ansible > sudo nano /etc/ansible/ansible.cfg > ### content from /etc/ansible/ansible.cfg > [defaults] > host_key_checking=False > pipelining=True > forks=100 > ### > -- > > I then modified the "multinode" file to only deploy on localhost > (replaced all control01, etc. with localhost) as I first wanted > to try openstack on one node. > The ping worked, changed the globals.yml file according to the > documentation, then bootstrap and the prechecks worked > so far with no issue. 
> -- > ansible -i multinode all -m ping > kolla-genpwd > > sudo nano /etc/kolla/globals.yml > ### additional content of /etc/kolla/globals.yml > kolla_base_distro: "ubuntu" > kolla_install_type: "source" > network_interface: "enp0s3" > neutron_external_interface: "enp0s8" > kolla_internal_vip_address: "10.0.2.10" > ### > > kolla-ansible -i ./multinode bootstrap-servers > kolla-ansible -i ./multinode prechecks > -- > > But on the deploy stage I am stuck right now: > -- > kolla-ansible -i ./multinode deploy > -- > > The mentioned error is telling me > that the keystone service is not available. > I attached the error log file, because the error is longer. > As a side note, now when I try to login using tty, > the vm is nearly frozen and only prompts very, very slowly > (about 10 minutes for prompt). ssh in putty is also frozen > and the connection broke after about 30 minutes. > > Does anyone know what is wrong or has some hints/insight on how > to correctly deploy openstack using kolla-ansible? > > Best Regards, > Lukas Lüdke From hberaud at redhat.com Fri Sep 4 16:34:35 2020 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 4 Sep 2020 18:34:35 +0200 Subject: [oslo][ffe] request for oslo In-Reply-To: References: Message-ID: Le ven. 4 sept. 2020 à 14:50, Sean McGinnis a écrit : > On 9/4/20 2:44 AM, Herve Beraud wrote: > > Hey Oslofolk, > > > > I request an FFE for these oslo.messaging changes [1]. > > > > The goal of these changes is to run rabbitmq heartbeat in python > > thread by default. > > > > Also these changes deprecating this option to prepare future removal > > and force to always run heartbeat in a python thread whatever the context. > > > > Land these changes during the victoria cycle can help us to prime the > > option removal during the next cycle. > > > > Thanks for your time, > > > > [1] https://review.opendev.org/#/c/747395/ > > > With the overall non-client library freeze yesterday, this isn't just an > Oslo feature freeze request. It is also a release and requirements > exception as well. > Right, good point. > The code change itself is very minor. But this flips a default for a > behavior that hasn't been given very wide usage and runtime. I would be > very cautious about making a change like that right as we are locking > down things and trying to stabilize for the final release. It's a nice > change, but personally I would feel more comfortable giving it all of > the wallaby development cycle running as the new default to make sure > there are no unintended side effects. > Sure we can survive without these changes until the Wallaby cycle. > I am interested in hearing Ben's opinion though.
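For reference, anyone who wants to run with this behavior before any default change can already opt in explicitly in the service configuration; a minimal sketch, assuming the usual oslo.messaging rabbit section (please double-check the exact option name against the oslo.messaging release notes for your release):

[oslo_messaging_rabbit]
heartbeat_in_pthread = true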
> > Sean > > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From johfulto at redhat.com Fri Sep 4 16:42:10 2020 From: johfulto at redhat.com (John Fulton) Date: Fri, 4 Sep 2020 12:42:10 -0400 Subject: [tripleo] docker.io rate limiting In-Reply-To: References: <9f9606a3-d8e8-bc66-3440-8cc5ae080d64@redhat.com> Message-ID: On Fri, Sep 4, 2020 at 12:13 PM Wesley Hayutin wrote: > > On Fri, Sep 4, 2020 at 7:23 AM Giulio Fidente wrote: > >> On 9/2/20 1:54 PM, Wesley Hayutin wrote: >> > Greetings, >> > >> > Some of you have contacted me regarding the recent news regarding >> > docker.io 's new policy with regards to container >> pull >> > rate limiting [1]. I wanted to take the opportunity to further >> > socialize our plan that will completely remove docker.io >> > from our upstream workflows and avoid any rate >> > limiting issues. >> >> thanks; I guess this will be a problem for the ceph containers as well >> >> > We will continue to upload containers to docker.io >> > for some time so that individuals and the community can access the >> > containers. We will also start exploring other registries like quay and >> > newly announced github container registry. These other public registries >> > will NOT be used in our upstream jobs and will only serve the >> > communities individual contributors. >> >> I don't think ceph found alternatives yet, but Guillaume or Dimitri >> might know more about it >> -- >> > > talk to Fulton.. I think we'll have ceph covered from a tripleo > perspective. Not sure about anything else. > Yes, thank you Wes for your help on the plan to cover the TripleO CI perspective. A thread similar to this one has been posted on ceph-dev [1] the outcome so far is that some Ceph projects are using quay.ceph.com to store temporary CI images to deal with the docker.io rate limits. As per an IRC conversation I had with Dimitri, ceph-ansible is not using quay.ceph.com but has made some changes to deal with current rate limits [2]. I expect they'll need to make further changes for November but my understanding is that they're still looking to push the authoritative copy of the Ceph container image [3] we use to docker.io. On the TripleO side we change that image rarely so provided it can be cached for CI jobs we should be safe. When we do change the image to the newer version we use a DNM patch [4] to pull it directly from docker. We could continue to do this as only that patch would be vulnerable to the rate limit. If we then see by way of the CI to the DNM patch that the new image is good, we can pursue getting it cached as the new image for TripleO CI Ceph jobs. 
One thing that's not clear to me is the mechanism to do this. John [1] https://lists.ceph.io/hyperkitty/list/dev at ceph.io/thread/BYZOGN3Y3CJLY35QLDL7SX6SOX74YZCE/#BYZOGN3Y3CJLY35QLDL7SX6SOX74YZCE [2] https://github.com/ceph/ceph-container/blob/master/tests/tox.sh#L86-L110 [3] https://hub.docker.com/r/ceph/daemon [4] https://review.opendev.org/#/c/690036/ > > >> Giulio Fidente >> GPG KEY: 08D733BA >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Fri Sep 4 17:48:14 2020 From: mihalis68 at gmail.com (Chris Morgan) Date: Fri, 4 Sep 2020 13:48:14 -0400 Subject: [ops] picking up some activity Message-ID: Greetings! The OpenStack Operators ("ops") meetups team will attempt to have an IRC meeting at the normal time and place (#openstack-operators on freenode at 10am EST/DST) on *Sept 15th*( following a period of complete inactivity for obvious reasons. If you're an official member of the team or even just interested in what we do, please feel free to join us. Whilst we can't yet contemplate resuming in-person meetups during this global pandemic, we can resume attempting to build the openstack operators community, share knowledge and perhaps even do some more virtual get-togethers. See you then Chris on behalf of the openstack ops meetups team -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Fri Sep 4 17:49:05 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 4 Sep 2020 19:49:05 +0200 Subject: [oslo][ffe] request for oslo In-Reply-To: References: Message-ID: I agree with Sean in that it is quite dangerous this late in the cycle. Let's do it early in the Wallaby cycle. -yoctozepto On Fri, Sep 4, 2020 at 6:44 PM Herve Beraud wrote: > > > > Le ven. 4 sept. 2020 à 14:50, Sean McGinnis a écrit : >> >> On 9/4/20 2:44 AM, Herve Beraud wrote: >> > Hey Oslofolk, >> > >> > I request an FFE for these oslo.messaging changes [1]. >> > >> > The goal of these changes is to run rabbitmq heartbeat in python >> > thread by default. >> > >> > Also these changes deprecating this option to prepare future removal >> > and force to always run heartbeat in a python thread whatever the context. >> > >> > Land these changes during the victoria cycle can help us to prime the >> > option removal during the next cycle. >> > >> > Thanks for your time, >> > >> > [1] https://review.opendev.org/#/c/747395/ >> > >> With the overall non-client library freeze yesterday, this isn't just an >> Oslo feature freeze request. It is also a release and requirements >> exception as well. > > > Right, good point. > >> >> The code change itself is very minor. But this flips a default for a >> behavior that hasn't been given very wide usage and runtime. I would be >> very cautious about making a change like that right as we are locking >> down things and trying to stabilize for the final release. It's a nice >> change, but personally I would feel more comfortable giving it all of >> the wallaby development cycle running as the new default to make sure >> there are no unintended side effects. > > > Sure we can survive without these changes until the Wallaby cycle. > >> >> I am interested in hearing Ben's opinion though. 
>> >> Sean >> >> > > > -- > Hervé Beraud > Senior Software Engineer > Red Hat - Openstack Oslo > irc: hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > From radoslaw.piliszek at gmail.com Fri Sep 4 17:54:06 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 4 Sep 2020 19:54:06 +0200 Subject: [tripleo] centos-binary -> openstack- In-Reply-To: References: Message-ID: Hi Emilien and Wes, I proposed to drop the job from Kolla's pipeline then. [1] https://review.opendev.org/750004 -yoctozepto On Fri, Sep 4, 2020 at 2:22 PM Emilien Macchi wrote: > > > > On Fri, Sep 4, 2020 at 3:40 AM Mark Goddard wrote: >> >> On Thu, 3 Sep 2020 at 12:56, Wesley Hayutin wrote: >> > >> > Greetings, >> > >> > The container names in master have changed from centos-binary* to openstack*. >> > https://opendev.org/openstack/tripleo-common/src/branch/master/container-images/tripleo_containers.yaml >> > >> > https://opendev.org/openstack/tripleo-common/commit/90f6de7a7fab15e9161c1f03acecaf98726298f1 >> > >> > If your patches are failing to pull https://registry-1.docker.io/v2/tripleomaster/centos-binary* it's not going to be fixed in a recheck. Check that your patches are rebased and your dependent patches are rebased. >> >> Hi Wes, >> >> Can we infer from this that Tripleo is no longer using Kolla on master? > > > Mark, true, we no longer rely on Kolla on master, and removed our overrides. > We backported all the work to Ussuri and Train but for backward compatibility will keep Kolla support. > -- > Emilien Macchi From kennelson11 at gmail.com Fri Sep 4 17:58:10 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 4 Sep 2020 10:58:10 -0700 Subject: [TC][PTG] Virtual PTG Planning Message-ID: Hello! So as you might have seen, the deadline to sign up for PTG time by the end of next week. To coordinate our time to meet as the TC, please fill out the poll[1] that Mohammed kindly put together for us. *We need responses by EOD Thursday September 10th* so that we can book the time in the ethercalc and fill out the survey to reserve our space before the deadline. Also, I created this planning etherpad [2] to start collecting ideas for discussion topics! Can't wait to see you all there! -Kendall & Mohammed [1] https://doodle.com/poll/hkbg44da2udxging [2] https://etherpad.opendev.org/p/tc-wallaby-ptg -------------- next part -------------- An HTML attachment was scrubbed... URL: From gouthampravi at gmail.com Fri Sep 4 18:28:06 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Fri, 4 Sep 2020 10:28:06 -0800 Subject: Mentoring Boston University Students In-Reply-To: References: Message-ID: On Wed, Sep 2, 2020 at 8:38 AM Kendall Nelson wrote: > Hey Goutham! 
> > Here is the form: > https://docs.google.com/forms/d/e/1FAIpQLSdehzBYqJeJ8x4RlPvQjTZpJ-LXs2A9vPrmRUPZNdawn1LgMg/viewform > Great, thank you Kendall! :) > > -Kendall (diablo_rojo) > > On Tue, Sep 1, 2020 at 6:56 PM Goutham Pacha Ravi > wrote: > >> Hi Kendall, >> >> We'd like to help and have help in the manila team. We have a few >> projects [1] where the on-ramp may be relatively easy - I can work with you >> and define them. How do we apply? >> >> Thanks, >> Goutham >> >> >> [1] https://etherpad.opendev.org/p/manila-todos >> >> >> >> >> >> On Tue, Sep 1, 2020 at 9:08 AM Kendall Nelson >> wrote: >> >>> Hello! >>> >>> As you may or may not know, the last two years various community members >>> have mentored students from North Dakota State University for a semester to >>> work on projects in OpenStack. Recently, I learned of a similar program at >>> Boston University and they are still looking for mentors interested for the >>> upcoming semester. >>> >>> Essentially you would have 5 to 7 students for 13 weeks to mentor and >>> work on some feature or effort in your project. >>> >>> The time to apply is running out however as the deadline is Sept 3rd. If >>> you are interested, please let me know ASAP! I am happy to help get the >>> students up to speed with the community and getting their workspaces set >>> up, but the actual work they would do is more up to you :) >>> >>> -Kendall (diablo_rojo) >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From admin at gibdev.ru Fri Sep 4 18:43:57 2020 From: admin at gibdev.ru (admin) Date: Fri, 4 Sep 2020 21:43:57 +0300 Subject: how-to enable s3api Message-ID: <06322a0a-eba3-d2a1-432a-761d001bf05d@gibdev.ru> hi where is there a normal manual on how to enable s3api support in swift? trying to go through https://docs.openstack.org/swift/latest/middleware.html and as a result keystone gives an error in authorization (trying to connect via (s3curl) Authorization failed. The request you have made requires authentication. from x.x.x.x Unauthorized: The request you have made requires authentication. From tburke at nvidia.com Fri Sep 4 21:09:07 2020 From: tburke at nvidia.com (Tim Burke) Date: Fri, 4 Sep 2020 14:09:07 -0700 Subject: [ironic] [stable] Bifrost stable/stein is broken (by eventlet?): help needed In-Reply-To: References: Message-ID: <65108979-a4d2-e9ea-266c-01d624269773@nvidia.com> On 9/3/20 3:30 AM, Dmitry Tantsur wrote: > *External email: Use caution opening links or attachments* > > > Hi folks, > > I'm trying to revive the Bifrost stable/stein CI, and after fixing a > bunch of issues in https://review.opendev.org/749014 I've hit a wall > with what seems an eventlet problem: ironic-inspector fails to start with: > > Exception AttributeError: "'_SocketDuckForFd' object has no attribute > '_closed'" in _SocketDuckForFd:16> ignored > > I've managed to find similar issues, but they should have been resolved > in the eventlet version in stein (0.24.1). Any ideas? > > If we cannot fix it, we'll have to EOL stein and earlier on bifrost. 
> > Dmitry > > -- > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > O'Neill The "ignored" makes me think that it shouldn't actually be a problem -- are we assuming that's the error because of logs like https://c972f4bb262ae2d5c5d6-598e1d61c0aab85aa3b67b337ca2c556.ssl.cf2.rackcdn.com/749014/2/check/bifrost-integration-tinyipa-ubuntu-xenial/9d2905c/logs/ironic-inspector.log ? Digging down to https://c972f4bb262ae2d5c5d6-598e1d61c0aab85aa3b67b337ca2c556.ssl.cf2.rackcdn.com/749014/2/check/bifrost-integration-tinyipa-ubuntu-xenial/9d2905c/logs/all/syslog shows tracebacks like File ".../eventlet/hubs/__init__.py", line 39, in get_default_hub import eventlet.hubs.epolls File ".../eventlet/hubs/epolls.py", line 13, in from eventlet.hubs.hub import BaseHub File ".../eventlet/hubs/hub.py", line 24, in import monotonic File ".../monotonic.py", line 169, in raise RuntimeError('no suitable implementation for this system: ' + repr(e)) RuntimeError: no suitable implementation for this system: AttributeError("'module' object has no attribute 'epolls'",) Maybe it's worth looking at why monotonic can't find a suitable implementation? Tim From nate.johnston at redhat.com Fri Sep 4 21:33:13 2020 From: nate.johnston at redhat.com (Nate Johnston) Date: Fri, 4 Sep 2020 17:33:13 -0400 Subject: [TC] Looking for a volunteer to help with community goal selection Message-ID: <20200904213313.5xotjz47vdamypzy@firewall> TC members, Since I have not been able to get in contact with Graham, I would like to see if there is anyone who would like to work with me for the selection of community goals for the W series. The commitment would be to identify a few good candidates that have well formulated goal proposals and champions, which usually involves a meeting followed by some reaching out and logistical work. I am hoping to increase the pool of those familiar with the goal selection process, so this is a great place to jump in if you have not done this before. Also, if Graham does have time then we can make sure to include him as well. Two is good but three is better! Please let me know if you are interested. Thanks! Nate From dsavinea at redhat.com Fri Sep 4 17:26:00 2020 From: dsavinea at redhat.com (Dimitri Savineau) Date: Fri, 4 Sep 2020 13:26:00 -0400 Subject: [tripleo] docker.io rate limiting In-Reply-To: References: <9f9606a3-d8e8-bc66-3440-8cc5ae080d64@redhat.com> Message-ID: Hi, We're currently in the progress of using the quay.ceph.io registry [1] with a copy of the ceph container images from docker.io and consumed by the ceph-ansible CI [2]. Official ceph images will still be updated on docker.io. Note that from a ceph-ansible point of view, switching to the quay.ceph.io registry isn't enough to get rid of the docker.io registry when deploying with the Ceph dashboard enabled. The whole monitoring stack (alertmanager, prometheus, grafana and node-exporter) coming with the Ceph dashboard is still using docker.io by default [3][4][5][6]. As an alternative, you can use the official quay registry (quay.io) for altermanager, prometheus and node-exporter images [7] from the prometheus namespace like we're doing in [2]. Only the grafana container image will still be pulled from docker.io. 
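For example, pointing the monitoring stack at those quay.io images would look roughly like the following in ceph-ansible group_vars -- a sketch only: the variable names and the tags should be double-checked against the defaults files linked in [3][4][5][6]:

alertmanager_container_image: quay.io/prometheus/alertmanager:<tag from the defaults>
prometheus_container_image: quay.io/prometheus/prometheus:<tag from the defaults>
node_exporter_container_image: quay.io/prometheus/node-exporter:<tag from the defaults>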
Regards, Dimitri [1] https://quay.ceph.io/repository/ceph-ci/daemon?tab=tags [2] https://github.com/ceph/ceph-ansible/pull/5726 [3] https://github.com/ceph/ceph-ansible/blob/master/roles/ceph-defaults/defaults/main.yml#L802 [4] https://github.com/ceph/ceph-ansible/blob/master/roles/ceph-defaults/defaults/main.yml#L793 [5] https://github.com/ceph/ceph-ansible/blob/master/roles/ceph-defaults/defaults/main.yml#L767 [6] https://github.com/ceph/ceph-ansible/blob/master/roles/ceph-defaults/defaults/main.yml#L757 [7] https://quay.io/organization/prometheus On Fri, Sep 4, 2020 at 12:42 PM John Fulton wrote: > On Fri, Sep 4, 2020 at 12:13 PM Wesley Hayutin > wrote: > >> >> On Fri, Sep 4, 2020 at 7:23 AM Giulio Fidente >> wrote: >> >>> On 9/2/20 1:54 PM, Wesley Hayutin wrote: >>> > Greetings, >>> > >>> > Some of you have contacted me regarding the recent news regarding >>> > docker.io 's new policy with regards to container >>> pull >>> > rate limiting [1]. I wanted to take the opportunity to further >>> > socialize our plan that will completely remove docker.io >>> > from our upstream workflows and avoid any rate >>> > limiting issues. >>> >>> thanks; I guess this will be a problem for the ceph containers as well >>> >>> > We will continue to upload containers to docker.io >>> > for some time so that individuals and the community can access the >>> > containers. We will also start exploring other registries like quay >>> and >>> > newly announced github container registry. These other public >>> registries >>> > will NOT be used in our upstream jobs and will only serve the >>> > communities individual contributors. >>> >>> I don't think ceph found alternatives yet, but Guillaume or Dimitri >>> might know more about it >>> -- >>> >> >> talk to Fulton.. I think we'll have ceph covered from a tripleo >> perspective. Not sure about anything else. >> > > Yes, thank you Wes for your help on the plan to cover the TripleO CI > perspective. A thread similar to this one has been posted on ceph-dev [1] > the outcome so far is that some Ceph projects are using quay.ceph.com to > store temporary CI images to deal with the docker.io rate limits. > > As per an IRC conversation I had with Dimitri, ceph-ansible is not using > quay.ceph.com but has made some changes to deal with current rate limits > [2]. I expect they'll need to make further changes for November but my > understanding is that they're still looking to push the authoritative copy > of the Ceph container image [3] we use to docker.io. > > On the TripleO side we change that image rarely so provided it can be > cached for CI jobs we should be safe. When we do change the image to the > newer version we use a DNM patch [4] to pull it directly from docker. We > could continue to do this as only that patch would be vulnerable to the > rate limit. If we then see by way of the CI to the DNM patch that the new > image is good, we can pursue getting it cached as the new image for TripleO > CI Ceph jobs. One thing that's not clear to me is the mechanism to do this. > > John > > [1] > https://lists.ceph.io/hyperkitty/list/dev at ceph.io/thread/BYZOGN3Y3CJLY35QLDL7SX6SOX74YZCE/#BYZOGN3Y3CJLY35QLDL7SX6SOX74YZCE > [2] > https://github.com/ceph/ceph-container/blob/master/tests/tox.sh#L86-L110 > [3] https://hub.docker.com/r/ceph/daemon > [4] https://review.opendev.org/#/c/690036/ > > > >> >> >>> Giulio Fidente >>> GPG KEY: 08D733BA >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From foundjem at ieee.org Fri Sep 4 19:49:39 2020 From: foundjem at ieee.org (Foundjem Armstrong) Date: Fri, 4 Sep 2020 15:49:39 -0400 Subject: [scientific] Message-ID: Hello Blair, I am wondering if this email is still functional. Regards, Armstrong From fungi at yuggoth.org Fri Sep 4 22:14:49 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 4 Sep 2020 22:14:49 +0000 Subject: [scientific-sig] ping In-Reply-To: References: Message-ID: <20200904221449.tytpnafbjph3triq@yuggoth.org> On 2020-09-04 15:49:39 -0400 (-0400), Foundjem Armstrong wrote: > I am wondering if this email is still functional. Yes, the old openstack-sigs mailing list was folded into openstack-discuss nearly two years ago, so the Scientific SIG's discussions now happen on this merged mailing list using the [scientific-sig] subject tag. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From tom.v.black at gmail.com Sat Sep 5 02:57:42 2020 From: tom.v.black at gmail.com (Tom Black) Date: Sat, 5 Sep 2020 10:57:42 +0800 Subject: [mq][mariadb] what kind of reasons are considered when choosing the mq/mariadb version In-Reply-To: References: Message-ID: RMQ 3.9 will be released in the near future, and thus 3.7 will not be supported after that. You should upgrade to the 3.8 version. Please see the RMQ official release notes. Regards On Fri, Sep 4, 2020 at 11:47 PM Sheldon Hu(胡玉鹏) wrote: > Hi all > > > > I use kolla to build the openstack version, use kolla-ansible to deploy > > rabbitmq is v3.7.10, mariadb is v10.1.x when building rocky version > > rabbitmq is v3.8.5, mariadb is v10.3.x when building ussuri version > > > > what kind of reasons are considered as rabbitmq version is changed from > v3.7.10 to v3.8.5 ? > > what kind of reasons are considered as mariadb version is changed from > v10.1.x to v10.3.x ? > > > > many thanks. > > > > > > > > [image: cid:image005.png at 01D65C4D.EFF29CE0] > > > > > > > > Sheldon Hu *|* 胡玉鹏 > > > > CBRD * |* 云计算与大数据研发部 > > > > *T:* 18663761785 > > > > *E:* huyp at inspur.com > > > > > > [image: cid:image002.jpg at 01D66A3B.5CCD6E30] > > 浪潮电子信息产业股份有限公司 > > Inspur Electronic Information Industry Co.,Ltd. > > 山东省济南市历城区东八区企业公馆A7-1 > > Building A7-1, Dongbaqu Office Block, Licheng District, Jinan,Shandong > Province, PRC > > 浪潮云海 > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 7250 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 3667 bytes Desc: not available URL: From radoslaw.piliszek at gmail.com Sat Sep 5 08:54:52 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sat, 5 Sep 2020 10:54:52 +0200 Subject: [mq][mariadb] What kind of reasons are considered when choosing the mq/mariadb version In-Reply-To: <823f98c6dc954e90a27f3d46974c9dd5@inspur.com> References: <823f98c6dc954e90a27f3d46974c9dd5@inspur.com> Message-ID: Hi Sheldon, On Sat, Sep 5, 2020 at 3:52 AM Sheldon Hu(胡玉鹏) wrote: > > Thx for reply > > As for mysql compatibility, could you give me a 'for instance' to detail like 'which part of ussuri code must need mysql v10.3.x, if we use mysql v10.1, the code will not run correctly '.
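To give one concrete example of what such an incompatibility looks like: a schema migration that alters a column which is part of a foreign key. Roughly speaking (a simplified sketch, not the literal migration):

ALTER TABLE subnets MODIFY network_id VARCHAR(36) NOT NULL;

fails on MariaDB 10.1 with "Cannot change column 'network_id': used in a foreign key constraint", while 10.3 accepts it.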
I just reviewed this and it was even earlier - in Train: https://bugs.launchpad.net/kolla/+bug/1841907 afair, some other projects also introduced such incompatibilities due to testing against MySQL. -yoctozepto From donny at fortnebula.com Sat Sep 5 15:26:02 2020 From: donny at fortnebula.com (Donny Davis) Date: Sat, 5 Sep 2020 11:26:02 -0400 Subject: how-to enable s3api In-Reply-To: <06322a0a-eba3-d2a1-432a-761d001bf05d@gibdev.ru> References: <06322a0a-eba3-d2a1-432a-761d001bf05d@gibdev.ru> Message-ID: Did you create ec2 credentials using keystone? On Fri, Sep 4, 2020 at 2:47 PM admin wrote: > hi > > where is there a normal manual on how to enable s3api support in swift? > trying to go through > https://docs.openstack.org/swift/latest/middleware.html and as a result > keystone gives an error in authorization (trying to connect via (s3curl) > Authorization failed. The request you have made requires authentication. > from x.x.x.x Unauthorized: The request you have made requires > authentication. > > > > > > -- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. Duty First" -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangerzonen at gmail.com Sat Sep 5 16:21:32 2020 From: dangerzonen at gmail.com (dangerzone ar) Date: Sun, 6 Sep 2020 00:21:32 +0800 Subject: New deployed packstack openstack, Failed create instance due to volume failed to created Message-ID: Hi, my clean install openstack queens are not able to create simple instance. Got error *Error: Failed to perform requested operation on instance "test1", the instance has an error status: Please try again later [Error: Build of instance 18c607fd-e919-4022-be7d-d178d7ab410e aborted: Volume 8a3b3e80-1593-46e5-8aa8-a5083bfbb46f did not finish being created even after we waited 9 seconds or 4 attempts. And its status is error.].* Filesystem Size Used Avail Use% Mounted on devtmpfs 3.9G 0 3.9G 0% /dev tmpfs 3.9G 4.0K 3.9G 1% /dev/shm tmpfs 3.9G 25M 3.8G 1% /run tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup /dev/mapper/centos-root 48G 34G 15G 71% / /dev/sda1 1014M 182M 833M 18% /boot /dev/mapper/centos-home 24G 33M 24G 1% /home tmpfs 783M 0 783M 0% /run/user/1000 /dev/loop1 1.9G 6.1M 1.7G 1% /srv/node/swiftloopback tmpfs 783M 0 783M 0% /run/user/0 pvs PV VG Fmt Attr PSize PFree /dev/loop0 cinder-volumes lvm2 a-- <20.60g 1012.00m /dev/sda2 centos lvm2 a-- <79.00g 4.00m vgs VG #PV #LV #SN Attr VSize VFree centos 1 3 0 wz--n- <79.00g 4.00m cinder-volumes 1 1 0 wz--n- <20.60g 1012.00m seems my cinder volume no free space. I install VM with 80G storage but cinder volume no space. I tried extend cinder volume as follows *lvm vgextend "cinder-volumes" /dev/loop3 Physical volume "/dev/loop3" successfully created. Volume group "cinder-volumes" successfully extendedpvscan PV /dev/loop0 VG cinder-volumes lvm2 [<20.60 GiB / 1012.00 MiB free] PV /dev/loop3 VG cinder-volumes lvm2 [<30.00 GiB / <30.00 GiB free] PV /dev/sda2 VG centos lvm2 [<79.00 GiB / 4.00 MiB free] Total: 3 [<129.59 GiB] / in use: 3 [<129.59 GiB] / in no VG: 0 [0 ]* *vgs VG #PV #LV #SN Attr VSize VFree centos 1 3 0 wz--n- <79.00g 4.00m cinder-volumes 2 1 0 wz--n- 50.59g 30.98g* When try to create new instance I keep getting the same volume error... Please help and advise what should I do...Please help...why 80G still not enough for cinder volume. Appreciate help. Thank you -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From huyp at inspur.com Sat Sep 5 01:52:24 2020 From: huyp at inspur.com (Sheldon Hu(胡玉鹏)) Date: Sat, 5 Sep 2020 01:52:24 +0000 Subject: Re: [mq][mariadb] What kind of reasons are considered when choosing the mq/mariadb version In-Reply-To: References: Message-ID: <823f98c6dc954e90a27f3d46974c9dd5@inspur.com> Thx for reply As for mysql compatibility, could you give me a 'for instance' to detail like 'which part of ussuri code must need mysql v10.3.x, if we use mysql v10.1, the code will not run correctly '. The Prometheus exporter depends on RabbitMQ 3.8, so if a vendor distribution doesn't include Prometheus, would RabbitMQ v3.7.10 be OK or not? I suggest the community add a doc page about why an OpenStack release chooses the versions of common components like 'mq/db/memcache'. When an OpenStack version is released, in addition to the release notes, the community could publish this doc page stating the reasons for the common component version choices. thx Sheldon Hu | 胡玉鹏 CBRD | 云计算与大数据研发部 T: 18663761785 E: huyp at inspur.com 浪潮电子信息产业股份有限公司 Inspur Electronic Information Industry Co.,Ltd. 山东省济南市历城区东八区企业公馆A7-1 Building A7-1, Dongbaqu Office Block, Licheng District, Jinan,Shandong Province, PRC 浪潮云海 -----Original Message----- From: Radosław Piliszek [mailto:radoslaw.piliszek at gmail.com] Sent: September 4, 2020 18:16 To: Brin Zhang(张百林) Cc: openstack-discuss at lists.openstack.org; Sheldon Hu(胡玉鹏) Subject: Re: [mq][mariadb] What kind of reasons are considered when choosing the mq/mariadb version Hi Brin, most of the time the distribution dictates the new version. However, sometimes it is forced due to issues with new versions of OpenStack and old versions of these dependencies (for example MariaDB 10.1 would fail to work with newer releases due to lack of MySQL compatibility). Newer versions usually mean new features that are useful for end users. In this case RabbitMQ 3.8 shines with its new Prometheus exporter. We generally try to avoid needless updates but still stay current enough to receive proper support from upstreams and satisfy the users. -yoctozepto On Fri, Sep 4, 2020 at 10:54 AM Brin Zhang(张百林) wrote: > > Hi all > > > > We are using kolla to build the OpenStack, use kolla-ansible to deploy > > rabbitmq is v3.7.10, mariadb is v10.1.x when building Rocky release > > rabbitmq is v3.8.5, mariadb is v10.3.x when building Ussuri release > > > > what kind of reasons are considered as rabbitmq version is changed from v3.7.10 to v3.8.5 ? > > what kind of reasons are considered as mariadb version is changed from v10.1.x to v10.3.x ? > > > > If you can provide an explanation, or key explanation, we would be very grateful. > > Thanks. > > > > brinzhang > > -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4081 bytes Desc: not available URL: From Lukasluedke at web.de Sat Sep 5 17:59:45 2020 From: Lukasluedke at web.de (Lukasluedke at web.de) Date: Sat, 05 Sep 2020 19:59:45 +0200 Subject: AW: [Kolla Ansible] Error in "heat" stage during deployment - Followed Quick Start Guide (ubuntu 18.04.5, virtualbox) References: Message-ID: <-sk5j4f-1lniqv-fndwmj1idcasr57jwc-bcfh079f6zk7-pvzmj86mtr5t-wm0xmqpd9s9vdrkh6f-p2hkss-swownxttvthsfi209p-6tg9cx3lcps1-h64ohhwx6wo9-nssug-sfesmsqm5l17-ghm78e.1599250755418@email.android.com> An HTML attachment was scrubbed...
URL: From donny at fortnebula.com Sat Sep 5 20:22:02 2020 From: donny at fortnebula.com (Donny Davis) Date: Sat, 5 Sep 2020 16:22:02 -0400 Subject: New deployed packstack openstack, Failed create instance due to volume failed to created In-Reply-To: References: Message-ID: What is the output of lvs? Also what size block device are you trying to create? Donny Davis c: 805 814 6800 On Sat, Sep 5, 2020, 12:27 PM dangerzone ar wrote: > Hi, my clean install openstack queens are not able to create simple > instance. Got error > > *Error: Failed to perform requested operation on instance "test1", the > instance has an error status: Please try again later [Error: Build of > instance 18c607fd-e919-4022-be7d-d178d7ab410e aborted: Volume > 8a3b3e80-1593-46e5-8aa8-a5083bfbb46f did not finish being created even > after we waited 9 seconds or 4 attempts. And its status is error.].* > > Filesystem Size Used Avail Use% Mounted on > devtmpfs 3.9G 0 3.9G 0% /dev > tmpfs 3.9G 4.0K 3.9G 1% /dev/shm > tmpfs 3.9G 25M 3.8G 1% /run > tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup > /dev/mapper/centos-root 48G 34G 15G 71% / > /dev/sda1 1014M 182M 833M 18% /boot > /dev/mapper/centos-home 24G 33M 24G 1% /home > tmpfs 783M 0 783M 0% /run/user/1000 > /dev/loop1 1.9G 6.1M 1.7G 1% /srv/node/swiftloopback > tmpfs 783M 0 783M 0% /run/user/0 > > pvs > PV VG Fmt Attr PSize PFree > /dev/loop0 cinder-volumes lvm2 a-- <20.60g 1012.00m > /dev/sda2 centos lvm2 a-- <79.00g 4.00m > > vgs > VG #PV #LV #SN Attr VSize VFree > centos 1 3 0 wz--n- <79.00g 4.00m > cinder-volumes 1 1 0 wz--n- <20.60g 1012.00m > > seems my cinder volume no free space. I install VM with 80G storage but > cinder volume no space. > I tried extend cinder volume as follows > > > > > > > > > *lvm vgextend "cinder-volumes" /dev/loop3 Physical volume "/dev/loop3" > successfully created. Volume group "cinder-volumes" successfully > extendedpvscan PV /dev/loop0 VG cinder-volumes lvm2 [<20.60 GiB / > 1012.00 MiB free] PV /dev/loop3 VG cinder-volumes lvm2 [<30.00 GiB / > <30.00 GiB free] PV /dev/sda2 VG centos lvm2 [<79.00 GiB / > 4.00 MiB free] Total: 3 [<129.59 GiB] / in use: 3 [<129.59 GiB] / in no > VG: 0 [0 ]* > > > > > > *vgs VG #PV #LV #SN Attr VSize VFree centos 1 > 3 0 wz--n- <79.00g 4.00m cinder-volumes 2 1 0 wz--n- 50.59g > 30.98g* > > When try to create new instance I keep getting the same volume error... > Please help and advise what should I do...Please help...why 80G still not > enough for cinder volume. > > Appreciate help. Thank you > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyliu0592 at hotmail.com Sat Sep 5 23:41:29 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Sat, 5 Sep 2020 23:41:29 +0000 Subject: [kolla-ansible] prometheus.yml is empty Message-ID: Hi, I deployed Prometheus with Kolla Ansible (10.1.0.dev36 from PIP). Given the following log, the change has been made on nodes. 
============================= TASK [prometheus : Copying over prometheus config file] ************************ skipping: [os-control-2] => (item=/usr/local/share/kolla-ansible/ansible/roles/prometheus/templates/prometheus.yml.j2) skipping: [os-control-1] => (item=/usr/local/share/kolla-ansible/ansible/roles/prometheus/templates/prometheus.yml.j2) skipping: [os-control-3] => (item=/usr/local/share/kolla-ansible/ansible/roles/prometheus/templates/prometheus.yml.j2) skipping: [compute-1] => (item=/usr/local/share/kolla-ansible/ansible/roles/prometheus/templates/prometheus.yml.j2) changed: [monitor-2] => (item=/usr/local/share/kolla-ansible/ansible/roles/prometheus/templates/prometheus.yml.j2) changed: [monitor-1] => (item=/usr/local/share/kolla-ansible/ansible/roles/prometheus/templates/prometheus.yml.j2) changed: [monitor-3] => (item=/usr/local/share/kolla-ansible/ansible/roles/prometheus/templates/prometheus.yml.j2) ============================= But the /etc/kolla/prometheus-server/prometheus.yml on monitor-* nodes is empty. ============================= {} ============================= Anything I am missing here? Thanks! Tony From dangerzonen at gmail.com Sat Sep 5 23:50:03 2020 From: dangerzonen at gmail.com (dangerzone ar) Date: Sun, 6 Sep 2020 07:50:03 +0800 Subject: New deployed packstack openstack, Failed create instance due to volume failed to created In-Reply-To: References: Message-ID: Hi Donny and team, This is the output of lvs and vgdisplay. I have tried reinstall my OS with different storage size (60G and 80G) both setup also got problem openstack to create new instance error with the volume and similar output where storage of cinder is too small.... even when I extend cinder volume and the size has been extended, still got volume error when trying to spin new vm... all this is new deployment. May I know how to resolve this...I want to test openstack. Thank you lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert home centos -wi-ao---- 23.33g root centos -wi-ao---- <47.79g swap centos -wi-ao---- <7.88g cinder-volumes-pool cinder-volumes twi-aotz-- 19.57g 0.00 10.55 vgdisplay --- Volume group --- VG Name cinder-volumes System ID Format lvm2 Metadata Areas 2 Metadata Sequence No 17 VG Access read/write VG Status resizable MAX LV 0 Cur LV 1 Open LV 0 Max PV 0 Cur PV 2 Act PV 2 VG Size 50.59 GiB PE Size 4.00 MiB Total PE 12952 Alloc PE / Size 5020 / <19.61 GiB Free PE / Size 7932 / 30.98 GiB VG UUID LUenAt-0zrU-42HE-dmc3-bb2e-g4bi-HFDOTA --- Volume group --- VG Name centos System ID Format lvm2 Metadata Areas 1 Metadata Sequence No 4 VG Access read/write VG Status resizable MAX LV 0 Cur LV 3 Open LV 3 Max PV 0 Cur PV 1 Act PV 1 VG Size <79.00 GiB PE Size 4.00 MiB Total PE 20223 Alloc PE / Size 20222 / 78.99 GiB Free PE / Size 1 / 4.00 MiB VG UUID CKC4EU-9qXu-m77Z-x1O4-S0Rq-eLdZ-VczPOb On Sun, Sep 6, 2020 at 4:22 AM Donny Davis wrote: > What is the output of lvs? Also what size block device are you trying to > create? > > Donny Davis > c: 805 814 6800 > > On Sat, Sep 5, 2020, 12:27 PM dangerzone ar wrote: > >> Hi, my clean install openstack queens are not able to create simple >> instance. Got error >> >> *Error: Failed to perform requested operation on instance "test1", the >> instance has an error status: Please try again later [Error: Build of >> instance 18c607fd-e919-4022-be7d-d178d7ab410e aborted: Volume >> 8a3b3e80-1593-46e5-8aa8-a5083bfbb46f did not finish being created even >> after we waited 9 seconds or 4 attempts. 
And its status is error.].* >> >> Filesystem Size Used Avail Use% Mounted on >> devtmpfs 3.9G 0 3.9G 0% /dev >> tmpfs 3.9G 4.0K 3.9G 1% /dev/shm >> tmpfs 3.9G 25M 3.8G 1% /run >> tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup >> /dev/mapper/centos-root 48G 34G 15G 71% / >> /dev/sda1 1014M 182M 833M 18% /boot >> /dev/mapper/centos-home 24G 33M 24G 1% /home >> tmpfs 783M 0 783M 0% /run/user/1000 >> /dev/loop1 1.9G 6.1M 1.7G 1% /srv/node/swiftloopback >> tmpfs 783M 0 783M 0% /run/user/0 >> >> pvs >> PV VG Fmt Attr PSize PFree >> /dev/loop0 cinder-volumes lvm2 a-- <20.60g 1012.00m >> /dev/sda2 centos lvm2 a-- <79.00g 4.00m >> >> vgs >> VG #PV #LV #SN Attr VSize VFree >> centos 1 3 0 wz--n- <79.00g 4.00m >> cinder-volumes 1 1 0 wz--n- <20.60g 1012.00m >> >> seems my cinder volume no free space. I install VM with 80G storage but >> cinder volume no space. >> I tried extend cinder volume as follows >> >> >> >> >> >> >> >> >> *lvm vgextend "cinder-volumes" /dev/loop3 Physical volume "/dev/loop3" >> successfully created. Volume group "cinder-volumes" successfully >> extendedpvscan PV /dev/loop0 VG cinder-volumes lvm2 [<20.60 GiB / >> 1012.00 MiB free] PV /dev/loop3 VG cinder-volumes lvm2 [<30.00 GiB / >> <30.00 GiB free] PV /dev/sda2 VG centos lvm2 [<79.00 GiB / >> 4.00 MiB free] Total: 3 [<129.59 GiB] / in use: 3 [<129.59 GiB] / in no >> VG: 0 [0 ]* >> >> >> >> >> >> *vgs VG #PV #LV #SN Attr VSize VFree centos 1 >> 3 0 wz--n- <79.00g 4.00m cinder-volumes 2 1 0 wz--n- 50.59g >> 30.98g* >> >> When try to create new instance I keep getting the same volume error... >> Please help and advise what should I do...Please help...why 80G still not >> enough for cinder volume. >> >> Appreciate help. Thank you >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyliu0592 at hotmail.com Sun Sep 6 00:11:10 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Sun, 6 Sep 2020 00:11:10 +0000 Subject: [kolla-ansible] Difference between deploy and reconfigure? Message-ID: Hi, What's the difference between deploy and reconfigure? I checked some roles, reconfigure.yml just include_tasks deploy.yml. And I didn't find reconfigure as a condition for any tasks or handlers. I checked kolla-ansible script, the difference is ANSIBLE_SERIAL is specified by reconfigure. I also see comment saying "Serial is not recommended and disabled by default.". Here are some cases. I'd like to know which should be used. #1 Add compute. After updating inventory with the new compute node, "deploy --limit new-compute" and "reconfigure --limit new-compute", do they have the same result? If not, what's the difference? #2 Change configuration. In case configuration is changed in global.yml or inventory, which of "deploy" or "reconfigure" should be executed to update the cluster? Same or any difference? #3 Fix the cluster I was told to rerun playbook to fix some problems, like a container was deleted. "deploy --limit host" or "reconfigure --limit host"? Same or any difference? Thanks! Tony From donny at fortnebula.com Sun Sep 6 07:15:23 2020 From: donny at fortnebula.com (Donny Davis) Date: Sun, 6 Sep 2020 03:15:23 -0400 Subject: New deployed packstack openstack, Failed create instance due to volume failed to created In-Reply-To: References: Message-ID: I would start by looking at the logs for cinder. Donny Davis c: 805 814 6800 On Sat, Sep 5, 2020, 7:50 PM dangerzone ar wrote: > Hi Donny and team, > This is the output of lvs and vgdisplay. 
> I have tried reinstall my OS with different storage size (60G and 80G) > both setup also got problem openstack to create new instance error with the > volume and similar output where > storage of cinder is too small.... even when I extend cinder volume and > the size has been extended, still got volume error when trying to spin new > vm... all this is new deployment. > May I know how to resolve this...I want to test openstack. Thank you > > lvs > LV VG Attr LSize Pool Origin Data% > Meta% Move Log Cpy%Sync Convert > home centos -wi-ao---- 23.33g > root centos -wi-ao---- <47.79g > swap centos -wi-ao---- <7.88g > cinder-volumes-pool cinder-volumes twi-aotz-- 19.57g 0.00 > 10.55 > > vgdisplay > --- Volume group --- > VG Name cinder-volumes > System ID > Format lvm2 > Metadata Areas 2 > Metadata Sequence No 17 > VG Access read/write > VG Status resizable > MAX LV 0 > Cur LV 1 > Open LV 0 > Max PV 0 > Cur PV 2 > Act PV 2 > VG Size 50.59 GiB > PE Size 4.00 MiB > Total PE 12952 > Alloc PE / Size 5020 / <19.61 GiB > Free PE / Size 7932 / 30.98 GiB > VG UUID LUenAt-0zrU-42HE-dmc3-bb2e-g4bi-HFDOTA > > --- Volume group --- > VG Name centos > System ID > Format lvm2 > Metadata Areas 1 > Metadata Sequence No 4 > VG Access read/write > VG Status resizable > MAX LV 0 > Cur LV 3 > Open LV 3 > Max PV 0 > Cur PV 1 > Act PV 1 > VG Size <79.00 GiB > PE Size 4.00 MiB > Total PE 20223 > Alloc PE / Size 20222 / 78.99 GiB > Free PE / Size 1 / 4.00 MiB > VG UUID CKC4EU-9qXu-m77Z-x1O4-S0Rq-eLdZ-VczPOb > > On Sun, Sep 6, 2020 at 4:22 AM Donny Davis wrote: > >> What is the output of lvs? Also what size block device are you trying to >> create? >> >> Donny Davis >> c: 805 814 6800 >> >> On Sat, Sep 5, 2020, 12:27 PM dangerzone ar >> wrote: >> >>> Hi, my clean install openstack queens are not able to create simple >>> instance. Got error >>> >>> *Error: Failed to perform requested operation on instance "test1", the >>> instance has an error status: Please try again later [Error: Build of >>> instance 18c607fd-e919-4022-be7d-d178d7ab410e aborted: Volume >>> 8a3b3e80-1593-46e5-8aa8-a5083bfbb46f did not finish being created even >>> after we waited 9 seconds or 4 attempts. And its status is error.].* >>> >>> Filesystem Size Used Avail Use% Mounted on >>> devtmpfs 3.9G 0 3.9G 0% /dev >>> tmpfs 3.9G 4.0K 3.9G 1% /dev/shm >>> tmpfs 3.9G 25M 3.8G 1% /run >>> tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup >>> /dev/mapper/centos-root 48G 34G 15G 71% / >>> /dev/sda1 1014M 182M 833M 18% /boot >>> /dev/mapper/centos-home 24G 33M 24G 1% /home >>> tmpfs 783M 0 783M 0% /run/user/1000 >>> /dev/loop1 1.9G 6.1M 1.7G 1% /srv/node/swiftloopback >>> tmpfs 783M 0 783M 0% /run/user/0 >>> >>> pvs >>> PV VG Fmt Attr PSize PFree >>> /dev/loop0 cinder-volumes lvm2 a-- <20.60g 1012.00m >>> /dev/sda2 centos lvm2 a-- <79.00g 4.00m >>> >>> vgs >>> VG #PV #LV #SN Attr VSize VFree >>> centos 1 3 0 wz--n- <79.00g 4.00m >>> cinder-volumes 1 1 0 wz--n- <20.60g 1012.00m >>> >>> seems my cinder volume no free space. I install VM with 80G storage but >>> cinder volume no space. >>> I tried extend cinder volume as follows >>> >>> >>> >>> >>> >>> >>> >>> >>> *lvm vgextend "cinder-volumes" /dev/loop3 Physical volume "/dev/loop3" >>> successfully created. 
Volume group "cinder-volumes" successfully >>> extendedpvscan PV /dev/loop0 VG cinder-volumes lvm2 [<20.60 GiB / >>> 1012.00 MiB free] PV /dev/loop3 VG cinder-volumes lvm2 [<30.00 GiB / >>> <30.00 GiB free] PV /dev/sda2 VG centos lvm2 [<79.00 GiB / >>> 4.00 MiB free] Total: 3 [<129.59 GiB] / in use: 3 [<129.59 GiB] / in no >>> VG: 0 [0 ]* >>> >>> >>> >>> >>> >>> *vgs VG #PV #LV #SN Attr VSize VFree centos >>> 1 3 0 wz--n- <79.00g 4.00m cinder-volumes 2 1 0 wz--n- 50.59g >>> 30.98g* >>> >>> When try to create new instance I keep getting the same volume error... >>> Please help and advise what should I do...Please help...why 80G still not >>> enough for cinder volume. >>> >>> Appreciate help. Thank you >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhangbailin at inspur.com Sun Sep 6 07:47:15 2020 From: zhangbailin at inspur.com (=?utf-8?B?QnJpbiBaaGFuZyjlvKDnmb7mnpcp?=) Date: Sun, 6 Sep 2020 07:47:15 +0000 Subject: =?utf-8?B?562U5aSNOiBbbXFdW21hcmlhZGJdIFdoYXQga2luZCBvZiByZWFzb25zIGFy?= =?utf-8?B?ZSBjb25zaWRlcmVkIHdoZW4gY2hvb3NpbmcgdGhlIG1xL21hcmlhZGIgdmVy?= =?utf-8?Q?sion?= In-Reply-To: References: <823f98c6dc954e90a27f3d46974c9dd5@inspur.com> Message-ID: <5efadef480e849f095dfd8649680bf7b@inspur.com> Hi Radoslaw. I have a doubt, https://review.opendev.org/#/c/677221 "op.alter_column('subnets','network_id', nullable=False, existing_type=sa.String(36))" altered ' network_id' attribute, but why does this make it necessary to upgrade the version of mariadb? "op.alter_column('subnets','network_id', nullable=False, existing_type=sa.String(36))" run *-db sync" it will be upgrade our local project's db, I think it's ok to run even if I don't upgrade the mariadb from v10.1 to v 10.3, right? brinzhang > Hi Sheldon, > On Sat, Sep 5, 2020 at 3:52 AM Sheldon Hu(胡玉鹏) wrote: > > Thx for reply > > As of mysql compatibility, could you give me a 'for instance' to detail like 'which part of ussuri code must need mysql v10.3.x, if we use mysql v10.1, the code will not run correctly '. > I just reviewed this and it was even earlier - in Train: > https://bugs.launchpad.net/kolla/+bug/1841907 > afair, some other projects also introduced such incompatibilities due to testing against MySQL. > -yoctozepto From radoslaw.piliszek at gmail.com Sun Sep 6 07:52:51 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sun, 6 Sep 2020 09:52:51 +0200 Subject: [mq][mariadb] What kind of reasons are considered when choosing the mq/mariadb version In-Reply-To: <5efadef480e849f095dfd8649680bf7b@inspur.com> References: <823f98c6dc954e90a27f3d46974c9dd5@inspur.com> <5efadef480e849f095dfd8649680bf7b@inspur.com> Message-ID: Hi Brin, the issue is that, per the bug report, MariaDB 10.1 cannot handle such changes to foreign keys: Cannot change column 'network_id': used in a foreign key constraint 'subnets_ibfk_1' It received support later. Is there a particular reason you are trying to keep MariaDB 10.1? -yoctozepto On Sun, Sep 6, 2020 at 9:47 AM Brin Zhang(张百林) wrote: > > Hi Radoslaw. > > I have a doubt, https://review.opendev.org/#/c/677221 "op.alter_column('subnets','network_id', nullable=False, existing_type=sa.String(36))" altered ' network_id' attribute, but why does this make it necessary to upgrade the version of mariadb? 
> > "op.alter_column('subnets','network_id', nullable=False, existing_type=sa.String(36))" run *-db sync" it will be upgrade our local project's db, I think it's ok to run even if I don't upgrade the mariadb from v10.1 to v 10.3, right? > > brinzhang > > > Hi Sheldon, > > > On Sat, Sep 5, 2020 at 3:52 AM Sheldon Hu(胡玉鹏) wrote: > > > > Thx for reply > > > > As of mysql compatibility, could you give me a 'for instance' to detail like 'which part of ussuri code must need mysql v10.3.x, if we use mysql v10.1, the code will not run correctly '. > > > I just reviewed this and it was even earlier - in Train: > > https://bugs.launchpad.net/kolla/+bug/1841907 > > afair, some other projects also introduced such incompatibilities due to testing against MySQL. > > > -yoctozepto > From radoslaw.piliszek at gmail.com Sun Sep 6 07:55:24 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sun, 6 Sep 2020 09:55:24 +0200 Subject: [kolla-ansible] Difference between deploy and reconfigure? In-Reply-To: References: Message-ID: Hi Tony, at the moment: deploy == reconfigure Except for the special cases of Bifrost and Swift. If it ever changes, you will read about it in the release notes. -yoctozepto On Sun, Sep 6, 2020 at 2:13 AM Tony Liu wrote: > > Hi, > > What's the difference between deploy and reconfigure? > I checked some roles, reconfigure.yml just include_tasks deploy.yml. > And I didn't find reconfigure as a condition for any tasks or handlers. > I checked kolla-ansible script, the difference is ANSIBLE_SERIAL > is specified by reconfigure. I also see comment saying "Serial is not > recommended and disabled by default.". > > Here are some cases. I'd like to know which should be used. > > #1 Add compute. > After updating inventory with the new compute node, > "deploy --limit new-compute" and "reconfigure --limit new-compute", > do they have the same result? If not, what's the difference? > > #2 Change configuration. > In case configuration is changed in global.yml or inventory, > which of "deploy" or "reconfigure" should be executed to update > the cluster? Same or any difference? > > #3 Fix the cluster > I was told to rerun playbook to fix some problems, like a container > was deleted. "deploy --limit host" or "reconfigure --limit host"? > Same or any difference? > > > Thanks! > Tony > > From zhangbailin at inspur.com Sun Sep 6 08:05:23 2020 From: zhangbailin at inspur.com (=?utf-8?B?QnJpbiBaaGFuZyjlvKDnmb7mnpcp?=) Date: Sun, 6 Sep 2020 08:05:23 +0000 Subject: =?utf-8?B?562U5aSNOiBbbXFdW21hcmlhZGJdIFdoYXQga2luZCBvZiByZWFzb25zIGFy?= =?utf-8?B?ZSBjb25zaWRlcmVkIHdoZW4gY2hvb3NpbmcgdGhlIG1xL21hcmlhZGIgdmVy?= =?utf-8?Q?sion?= In-Reply-To: References: <823f98c6dc954e90a27f3d46974c9dd5@inspur.com> <5efadef480e849f095dfd8649680bf7b@inspur.com> Message-ID: >Hi Brin, > the issue is that, per the bug report, MariaDB 10.1 cannot handle such changes to foreign keys: Cannot change column 'network_id': used in a foreign key constraint 'subnets_ibfk_1' > It received support later. > Is there a particular reason you are trying to keep MariaDB 10.1? No, we want to upgrade openstack release, but I don’t know upgrade MariaDB whether is necessary (used 10.1 now), your suggestion will be very helpful to our choice. Therefore, for features that are highly dependent, or bug fixes, further verification is required. Thanks. -yoctozepto On Sun, Sep 6, 2020 at 9:47 AM Brin Zhang(张百林) wrote: > > Hi Radoslaw. 
> > I have a doubt, https://review.opendev.org/#/c/677221 "op.alter_column('subnets','network_id', nullable=False, existing_type=sa.String(36))" altered ' network_id' attribute, but why does this make it necessary to upgrade the version of mariadb? > > "op.alter_column('subnets','network_id', nullable=False, existing_type=sa.String(36))" run *-db sync" it will be upgrade our local project's db, I think it's ok to run even if I don't upgrade the mariadb from v10.1 to v 10.3, right? > > brinzhang > > > Hi Sheldon, > > > On Sat, Sep 5, 2020 at 3:52 AM Sheldon Hu(胡玉鹏) wrote: > > > > Thx for reply > > > > As of mysql compatibility, could you give me a 'for instance' to detail like 'which part of ussuri code must need mysql v10.3.x, if we use mysql v10.1, the code will not run correctly '. > > > I just reviewed this and it was even earlier - in Train: > > https://bugs.launchpad.net/kolla/+bug/1841907 > > afair, some other projects also introduced such incompatibilities due to testing against MySQL. > > > -yoctozepto > From rncchae at gmail.com Sun Sep 6 11:30:32 2020 From: rncchae at gmail.com (Myeong Chul Chae) Date: Sun, 6 Sep 2020 20:30:32 +0900 Subject: [sdk][python-openstackclient] Request the code review about the story: openstack CLI - Create an instance using --image-property filtering not working. Message-ID: Hi. I researched the story 'openstack CLI - Create an instance using --image-property filtering not working' and modified the code to solve it. This is the issue that I opened. - Link And the hyperlink of the story is here . In addition, there is a review posted before my review of the same story, so conflict resolution is necessary. Please check the commit message and history of the two reviews and continue the discussion. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyliu0592 at hotmail.com Mon Sep 7 02:57:00 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Mon, 7 Sep 2020 02:57:00 +0000 Subject: [kolla-ansible] prometheus.yml is empty In-Reply-To: References: Message-ID: Please discard my report. The issue was fixed by this commit. https://github.com/openstack/kolla-ansible/commit/6f1bd3e35b46b8c4feb179e697348a5a0efd1549#diff-aba4fa152ad54e1eaaf5cfd43a99b62d Tony > -----Original Message----- > From: Tony Liu > Sent: Saturday, September 5, 2020 4:41 PM > To: openstack-discuss > Subject: [kolla-ansible] prometheus.yml is empty > > Hi, > > I deployed Prometheus with Kolla Ansible (10.1.0.dev36 from PIP). > Given the following log, the change has been made on nodes. 
> ============================= > TASK [prometheus : Copying over prometheus config file] > ************************ > skipping: [os-control-2] => (item=/usr/local/share/kolla- > ansible/ansible/roles/prometheus/templates/prometheus.yml.j2) > skipping: [os-control-1] => (item=/usr/local/share/kolla- > ansible/ansible/roles/prometheus/templates/prometheus.yml.j2) > skipping: [os-control-3] => (item=/usr/local/share/kolla- > ansible/ansible/roles/prometheus/templates/prometheus.yml.j2) > skipping: [compute-1] => (item=/usr/local/share/kolla- > ansible/ansible/roles/prometheus/templates/prometheus.yml.j2) > changed: [monitor-2] => (item=/usr/local/share/kolla- > ansible/ansible/roles/prometheus/templates/prometheus.yml.j2) > changed: [monitor-1] => (item=/usr/local/share/kolla- > ansible/ansible/roles/prometheus/templates/prometheus.yml.j2) > changed: [monitor-3] => (item=/usr/local/share/kolla- > ansible/ansible/roles/prometheus/templates/prometheus.yml.j2) > ============================= > But the /etc/kolla/prometheus-server/prometheus.yml on monitor-* nodes > is empty. > ============================= > {} > ============================= > Anything I am missing here? > > Thanks! > Tony > From chkumar246 at gmail.com Mon Sep 7 03:31:59 2020 From: chkumar246 at gmail.com (Chandan kumar) Date: Mon, 7 Sep 2020 09:01:59 +0530 Subject: [tripleo] docker.io rate limiting In-Reply-To: References: <9f9606a3-d8e8-bc66-3440-8cc5ae080d64@redhat.com> Message-ID: Hello, On Sat, Sep 5, 2020 at 3:44 AM Dimitri Savineau wrote: > > Hi, > > We're currently in the progress of using the quay.ceph.io registry [1] with a copy of the ceph container images from docker.io and consumed by the ceph-ansible CI [2]. > > Official ceph images will still be updated on docker.io. > > Note that from a ceph-ansible point of view, switching to the quay.ceph.io registry isn't enough to get rid of the docker.io registry when deploying with the Ceph dashboard enabled. > The whole monitoring stack (alertmanager, prometheus, grafana and node-exporter) coming with the Ceph dashboard is still using docker.io by default [3][4][5][6]. > > As an alternative, you can use the official quay registry (quay.io) for altermanager, prometheus and node-exporter images [7] from the prometheus namespace like we're doing in [2]. > Only the grafana container image will still be pulled from docker.io. > The app-sre team mirrors the grafana image from docker.io on quay. https://quay.io/repository/app-sre/grafana?tab=tags , we reuse the same in CI? I have proposed a patch on tripleo-common to switch to quay.io -> https://review.opendev.org/#/c/750119/ Thanks, Chandan Kumar From jpena at redhat.com Mon Sep 7 07:23:43 2020 From: jpena at redhat.com (Javier Pena) Date: Mon, 7 Sep 2020 03:23:43 -0400 (EDT) Subject: New deployed packstack openstack, Failed create instance due to volume failed to created In-Reply-To: References: Message-ID: <1685671078.48951829.1599463423232.JavaMail.zimbra@redhat.com> Hi, If SELinux is enabled on your machine, could you check if there are any AVCs (grep avc /var/log/audit/audit.log) related to iscsiadm? I have seen that in some deployments, and found out the instructions in [1] to be helpful. Regards, Javier [1] - https://www.server-world.info/en/note?os=CentOS_8&p=openstack_ussuri2&f=8 ----- Original Message ----- > Hi, my clean install openstack queens are not able to create simple instance. 
> Got error > Error: Failed to perform requested operation on instance "test1", the > instance has an error status: Please try again later [Error: Build of > instance 18c607fd-e919-4022-be7d-d178d7ab410e aborted: Volume > 8a3b3e80-1593-46e5-8aa8-a5083bfbb46f did not finish being created even after > we waited 9 seconds or 4 attempts. And its status is error.]. > Filesystem Size Used Avail Use% Mounted on > devtmpfs 3.9G 0 3.9G 0% /dev > tmpfs 3.9G 4.0K 3.9G 1% /dev/shm > tmpfs 3.9G 25M 3.8G 1% /run > tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup > /dev/mapper/centos-root 48G 34G 15G 71% / > /dev/sda1 1014M 182M 833M 18% /boot > /dev/mapper/centos-home 24G 33M 24G 1% /home > tmpfs 783M 0 783M 0% /run/user/1000 > /dev/loop1 1.9G 6.1M 1.7G 1% /srv/node/swiftloopback > tmpfs 783M 0 783M 0% /run/user/0 > pvs > PV VG Fmt Attr PSize PFree > /dev/loop0 cinder-volumes lvm2 a-- <20.60g 1012.00m > /dev/sda2 centos lvm2 a-- <79.00g 4.00m > vgs > VG #PV #LV #SN Attr VSize VFree > centos 1 3 0 wz--n- <79.00g 4.00m > cinder-volumes 1 1 0 wz--n- <20.60g 1012.00m > seems my cinder volume no free space. I install VM with 80G storage but > cinder volume no space. > I tried extend cinder volume as follows > lvm vgextend "cinder-volumes" /dev/loop3 > Physical volume "/dev/loop3" successfully created. > Volume group "cinder-volumes" successfully extended > pvscan > PV /dev/loop0 VG cinder-volumes lvm2 [<20.60 GiB / 1012.00 MiB free] > PV /dev/loop3 VG cinder-volumes lvm2 [<30.00 GiB / <30.00 GiB free] > PV /dev/sda2 VG centos lvm2 [<79.00 GiB / 4.00 MiB free] > Total: 3 [<129.59 GiB] / in use: 3 [<129.59 GiB] / in no VG: 0 [0 ] > vgs > VG #PV #LV #SN Attr VSize VFree > centos 1 3 0 wz--n- <79.00g 4.00m > cinder-volumes 2 1 0 wz--n- 50.59g 30.98g > When try to create new instance I keep getting the same volume error... > Please help and advise what should I do...Please help...why 80G still not > enough for cinder volume. > Appreciate help. Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From changzhi at cn.ibm.com Mon Sep 7 07:52:54 2020 From: changzhi at cn.ibm.com (Zhi CZ Chang) Date: Mon, 7 Sep 2020 07:52:54 +0000 Subject: [neutron][policy] Admin user can do anything without the control of policy.json? Message-ID: An HTML attachment was scrubbed... URL: From mark at stackhpc.com Mon Sep 7 08:08:02 2020 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 7 Sep 2020 09:08:02 +0100 Subject: [mq][mariadb] What kind of reasons are considered when choosing the mq/mariadb version In-Reply-To: References: <823f98c6dc954e90a27f3d46974c9dd5@inspur.com> <5efadef480e849f095dfd8649680bf7b@inspur.com> Message-ID: On Sun, 6 Sep 2020 at 09:06, Brin Zhang(张百林) wrote: > > >Hi Brin, > > > the issue is that, per the bug report, MariaDB 10.1 cannot handle such changes to foreign keys: > Cannot change column 'network_id': used in a foreign key constraint 'subnets_ibfk_1' > > It received support later. > > > Is there a particular reason you are trying to keep MariaDB 10.1? > > No, we want to upgrade openstack release, but I don’t know upgrade MariaDB whether is necessary (used 10.1 now), your suggestion will be very helpful to our choice. Therefore, for features that are highly dependent, or bug fixes, further verification is required. Hi Brin, as always it would be helpful to know exactly the problem you are facing, and if there is a reason why you are considering to choose a version of these key components that differs from the versions tested upstream. 
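For anyone following along, the pattern that trips MariaDB 10.1 here is an Alembic alter_column on a column that is used in a foreign key constraint, as in the review and error message quoted above. A minimal sketch of that pattern (illustrative only, not the exact Neutron migration file; table, column and type are taken from the quoted snippet):

    # Illustrative Alembic migration fragment only; a hedged sketch, not the
    # real Neutron revision.
    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        # subnets.network_id is used in the foreign key constraint
        # 'subnets_ibfk_1', so the server must alter a column that is part of
        # a FK; per the bug report quoted above, MariaDB 10.1 rejects this
        # while newer releases such as 10.3 handle it.
        op.alter_column('subnets', 'network_id',
                        nullable=False,
                        existing_type=sa.String(36))
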
Are you using an external DB & MQ outside of kolla that you do not have control over? > > Thanks. > > -yoctozepto > > On Sun, Sep 6, 2020 at 9:47 AM Brin Zhang(张百林) wrote: > > > > Hi Radoslaw. > > > > I have a doubt, https://review.opendev.org/#/c/677221 "op.alter_column('subnets','network_id', nullable=False, existing_type=sa.String(36))" altered ' network_id' attribute, but why does this make it necessary to upgrade the version of mariadb? > > > > "op.alter_column('subnets','network_id', nullable=False, existing_type=sa.String(36))" run *-db sync" it will be upgrade our local project's db, I think it's ok to run even if I don't upgrade the mariadb from v10.1 to v 10.3, right? > > > > brinzhang > > > > > Hi Sheldon, > > > > > On Sat, Sep 5, 2020 at 3:52 AM Sheldon Hu(胡玉鹏) wrote: > > > > > > Thx for reply > > > > > > As of mysql compatibility, could you give me a 'for instance' to detail like 'which part of ussuri code must need mysql v10.3.x, if we use mysql v10.1, the code will not run correctly '. > > > > > I just reviewed this and it was even earlier - in Train: > > > https://bugs.launchpad.net/kolla/+bug/1841907 > > > afair, some other projects also introduced such incompatibilities due to testing against MySQL. > > > > > -yoctozepto > > From bcafarel at redhat.com Mon Sep 7 08:24:19 2020 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Mon, 7 Sep 2020 10:24:19 +0200 Subject: [neutron] Bug deputy report (week starting on 2020-08-31) Message-ID: Hello neutrinos, back to school week in some parts of the world, and new bug deputy rotation for us! Here is the bug list for week 36 A relatively quiet week on the new numbers count, all urgent bugs were handled Critical * openstack-tox-py36-with-ovsdbapp-master periodic job is failing everyday since 28.08.2020 - https://bugs.launchpad.net/neutron/+bug/1893965/ Patch merged in neutron https://review.opendev.org/#/c/749537/ High * [OVN Octavia Provider] OVN provider fails during listener delete - https://bugs.launchpad.net/neutron/+bug/1894136 Assigned to Brian Medium * [ovn] Limit the number of metadata workers - https://bugs.launchpad.net/neutron/+bug/1893656 Aims to use a separate config option than metadata_workers one, unassigned in neutron (working in tripleo and fixed in charm) * neutron-ovn-db-sync-util: Unhandled error: oslo_config.cfg.NoSuchOptError: no such option keystone_authtoken in group [DEFAULT] - https://bugs.launchpad.net/neutron/+bug/1894048 This was fixed previously in bug 1882020, but is observed again, unassigned * ovn: filtering hash ring nodes is not time zone aware - https://bugs.launchpad.net/neutron/+bug/1894117 Timezone fun, unassigned Wishlist * For neutron-l3-agent, after the execution of the linux command fails, it is not displayed which command failed to execute - https://bugs.launchpad.net/neutron/+bug/1893627 Add failed command to error log Patch merged: https://review.opendev.org/#/c/749076/ Incomplete * Upgrading from train(el7) to ussuri(el8): Packet sequence number wrong - got 2 expected 1 - https://bugs.launchpad.net/neutron/+bug/1894077 Migration error on port forwarding description Undecided * Get domain name for dhcp from neutron db - https://bugs.launchpad.net/neutron/+bug/1893802 This one needs confirmation from subject experts, but as described in comment I think this was attempted and reverted before - and this marked as invalid (or to fix in another way) -- Bernard Cafarelli -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From skaplons at redhat.com Mon Sep 7 09:41:09 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 7 Sep 2020 11:41:09 +0200 Subject: [neutron][policy] Admin user can do anything without the control of policy.json? In-Reply-To: References: Message-ID: <20200907094109.g6tjbb5e4yiqp4za@skaplons-mac> Hi, I'm adding Akihiro to the thread as maybe he will have some more knowledge about why it is like that in Neutron. On Mon, Sep 07, 2020 at 07:52:54AM +0000, Zhi CZ Chang wrote: > Hi, all > > I have a question about Neutron Policy. > > I create some neutron policies in the file /etc/neutron/policy.json, plus > in this policy file, I don't want to anyone to create address scope and > set " "create_address_scope": "!" ". > > After that, I execute the command line " openstack address scope create > test " by the admin user and it works fine. > > This is not my expected. > > After some investigation, I find that in this pr[1], it will return True > directly even if the admin user. > > Could someone tell me why the admin user can do anything without the > control of policies? Or maybe I make some mistakes? > > > Thanks > > 1. https://review.opendev.org/#/c/175238/11/neutron/policy.py -- Slawek Kaplonski Principal software engineer Red Hat From mark at stackhpc.com Mon Sep 7 11:32:49 2020 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 7 Sep 2020 12:32:49 +0100 Subject: [kolla] virtual PTG Message-ID: Hi, The next PTG will be virtual again, and held from Monday October 26th to Friday October 30th. For those who have not attended before, the PTG is where we discuss various aspects of the project, and eventually decide on priorities for the next development cycle. I have provisionally booked the same slots as last time: * Monday 26th 13:00 - 17:00 UTC (kolla & kolla ansible) * Tuesday 27th 13:00 - 17:00 UTC (kolla & kolla ansible) * Wednesday 28th 13:00 - 15:00 UTC (kayobe) Please get in touch if you would like to attend but these slots are not suitable for you. In particular, if you are in a time zone where these times are unsociable, we could consider running some sessions during a different 4 hour block. Thanks, Mark From mark at stackhpc.com Mon Sep 7 11:35:54 2020 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 7 Sep 2020 12:35:54 +0100 Subject: [kolla] Kolla klub break In-Reply-To: References: Message-ID: Hi, Apologies, I will be on holiday on Thursday 10th September. Let's get started again on 24th September, hopefully everyone will be feeling refreshed after a long break from the Klub. Thanks, Mark On Fri, 7 Aug 2020 at 15:11, Mark Goddard wrote: > > Hi, > > We agreed in Wednesday's IRC meeting to take a short summer break from > the klub. Let's meet again on 10th September. > > Thanks to everyone who has taken part in these meetings so far, we've > had some really great discussions. As always, if anyone has ideas for > topics, please add them to the Google doc. > > Looking forward to some more great sessions in September. > > https://docs.google.com/document/d/1EwQs2GXF-EvJZamEx9vQAOSDB5tCjsDCJyHQN5_4_Sw/edit# > > Thanks, > Mark From dangerzonen at gmail.com Mon Sep 7 11:37:39 2020 From: dangerzonen at gmail.com (dangerzone ar) Date: Mon, 7 Sep 2020 19:37:39 +0800 Subject: New deployed packstack openstack, Failed create instance due to volume failed to created In-Reply-To: <1685671078.48951829.1599463423232.JavaMail.zimbra@redhat.com> References: <1685671078.48951829.1599463423232.JavaMail.zimbra@redhat.com> Message-ID: Hi sir...Selinux is set to permissiv mode. 
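For reference, the audit log check suggested above can also be scripted; a minimal sketch, assuming the default audit log path and permission to read it (typically root):

    # Hedged sketch: list SELinux AVC entries that mention iscsiadm, i.e. the
    # same as "grep avc /var/log/audit/audit.log" filtered for iscsiadm.
    AUDIT_LOG = '/var/log/audit/audit.log'  # default path; adjust if needed

    with open(AUDIT_LOG) as logfile:
        for line in logfile:
            if 'avc' in line.lower() and 'iscsiadm' in line:
                print(line.rstrip())
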
I tried deploy packstack and devstack..both give me error volume when create new instanc..i can create instance without volume but i cannot console to the vm...both fresh instaltn over virtualbox (windows10). Tried wth centos7 and storage 60G and 80G.... This really puzzl me...everythg is new setup...and success deploy openstack till completed... i want to test openstack further...but this problm really stuck me... i hope to be advised further... i will setup another vm and instll via packstack. Thank you On Mon, 7 Sep 2020 15:23 Javier Pena, wrote: > Hi, > > If SELinux is enabled on your machine, could you check if there are any > AVCs (grep avc /var/log/audit/audit.log) related to iscsiadm? I have seen > that in some deployments, and found out the instructions in [1] to be > helpful. > > Regards, > Javier > > [1] - > https://www.server-world.info/en/note?os=CentOS_8&p=openstack_ussuri2&f=8 > > ------------------------------ > > Hi, my clean install openstack queens are not able to create simple > instance. Got error > > *Error: Failed to perform requested operation on instance "test1", the > instance has an error status: Please try again later [Error: Build of > instance 18c607fd-e919-4022-be7d-d178d7ab410e aborted: Volume > 8a3b3e80-1593-46e5-8aa8-a5083bfbb46f did not finish being created even > after we waited 9 seconds or 4 attempts. And its status is error.].* > > Filesystem Size Used Avail Use% Mounted on > devtmpfs 3.9G 0 3.9G 0% /dev > tmpfs 3.9G 4.0K 3.9G 1% /dev/shm > tmpfs 3.9G 25M 3.8G 1% /run > tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup > /dev/mapper/centos-root 48G 34G 15G 71% / > /dev/sda1 1014M 182M 833M 18% /boot > /dev/mapper/centos-home 24G 33M 24G 1% /home > tmpfs 783M 0 783M 0% /run/user/1000 > /dev/loop1 1.9G 6.1M 1.7G 1% /srv/node/swiftloopback > tmpfs 783M 0 783M 0% /run/user/0 > > pvs > PV VG Fmt Attr PSize PFree > /dev/loop0 cinder-volumes lvm2 a-- <20.60g 1012.00m > /dev/sda2 centos lvm2 a-- <79.00g 4.00m > > vgs > VG #PV #LV #SN Attr VSize VFree > centos 1 3 0 wz--n- <79.00g 4.00m > cinder-volumes 1 1 0 wz--n- <20.60g 1012.00m > > seems my cinder volume no free space. I install VM with 80G storage but > cinder volume no space. > I tried extend cinder volume as follows > > > > > > > > > *lvm vgextend "cinder-volumes" /dev/loop3 Physical volume "/dev/loop3" > successfully created. Volume group "cinder-volumes" successfully > extendedpvscan PV /dev/loop0 VG cinder-volumes lvm2 [<20.60 GiB / > 1012.00 MiB free] PV /dev/loop3 VG cinder-volumes lvm2 [<30.00 GiB / > <30.00 GiB free] PV /dev/sda2 VG centos lvm2 [<79.00 GiB / > 4.00 MiB free] Total: 3 [<129.59 GiB] / in use: 3 [<129.59 GiB] / in no > VG: 0 [0 ]* > > > > > > *vgs VG #PV #LV #SN Attr VSize VFree centos 1 > 3 0 wz--n- <79.00g 4.00m cinder-volumes 2 1 0 wz--n- 50.59g > 30.98g* > > When try to create new instance I keep getting the same volume error... > Please help and advise what should I do...Please help...why 80G still not > enough for cinder volume. > > Appreciate help. Thank you > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From reza.b2008 at gmail.com Mon Sep 7 13:53:44 2020 From: reza.b2008 at gmail.com (Reza Bakhshayeshi) Date: Mon, 7 Sep 2020 18:23:44 +0430 Subject: Floating IP problem in HA OVN DVR with TripleO Message-ID: Hi all, I deployed an environment with TripleO Ussuri with 3 HA Controllers and some Compute nodes with neutron-ovn-dvr-ha.yaml. Instances have Internet access through routers with SNAT traffic (in this case traffic is routed via a controller node), and by assigning an IP address directly from the provider network (not having a router). But in case of assigning a FIP from the provider network to an instance, the VM's Internet connection is lost. Here is the output of the router NAT list, which seems OK: # ovn-nbctl lr-nat-list 587182a4-4d6b-41b0-9fd8-4c1be58811b0 TYPE EXTERNAL_IP EXTERNAL_PORT LOGICAL_IP EXTERNAL_MAC LOGICAL_PORT dnat_and_snat X.X.X.X 192.168.0.153 fa:16:3e:0a:86:4d e65bd8e9-5f95-4eb2-a316-97e86fbdb9b6 snat Y.Y.Y.Y 192.168.0.0/24 I replaced the FIP with X.X.X.X and the router IP with Y.Y.Y.Y. When I remove EXTERNAL_MAC and LOGICAL_PORT, the FIP works fine, as it should, but traffic is routed via a Controller node and isn't distributed anymore. Any idea or suggestion would be appreciated. Regards, Reza -------------- next part -------------- An HTML attachment was scrubbed... URL:
From gmann at ghanshyammann.com Mon Sep 7 14:29:40 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 07 Sep 2020 09:29:40 -0500 Subject: [all][tc][goals] Migrate CI/CD jobs to new Ubuntu LTS Focal: Week R-4 Update Message-ID: <17468f8f785.1123a07e6307801.3844845549431858647@ghanshyammann.com> Hello Everyone, Please find the week R-4 updates on the 'Ubuntu Focal migration' community goal. It's time to force the base jobs migration, which can break a project's gate if it is not yet taken care of. Read below for the plan. Tracking: https://storyboard.openstack.org/#!/story/2007865 Progress: ======= * We are close to the V-3 release and this is the time to complete this migration, otherwise doing it in the RC period can add unnecessary last-minute delay. I am going to plan this migration in two parts. This will surely break the gate of some projects which have not yet finished the migration, but we have to do it at some point. Please let me know if there is any objection to the below plan. * Part1: Migrating tox base jobs tomorrow (8th Sept): ** I am going to open the tox base jobs migration (doc, unit, functional, lower-constraints etc.) to merge by tomorrow, which is this series (all base patches of this): https://review.opendev.org/#/c/738328/ . ** There are a few repos still failing, specifically on the requirements lower-constraints job, which I tried my best to fix as far as possible. Many are ready to merge also. Please merge or work on your project's repo testing before that, or fix it on priority if failing. * Part2: Migrating devstack/tempest base job on 10th Sept: * We have a few open bugs for this which are not yet resolved; we will see how it goes, but the current plan is to migrate by 10th Sept. ** Bug#1882521 ** DB migration issues, *** alembic and a few on the telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 Testing Till now: ============ * ~200 repos' gates have been tested or fixed till now. ** https://review.opendev.org/#/q/topic:migrate-to-focal+(status:abandoned+OR+status:merged) * ~100 repos are under test and failing.
Debugging and fixing are in progress (If you would like to help, please check your project repos if I am late to fix them): ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open * ~30repos fixes ready to merge: ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open+label%3AVerified%3E%3D1%2Czuul+NOT+label%3AWorkflow%3C%3D-1 Bugs Report: ========== 1. Bug#1882521. (IN-PROGRESS) There is open bug for nova/cinder where three tempest tests are failing for volume detach operation. There is no clear root cause found yet -https://bugs.launchpad.net/cinder/+bug/1882521 We have skipped the tests in tempest base patch to proceed with the other projects testing but this is blocking things for the migration. 2. DB migration issues (IN-PROGRESS) * alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 3. We encountered the nodeset name conflict with x/tobiko. (FIXED) nodeset conflict is resolved now and devstack provides all focal nodes now. 4. Bug#1886296. (IN-PROGRESS) pyflakes till 2.1.0 is not compatible with python 3.8 which is the default python version on ubuntu focal[1]. With pep8 job running on focal faces the issue and fail. We need to bump the pyflakes to 2.1.1 as min version to run pep8 jobs on py3.8. As of now, many projects are using old hacking version so I am explicitly adding pyflakes>=2.1.1 on the project side[2] but for the long term easy maintenance, I am doing it in 'hacking' requirements.txt[3] nd will release a new hacking version. After that project can move to new hacking and do not need to maintain pyflakes version compatibility. 5. Bug#1886298. (IN-PROGRESS) 'Markupsafe' 1.0 is not compatible with the latest version of setuptools[4], We need to bump the lower-constraint for Markupsafe to 1.1.1 to make it work. There are a few more issues[5] with lower-constraint jobs which I am debugging. What work to be done on the project side: ================================ This goal is more of testing the jobs on focal and fixing bugs if any otherwise migrate jobs by switching the nodeset to focal node sets defined in devstack. 1. Start a patch in your repo by making depends-on on either of below: devstack base patch if you are using only devstack base jobs not tempest: Depends-on: https://review.opendev.org/#/c/731207/ OR tempest base patch if you are using the tempest base job (like devstack-tempest): Depends-on: https://review.opendev.org/#/c/734700/ Both have depends-on on the series where I am moving unit/functional/doc/cover/nodejs tox jobs to focal. So you can test the complete gate jobs(unit/functional/doc/integration) together. This and its base patches - https://review.opendev.org/#/c/738328/ Example: https://review.opendev.org/#/c/738126/ 2. If none of your project jobs override the nodeset then above patch will be testing patch(do not merge) otherwise change the nodeset to focal. Example: https://review.opendev.org/#/c/737370/ 3. If the jobs are defined in branchless repo and override the nodeset then you need to override the branches variant to adjust the nodeset so that those jobs run on Focal on victoria onwards only. If no nodeset is overridden then devstack being branched and stable base job using bionic/xenial will take care of this. Example: https://review.opendev.org/#/c/744056/2 4. If no updates need you can abandon the testing patch (https://review.opendev.org/#/c/744341/). 
If it need updates then modify the same patch with proper commit msg, once it pass the gate then remove the Depends-On so that you can merge your patch before base jobs are switched to focal. This way we make sure no gate downtime in this migration. Example: https://review.opendev.org/#/c/744056/1..2//COMMIT_MSG Once we finish the testing on projects side and no failure then we will merge the devstack and tempest base patches. Important things to note: =================== * Do not forgot to add the story and task link to your patch so that we can track it smoothly. * Use gerrit topic 'migrate-to-focal' * Do not backport any of the patches. References: ========= Goal doc: https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal.html Storyboard tracking: https://storyboard.openstack.org/#!/story/2007865 [1] https://github.com/PyCQA/pyflakes/issues/367 [2] https://review.opendev.org/#/c/739315/ [3] https://review.opendev.org/#/c/739334/ [4] https://github.com/pallets/markupsafe/issues/116 [5] https://zuul.opendev.org/t/openstack/build/7ecd9cf100194bc99b3b70fa1e6de032 -gmann From dtantsur at redhat.com Mon Sep 7 14:29:47 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 7 Sep 2020 16:29:47 +0200 Subject: [ironic] [stable] Bifrost stable/stein is broken (by eventlet?): help needed In-Reply-To: References: Message-ID: On Fri, Sep 4, 2020 at 9:26 AM Mark Goddard wrote: > On Thu, 3 Sep 2020 at 11:31, Dmitry Tantsur wrote: > > > > Hi folks, > > > > I'm trying to revive the Bifrost stable/stein CI, and after fixing a > bunch of issues in https://review.opendev.org/749014 I've hit a wall with > what seems an eventlet problem: ironic-inspector fails to start with: > > > > Exception AttributeError: "'_SocketDuckForFd' object has no attribute > '_closed'" in _SocketDuckForFd:16> ignored > > > > I've managed to find similar issues, but they should have been resolved > in the eventlet version in stein (0.24.1). Any ideas? > > > > If we cannot fix it, we'll have to EOL stein and earlier on bifrost. > > Strange. Do you know why this affects only bifrost and not ironic > inspector CI? > I'm totally lost, but at the very least the normal CI is devstack, so a lot of things can be different. Dmitry > > > > > Dmitry > > > > -- > > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > > Commercial register: Amtsgericht Muenchen, HRB 153243, > > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > O'Neill > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dtantsur at redhat.com Mon Sep 7 14:30:35 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 7 Sep 2020 16:30:35 +0200 Subject: [ironic] [stable] Bifrost stable/stein is broken (by eventlet?): help needed In-Reply-To: <65108979-a4d2-e9ea-266c-01d624269773@nvidia.com> References: <65108979-a4d2-e9ea-266c-01d624269773@nvidia.com> Message-ID: On Fri, Sep 4, 2020 at 11:12 PM Tim Burke wrote: > On 9/3/20 3:30 AM, Dmitry Tantsur wrote: > > *External email: Use caution opening links or attachments* > > > > > > Hi folks, > > > > I'm trying to revive the Bifrost stable/stein CI, and after fixing a > > bunch of issues in https://review.opendev.org/749014 I've hit a wall > > with what seems an eventlet problem: ironic-inspector fails to start > with: > > > > Exception AttributeError: "'_SocketDuckForFd' object has no attribute > > '_closed'" in > _SocketDuckForFd:16> ignored > > > > I've managed to find similar issues, but they should have been resolved > > in the eventlet version in stein (0.24.1). Any ideas? > > > > If we cannot fix it, we'll have to EOL stein and earlier on bifrost. > > > > Dmitry > > > > -- > > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > > Commercial register: Amtsgericht Muenchen, HRB 153243, > > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > > O'Neill > > The "ignored" makes me think that it shouldn't actually be a problem -- > are we assuming that's the error because of logs like > > https://c972f4bb262ae2d5c5d6-598e1d61c0aab85aa3b67b337ca2c556.ssl.cf2.rackcdn.com/749014/2/check/bifrost-integration-tinyipa-ubuntu-xenial/9d2905c/logs/ironic-inspector.log > ? > > Digging down to > > https://c972f4bb262ae2d5c5d6-598e1d61c0aab85aa3b67b337ca2c556.ssl.cf2.rackcdn.com/749014/2/check/bifrost-integration-tinyipa-ubuntu-xenial/9d2905c/logs/all/syslog > shows tracebacks like > > File ".../eventlet/hubs/__init__.py", line 39, in get_default_hub > import eventlet.hubs.epolls > File ".../eventlet/hubs/epolls.py", line 13, in > from eventlet.hubs.hub import BaseHub > File ".../eventlet/hubs/hub.py", line 24, in > import monotonic > File ".../monotonic.py", line 169, in > raise RuntimeError('no suitable implementation for this system: ' + > repr(e)) > RuntimeError: no suitable implementation for this system: > AttributeError("'module' object has no attribute 'epolls'",) > > Maybe it's worth looking at why monotonic can't find a suitable > implementation? > It used to be an issue in eventlet, but we seem to be using a new enough version to avoid it (the initial problem was IIRC around the pike timeframe). Dmitry > > Tim > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Mon Sep 7 14:49:35 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 07 Sep 2020 16:49:35 +0200 Subject: [nova] virtual PTG and Forum planning In-Reply-To: References: Message-ID: Hi, I've booked Wed - Fri, 13:00 UTC - 17:00 UTC slots for the Nova PTG. Cyborg indicated that they would like to have a cross project session so I added one at Wed 13:00 - 14:00. If we need cross project sessions with other teams please let me know and I will book them as well. I'd like to have them on Wednesday if possible. 
Cheers, gibi On Mon, Aug 24, 2020 at 14:06, Balázs Gibizer wrote: > Hi, > > As you probably know the next virtual PTG will be held between > October 26-30. I need to book time slots for Nova [1] so please add > your availability to the doodle [2] before 7th of September. > > I have created an etherpad [3] to collect the PTG topics for the Nova > sessions. Feel free to add your topics. > > Also there will be a Forum between October 19-23 [4]. You can use the > PTG etherpad [3] to brainstorm forum topics before the official CFP > opens. > > Cheers, > gibi > > [1] https://ethercalc.openstack.org/7xp2pcbh1ncb > [2] https://doodle.com/poll/a5pgqh7bypq8piew > [3] https://etherpad.opendev.org/p/nova-wallaby-ptg > [4] https://wiki.openstack.org/wiki/Forum/Virtual202 > > > From artem.goncharov at gmail.com Mon Sep 7 15:10:08 2020 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Mon, 7 Sep 2020 17:10:08 +0200 Subject: [sdk/cli] virtual PTG planning Message-ID: Hi all, I’ve booked a slot for SDK/CLI PTG on 28th of Oct 14:00-17:00 UTC time. Please fill in the etherpad [1] if you want to come in. Regards, Artem (gtema) [1] https://meetpad.opendev.org/etherpad/p/wallaby_sdk_cli -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelweingartner at gmail.com Mon Sep 7 15:25:34 2020 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Mon, 7 Sep 2020 12:25:34 -0300 Subject: [cloudkitty] Virtual PTG planning In-Reply-To: References: Message-ID: I'm available. I am not sure though if we need half a day. That seems quite a lot, but I might be mistaken (as I have never participated in a PTG planning meeting). 14 UTC works for me on any of the proposed days (Friday October 30 seems to be free a series of slots in the Cactus room). I could also help being the chair, if needed. On Fri, Sep 4, 2020 at 12:47 PM Pierre Riteau wrote: > Hi, > > You may know that the next PTG will be held virtually during the week > of October 26-30, 2020. > I will very likely *not* be available during that time, so I would like to > hear from the CloudKitty community: > > - if you would like to meet and for how long (a half day may be enough > depending on the agenda) > - what day and time is preferred (see list in > https://ethercalc.openstack.org/7xp2pcbh1ncb) > - if anyone is willing to chair the discussions (I can help you prepare an > agenda before the event) > > Thanks in advance, > Pierre Riteau (priteau) > -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From Lukasluedke at web.de Mon Sep 7 15:45:32 2020 From: Lukasluedke at web.de (Lukasluedke at web.de) Date: Mon, 07 Sep 2020 17:45:32 +0200 Subject: [Kolla Ansible] Unable to connect to external network after Initial deployment Message-ID: An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Mon Sep 7 16:04:34 2020 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Mon, 7 Sep 2020 21:34:34 +0530 Subject: [gance_store][FFE] Cinder multiple stores support Message-ID: Hi Team, Last week we released glance_store 2.3.0 which adds support for configuring cinder multiple stores as glance backend. While adding functional tests in glance for the same [1], we have noticed that it is failing with some hard requirements from oslo side to use project_id instead of tenant and user_id instead of user. It is really strange behavior as this failure occurs only in functional tests but works properly in the actual environment without any issue. 
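To illustrate the kind of rename involved (a hedged sketch only, not the actual glance_store fix): per the description above, newer oslo.context expects project_id and user_id where older code passed tenant and user, so test fixtures built around the old names fail even though a deployed cloud may still work.

    from oslo_context import context

    # Current attribute names; the old tenant=/user= keyword aliases are
    # deprecated and no longer accepted by recent oslo.context releases,
    # which is roughly the hard requirement the functional tests tripped on.
    ctxt = context.RequestContext(user_id='fake-user',
                                  project_id='fake-project')
    print(ctxt.user_id, ctxt.project_id)
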
The fix is proposed in glance_store [2] to resolve this issue. I would like to apply for FFE with this glance_store patch [2] to be approved and release a new version of glance_store 2.3.1. Kindly provide approval for the same. [1] https://review.opendev.org/#/c/750144/ [2] https://review.opendev.org/#/c/750131/ Thanks and Regards, Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Mon Sep 7 16:25:50 2020 From: akekane at redhat.com (Abhishek Kekane) Date: Mon, 7 Sep 2020 21:55:50 +0530 Subject: [gance_store][FFE] Cinder multiple stores support In-Reply-To: References: Message-ID: +1 from me, glance_store 2.3.0 contains the actual functionality and glance functionality patch [1] is also in good shape. [1] https://review.opendev.org/#/c/748039/11 Thanks & Best Regards, Abhishek Kekane On Mon, Sep 7, 2020 at 9:40 PM Rajat Dhasmana wrote: > Hi Team, > > Last week we released glance_store 2.3.0 which adds support for configuring cinder multiple stores as glance backend. > While adding functional tests in glance for the same [1], we have noticed that it is failing with some hard requirements from oslo side to use project_id instead of tenant and user_id instead of user. > It is really strange behavior as this failure occurs only in functional tests but works properly in the actual environment without any issue. The fix is proposed in glance_store [2] to resolve this issue. > > I would like to apply for FFE with this glance_store patch [2] to be approved and release a new version of glance_store 2.3.1. > > Kindly provide approval for the same. > > [1] https://review.opendev.org/#/c/750144/ > [2] https://review.opendev.org/#/c/750131/ > > Thanks and Regards, > Rajat Dhasmana > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knikolla at bu.edu Mon Sep 7 21:50:50 2020 From: knikolla at bu.edu (Kristi Nikolla) Date: Mon, 7 Sep 2020 17:50:50 -0400 Subject: [keystone][ptg] Keystone vPTG Planning Message-ID: Hi all, I have started a doodle poll for scheduling our sessions during the vPTG. Please fill it out by September 10. [0] The etherpad that we are using for brainstorming the topics is [1]. Please fill it out with anything you would like us to schedule time for. More information on the vPTG and links to register can be found at [2]. [0]. https://doodle.com/poll/czgyuwepxk348vqq [1]. https://etherpad.opendev.org/p/keystone-wallaby-ptg [2]. https://www.openstack.org/ptg/ From satish.txt at gmail.com Tue Sep 8 03:17:35 2020 From: satish.txt at gmail.com (Satish Patel) Date: Mon, 7 Sep 2020 23:17:35 -0400 Subject: who is running RabbitMQ HiPE mode? Message-ID: Folks, I was reading about HiPE mode and thinking to give it a shot but i would like to know feedback from the community? Is it safe to enable HiPE for rabbitMQ to boost performance? From dev.faz at gmail.com Tue Sep 8 04:24:23 2020 From: dev.faz at gmail.com (Fabian Zimmermann) Date: Tue, 8 Sep 2020 06:24:23 +0200 Subject: who is running RabbitMQ HiPE mode? In-Reply-To: References: Message-ID: Hi, HiPE is deprecated, so you should think twice about using it. Maybe the rabbitmq-mailinglist is also a good place to ask for more information. Fabian Satish Patel schrieb am Di., 8. Sept. 2020, 05:23: > Folks, > > I was reading about HiPE mode and thinking to give it a shot but i > would like to know feedback from the community? > > Is it safe to enable HiPE for rabbitMQ to boost performance? 
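If anyone does want to evaluate HiPE anyway, a quick way to see which broker and Erlang versions are in play (HiPE support depends on the Erlang build) is the management API. A hedged sketch, assuming the management plugin is enabled on the default port with the default guest credentials, which you would replace in any real deployment:

    import requests

    # Hedged sketch: query the RabbitMQ management API for version info before
    # deciding on HiPE. Endpoint, port and credentials are assumptions for a
    # default rabbitmq-management setup; adjust them for your environment.
    resp = requests.get('http://localhost:15672/api/overview',
                        auth=('guest', 'guest'), timeout=10)
    overview = resp.json()
    print('RabbitMQ:', overview.get('rabbitmq_version'))
    print('Erlang:  ', overview.get('erlang_version'))
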
> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sorrison at gmail.com Tue Sep 8 05:13:17 2020 From: sorrison at gmail.com (Sam Morrison) Date: Tue, 8 Sep 2020 15:13:17 +1000 Subject: [neutron][networking-midonet] Maintainers needed In-Reply-To: References: <0AC5AC07-E97E-43CC-B344-A3E992B8CCA4@netways.de> <610412AF-AADF-44BD-ABA2-BA289B7C8F8A@redhat.com> <5E2F5826-559E-42E9-84C5-FA708E5A122A@gmail.com> <43C4AF2B-C5C0-40EB-B621-AC6799471D01@gmail.com> <92959221-0353-4D48-8726-8FE71AFEA652@gmail.com> <4D778DBF-F505-462F-B85D-0B372085FA72@gmail.com> Message-ID: Hi Yamamoto, > On 4 Sep 2020, at 6:47 pm, Takashi Yamamoto wrote: > i'm talking to our infra folks but it might take longer than i hoped. > if you or someone else can provide a public repo, it might be faster. > (i have looked at launchpad PPA while ago. but it didn't seem > straightforward given the complex build machinary in midonet.) Yeah that’s no problem, I’ve set up a repo with the latest midonet debs in it and happy to use that for the time being. > >> >> I’m not sure why the pep8 job is failing, it is complaining about pecan which makes me think this is an issue with neutron itself? Kinda stuck on this one, it’s probably something silly. > > probably. Yeah this looks like a neutron or neutron-lib issue > >> >> For the py3 unit tests they are now failing due to db migration errors in tap-as-a-service, l2-gateway and vpnaas issues I think caused by neutron getting rid of the liberty alembic branch and so we need to squash these on these projects too. > > this thing? https://review.opendev.org/#/c/749866/ Yeah that fixed that issue. I have been working to get everything fixed in this review [1] The pep8 job is working but not in the gate due to neutron issues [2] The py36/py38 jobs have 2 tests failing both relating to tap-as-a-service which I don’t really have any idea about, never used it. [3] The tempest aio job is working well now, I’m not sure what tempest tests were run before but it’s just doing what ever is the default at the moment. The tempest multinode job isn’t working due to what I think is networking issues between the 2 nodes. I don’t really know what I’m doing here so any pointers would be helpful. [4] The grenade job is also failing because I also need to put these fixes on the stable/ussuri branch to make it work so will need to figure that out too Cheers, Sam [1] https://review.opendev.org/#/c/749857/ [2] https://zuul.opendev.org/t/openstack/build/e94e873cbf0443c0a7f25ffe76b3b00b [3] https://b1a2669063d97482275a-410cecb8410320c66fb802e0a530979a.ssl.cf5.rackcdn.com/749857/18/check/openstack-tox-py36/0344651/testr_results.html [4] https://zuul.opendev.org/t/openstack/build/61f6dd3dc3d74a81b7a3f5968b4d8c72 > >> >> >> >> I can now start to look into the devstack zuul jobs. >> >> Cheers, >> Sam >> >> >> [1] https://github.com/NeCTAR-RC/networking-midonet/commits/devstack >> [2] https://github.com/midonet/midonet/pull/9 >> >> >> >> >>> On 1 Sep 2020, at 4:03 pm, Sam Morrison wrote: >>> >>> >>> >>>> On 1 Sep 2020, at 2:59 pm, Takashi Yamamoto wrote: >>>> >>>> hi, >>>> >>>> On Tue, Sep 1, 2020 at 1:39 PM Sam Morrison wrote: >>>>> >>>>> >>>>> >>>>>> On 1 Sep 2020, at 11:49 am, Takashi Yamamoto wrote: >>>>>> >>>>>> Sebastian, Sam, >>>>>> >>>>>> thank you for speaking up. >>>>>> >>>>>> as Slawek said, the first (and probably the biggest) thing is to fix the ci. >>>>>> the major part for it is to make midonet itself to run on ubuntu >>>>>> version used by the ci. 
(18.04, or maybe directly to 20.04) >>>>>> https://midonet.atlassian.net/browse/MNA-1344 >>>>>> iirc, the remaining blockers are: >>>>>> * libreswan (used by vpnaas) >>>>>> * vpp (used by fip64) >>>>>> maybe it's the easiest to drop those features along with their >>>>>> required components, if it's acceptable for your use cases. >>>>> >>>>> We are running midonet-cluster and midolman on 18.04, we dropped those package dependencies from our ubuntu package to get it working. >>>>> >>>>> We currently have built our own and host in our internal repo but happy to help putting this upstream somehow. Can we upload them to the midonet apt repo, does it still exist? >>>> >>>> it still exists. but i don't think it's maintained well. >>>> let me find and ask someone in midokura who "owns" that part of infra. >>>> >>>> does it also involve some package-related modifications to midonet repo, right? >>> >>> >>> Yes a couple, I will send up as as pull requests to https://github.com/midonet/midonet today or tomorrow >>> >>> Sam >>> >>> >>> >>>> >>>>> >>>>> I’m keen to do the work but might need a bit of guidance to get started, >>>>> >>>>> Sam >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>>> >>>>>> alternatively you might want to make midonet run in a container. (so >>>>>> that you can run it with older ubuntu, or even a container trimmed for >>>>>> JVM) >>>>>> there were a few attempts to containerize midonet. >>>>>> i think this is the latest one: https://github.com/midonet/midonet-docker >>>>>> >>>>>> On Fri, Aug 28, 2020 at 7:10 AM Sam Morrison wrote: >>>>>>> >>>>>>> We (Nectar Research Cloud) use midonet heavily too, it works really well and we haven’t found another driver that works for us. We tried OVN but it just doesn’t scale to the size of environment we have. >>>>>>> >>>>>>> I’m happy to help too. >>>>>>> >>>>>>> Cheers, >>>>>>> Sam >>>>>>> >>>>>>> >>>>>>> >>>>>>>> On 31 Jul 2020, at 2:06 am, Slawek Kaplonski wrote: >>>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> Thx Sebastian for stepping in to maintain the project. That is great news. >>>>>>>> I think that at the beginning You should do 2 things: >>>>>>>> - sync with Takashi Yamamoto (I added him to the loop) as he is probably most active current maintainer of this project, >>>>>>>> - focus on fixing networking-midonet ci which is currently broken - all scenario jobs aren’t working fine on Ubuntu 18.04 (and we are going to move to 20.04 in this cycle), migrate jobs to zuulv3 from the legacy ones and finally add them to the ci again, >>>>>>>> >>>>>>>> I can of course help You with ci jobs if You need any help. Feel free to ping me on IRC or email (can be off the list). >>>>>>>> >>>>>>>>> On 29 Jul 2020, at 15:24, Sebastian Saemann wrote: >>>>>>>>> >>>>>>>>> Hi Slawek, >>>>>>>>> >>>>>>>>> we at NETWAYS are running most of our neutron networking on top of midonet and wouldn't be too happy if it gets deprecated and removed. So we would like to take over the maintainer role for this part. >>>>>>>>> >>>>>>>>> Please let me know how to proceed and how we can be onboarded easily. >>>>>>>>> >>>>>>>>> Best regards, >>>>>>>>> >>>>>>>>> Sebastian >>>>>>>>> >>>>>>>>> -- >>>>>>>>> Sebastian Saemann >>>>>>>>> Head of Managed Services >>>>>>>>> >>>>>>>>> NETWAYS Managed Services GmbH | Deutschherrnstr. 
15-19 | D-90429 Nuernberg >>>>>>>>> Tel: +49 911 92885-0 | Fax: +49 911 92885-77 >>>>>>>>> CEO: Julian Hein, Bernd Erk | AG Nuernberg HRB25207 >>>>>>>>> https://netways.de | sebastian.saemann at netways.de >>>>>>>>> >>>>>>>>> ** NETWAYS Web Services - https://nws.netways.de ** >>>>>>>> >>>>>>>> — >>>>>>>> Slawek Kaplonski >>>>>>>> Principal software engineer >>>>>>>> Red Hat >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bshewale at redhat.com Tue Sep 8 05:18:12 2020 From: bshewale at redhat.com (Bhagyashri Shewale) Date: Tue, 8 Sep 2020 10:48:12 +0530 Subject: [tripleo] Proposing Takashi Kajinami to be core on puppet-tripleo In-Reply-To: References: Message-ID: Congratulations Kajinami San :) Regards, Bhagyashri Shewale On Tue, Aug 18, 2020 at 8:02 PM Emilien Macchi wrote: > Hi people, > > If you don't know Takashi yet, he has been involved in the Puppet > OpenStack project and helped *a lot* in its maintenance (and by maintenance > I mean not-funny-work). When our community was getting smaller and smaller, > he joined us and our review velicity went back to eleven. He became a core > maintainer very quickly and we're glad to have him onboard. > > He's also been involved in taking care of puppet-tripleo for a few months > and I believe he has more than enough knowledge on the module to provide > core reviews and be part of the core maintainer group. I also noticed his > amount of contribution (bug fixes, improvements, reviews, etc) in other > TripleO repos and I'm confident he'll make his road to be core in TripleO > at some point. For now I would like him to propose him to be core in > puppet-tripleo. > > As usual, any feedback is welcome but in the meantime I want to thank > Takashi for his work in TripleO and we're super happy to have new > contributors! > > Thanks, > -- > Emilien Macchi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chkumar246 at gmail.com Tue Sep 8 07:30:20 2020 From: chkumar246 at gmail.com (Chandan kumar) Date: Tue, 8 Sep 2020 13:00:20 +0530 Subject: [tripleo] docker.io rate limiting In-Reply-To: References: <9f9606a3-d8e8-bc66-3440-8cc5ae080d64@redhat.com> Message-ID: Hello Dimitri, On Mon, Sep 7, 2020 at 9:01 AM Chandan kumar wrote: > > Hello, > > On Sat, Sep 5, 2020 at 3:44 AM Dimitri Savineau wrote: > > > > Hi, > > > > We're currently in the progress of using the quay.ceph.io registry [1] with a copy of the ceph container images from docker.io and consumed by the ceph-ansible CI [2]. > > In TripleO side, daemon:v4.0.12-stable-4.0-nautilus-centos-7-x86_64 is used but this image is not available on quay.io registry But v4.0.13-stable-4.0-nautilus-centos-7-x86_64 is available there. Can we get daemon:v4.0.12-stable-4.0-nautilus-centos-7-x86_64 in quay.ceph.io registry? or we switch to v4.0.13-stable-4.0-nautilus-centos-7-x86_64 this tag? > > Official ceph images will still be updated on docker.io. > > > > Note that from a ceph-ansible point of view, switching to the quay.ceph.io registry isn't enough to get rid of the docker.io registry when deploying with the Ceph dashboard enabled. > > The whole monitoring stack (alertmanager, prometheus, grafana and node-exporter) coming with the Ceph dashboard is still using docker.io by default [3][4][5][6]. > > > > As an alternative, you can use the official quay registry (quay.io) for altermanager, prometheus and node-exporter images [7] from the prometheus namespace like we're doing in [2]. 
> > > Only the grafana container image will still be pulled from docker.io. > > > > The app-sre team mirrors the grafana image from docker.io on quay. > https://quay.io/repository/app-sre/grafana?tab=tags , we reuse the same in CI? > > I have proposed a patch on tripleo-common to switch to quay.io -> > https://review.opendev.org/#/c/750119/ > Thanks, Chandan Kumar From gfidente at redhat.com Tue Sep 8 07:42:31 2020 From: gfidente at redhat.com (Giulio Fidente) Date: Tue, 8 Sep 2020 09:42:31 +0200 Subject: [tripleo] docker.io rate limiting In-Reply-To: References: <9f9606a3-d8e8-bc66-3440-8cc5ae080d64@redhat.com> Message-ID: <43572ab6-d0d3-7e36-0dde-505e89e7c7fc@redhat.com> On 9/8/20 9:30 AM, Chandan kumar wrote: > Hello Dimitri, > > On Mon, Sep 7, 2020 at 9:01 AM Chandan kumar wrote: >> >> Hello, >> >> On Sat, Sep 5, 2020 at 3:44 AM Dimitri Savineau wrote: >>> >>> Hi, >>> >>> We're currently in the progress of using the quay.ceph.io registry [1] with a copy of the ceph container images from docker.io and consumed by the ceph-ansible CI [2]. >>> > > In TripleO side, daemon:v4.0.12-stable-4.0-nautilus-centos-7-x86_64 is > used but this image is not available on quay.io registry > > But v4.0.13-stable-4.0-nautilus-centos-7-x86_64 is available there. > Can we get daemon:v4.0.12-stable-4.0-nautilus-centos-7-x86_64 in > quay.ceph.io registry? or we switch to > v4.0.13-stable-4.0-nautilus-centos-7-x86_64 this tag? we can switch to the newer image version ... but what will in the future control which image is copied from docker.io to quay.io this is the same question I had in https://review.opendev.org/#/c/750119/4 ; I guess we can continue the conversation there -- Giulio Fidente GPG KEY: 08D733BA From skaplons at redhat.com Tue Sep 8 08:09:00 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 8 Sep 2020 10:09:00 +0200 Subject: Floating IP problem in HA OVN DVR with TripleO In-Reply-To: References: Message-ID: <20200908080900.efy7bs2qnkzpwbwk@skaplons-mac> Hi, Maybe You hit this bug [1]. Please check what ovn version do You have and maybe update it if needed. On Mon, Sep 07, 2020 at 06:23:44PM +0430, Reza Bakhshayeshi wrote: > Hi all, > > I deployed an environment with TripleO Ussuri with 3 HA Controllers and > some Compute nodes with neutron-ovn-dvr-ha.yaml > Instances have Internet access through routers with SNAT traffic (in this > case traffic is routed via a controller node), and by assigning IP address > directly from provider network (not having a router). > > But in case of assigning FIP from provider to an instance, VM Internet > connection is lost. > Here is the output of router nat lists, which seems OK: > > > # ovn-nbctl lr-nat-list 587182a4-4d6b-41b0-9fd8-4c1be58811b0 > TYPE EXTERNAL_IP EXTERNAL_PORT LOGICAL_IP > EXTERNAL_MAC LOGICAL_PORT > dnat_and_snat X.X.X.X 192.168.0.153 > fa:16:3e:0a:86:4d e65bd8e9-5f95-4eb2-a316-97e86fbdb9b6 > snat Y.Y.Y.Y 192.168.0.0/24 > > > I replaced FIP with X.X.X.X and router IP with Y.Y.Y.Y > > When I remove * EXTERNAL_MAC* and *LOGICAL_PORT*, FIP works fine and as it > has to be, but traffic routes from a Controller node and it won't be > distributed anymore. > > Any idea or suggestion would be grateful. 
> Regards, > Reza [1] https://bugzilla.redhat.com/show_bug.cgi?id=1834433 -- Slawek Kaplonski Principal software engineer Red Hat From skaplons at redhat.com Tue Sep 8 08:30:08 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 8 Sep 2020 10:30:08 +0200 Subject: [neutron] Wallaby PTG planning In-Reply-To: <20200821072155.pfdjttikum5r54hz@skaplons-mac> References: <20200821072155.pfdjttikum5r54hz@skaplons-mac> Message-ID: <20200908083008.3ykanudjhg5fsavs@skaplons-mac> Hi, Based on doodle [1] I just booked slots: Monday 15 - 17 UTC Tuesday 14 - 17 UTC Wednesday 14 - 17 UTC Thursday 14 - 17 UTC Friday 14 - 17 UTC For now I don't have any agenda for those sessions yet. Please add Your topics to the etherpad [2]. If You need some cross project session with Neutron, please also let me know so we can book some slot for that. On Fri, Aug 21, 2020 at 09:21:55AM +0200, Slawek Kaplonski wrote: > Hi, > > It's again that time of the cycle (time flies) when we need to start thinking > about next cycle already. > As You probably know, next virtual PTG will be in October 26-30. > I need to book some space for the Neuton team before 11th of September so I > prepared doodle [1] with possible time slots. Please reply there what are the > best days and hours for You so we can try to schedule our sessions in the time > slots which fits best most of us :) > Please fill this doodle before 4.09 so I will have time to summarize it and book > some slots for us. > > I also prepared etherpad [2]. Please add Your name if You are going to attend > the PTG sessions. > Please also add proposals of the topics which You want to discuss during the > PTG. > > [1] https://doodle.com/poll/2ppmnua2nuva5nyp > [2] https://etherpad.opendev.org/p/neutron-wallaby-ptg > > -- > Slawek Kaplonski > Principal software engineer > Red Hat -- Slawek Kaplonski Principal software engineer Red Hat From lyarwood at redhat.com Tue Sep 8 10:12:40 2020 From: lyarwood at redhat.com (Lee Yarwood) Date: Tue, 8 Sep 2020 11:12:40 +0100 Subject: [all][tc][goals] Migrate CI/CD jobs to new Ubuntu LTS Focal: Week R-4 Update In-Reply-To: <17468f8f785.1123a07e6307801.3844845549431858647@ghanshyammann.com> References: <17468f8f785.1123a07e6307801.3844845549431858647@ghanshyammann.com> Message-ID: <20200908101240.3kxygycjgo6cqh5u@lyarwood.usersys.redhat.com> On 07-09-20 09:29:40, Ghanshyam Mann wrote: > Bugs Report: > ========== > > 1. Bug#1882521. (IN-PROGRESS) > There is open bug for nova/cinder where three tempest tests are failing for > volume detach operation. There is no clear root cause found yet > -https://bugs.launchpad.net/cinder/+bug/1882521 > We have skipped the tests in tempest base patch to proceed with the other > projects testing but this is blocking things for the migration. FWIW this looks like a QEMU 4.2 issue and I've raised the following bug: Second DEVICE_DELETED event missing during virtio-blk disk device detach https://bugs.launchpad.net/qemu/+bug/1894804 -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From sean.mcginnis at gmx.com Tue Sep 8 10:57:28 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 8 Sep 2020 05:57:28 -0500 Subject: [release] Release countdown for week R-5 Sept 7 - 11 Message-ID: <20200908105728.GA725930@sm-workstation> We are getting close to the end of the Victoria cycle! 
Next week on September 10 is the victoria-3 milestone, also known as feature freeze. It's time to wrap up feature work in the services and their client libraries, and defer features that won't make it to Wallaby. General Information ------------------- This coming week is the deadline for client libraries: their last feature release needs to happen before "Client library freeze" on September 10. Only bugfix releases will be allowed beyond this point. When requesting those library releases, you can also include the stable/victoria branching request with the review. As an example, see the "branches" section here: https://opendev.org/openstack/releases/src/branch/master/deliverables/pike/os-brick.yaml#n2 September 10 is also the deadline for feature work in all OpenStack deliverables following the cycle-with-rc model. To help those projects produce a first release candidate in time, only bugfixes should be allowed in the master branch beyond this point. Any feature work past that deadline has to be raised as a Feature Freeze Exception (FFE) and approved by the team PTL. Finally, feature freeze is also the deadline for submitting a first version of your cycle-highlights. Cycle highlights are the raw data that helps shape what is communicated in press releases and other release activity at the end of the cycle, avoiding direct contacts from marketing folks. See https://docs.openstack.org/project-team-guide/release-management.html#cycle-highlights for more details. Upcoming Deadlines & Dates -------------------------- Client library freeze: September 10 (R-5 week) Victoria-3 milestone: September 10 (R-5 week) Cycle Highlights Due: September 10 (R-5 week) Final Victoria release: October 14 Open Infra Summit: October 19-23 Wallaby PTG: October 26-30
From reza.b2008 at gmail.com Tue Sep 8 11:03:30 2020 From: reza.b2008 at gmail.com (Reza Bakhshayeshi) Date: Tue, 8 Sep 2020 15:33:30 +0430 Subject: Floating IP problem in HA OVN DVR with TripleO In-Reply-To: <20200908080900.efy7bs2qnkzpwbwk@skaplons-mac> References: <20200908080900.efy7bs2qnkzpwbwk@skaplons-mac> Message-ID: Hi Slawek, I'm using the latest CentOS 8 Ussuri OVN packages at: https://trunk.rdoproject.org/centos8-ussuri/deps/latest/x86_64/ On both Controller and Compute I get: # rpm -qa | grep ovn ovn-host-20.03.0-4.el8.x86_64 ovn-20.03.0-4.el8.x86_64 # yum info ovn Installed Packages Name : ovn Version : 20.03.0 Release : 4.el8 Architecture : x86_64 Size : 12 M Source : ovn-20.03.0-4.el8.src.rpm Repository : @System From repo : delorean-ussuri-testing Summary : Open Virtual Network support URL : http://www.openvswitch.org/ License : ASL 2.0 and LGPLv2+ and SISSL Do you suggest installing ovn manually from source on containers? On Tue, 8 Sep 2020 at 12:39, Slawek Kaplonski wrote: > Hi, > > Maybe You hit this bug [1]. Please check what ovn version do You have and > maybe > update it if needed. > > On Mon, Sep 07, 2020 at 06:23:44PM +0430, Reza Bakhshayeshi wrote: > > Hi all, > > > > I deployed an environment with TripleO Ussuri with 3 HA Controllers and > > some Compute nodes with neutron-ovn-dvr-ha.yaml > > Instances have Internet access through routers with SNAT traffic (in this > > case traffic is routed via a controller node), and by assigning IP > address > > directly from provider network (not having a router). > > > > But in case of assigning FIP from provider to an instance, VM Internet > > connection is lost.
> > Here is the output of router nat lists, which seems OK: > > > > > > # ovn-nbctl lr-nat-list 587182a4-4d6b-41b0-9fd8-4c1be58811b0 > > TYPE EXTERNAL_IP EXTERNAL_PORT LOGICAL_IP > > EXTERNAL_MAC LOGICAL_PORT > > dnat_and_snat X.X.X.X 192.168.0.153 > > fa:16:3e:0a:86:4d e65bd8e9-5f95-4eb2-a316-97e86fbdb9b6 > > snat Y.Y.Y.Y 192.168.0.0/24 > > > > > > I replaced FIP with X.X.X.X and router IP with Y.Y.Y.Y > > > > When I remove * EXTERNAL_MAC* and *LOGICAL_PORT*, FIP works fine and as > it > > has to be, but traffic routes from a Controller node and it won't be > > distributed anymore. > > > > Any idea or suggestion would be grateful. > > Regards, > > Reza > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1834433 > > -- > Slawek Kaplonski > Principal software engineer > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.dibbo at stfc.ac.uk Tue Sep 8 12:47:43 2020 From: alexander.dibbo at stfc.ac.uk (Alexander Dibbo - UKRI STFC) Date: Tue, 8 Sep 2020 12:47:43 +0000 Subject: Create Application Credential not appearing in Horizon Message-ID: <6fd945e064ad4be88bad05c17594b067@stfc.ac.uk> Hi All, I'm having an issue where users with the "user" role in my deployment are not seeing the "Create Application Credential" button in horizon although users with the "admin" role can. I have tried modifying the keystone policy a couple of different ways to fix this but none have worked: "identity:create_application_credential": "user_id:%(user_id)s", "identity:create_application_credential": "role: user", "identity:create_application_credential": "rule:owner", "identity:create_application_credential": "", Could anyone point me in the right direction? My deployment is running the Train release of OpenStack Thanks, Alex Regards Alexander Dibbo - Cloud Architect / Cloud Operations Group Leader For STFC Cloud Documentation visit https://stfc-cloud-docs.readthedocs.io To raise a support ticket with the cloud team please email cloud-support at gridpp.rl.ac.uk To receive notifications about the service please subscribe to our mailing list at: https://www.jiscmail.ac.uk/cgi-bin/webadmin?A0=STFC-CLOUD To receive fast notifications or to discuss usage of the cloud please join our Slack: https://stfc-cloud.slack.com/ This email and any attachments are intended solely for the use of the named recipients. If you are not the intended recipient you must not use, disclose, copy or distribute this email or any of its attachments and should notify the sender immediately and delete this email from your system. UK Research and Innovation (UKRI) has taken every reasonable precaution to minimise risk of this email or any attachments containing viruses or malware but the recipient should carry out its own virus and malware checks before opening the attachments. UKRI does not accept any liability for any losses or damages which the recipient may sustain due to presence of any viruses. Opinions, conclusions or other information in this message and attachments that are not related directly to UKRI business are solely those of the author and do not represent the views of UKRI. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rsafrono at redhat.com Tue Sep 8 12:50:49 2020 From: rsafrono at redhat.com (Roman Safronov) Date: Tue, 8 Sep 2020 15:50:49 +0300 Subject: Floating IP problem in HA OVN DVR with TripleO In-Reply-To: References: <20200908080900.efy7bs2qnkzpwbwk@skaplons-mac> Message-ID: Hi Reza, Are you using 'geneve' tenant networks or 'vlan' ones? I am asking because with VLAN we have the following DVR issue [1] [1] Bug 1704596 - FIP traffix does not work on OVN-DVR setup when using VLAN tenant network type On Tue, Sep 8, 2020 at 2:04 PM Reza Bakhshayeshi wrote: > Hi Slawek, > > I'm using the latest CentOS 8 Ussuri OVN packages at: > https://trunk.rdoproject.org/centos8-ussuri/deps/latest/x86_64/ > > On both Controller and Compute I get: > > # rpm -qa | grep ovn > ovn-host-20.03.0-4.el8.x86_64 > ovn-20.03.0-4.el8.x86_64 > > # yum info ovn > Installed Packages > Name : ovn > Version : 20.03.0 > Release : 4.el8 > Architecture : x86_64 > Size : 12 M > Source : ovn-20.03.0-4.el8.src.rpm > Repository : @System > From repo : delorean-ussuri-testing > Summary : Open Virtual Network support > URL : http://www.openvswitch.org/ > License : ASL 2.0 and LGPLv2+ and SISSL > > Do you suggest installing ovn manually from source on containers? > ي > > On Tue, 8 Sep 2020 at 12:39, Slawek Kaplonski wrote: > >> Hi, >> >> Maybe You hit this bug [1]. Please check what ovn version do You have and >> maybe >> update it if needed. >> >> On Mon, Sep 07, 2020 at 06:23:44PM +0430, Reza Bakhshayeshi wrote: >> > Hi all, >> > >> > I deployed an environment with TripleO Ussuri with 3 HA Controllers and >> > some Compute nodes with neutron-ovn-dvr-ha.yaml >> > Instances have Internet access through routers with SNAT traffic (in >> this >> > case traffic is routed via a controller node), and by assigning IP >> address >> > directly from provider network (not having a router). >> > >> > But in case of assigning FIP from provider to an instance, VM Internet >> > connection is lost. >> > Here is the output of router nat lists, which seems OK: >> > >> > >> > # ovn-nbctl lr-nat-list 587182a4-4d6b-41b0-9fd8-4c1be58811b0 >> > TYPE EXTERNAL_IP EXTERNAL_PORT LOGICAL_IP >> > EXTERNAL_MAC LOGICAL_PORT >> > dnat_and_snat X.X.X.X 192.168.0.153 >> > fa:16:3e:0a:86:4d e65bd8e9-5f95-4eb2-a316-97e86fbdb9b6 >> > snat Y.Y.Y.Y 192.168.0.0/24 >> > >> > >> > I replaced FIP with X.X.X.X and router IP with Y.Y.Y.Y >> > >> > When I remove * EXTERNAL_MAC* and *LOGICAL_PORT*, FIP works fine and as >> it >> > has to be, but traffic routes from a Controller node and it won't be >> > distributed anymore. >> > >> > Any idea or suggestion would be grateful. >> > Regards, >> > Reza >> >> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1834433 >> >> -- >> Slawek Kaplonski >> Principal software engineer >> Red Hat >> >> -- ROMAN SAFRONOV SENIOR QE, OPENSTACK NETWORKING Red Hat Israel M: +972545433957 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ligang6 at huawei.com Mon Sep 7 14:04:20 2020 From: ligang6 at huawei.com (ligang (P)) Date: Mon, 7 Sep 2020 14:04:20 +0000 Subject: watchdog fed successfully event of 6300esb Message-ID: <624825e298f94650a7b69f483c9de84d@huawei.com> Hi folks, I have an question to discuss about the 6300esb watchdog. I think is it possible that qemu can send an event while the watchdog successfully fed by the vm at the first time. 
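For reference, before the details below, the listening side involved here is roughly the minimal libvirt-python sketch that follows. It is simplified and not the actual monitor code, and it only uses the VIR_DOMAIN_EVENT_ID_WATCHDOG callback that exists today; the "watchdog fed successfully" notification proposed in this mail does not exist and is only mentioned hypothetically in the comments.

# Minimal sketch of a watchdog-event listener (simplified, not the actual
# monitor code). Only watchdog *timeout* actions are delivered today; the
# "fed successfully" event proposed in this thread would need a new action
# constant in QEMU/libvirt and is purely hypothetical.
import libvirt

def watchdog_cb(conn, dom, action, opaque):
    # action is one of libvirt.VIR_DOMAIN_EVENT_WATCHDOG_* (NONE, PAUSE,
    # RESET, POWEROFF, SHUTDOWN, DEBUG, INJECTNMI)
    print("watchdog event on %s: action=%d" % (dom.name(), action))
    # this is where the alarm towards the upper-layer platform would be raised

libvirt.virEventRegisterDefaultImpl()        # register event loop before opening
conn = libvirt.open("qemu:///system")
conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_WATCHDOG,
                            watchdog_cb, None)
while True:
    libvirt.virEventRunDefaultImpl()         # dispatch pending events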
Here is the situation: QEMU sends a VIR_DOMAIN_EVENT_ID_WATCHDOG event when the watchdog times out, and if the watchdog action in the VM's XML is set to "reset", the VM is rebooted on timeout. I have a monitor process that registers a callback for the VIR_DOMAIN_EVENT_ID_WATCHDOG event; the callback sends an alarm to my upper-layer monitoring platform indicating that the VM is faulty, and the cluster of business services deployed on the VM isolates that VM based on the alarm. After the VM has rebooted, the monitor process receives a reboot event and sends it to the platform, the upper-layer monitoring platform clears the alarm, and the business continues to run on the VM. In most cases the watchdog process in the VM feeds the watchdog again after the reboot and everything goes back on track. In some other cases, however, the guest OS may fail to start (in my environment the VM failed to start because of an I/O error), but the reboot event is still received, the alarm is cleared, and the VM is still faulty. So it may not be a good idea to clear the alarm based on the reboot event. That is why I think it would be helpful if QEMU could send an event when the watchdog is successfully fed by the VM for the first time, so that I know for certain the guest OS is running again and the watchdog initialized successfully. Or is there any other opinion about this situation? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Tue Sep 8 14:18:14 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 8 Sep 2020 16:18:14 +0200 Subject: Create Application Credential not appearing in Horizon In-Reply-To: <6fd945e064ad4be88bad05c17594b067@stfc.ac.uk> References: <6fd945e064ad4be88bad05c17594b067@stfc.ac.uk> Message-ID: Hi Alex, In addition to adding this setting to Keystone's policy file, does Horizon have its own copy of the keystone policy file accessible and including these settings? By default it is named keystone_policy.json. Best wishes, Pierre On Tue, 8 Sep 2020 at 14:48, Alexander Dibbo - UKRI STFC < alexander.dibbo at stfc.ac.uk> wrote: > Hi All, > > I’m having an issue where users with the “user” role in my deployment are > not seeing the “Create Application Credential” button in horizon although > users with the “admin” role can. > > > > I have tried modifying the keystone policy a couple of different ways to > fix this but none have worked: > > "identity:create_application_credential": "user_id:%(user_id)s", > > "identity:create_application_credential": "role: user", > > "identity:create_application_credential": "rule:owner", > > "identity:create_application_credential": "", > > > > Could anyone point me in the right direction? > > > > My deployment is running the Train release of OpenStack > > > > Thanks, > > > > Alex > > > > > > Regards > > > > Alexander Dibbo – Cloud Architect / Cloud Operations Group Leader > > For STFC Cloud Documentation visit https://stfc-cloud-docs.readthedocs.io > > To raise a support ticket with the cloud team please email > cloud-support at gridpp.rl.ac.uk > > To receive notifications about the service please subscribe to our mailing > list at: https://www.jiscmail.ac.uk/cgi-bin/webadmin?A0=STFC-CLOUD > > To receive fast notifications or to discuss usage of the cloud please join > our Slack: https://stfc-cloud.slack.com/ > > > > This email and any attachments are intended solely for the use of the > named recipients.
If you are not the intended recipient you must not use, > disclose, copy or distribute this email or any of its attachments and > should notify the sender immediately and delete this email from your > system. UK Research and Innovation (UKRI) has taken every reasonable > precaution to minimise risk of this email or any attachments containing > viruses or malware but the recipient should carry out its own virus and > malware checks before opening the attachments. UKRI does not accept any > liability for any losses or damages which the recipient may sustain due to > presence of any viruses. Opinions, conclusions or other information in this > message and attachments that are not related directly to UKRI business are > solely those of the author and do not represent the views of UKRI. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.dibbo at stfc.ac.uk Tue Sep 8 14:25:38 2020 From: alexander.dibbo at stfc.ac.uk (Alexander Dibbo - UKRI STFC) Date: Tue, 8 Sep 2020 14:25:38 +0000 Subject: Create Application Credential not appearing in Horizon In-Reply-To: References: <6fd945e064ad4be88bad05c17594b067@stfc.ac.uk> Message-ID: Thanks Pierre, That’s got it, I knew it would be something simple Thanks Regards Alexander Dibbo – Cloud Architect / Cloud Operations Group Leader For STFC Cloud Documentation visit https://stfc-cloud-docs.readthedocs.io To raise a support ticket with the cloud team please email cloud-support at gridpp.rl.ac.uk To receive notifications about the service please subscribe to our mailing list at: https://www.jiscmail.ac.uk/cgi-bin/webadmin?A0=STFC-CLOUD To receive fast notifications or to discuss usage of the cloud please join our Slack: https://stfc-cloud.slack.com/ From: Pierre Riteau Sent: 08 September 2020 15:18 To: Dibbo, Alexander (STFC,RAL,SC) Cc: openstack-discuss at lists.openstack.org Subject: Re: Create Application Credential not appearing in Horizon Hi Alex, In addition to adding this setting to Keystone's policy file, does Horizon have its own copy of the keystone policy file accessible and including these settings? By default it is named keystone_policy.json. Best wishes, Pierre On Tue, 8 Sep 2020 at 14:48, Alexander Dibbo - UKRI STFC > wrote: Hi All, I’m having an issue where users with the “user” role in my deployment are not seeing the “Create Application Credential” button in horizon although users with the “admin” role can. I have tried modifying the keystone policy a couple of different ways to fix this but none have worked: "identity:create_application_credential": "user_id:%(user_id)s", "identity:create_application_credential": "role: user", "identity:create_application_credential": "rule:owner", "identity:create_application_credential": "", Could anyone point me in the right direction? My deployment is running the Train release of OpenStack Thanks, Alex Regards Alexander Dibbo – Cloud Architect / Cloud Operations Group Leader For STFC Cloud Documentation visit https://stfc-cloud-docs.readthedocs.io To raise a support ticket with the cloud team please email cloud-support at gridpp.rl.ac.uk To receive notifications about the service please subscribe to our mailing list at: https://www.jiscmail.ac.uk/cgi-bin/webadmin?A0=STFC-CLOUD To receive fast notifications or to discuss usage of the cloud please join our Slack: https://stfc-cloud.slack.com/ This email and any attachments are intended solely for the use of the named recipients. 
If you are not the intended recipient you must not use, disclose, copy or distribute this email or any of its attachments and should notify the sender immediately and delete this email from your system. UK Research and Innovation (UKRI) has taken every reasonable precaution to minimise risk of this email or any attachments containing viruses or malware but the recipient should carry out its own virus and malware checks before opening the attachments. UKRI does not accept any liability for any losses or damages which the recipient may sustain due to presence of any viruses. Opinions, conclusions or other information in this message and attachments that are not related directly to UKRI business are solely those of the author and do not represent the views of UKRI. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Tue Sep 8 14:40:43 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 8 Sep 2020 16:40:43 +0200 Subject: [cloudkitty] Virtual PTG planning In-Reply-To: References: Message-ID: Hi Rafael, Thanks a lot for your reply and for volunteering to help moderate the discussion. I've answered the survey to register our interest in attending. I also booked three hours in the Cactus room starting at 14 UTC on Friday, October 30. That's probably above what we really need, so don't feel like you have to use it all if there's no more to discuss. Best wishes, Pierre On Mon, 7 Sep 2020 at 17:26, Rafael Weingärtner wrote: > I'm available. I am not sure though if we need half a day. That seems > quite a lot, but I might be mistaken (as I have never participated in a PTG > planning meeting). > 14 UTC works for me on any of the proposed days (Friday October 30 seems > to be free a series of slots in the Cactus room). I could also help being > the chair, if needed. > > > On Fri, Sep 4, 2020 at 12:47 PM Pierre Riteau wrote: > >> Hi, >> >> You may know that the next PTG will be held virtually during the week >> of October 26-30, 2020. >> I will very likely *not* be available during that time, so I would like >> to hear from the CloudKitty community: >> >> - if you would like to meet and for how long (a half day may be enough >> depending on the agenda) >> - what day and time is preferred (see list in >> https://ethercalc.openstack.org/7xp2pcbh1ncb) >> - if anyone is willing to chair the discussions (I can help you prepare >> an agenda before the event) >> >> Thanks in advance, >> Pierre Riteau (priteau) >> > > > -- > Rafael Weingärtner > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Tue Sep 8 15:13:05 2020 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Tue, 8 Sep 2020 20:43:05 +0530 Subject: [requirements][FFE] Cinder multiple stores support Message-ID: Hi Team, Last week we released glance_store 2.3.0 which adds support for configuring cinder multiple stores as glance backend. While adding functional tests in glance for the same [1], we have noticed that it is failing with some hard requirements from oslo side to use project_id instead of tenant and user_id instead of user. It is really strange behavior as this failure occurs only in functional tests but works properly in the actual environment without any issue. The fix is proposed in glance_store [2] to resolve this issue. I would like to apply for FFE with this glance_store patch [2] to be approved and release a new version of glance_store 2.3.1. Kindly provide approval for the same. 
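For reviewers who have not followed the feature: what the 2.3.x series enables is, roughly, a glance-api.conf layout along the lines of the sketch below, with several named cinder stores each selecting a different volume type. This is only an illustrative sketch based on the multi-store documentation, not a tested configuration, and the store names and volume types are made up.

# Illustrative glance-api.conf sketch only; the store names and volume
# types here are invented for the example.
[DEFAULT]
enabled_backends = cinder_fast:cinder, cinder_slow:cinder

[glance_store]
default_backend = cinder_fast

[cinder_fast]
cinder_volume_type = ssd
cinder_store_auth_address = http://controller/identity/v3
cinder_store_user_name = glance
cinder_store_password = <password>
cinder_store_project_name = service

[cinder_slow]
cinder_volume_type = hdd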
[1] https://review.opendev.org/#/c/750144/ [2] https://review.opendev.org/#/c/750131/ Thanks and Regards, Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Tue Sep 8 15:23:03 2020 From: akekane at redhat.com (Abhishek Kekane) Date: Tue, 8 Sep 2020 20:53:03 +0530 Subject: [requirements][FFE] Cinder multiple stores support In-Reply-To: References: Message-ID: Hi Team, The reason for failure is we are suppressing Deprecation warning into error in glance [1] and we are using those deprecated parameters in glance_store. This is the reason why it is only failing in functional tests [2] and not in actual scenarios. [1] https://opendev.org/openstack/glance/src/branch/master/glance/tests/unit/fixtures.py#L133-L136 [2]https://review.opendev.org/#/c/750144/ Thanks & Best Regards, Abhishek Kekane On Tue, Sep 8, 2020 at 8:48 PM Rajat Dhasmana wrote: > Hi Team, > > Last week we released glance_store 2.3.0 which adds support for configuring cinder multiple stores as glance backend. > While adding functional tests in glance for the same [1], we have noticed that it is failing with some hard requirements from oslo side to use project_id instead of tenant and user_id instead of user. > It is really strange behavior as this failure occurs only in functional tests but works properly in the actual environment without any issue. The fix is proposed in glance_store [2] to resolve this issue. > > I would like to apply for FFE with this glance_store patch [2] to be approved and release a new version of glance_store 2.3.1. > > Kindly provide approval for the same. > > [1] https://review.opendev.org/#/c/750144/ > [2] https://review.opendev.org/#/c/750131/ > > Thanks and Regards, > Rajat Dhasmana > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.dibbo at stfc.ac.uk Tue Sep 8 15:28:01 2020 From: alexander.dibbo at stfc.ac.uk (Alexander Dibbo - UKRI STFC) Date: Tue, 8 Sep 2020 15:28:01 +0000 Subject: Application Credentials with federated users Message-ID: <7fc8d66653064962aa458e13124bcb9d@stfc.ac.uk> Hi All, Is it possible for a user logging in via an oidc provider to generate application credentials? When I try it I get an error about there being no role for the user in the project. We map the users to groups based on assertions in their tokens. It looks like it would work if we mapped users individually to local users in keystone and then gave those roles. I would prefer to avoid using per user mappings for this if possible as it would be a lot of extra work for my team. Regards Alexander Dibbo - Cloud Architect / Cloud Operations Group Leader For STFC Cloud Documentation visit https://stfc-cloud-docs.readthedocs.io To raise a support ticket with the cloud team please email cloud-support at gridpp.rl.ac.uk To receive notifications about the service please subscribe to our mailing list at: https://www.jiscmail.ac.uk/cgi-bin/webadmin?A0=STFC-CLOUD To receive fast notifications or to discuss usage of the cloud please join our Slack: https://stfc-cloud.slack.com/ This email and any attachments are intended solely for the use of the named recipients. If you are not the intended recipient you must not use, disclose, copy or distribute this email or any of its attachments and should notify the sender immediately and delete this email from your system. 
UK Research and Innovation (UKRI) has taken every reasonable precaution to minimise risk of this email or any attachments containing viruses or malware but the recipient should carry out its own virus and malware checks before opening the attachments. UKRI does not accept any liability for any losses or damages which the recipient may sustain due to presence of any viruses. Opinions, conclusions or other information in this message and attachments that are not related directly to UKRI business are solely those of the author and do not represent the views of UKRI. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Sep 8 15:45:36 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 08 Sep 2020 10:45:36 -0500 Subject: [oslo][release][requirement] FFE request for Oslo lib Message-ID: <1746e64d702.ee80b0bc1249.5426348472779199647@ghanshyammann.com> Hello Team, This is regarding FFE for the Focal migration work. As planned, we have to move the Victoria testing to Focal, and the base job switch is planned to happen by today [1]. There are a few oslo libs that need work (especially tox job-based testing, not user-facing changes) to pass on Focal - https://review.opendev.org/#/q/topic:migrate-to-focal-oslo+(status:open+OR+status:merged) If we move the base tox jobs to Focal then these libs' victoria gates (especially the lower-constraints job) will be failing. We can either do these as FFE or backport them (as these are the libs' own CI fixes only) later once the victoria branch is open. Opinions? [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017060.html -gmann From its-openstack at zohocorp.com Tue Sep 8 16:49:51 2020 From: its-openstack at zohocorp.com (its-openstack at zohocorp.com) Date: Tue, 08 Sep 2020 22:19:51 +0530 Subject: Windows 10 instance hostname not updating Message-ID: <1746e9fa998.113c837477438.2565411855453438482@zohocorp.com> Dear openstack, I have installed the openstack train branch and I am facing an issue with a windows image. Windows 10 instances do not get their hostname updated from the metadata, although the metadata (hostname) can be retrieved from inside the instance using powershell: ``` $ Invoke-WebRequest http://169.254.169.254/latest/meta-data/hostname -UseBasicParsing ``` Windows 2016 instances have no such issue. We are using the stable cloudbase-init package for the preparation of Windows 10. We would appreciate your kind help with this issue. Regards sysadmin -------------- next part -------------- An HTML attachment was scrubbed... URL: From reza.b2008 at gmail.com Tue Sep 8 16:51:32 2020 From: reza.b2008 at gmail.com (Reza Bakhshayeshi) Date: Tue, 8 Sep 2020 21:21:32 +0430 Subject: Floating IP problem in HA OVN DVR with TripleO In-Reply-To: References: <20200908080900.efy7bs2qnkzpwbwk@skaplons-mac> Message-ID: Hi Roman, I'm using 'geneve' for my tenant networks. By the way, when pinging 8.8.8.8 from an instance with a FIP, tcpdump on its Compute node shows an ARP request for every lost ping. Is it normal behaviour? 21:13:04.808508 ARP, Request who-has dns.google tell X.X.X.X , length 28 21:13:05.808726 ARP, Request who-has dns.google tell X.X.X.X , length 28 21:13:06.808900 ARP, Request who-has dns.google tell X.X.X.X , length 28 . . . X.X.X.X is the FIP of the VM. On Tue, 8 Sep 2020 at 17:21, Roman Safronov wrote: > Hi Reza, > > Are you using 'geneve' tenant networks or 'vlan' ones?
I am asking because > with VLAN we have the following DVR issue [1] > > [1] Bug 1704596 - FIP traffix does not work on OVN-DVR setup when using > VLAN tenant network type > > > On Tue, Sep 8, 2020 at 2:04 PM Reza Bakhshayeshi > wrote: > >> Hi Slawek, >> >> I'm using the latest CentOS 8 Ussuri OVN packages at: >> https://trunk.rdoproject.org/centos8-ussuri/deps/latest/x86_64/ >> >> On both Controller and Compute I get: >> >> # rpm -qa | grep ovn >> ovn-host-20.03.0-4.el8.x86_64 >> ovn-20.03.0-4.el8.x86_64 >> >> # yum info ovn >> Installed Packages >> Name : ovn >> Version : 20.03.0 >> Release : 4.el8 >> Architecture : x86_64 >> Size : 12 M >> Source : ovn-20.03.0-4.el8.src.rpm >> Repository : @System >> From repo : delorean-ussuri-testing >> Summary : Open Virtual Network support >> URL : http://www.openvswitch.org/ >> License : ASL 2.0 and LGPLv2+ and SISSL >> >> Do you suggest installing ovn manually from source on containers? >> ي >> >> On Tue, 8 Sep 2020 at 12:39, Slawek Kaplonski >> wrote: >> >>> Hi, >>> >>> Maybe You hit this bug [1]. Please check what ovn version do You have >>> and maybe >>> update it if needed. >>> >>> On Mon, Sep 07, 2020 at 06:23:44PM +0430, Reza Bakhshayeshi wrote: >>> > Hi all, >>> > >>> > I deployed an environment with TripleO Ussuri with 3 HA Controllers and >>> > some Compute nodes with neutron-ovn-dvr-ha.yaml >>> > Instances have Internet access through routers with SNAT traffic (in >>> this >>> > case traffic is routed via a controller node), and by assigning IP >>> address >>> > directly from provider network (not having a router). >>> > >>> > But in case of assigning FIP from provider to an instance, VM Internet >>> > connection is lost. >>> > Here is the output of router nat lists, which seems OK: >>> > >>> > >>> > # ovn-nbctl lr-nat-list 587182a4-4d6b-41b0-9fd8-4c1be58811b0 >>> > TYPE EXTERNAL_IP EXTERNAL_PORT LOGICAL_IP >>> > EXTERNAL_MAC LOGICAL_PORT >>> > dnat_and_snat X.X.X.X 192.168.0.153 >>> > fa:16:3e:0a:86:4d e65bd8e9-5f95-4eb2-a316-97e86fbdb9b6 >>> > snat Y.Y.Y.Y 192.168.0.0/24 >>> > >>> > >>> > I replaced FIP with X.X.X.X and router IP with Y.Y.Y.Y >>> > >>> > When I remove * EXTERNAL_MAC* and *LOGICAL_PORT*, FIP works fine and >>> as it >>> > has to be, but traffic routes from a Controller node and it won't be >>> > distributed anymore. >>> > >>> > Any idea or suggestion would be grateful. >>> > Regards, >>> > Reza >>> >>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1834433 >>> >>> -- >>> Slawek Kaplonski >>> Principal software engineer >>> Red Hat >>> >>> > > -- > > ROMAN SAFRONOV > > SENIOR QE, OPENSTACK NETWORKING > > Red Hat > > Israel > > M: +972545433957 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rsafrono at redhat.com Tue Sep 8 17:10:15 2020 From: rsafrono at redhat.com (Roman Safronov) Date: Tue, 8 Sep 2020 20:10:15 +0300 Subject: Floating IP problem in HA OVN DVR with TripleO In-Reply-To: References: <20200908080900.efy7bs2qnkzpwbwk@skaplons-mac> Message-ID: > > I'm using 'geneve' for my tenant networks. > > By the way, by pinging 8.8.8.8 from an instance with FIP, tcpdump on its > Compute node shows an ARP request for every lost ping. Is it normal > behaviour? > > 21:13:04.808508 ARP, Request who-has dns.google tell X.X.X.X , length 28 > 21:13:05.808726 ARP, Request who-has dns.google tell X.X.X.X , length 28 > 21:13:06.808900 ARP, Request who-has dns.google tell X.X.X.X , length 28 > . > . > . > X.X.X.X if FIP of VM. > If so, it looks like the bug mentioned above. 
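One more thing worth double-checking on a containerized TripleO deployment: the host RPMs are not necessarily the build that ovn-controller is actually running. Something along the lines below shows the version inside the containers; the container names are the usual TripleO defaults and may differ on your setup.

# on a compute node (default TripleO container name; adjust if needed)
sudo podman exec ovn_controller ovn-controller --version

# on a controller node the OVN databases run in a pacemaker bundle, so
# look up the exact container name first
sudo podman ps --format '{{.Names}}' | grep -i ovn
sudo podman exec <ovn-dbs-container-name> ovn-nbctl --version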
On Tue, Sep 8, 2020 at 7:51 PM Reza Bakhshayeshi wrote: > Hi Roman, > > I'm using 'geneve' for my tenant networks. > > By the way, by pinging 8.8.8.8 from an instance with FIP, tcpdump on its > Compute node shows an ARP request for every lost ping. Is it normal > behaviour? > > 21:13:04.808508 ARP, Request who-has dns.google tell X.X.X.X , length 28 > 21:13:05.808726 ARP, Request who-has dns.google tell X.X.X.X , length 28 > 21:13:06.808900 ARP, Request who-has dns.google tell X.X.X.X , length 28 > . > . > . > X.X.X.X if FIP of VM. > > > On Tue, 8 Sep 2020 at 17:21, Roman Safronov wrote: > >> Hi Reza, >> >> Are you using 'geneve' tenant networks or 'vlan' ones? I am asking >> because with VLAN we have the following DVR issue [1] >> >> [1] Bug 1704596 - FIP traffix does not work on OVN-DVR setup when using >> VLAN tenant network type >> >> >> On Tue, Sep 8, 2020 at 2:04 PM Reza Bakhshayeshi >> wrote: >> >>> Hi Slawek, >>> >>> I'm using the latest CentOS 8 Ussuri OVN packages at: >>> https://trunk.rdoproject.org/centos8-ussuri/deps/latest/x86_64/ >>> >>> On both Controller and Compute I get: >>> >>> # rpm -qa | grep ovn >>> ovn-host-20.03.0-4.el8.x86_64 >>> ovn-20.03.0-4.el8.x86_64 >>> >>> # yum info ovn >>> Installed Packages >>> Name : ovn >>> Version : 20.03.0 >>> Release : 4.el8 >>> Architecture : x86_64 >>> Size : 12 M >>> Source : ovn-20.03.0-4.el8.src.rpm >>> Repository : @System >>> From repo : delorean-ussuri-testing >>> Summary : Open Virtual Network support >>> URL : http://www.openvswitch.org/ >>> License : ASL 2.0 and LGPLv2+ and SISSL >>> >>> Do you suggest installing ovn manually from source on containers? >>> ي >>> >>> On Tue, 8 Sep 2020 at 12:39, Slawek Kaplonski >>> wrote: >>> >>>> Hi, >>>> >>>> Maybe You hit this bug [1]. Please check what ovn version do You have >>>> and maybe >>>> update it if needed. >>>> >>>> On Mon, Sep 07, 2020 at 06:23:44PM +0430, Reza Bakhshayeshi wrote: >>>> > Hi all, >>>> > >>>> > I deployed an environment with TripleO Ussuri with 3 HA Controllers >>>> and >>>> > some Compute nodes with neutron-ovn-dvr-ha.yaml >>>> > Instances have Internet access through routers with SNAT traffic (in >>>> this >>>> > case traffic is routed via a controller node), and by assigning IP >>>> address >>>> > directly from provider network (not having a router). >>>> > >>>> > But in case of assigning FIP from provider to an instance, VM Internet >>>> > connection is lost. >>>> > Here is the output of router nat lists, which seems OK: >>>> > >>>> > >>>> > # ovn-nbctl lr-nat-list 587182a4-4d6b-41b0-9fd8-4c1be58811b0 >>>> > TYPE EXTERNAL_IP EXTERNAL_PORT LOGICAL_IP >>>> > EXTERNAL_MAC LOGICAL_PORT >>>> > dnat_and_snat X.X.X.X 192.168.0.153 >>>> > fa:16:3e:0a:86:4d e65bd8e9-5f95-4eb2-a316-97e86fbdb9b6 >>>> > snat Y.Y.Y.Y 192.168.0.0/24 >>>> > >>>> > >>>> > I replaced FIP with X.X.X.X and router IP with Y.Y.Y.Y >>>> > >>>> > When I remove * EXTERNAL_MAC* and *LOGICAL_PORT*, FIP works fine and >>>> as it >>>> > has to be, but traffic routes from a Controller node and it won't be >>>> > distributed anymore. >>>> > >>>> > Any idea or suggestion would be grateful. >>>> > Regards, >>>> > Reza >>>> >>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1834433 >>>> >>>> -- >>>> Slawek Kaplonski >>>> Principal software engineer >>>> Red Hat >>>> >>>> >> >> -- >> >> ROMAN SAFRONOV >> >> SENIOR QE, OPENSTACK NETWORKING >> >> Red Hat >> >> Israel >> >> M: +972545433957 >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mnasiadka at gmail.com Tue Sep 8 17:12:33 2020 From: mnasiadka at gmail.com (=?UTF-8?Q?Micha=C5=82_Nasiadka?=) Date: Tue, 8 Sep 2020 19:12:33 +0200 Subject: Floating IP problem in HA OVN DVR with TripleO In-Reply-To: References: <20200908080900.efy7bs2qnkzpwbwk@skaplons-mac> Message-ID: Hi Reza, Here is a related bug: https://bugs.launchpad.net/bugs/1881041 I had to use ovn/ovs 2.13 builds from cbs to overcome this issue ( https://cbs.centos.org/koji/buildinfo?buildID=30482) Regards, Michal On Tue, 8 Sep 2020 at 18:52, Reza Bakhshayeshi wrote: > Hi Roman, > > I'm using 'geneve' for my tenant networks. > > By the way, by pinging 8.8.8.8 from an instance with FIP, tcpdump on its > Compute node shows an ARP request for every lost ping. Is it normal > behaviour? > > 21:13:04.808508 ARP, Request who-has dns.google tell > > X.X.X.X > > > > , length 28 > 21:13:05.808726 ARP, Request who-has dns.google tell > > X.X.X.X > > > > , length 28 > 21:13:06.808900 ARP, Request who-has dns.google tell > > X.X.X.X > > > > , length 28 > . > . > . > X.X.X.X if FIP of VM. > > > On Tue, 8 Sep 2020 at 17:21, Roman Safronov wrote: > >> Hi Reza, >> >> Are you using 'geneve' tenant networks or 'vlan' ones? I am asking >> because with VLAN we have the following DVR issue [1] >> >> [1] Bug 1704596 - FIP traffix does not work on OVN-DVR setup when using >> VLAN tenant network type >> >> >> On Tue, Sep 8, 2020 at 2:04 PM Reza Bakhshayeshi >> wrote: >> >>> Hi Slawek, >>> >>> I'm using the latest CentOS 8 Ussuri OVN packages at: >>> https://trunk.rdoproject.org/centos8-ussuri/deps/latest/x86_64/ >>> >>> On both Controller and Compute I get: >>> >>> # rpm -qa | grep ovn >>> ovn-host-20.03.0-4.el8.x86_64 >>> ovn-20.03.0-4.el8.x86_64 >>> >>> # yum info ovn >>> Installed Packages >>> Name : ovn >>> Version : 20.03.0 >>> Release : 4.el8 >>> Architecture : x86_64 >>> Size : 12 M >>> Source : ovn-20.03.0-4.el8.src.rpm >>> Repository : @System >>> From repo : delorean-ussuri-testing >>> Summary : Open Virtual Network support >>> URL : http://www.openvswitch.org/ >>> License : ASL 2.0 and LGPLv2+ and SISSL >>> >>> Do you suggest installing ovn manually from source on containers? >>> ي >>> >>> On Tue, 8 Sep 2020 at 12:39, Slawek Kaplonski >>> wrote: >>> >>>> Hi, >>>> >>>> >>>> >>>> >>>> >>>> Maybe You hit this bug [1]. Please check what ovn version do You have >>>> and maybe >>>> >>>> >>>> update it if needed. >>>> >>>> >>>> >>>> >>>> >>>> On Mon, Sep 07, 2020 at 06:23:44PM +0430, Reza Bakhshayeshi wrote: >>>> >>>> >>>> > Hi all, >>>> >>>> >>>> > >>>> >>>> >>>> > I deployed an environment with TripleO Ussuri with 3 HA Controllers >>>> and >>>> >>>> >>>> > some Compute nodes with neutron-ovn-dvr-ha.yaml >>>> >>>> >>>> > Instances have Internet access through routers with SNAT traffic (in >>>> this >>>> >>>> >>>> > case traffic is routed via a controller node), and by assigning IP >>>> address >>>> >>>> >>>> > directly from provider network (not having a router). >>>> >>>> >>>> > >>>> >>>> >>>> > But in case of assigning FIP from provider to an instance, VM Internet >>>> >>>> >>>> > connection is lost. 
>>>> >>>> >>>> > Here is the output of router nat lists, which seems OK: >>>> >>>> >>>> > >>>> >>>> >>>> > >>>> >>>> >>>> > # ovn-nbctl lr-nat-list 587182a4-4d6b-41b0-9fd8-4c1be58811b0 >>>> >>>> >>>> > TYPE EXTERNAL_IP EXTERNAL_PORT LOGICAL_IP >>>> >>>> >>>> > EXTERNAL_MAC LOGICAL_PORT >>>> >>>> >>>> > dnat_and_snat X.X.X.X 192.168.0.153 >>>> >>>> >>>> > fa:16:3e:0a:86:4d e65bd8e9-5f95-4eb2-a316-97e86fbdb9b6 >>>> >>>> >>>> > snat Y.Y.Y.Y 192.168.0.0/24 >>>> >>>> >>>> > >>>> >>>> >>>> > >>>> >>>> >>>> > I replaced FIP with X.X.X.X and router IP with Y.Y.Y.Y >>>> >>>> >>>> > >>>> >>>> >>>> > When I remove * EXTERNAL_MAC* and *LOGICAL_PORT*, FIP works fine and >>>> as it >>>> >>>> >>>> > has to be, but traffic routes from a Controller node and it won't be >>>> >>>> >>>> > distributed anymore. >>>> >>>> >>>> > >>>> >>>> >>>> > Any idea or suggestion would be grateful. >>>> >>>> >>>> > Regards, >>>> >>>> >>>> > Reza >>>> >>>> >>>> >>>> >>>> >>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1834433 >>>> >>>> >>>> >>>> >>>> >>>> -- >>>> >>>> >>>> Slawek Kaplonski >>>> >>>> >>>> Principal software engineer >>>> >>>> >>>> Red Hat >>>> >>>> >>>> >>>> >>>> >>>> >>> >>> >> >> -- >> >> ROMAN SAFRONOV >> >> SENIOR QE, OPENSTACK NETWORKING >> >> Red Hat >> >> Israel >> >> M: +972545433957 >> >> >> >> >> > > -- Michał Nasiadka mnasiadka at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From cohuck at redhat.com Tue Sep 8 14:41:30 2020 From: cohuck at redhat.com (Cornelia Huck) Date: Tue, 8 Sep 2020 16:41:30 +0200 Subject: device compatibility interface for live migration with assigned devices In-Reply-To: <20200831044344.GB13784@joy-OptiPlex-7040> References: <3a073222-dcfe-c02d-198b-29f6a507b2e1@redhat.com> <20200818091628.GC20215@redhat.com> <20200818113652.5d81a392.cohuck@redhat.com> <20200820003922.GE21172@joy-OptiPlex-7040> <20200819212234.223667b3@x1.home> <20200820031621.GA24997@joy-OptiPlex-7040> <20200825163925.1c19b0f0.cohuck@redhat.com> <20200826064117.GA22243@joy-OptiPlex-7040> <20200828154741.30cfc1a3.cohuck@redhat.com> <8f5345be73ebf4f8f7f51d6cdc9c2a0d8e0aa45e.camel@redhat.com> <20200831044344.GB13784@joy-OptiPlex-7040> Message-ID: <20200908164130.2fe0d106.cohuck@redhat.com> On Mon, 31 Aug 2020 12:43:44 +0800 Yan Zhao wrote: > On Fri, Aug 28, 2020 at 03:04:12PM +0100, Sean Mooney wrote: > > On Fri, 2020-08-28 at 15:47 +0200, Cornelia Huck wrote: > > > On Wed, 26 Aug 2020 14:41:17 +0800 > > > Yan Zhao wrote: > > > > > > > previously, we want to regard the two mdevs created with dsa-1dwq x 30 and > > > > dsa-2dwq x 15 as compatible, because the two mdevs consist equal resources. > > > > > > > > But, as it's a burden to upper layer, we agree that if this condition > > > > happens, we still treat the two as incompatible. > > > > > > > > To fix it, either the driver should expose dsa-1dwq only, or the target > > > > dsa-2dwq needs to be destroyed and reallocated via dsa-1dwq x 30. > > > > > > AFAIU, these are mdev types, aren't they? So, basically, any management > > > software needs to take care to use the matching mdev type on the target > > > system for device creation? > > > > or just do the simple thing of use the same mdev type on the source and dest. > > matching mdevtypes is not nessiarly trivial. we could do that but we woudl have > > to do that in python rather then sql so it would be slower to do at least today. > > > > we dont currently have the ablity to say the resouce provider must have 1 of these > > set of traits. 
just that we must have a specific trait. this is a feature we have > > disucssed a couple of times and delayed untill we really really need it but its not out > > of the question that we could add it for this usecase. i suspect however we would do exact > > match first and explore this later after the inital mdev migration works. > > Yes, I think it's good. > > still, I'd like to put it more explicitly to make ensure it's not missed: > the reason we want to specify compatible_type as a trait and check > whether target compatible_type is the superset of source > compatible_type is for the consideration of backward compatibility. > e.g. > an old generation device may have a mdev type xxx-v4-yyy, while a newer > generation device may be of mdev type xxx-v5-yyy. > with the compatible_type traits, the old generation device is still > able to be regarded as compatible to newer generation device even their > mdev types are not equal. If you want to support migration from v4 to v5, can't the (presumably newer) driver that supports v5 simply register the v4 type as well, so that the mdev can be created as v4? (Just like QEMU versioned machine types work.) From alexis.deberg at ubisoft.com Tue Sep 8 14:46:29 2020 From: alexis.deberg at ubisoft.com (Alexis Deberg) Date: Tue, 8 Sep 2020 14:46:29 +0000 Subject: [neutron] Flow drop on agent restart with openvswitch firewall driver Message-ID: Hi All, I'm looking for ideas as we need to upgrade our Neutron deployment and it looks like it would impact workloads a bit much for now to do so and i'm no master of the neutron code... We're running Neutron 14.0.2 with ml2 plugin and firewall_driver set as openvswitch. drop_flows_on_start is default False. Reading at some old bug reports my understanding was that a restart of the neutron-openvswitch-agent should not impact existing flows and be seamless, but this is not what I'm experiencing as I see some temporary drop(s) around when ovs-fctl del-flows/add-flows is called on br-int (either east-west traffic or north-south). I tried switching to iptables_hybrid driver instead and I don't see the issue in that case. e.g when a wget download is happening on an instance while the agent is restarting, I see the following: 2020-09-08 14:26:09 (12.2 MB/s) - Read error at byte 146971864/7416743936 (Success). Retrying I'm a bit lot so i'm wondering if that's expected/known behavior, if a workaround is possible.... Let me know if a bug report might be a better place to dig deeper or not or if you want additional information... or if I missed a closed bug. Thanks ! -------------- next part -------------- An HTML attachment was scrubbed... URL: From yuuta.takanashi at aol.com Tue Sep 8 16:24:38 2020 From: yuuta.takanashi at aol.com (yuuta.takanashi at aol.com) Date: Tue, 8 Sep 2020 16:24:38 +0000 (UTC) Subject: Tripleo Standalone Deployment Issues - Reproduceable References: <432018575.3487968.1599582278275.ref@mail.yahoo.com> Message-ID: <432018575.3487968.1599582278275@mail.yahoo.com> Hi Experts, I am facing on below issues while following guide of Tripleo Standalone Deployment: https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/standalone.html Issue 1) Using custom domain will cause vm instantiation error after machine reboot Description =========== After Openstack installed, it's adding centos82.localdomain on /etc/hosts and causing OVN Southbound (ovn-sbctl show) using this centos82.localdomain. 
But after the machine rebooted, "ovn-sbctl show" is showing the correct fqdn: centos82.domain.tld, and it is causing VM instantiate error (Refusing to bind port due to no OVN chassis for host: centos82.localdomain). Steps to reproduce ================== Base installation: https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/standalone.html Tested on Centos 8.2 latest, tested several times on Train and Ussuri. 100% reproduceable. Set the custom FQDN hostname hostnamectl set-hostname centos82.domain.tld hostnamectl set-hostname centos82.domain.tld --transient    (also tested with or without this line) cat /etc/hosts 127.0.0.1       centos82.domain.tld localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1             centos82.domain.tld localhost localhost.localdomain localhost6 localhost6.localdomain6 192.168.156.82  centos82.domain.tld also tested with hostnamectl set-hostname centos82.domain.tld hostnamectl set-hostname centos82.domain.tld --transient    (also tested with or without this line) cat /etc/hosts 127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1             localhost localhost.localdomain localhost6 localhost6.localdomain6 192.168.156.82  centos82.domain.tld centos82 also tested with hostnamectl set-hostname centos82.domain.tld hostnamectl set-hostname centos82.domain.tld --transient    (also tested with or without this line) cat /etc/hosts 127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1             localhost localhost.localdomain localhost6 localhost6.localdomain6 192.168.156.82  centos82.domain.tld Part of standalone_parameters.yaml for domain:   # domain name used by the host   NeutronDnsDomain: domain.tld        Follow the guide to install (without ceph) (Issue 1B) with ceph was having another error, just tested once on Train: fatal: [undercloud]: FAILED! => {     "ceph_ansible_std_out_err": [         "Using /home/stack/standalone-ansible-huva1ccw/ansible.cfg as config file",         "ERROR! the playbook: /usr/share/ceph-ansible/site-container.yml.sample could not be found"     ],     "failed_when_result": true } All will be installed successfully, and able to instantiate VMs and these VMs are able to ping to each other. Then do "reboot" on this standalone host/server. Do instantiate a new VM, and then this new VM status will be ERROR. On Neutron logs: Refusing to bind port due to no OVN chassis for host: centos82.localdomain bind_port Found the cause on OVN: Previous after installed (before reboot) [root at centos82 /]# ovn-sbctl show Chassis ""     hostname: centos82.localdomain     Encap geneve         ip: "192.168.156.82"         options: {csum="true"} [root at centos82 /]# After reboot [root at centos82 /]# ovn-sbctl show Chassis ""     hostname: centos82.domain.tld     Encap geneve         ip: "192.168.156.82"         options: {csum="true"} [root at centos82 /]# So that the OVN seems could not bind the port to different hostname (that already changed). Environment =========== 1. Exact version of OpenStack you are running: Any, tested Train and Ussuri 2. 
Centos 8.2 latest update Others (without ceph): https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/standalone.html Workaround ========== Don't use custom domain, instead use "localdomain" as on the document hostnamectl set-hostname centos82.localdomain [root at centos82 ~]# cat /etc/hosts 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6 192.168.156.82 centos82.localdomain centos82 Deploy, test instantiate, reboot, test instantiate, and working perfectly fine. Question: For custom domain, where did I do wrongly? Is it expected or kind of bugs? Issue 2) This is not Openstack issue, perhaps bugs or perhaps my issue, but it is still related with the topic on Tripleo Standalone Deployment. On the same Standalone Deployment Guide (https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/standalone.html) with Centos 7.6 or 7.7 with "python-tripleoclient" and this has dependency of Docker 1.13.1, encounters kind of Dns querying issue which I posted here: https://serverfault.com/questions/1032816/centos-7-and-docker-1-13-1-error-timeout-exceeded-while-awaiting-headers-no Perhaps if anybody knows how to resolve this. Also, just wondering is there any way for "python-tripleoclient" to use newer docker-ce 1.19.x instead of docker 1.13.1? ThanksBest regards, TY -------------- next part -------------- An HTML attachment was scrubbed... URL: From johfulto at redhat.com Tue Sep 8 18:19:03 2020 From: johfulto at redhat.com (John Fulton) Date: Tue, 8 Sep 2020 14:19:03 -0400 Subject: Tripleo Standalone Deployment Issues - Reproduceable In-Reply-To: <432018575.3487968.1599582278275@mail.yahoo.com> References: <432018575.3487968.1599582278275.ref@mail.yahoo.com> <432018575.3487968.1599582278275@mail.yahoo.com> Message-ID: On Tue, Sep 8, 2020 at 2:07 PM wrote: > > Hi Experts, > > I am facing on below issues while following guide of Tripleo Standalone Deployment: > https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/standalone.html > > Issue 1) Using custom domain will cause vm instantiation error after machine reboot > > > Description > =========== > After Openstack installed, it's adding centos82.localdomain on /etc/hosts and causing OVN Southbound (ovn-sbctl show) using this centos82.localdomain. But after the machine rebooted, "ovn-sbctl show" is showing the correct fqdn: centos82.domain.tld, and it is causing VM instantiate error (Refusing to bind port due to no OVN chassis for host: centos82.localdomain). > > > Steps to reproduce > ================== > Base installation: https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/standalone.html > Tested on Centos 8.2 latest, tested several times on Train and Ussuri. > 100% reproduceable. 
> > > Set the custom FQDN hostname > > hostnamectl set-hostname centos82.domain.tld > hostnamectl set-hostname centos82.domain.tld --transient (also tested with or without this line) > > cat /etc/hosts > 127.0.0.1 centos82.domain.tld localhost localhost.localdomain localhost4 localhost4.localdomain4 > ::1 centos82.domain.tld localhost localhost.localdomain localhost6 localhost6.localdomain6 > 192.168.156.82 centos82.domain.tld > > also tested with > > hostnamectl set-hostname centos82.domain.tld > hostnamectl set-hostname centos82.domain.tld --transient (also tested with or without this line) > > cat /etc/hosts > 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 > ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 > 192.168.156.82 centos82.domain.tld centos82 > > also tested with > > hostnamectl set-hostname centos82.domain.tld > hostnamectl set-hostname centos82.domain.tld --transient (also tested with or without this line) > > cat /etc/hosts > 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 > ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 > 192.168.156.82 centos82.domain.tld > > > Part of standalone_parameters.yaml for domain: > # domain name used by the host > NeutronDnsDomain: domain.tld > > > > Follow the guide to install (without ceph) > > (Issue 1B) with ceph was having another error, just tested once on Train: > fatal: [undercloud]: FAILED! => { > "ceph_ansible_std_out_err": [ > "Using /home/stack/standalone-ansible-huva1ccw/ansible.cfg as config file", > "ERROR! the playbook: /usr/share/ceph-ansible/site-container.yml.sample could not be found" > ], > "failed_when_result": true > } It looks like ceph-ansible was executed by tripleo but that ceph-ansible isn't installed. Either install ceph-ansible or ensure your deployment command does not include: -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \ John > > > > > All will be installed successfully, and able to instantiate VMs and these VMs are able to ping to each other. > > Then do "reboot" on this standalone host/server. > > Do instantiate a new VM, and then this new VM status will be ERROR. > > On Neutron logs: > Refusing to bind port due to no OVN chassis for host: centos82.localdomain bind_port > > > > Found the cause on OVN: > > Previous after installed (before reboot) > > [root at centos82 /]# ovn-sbctl show > Chassis "" > hostname: centos82.localdomain > Encap geneve > ip: "192.168.156.82" > options: {csum="true"} > [root at centos82 /]# > > > After reboot > > [root at centos82 /]# ovn-sbctl show > Chassis "" > hostname: centos82.domain.tld > Encap geneve > ip: "192.168.156.82" > options: {csum="true"} > [root at centos82 /]# > > > So that the OVN seems could not bind the port to different hostname (that already changed). > > > Environment > =========== > 1. Exact version of OpenStack you are running: Any, tested Train and Ussuri > 2. 
Centos 8.2 latest update > Others (without ceph): https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/standalone.html > > > > Workaround > ========== > Don't use custom domain, instead use "localdomain" as on the document > > hostnamectl set-hostname centos82.localdomain > > [root at centos82 ~]# cat /etc/hosts > 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 > ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 > 192.168.156.82 centos82.localdomain centos82 > > Deploy, test instantiate, reboot, test instantiate, and working perfectly fine. > > > Question: For custom domain, where did I do wrongly? Is it expected or kind of bugs? > > > > > > Issue 2) This is not Openstack issue, perhaps bugs or perhaps my issue, but it is still related with the topic on Tripleo Standalone Deployment. > > On the same Standalone Deployment Guide (https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/standalone.html) > with Centos 7.6 or 7.7 with "python-tripleoclient" and this has dependency of Docker 1.13.1, encounters kind of Dns querying issue which I posted here: https://serverfault.com/questions/1032816/centos-7-and-docker-1-13-1-error-timeout-exceeded-while-awaiting-headers-no > Perhaps if anybody knows how to resolve this. > Also, just wondering is there any way for "python-tripleoclient" to use newer docker-ce 1.19.x instead of docker 1.13.1? > > > > > Thanks > Best regards, > TY > From dwakefi2 at gmu.edu Tue Sep 8 18:51:20 2020 From: dwakefi2 at gmu.edu (Thomas Wakefield) Date: Tue, 8 Sep 2020 18:51:20 +0000 Subject: Kolla-ansible ironic Message-ID: All- We are new to using OpenStack and are testing out Kolla-ansible with hopes of using Ironic as a deployment tool. Our issue is we can’t use the openstack baremetal command, it’s not found after deployment. Our current test environment is built using Train on CentOS 7. And all other basic OpenStack functionality seems to be working with our Kolla install (nova, glance, horizon, etc). We followed these docs, https://docs.openstack.org/kolla-ansible/train/reference/bare-metal/ironic-guide.html , but when we get to running any “openstack baremetal” commands we don’t seem to have the baremetal commands available in openstack. 
Globals.yml lines that should be relavent: enable_horizon_ironic: "{{ enable_ironic | bool }}" enable_ironic: "yes" enable_ironic_ipxe: "yes" enable_ironic_neutron_agent: "{{ enable_neutron | bool and enable_ironic | bool }}" enable_ironic_pxe_uefi: "no" #enable_iscsid: "{{ (enable_cinder | bool and enable_cinder_backend_iscsi | bool) or enable_ironic | bool }}" ironic_dnsmasq_interface: "em1" # The following value must be set when enabling ironic, ironic_dnsmasq_dhcp_range: "192.168.2.230,192.168.2.239" ironic_dnsmasq_boot_file: "pxelinux.0" ironic_cleaning_network: "demo-net" Ironic is listed as an installed service, but you can see the baremetal commands are not found: root at orc-os5:~## openstack service list +----------------------------------+------------------+-------------------------+ | ID | Name | Type | +----------------------------------+------------------+-------------------------+ | 0e5119acbf384714ab11520fadce36bb | nova_legacy | compute_legacy | | 2ed83015047249f38b782901e03bcfc1 | ironic-inspector | baremetal-introspection | | 5d7aabf15bdc415387fac54fa1ca21df | ironic | baremetal | | 6d05cdce019347e9940389abed959ffb | neutron | network | | 7d9485969e504b2e90273af75e9b1713 | cinderv3 | volumev3 | | a11dc04e83ed4d9ba65474b9de947d1b | keystone | identity | | ad0c2db47b414b34b86a5f6a5aca597c | glance | image | | dcbbc90813714c989b82bece1c0d9d9f | nova | compute | | de0ee6b55486495296516e07d2e9e97c | heat | orchestration | | df605d671d88496d91530fbc01573589 | cinderv2 | volumev2 | | e211294ca78a418ea34d9c29d86b05f1 | placement | placement | | f62ba90bc0b94cb9b3d573605f800a1f | heat-cfn | cloudformation | +----------------------------------+------------------+-------------------------+ root at orc-os5:~## openstack baremetal openstack: 'baremetal' is not an openstack command. See 'openstack --help'. Did you mean one of these? credential create credential delete credential list credential set credential show Is there anything else that needs configured to activate ironic? Thanks in advance. Tom -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Tue Sep 8 19:10:34 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 8 Sep 2020 21:10:34 +0200 Subject: Kolla-ansible ironic In-Reply-To: References: Message-ID: The openstack CLI only includes support for core OpenStack services. Support for additional services is implemented through plugins, generally included in the client package of each service. Run `pip install python-ironicclient` and you will get access to `openstack baremetal` commands. On Tue, 8 Sep 2020 at 20:51, Thomas Wakefield wrote: > All- > > > We are new to using OpenStack and are testing out Kolla-ansible with hopes > of using Ironic as a deployment tool. Our issue is we can’t use the > openstack baremetal command, it’s not found after deployment. Our current > test environment is built using Train on CentOS 7. And all other basic > OpenStack functionality seems to be working with our Kolla install (nova, > glance, horizon, etc). > > > > We followed these docs, > https://docs.openstack.org/kolla-ansible/train/reference/bare-metal/ironic-guide.html , > but when we get to running any “openstack baremetal” commands we don’t seem > to have the baremetal commands available in openstack. 
> > > > Globals.yml lines that should be relavent: > > > > enable_horizon_*ironic*: "{{ enable_*ironic* | bool }}" > > enable_*ironic*: "yes" > > enable_*ironic*_ipxe: "yes" > > enable_*ironic*_neutron_agent: "{{ enable_neutron | bool and enable_ > *ironic* | bool }}" > > enable_*ironic*_pxe_uefi: "no" > > #enable_iscsid: "{{ (enable_cinder | bool and enable_cinder_backend_iscsi > | bool) or enable_*ironic* | bool }}" > > *ironic*_dnsmasq_interface: "em1" > > # The following value must be set when enabling *ironic*, > > *ironic*_dnsmasq_dhcp_range: "192.168.2.230,192.168.2.239" > > *ironic*_dnsmasq_boot_file: "pxelinux.0" > > *ironic*_cleaning_network: "demo-net" > > > > > > Ironic is listed as an installed service, but you can see the baremetal > commands are not found: > > root at orc-os5:~## openstack service list > > > +----------------------------------+------------------+-------------------------+ > > | ID | Name | Type > | > > > +----------------------------------+------------------+-------------------------+ > > | 0e5119acbf384714ab11520fadce36bb | nova_legacy | compute_legacy > | > > | 2ed83015047249f38b782901e03bcfc1 | ironic-inspector | > baremetal-introspection | > > | 5d7aabf15bdc415387fac54fa1ca21df | ironic | baremetal > | > > | 6d05cdce019347e9940389abed959ffb | neutron | network > | > > | 7d9485969e504b2e90273af75e9b1713 | cinderv3 | volumev3 > | > > | a11dc04e83ed4d9ba65474b9de947d1b | keystone | identity > | > > | ad0c2db47b414b34b86a5f6a5aca597c | glance | image > | > > | dcbbc90813714c989b82bece1c0d9d9f | nova | compute > | > > | de0ee6b55486495296516e07d2e9e97c | heat | orchestration > | > > | df605d671d88496d91530fbc01573589 | cinderv2 | volumev2 > | > > | e211294ca78a418ea34d9c29d86b05f1 | placement | placement > | > > | f62ba90bc0b94cb9b3d573605f800a1f | heat-cfn | cloudformation > | > > > +----------------------------------+------------------+-------------------------+ > > root at orc-os5:~## openstack baremetal > > openstack: 'baremetal' is not an openstack command. See 'openstack --help'. > > Did you mean one of these? > > credential create > > credential delete > > credential list > > credential set > > credential show > > > > > > Is there anything else that needs configured to activate ironic? > > > > Thanks in advance. > > Tom > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dwakefi2 at gmu.edu Tue Sep 8 19:14:59 2020 From: dwakefi2 at gmu.edu (Thomas Wakefield) Date: Tue, 8 Sep 2020 19:14:59 +0000 Subject: Kolla-ansible ironic In-Reply-To: References: Message-ID: Pierre- Huge thanks, that seems to have solved it. -Tom From: Pierre Riteau Date: Tuesday, September 8, 2020 at 3:11 PM To: Thomas Wakefield Cc: "openstack-discuss at lists.openstack.org" Subject: Re: Kolla-ansible ironic The openstack CLI only includes support for core OpenStack services. Support for additional services is implemented through plugins, generally included in the client package of each service. Run `pip install python-ironicclient` and you will get access to `openstack baremetal` commands. On Tue, 8 Sep 2020 at 20:51, Thomas Wakefield > wrote: All- We are new to using OpenStack and are testing out Kolla-ansible with hopes of using Ironic as a deployment tool. Our issue is we can’t use the openstack baremetal command, it’s not found after deployment. Our current test environment is built using Train on CentOS 7. And all other basic OpenStack functionality seems to be working with our Kolla install (nova, glance, horizon, etc). 
We followed these docs, https://docs.openstack.org/kolla-ansible/train/reference/bare-metal/ironic-guide.html , but when we get to running any “openstack baremetal” commands we don’t seem to have the baremetal commands available in openstack. Globals.yml lines that should be relavent: enable_horizon_ironic: "{{ enable_ironic | bool }}" enable_ironic: "yes" enable_ironic_ipxe: "yes" enable_ironic_neutron_agent: "{{ enable_neutron | bool and enable_ironic | bool }}" enable_ironic_pxe_uefi: "no" #enable_iscsid: "{{ (enable_cinder | bool and enable_cinder_backend_iscsi | bool) or enable_ironic | bool }}" ironic_dnsmasq_interface: "em1" # The following value must be set when enabling ironic, ironic_dnsmasq_dhcp_range: "192.168.2.230,192.168.2.239" ironic_dnsmasq_boot_file: "pxelinux.0" ironic_cleaning_network: "demo-net" Ironic is listed as an installed service, but you can see the baremetal commands are not found: root at orc-os5:~## openstack service list +----------------------------------+------------------+-------------------------+ | ID | Name | Type | +----------------------------------+------------------+-------------------------+ | 0e5119acbf384714ab11520fadce36bb | nova_legacy | compute_legacy | | 2ed83015047249f38b782901e03bcfc1 | ironic-inspector | baremetal-introspection | | 5d7aabf15bdc415387fac54fa1ca21df | ironic | baremetal | | 6d05cdce019347e9940389abed959ffb | neutron | network | | 7d9485969e504b2e90273af75e9b1713 | cinderv3 | volumev3 | | a11dc04e83ed4d9ba65474b9de947d1b | keystone | identity | | ad0c2db47b414b34b86a5f6a5aca597c | glance | image | | dcbbc90813714c989b82bece1c0d9d9f | nova | compute | | de0ee6b55486495296516e07d2e9e97c | heat | orchestration | | df605d671d88496d91530fbc01573589 | cinderv2 | volumev2 | | e211294ca78a418ea34d9c29d86b05f1 | placement | placement | | f62ba90bc0b94cb9b3d573605f800a1f | heat-cfn | cloudformation | +----------------------------------+------------------+-------------------------+ root at orc-os5:~## openstack baremetal openstack: 'baremetal' is not an openstack command. See 'openstack --help'. Did you mean one of these? credential create credential delete credential list credential set credential show Is there anything else that needs configured to activate ironic? Thanks in advance. Tom -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Tue Sep 8 19:18:23 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 8 Sep 2020 21:18:23 +0200 Subject: [blazar] Wallaby PTG Message-ID: Hello, As mentioned in a recent IRC meeting [1], I likely won't be able to attend the PTG this year. Tetsuro Nakamura who is the only other active core reviewer at the moment said he won't have time either. Given that we still have lots of pending tasks on Blazar from the last PTG [2], I propose that we don't plan to meet during the PTG in October 2020. Best wishes, Pierre Riteau (priteau) [1] http://eavesdrop.openstack.org/meetings/blazar/2020/blazar.2020-08-18-09.00.log.html [2] https://etherpad.opendev.org/p/blazar-ptg-victoria -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sorrison at gmail.com Tue Sep 8 22:36:19 2020 From: sorrison at gmail.com (Sam Morrison) Date: Wed, 9 Sep 2020 08:36:19 +1000 Subject: [neutron][networking-midonet] Maintainers needed In-Reply-To: References: <0AC5AC07-E97E-43CC-B344-A3E992B8CCA4@netways.de> <610412AF-AADF-44BD-ABA2-BA289B7C8F8A@redhat.com> <5E2F5826-559E-42E9-84C5-FA708E5A122A@gmail.com> <43C4AF2B-C5C0-40EB-B621-AC6799471D01@gmail.com> <92959221-0353-4D48-8726-8FE71AFEA652@gmail.com> <4D778DBF-F505-462F-B85D-0B372085FA72@gmail.com> Message-ID: <5B9D2CB0-8B81-4533-A072-9A51B4A44364@gmail.com> > On 8 Sep 2020, at 3:13 pm, Sam Morrison wrote: > > Hi Yamamoto, > > >> On 4 Sep 2020, at 6:47 pm, Takashi Yamamoto > wrote: > >> i'm talking to our infra folks but it might take longer than i hoped. >> if you or someone else can provide a public repo, it might be faster. >> (i have looked at launchpad PPA while ago. but it didn't seem >> straightforward given the complex build machinary in midonet.) > > Yeah that’s no problem, I’ve set up a repo with the latest midonet debs in it and happy to use that for the time being. > >> >>> >>> I’m not sure why the pep8 job is failing, it is complaining about pecan which makes me think this is an issue with neutron itself? Kinda stuck on this one, it’s probably something silly. >> >> probably. > > Yeah this looks like a neutron or neutron-lib issue > >> >>> >>> For the py3 unit tests they are now failing due to db migration errors in tap-as-a-service, l2-gateway and vpnaas issues I think caused by neutron getting rid of the liberty alembic branch and so we need to squash these on these projects too. >> >> this thing? https://review.opendev.org/#/c/749866/ > > Yeah that fixed that issue. > > > I have been working to get everything fixed in this review [1] > > The pep8 job is working but not in the gate due to neutron issues [2] > The py36/py38 jobs have 2 tests failing both relating to tap-as-a-service which I don’t really have any idea about, never used it. [3] These are failing because of this patch on tap-as-a-service https://opendev.org/x/tap-as-a-service/commit/8332a396b1b046eb370c0cb377d836d0c6b6d6ca Really have no idea how this works, does anyone use tap-as-a-service with midonet and can help me fix it, else I’m wondering if we disable tests for taas and make it an unsupported feature for now. Sam > The tempest aio job is working well now, I’m not sure what tempest tests were run before but it’s just doing what ever is the default at the moment. > The tempest multinode job isn’t working due to what I think is networking issues between the 2 nodes. I don’t really know what I’m doing here so any pointers would be helpful. [4] > The grenade job is also failing because I also need to put these fixes on the stable/ussuri branch to make it work so will need to figure that out too > > Cheers, > Sam > > [1] https://review.opendev.org/#/c/749857/ > [2] https://zuul.opendev.org/t/openstack/build/e94e873cbf0443c0a7f25ffe76b3b00b > [3] https://b1a2669063d97482275a-410cecb8410320c66fb802e0a530979a.ssl.cf5.rackcdn.com/749857/18/check/openstack-tox-py36/0344651/testr_results.html > [4] https://zuul.opendev.org/t/openstack/build/61f6dd3dc3d74a81b7a3f5968b4d8c72 > > >> >>> >>> >>> >>> I can now start to look into the devstack zuul jobs. 
>>> >>> Cheers, >>> Sam >>> >>> >>> [1] https://github.com/NeCTAR-RC/networking-midonet/commits/devstack >>> [2] https://github.com/midonet/midonet/pull/9 >>> >>> >>> >>> >>>> On 1 Sep 2020, at 4:03 pm, Sam Morrison > wrote: >>>> >>>> >>>> >>>>> On 1 Sep 2020, at 2:59 pm, Takashi Yamamoto > wrote: >>>>> >>>>> hi, >>>>> >>>>> On Tue, Sep 1, 2020 at 1:39 PM Sam Morrison > wrote: >>>>>> >>>>>> >>>>>> >>>>>>> On 1 Sep 2020, at 11:49 am, Takashi Yamamoto > wrote: >>>>>>> >>>>>>> Sebastian, Sam, >>>>>>> >>>>>>> thank you for speaking up. >>>>>>> >>>>>>> as Slawek said, the first (and probably the biggest) thing is to fix the ci. >>>>>>> the major part for it is to make midonet itself to run on ubuntu >>>>>>> version used by the ci. (18.04, or maybe directly to 20.04) >>>>>>> https://midonet.atlassian.net/browse/MNA-1344 >>>>>>> iirc, the remaining blockers are: >>>>>>> * libreswan (used by vpnaas) >>>>>>> * vpp (used by fip64) >>>>>>> maybe it's the easiest to drop those features along with their >>>>>>> required components, if it's acceptable for your use cases. >>>>>> >>>>>> We are running midonet-cluster and midolman on 18.04, we dropped those package dependencies from our ubuntu package to get it working. >>>>>> >>>>>> We currently have built our own and host in our internal repo but happy to help putting this upstream somehow. Can we upload them to the midonet apt repo, does it still exist? >>>>> >>>>> it still exists. but i don't think it's maintained well. >>>>> let me find and ask someone in midokura who "owns" that part of infra. >>>>> >>>>> does it also involve some package-related modifications to midonet repo, right? >>>> >>>> >>>> Yes a couple, I will send up as as pull requests to https://github.com/midonet/midonet today or tomorrow >>>> >>>> Sam >>>> >>>> >>>> >>>>> >>>>>> >>>>>> I’m keen to do the work but might need a bit of guidance to get started, >>>>>> >>>>>> Sam >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>>> >>>>>>> alternatively you might want to make midonet run in a container. (so >>>>>>> that you can run it with older ubuntu, or even a container trimmed for >>>>>>> JVM) >>>>>>> there were a few attempts to containerize midonet. >>>>>>> i think this is the latest one: https://github.com/midonet/midonet-docker >>>>>>> >>>>>>> On Fri, Aug 28, 2020 at 7:10 AM Sam Morrison > wrote: >>>>>>>> >>>>>>>> We (Nectar Research Cloud) use midonet heavily too, it works really well and we haven’t found another driver that works for us. We tried OVN but it just doesn’t scale to the size of environment we have. >>>>>>>> >>>>>>>> I’m happy to help too. >>>>>>>> >>>>>>>> Cheers, >>>>>>>> Sam >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>>> On 31 Jul 2020, at 2:06 am, Slawek Kaplonski > wrote: >>>>>>>>> >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> Thx Sebastian for stepping in to maintain the project. That is great news. >>>>>>>>> I think that at the beginning You should do 2 things: >>>>>>>>> - sync with Takashi Yamamoto (I added him to the loop) as he is probably most active current maintainer of this project, >>>>>>>>> - focus on fixing networking-midonet ci which is currently broken - all scenario jobs aren’t working fine on Ubuntu 18.04 (and we are going to move to 20.04 in this cycle), migrate jobs to zuulv3 from the legacy ones and finally add them to the ci again, >>>>>>>>> >>>>>>>>> I can of course help You with ci jobs if You need any help. Feel free to ping me on IRC or email (can be off the list). 
>>>>>>>>> >>>>>>>>>> On 29 Jul 2020, at 15:24, Sebastian Saemann > wrote: >>>>>>>>>> >>>>>>>>>> Hi Slawek, >>>>>>>>>> >>>>>>>>>> we at NETWAYS are running most of our neutron networking on top of midonet and wouldn't be too happy if it gets deprecated and removed. So we would like to take over the maintainer role for this part. >>>>>>>>>> >>>>>>>>>> Please let me know how to proceed and how we can be onboarded easily. >>>>>>>>>> >>>>>>>>>> Best regards, >>>>>>>>>> >>>>>>>>>> Sebastian >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> Sebastian Saemann >>>>>>>>>> Head of Managed Services >>>>>>>>>> >>>>>>>>>> NETWAYS Managed Services GmbH | Deutschherrnstr. 15-19 | D-90429 Nuernberg >>>>>>>>>> Tel: +49 911 92885-0 | Fax: +49 911 92885-77 >>>>>>>>>> CEO: Julian Hein, Bernd Erk | AG Nuernberg HRB25207 >>>>>>>>>> https://netways.de | sebastian.saemann at netways.de >>>>>>>>>> >>>>>>>>>> ** NETWAYS Web Services - https://nws.netways.de ** >>>>>>>>> >>>>>>>>> — >>>>>>>>> Slawek Kaplonski >>>>>>>>> Principal software engineer >>>>>>>>> Red Hat >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Sep 8 22:56:05 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 08 Sep 2020 17:56:05 -0500 Subject: [all][tc][goals] Migrate CI/CD jobs to new Ubuntu LTS Focal: Week R-4 Update In-Reply-To: <17468f8f785.1123a07e6307801.3844845549431858647@ghanshyammann.com> References: <17468f8f785.1123a07e6307801.3844845549431858647@ghanshyammann.com> Message-ID: <1746feef403.c41c848411058.2421030943223377212@ghanshyammann.com> Updates: After working more on failing one today and listing the blocking one, I think we are good to switch tox based testing today and discuss the integration testing switch tomorrow in TC office hours. > * Part1: Migrating tox base job tomorrow (8th Sept): I have checked it again and fixed many repos that are up for review and merge. Most python clients are already fixed or their fixes are up for merge so they can make it before the feature freeze on 10th. If any repo is broken then it will be pretty quick to fix by lower constraint bump (see the example under https://review.opendev.org/#/q/topic:migrate-to-focal) Even if any of the fixes miss the victoria release then those can be backported easily. I am opening the tox base jobs migration to merge: - All patches in this series https://review.opendev.org/#/c/738328/ > * Part2: Migrating devstack/tempest base job on 10th sept: We have three blocking open bugs here so I would like to discuss it in tomorrow's TC office hour also about how to proceed on this. 1. Nova: https://bugs.launchpad.net/nova/+bug/1882521 (https://bugs.launchpad.net/qemu/+bug/1894804) 2. Barbican: https://storyboard.openstack.org/#!/story/2007732 3. Ceilometer: https://storyboard.openstack.org/#!/story/2008121 -gmann ---- On Mon, 07 Sep 2020 09:29:40 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > Please find the week R-4 updates on 'Ubuntu Focal migration' community goal. Its time to force the base jobs migration which can > break the projects gate if not yet taken care of. Read below for the plan. > > Tracking: https://storyboard.openstack.org/#!/story/2007865 > > Progress: > ======= > * We are close to V-3 release and this is time we have to complete this migration otherwise doing it in RC period can add > unnecessary and last min delay. I am going to plan this migration in two-part. 
This will surely break some projects gate > which is not yet finished the migration but we have to do at some time. Please let me know if any objection to the below > plan. > > * Part1: Migrating tox base job tomorrow (8th Sept): > > ** I am going to open tox base jobs migration (doc, unit, functional, lower-constraints etc) to merge by tomorrow. which is this > series (all base patches of this): https://review.opendev.org/#/c/738328/ . > > **There are few repos still failing on requirements lower-constraints job specifically which I tried my best to fix as many as possible. > Many are ready to merge also. Please merge or work on your projects repo testing before that or fix on priority if failing. > > * Part2: Migrating devstack/tempest base job on 10th sept: > > * We have few open bugs for this which are not yet resolved, we will see how it goes but the current plan is to migrate by 10th Sept. > > ** Bug#1882521 > ** DB migration issues, > *** alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 > > > Testing Till now: > ============ > * ~200 repos gate have been tested or fixed till now. > ** https://review.opendev.org/#/q/topic:migrate-to-focal+(status:abandoned+OR+status:merged) > > * ~100 repos are under test and failing. Debugging and fixing are in progress (If you would like to help, please check your > project repos if I am late to fix them): > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open > > * ~30repos fixes ready to merge: > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open+label%3AVerified%3E%3D1%2Czuul+NOT+label%3AWorkflow%3C%3D-1 > > > Bugs Report: > ========== > > 1. Bug#1882521. (IN-PROGRESS) > There is open bug for nova/cinder where three tempest tests are failing for > volume detach operation. There is no clear root cause found yet > -https://bugs.launchpad.net/cinder/+bug/1882521 > We have skipped the tests in tempest base patch to proceed with the other > projects testing but this is blocking things for the migration. > > 2. DB migration issues (IN-PROGRESS) > * alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 > > 3. We encountered the nodeset name conflict with x/tobiko. (FIXED) > nodeset conflict is resolved now and devstack provides all focal nodes now. > > 4. Bug#1886296. (IN-PROGRESS) > pyflakes till 2.1.0 is not compatible with python 3.8 which is the default python version > on ubuntu focal[1]. With pep8 job running on focal faces the issue and fail. We need to bump > the pyflakes to 2.1.1 as min version to run pep8 jobs on py3.8. > As of now, many projects are using old hacking version so I am explicitly adding pyflakes>=2.1.1 > on the project side[2] but for the long term easy maintenance, I am doing it in 'hacking' requirements.txt[3] > nd will release a new hacking version. After that project can move to new hacking and do not need > to maintain pyflakes version compatibility. > > 5. Bug#1886298. (IN-PROGRESS) > 'Markupsafe' 1.0 is not compatible with the latest version of setuptools[4], > We need to bump the lower-constraint for Markupsafe to 1.1.1 to make it work. > There are a few more issues[5] with lower-constraint jobs which I am debugging. 
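As an illustration, the bumps described in points 4 and 5 above typically look like this on the project side (file names and exact pins vary per repo, so treat it as a sketch rather than the definitive change):

  # test-requirements.txt (only needed while the repo still pins an old hacking)
  pyflakes>=2.1.1

  # lower-constraints.txt
  MarkupSafe==1.1.1

  # requirements.txt, if MarkupSafe is a direct dependency there
  MarkupSafe>=1.1.1
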
> > > What work to be done on the project side: > ================================ > This goal is more of testing the jobs on focal and fixing bugs if any otherwise > migrate jobs by switching the nodeset to focal node sets defined in devstack. > > 1. Start a patch in your repo by making depends-on on either of below: > devstack base patch if you are using only devstack base jobs not tempest: > > Depends-on: https://review.opendev.org/#/c/731207/ > OR > tempest base patch if you are using the tempest base job (like devstack-tempest): > Depends-on: https://review.opendev.org/#/c/734700/ > > Both have depends-on on the series where I am moving unit/functional/doc/cover/nodejs tox jobs to focal. So > you can test the complete gate jobs(unit/functional/doc/integration) together. > This and its base patches - https://review.opendev.org/#/c/738328/ > > Example: https://review.opendev.org/#/c/738126/ > > 2. If none of your project jobs override the nodeset then above patch will be > testing patch(do not merge) otherwise change the nodeset to focal. > Example: https://review.opendev.org/#/c/737370/ > > 3. If the jobs are defined in branchless repo and override the nodeset then you need to override the branches > variant to adjust the nodeset so that those jobs run on Focal on victoria onwards only. If no nodeset > is overridden then devstack being branched and stable base job using bionic/xenial will take care of > this. > Example: https://review.opendev.org/#/c/744056/2 > > 4. If no updates need you can abandon the testing patch (https://review.opendev.org/#/c/744341/). If it need > updates then modify the same patch with proper commit msg, once it pass the gate then remove the Depends-On > so that you can merge your patch before base jobs are switched to focal. This way we make sure no gate downtime in > this migration. > Example: https://review.opendev.org/#/c/744056/1..2//COMMIT_MSG > > Once we finish the testing on projects side and no failure then we will merge the devstack and tempest > base patches. > > > Important things to note: > =================== > * Do not forgot to add the story and task link to your patch so that we can track it smoothly. > * Use gerrit topic 'migrate-to-focal' > * Do not backport any of the patches. > > > References: > ========= > Goal doc: https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal.html > Storyboard tracking: https://storyboard.openstack.org/#!/story/2007865 > > [1] https://github.com/PyCQA/pyflakes/issues/367 > [2] https://review.opendev.org/#/c/739315/ > [3] https://review.opendev.org/#/c/739334/ > [4] https://github.com/pallets/markupsafe/issues/116 > [5] https://zuul.opendev.org/t/openstack/build/7ecd9cf100194bc99b3b70fa1e6de032 > > -gmann > > From katonalala at gmail.com Wed Sep 9 06:52:22 2020 From: katonalala at gmail.com (Lajos Katona) Date: Wed, 9 Sep 2020 08:52:22 +0200 Subject: [neutron][networking-midonet] Maintainers needed In-Reply-To: <5B9D2CB0-8B81-4533-A072-9A51B4A44364@gmail.com> References: <0AC5AC07-E97E-43CC-B344-A3E992B8CCA4@netways.de> <610412AF-AADF-44BD-ABA2-BA289B7C8F8A@redhat.com> <5E2F5826-559E-42E9-84C5-FA708E5A122A@gmail.com> <43C4AF2B-C5C0-40EB-B621-AC6799471D01@gmail.com> <92959221-0353-4D48-8726-8FE71AFEA652@gmail.com> <4D778DBF-F505-462F-B85D-0B372085FA72@gmail.com> <5B9D2CB0-8B81-4533-A072-9A51B4A44364@gmail.com> Message-ID: Hi, Could you please point to the issue with taas? Regards Lajos (lajoskatona) Sam Morrison ezt írta (időpont: 2020. szept. 
9., Sze, 0:44): > > > On 8 Sep 2020, at 3:13 pm, Sam Morrison wrote: > > Hi Yamamoto, > > > On 4 Sep 2020, at 6:47 pm, Takashi Yamamoto wrote: > > > i'm talking to our infra folks but it might take longer than i hoped. > if you or someone else can provide a public repo, it might be faster. > (i have looked at launchpad PPA while ago. but it didn't seem > straightforward given the complex build machinary in midonet.) > > > Yeah that’s no problem, I’ve set up a repo with the latest midonet debs in > it and happy to use that for the time being. > > > > I’m not sure why the pep8 job is failing, it is complaining about pecan > which makes me think this is an issue with neutron itself? Kinda stuck on > this one, it’s probably something silly. > > > probably. > > > Yeah this looks like a neutron or neutron-lib issue > > > > For the py3 unit tests they are now failing due to db migration errors in > tap-as-a-service, l2-gateway and vpnaas issues I think caused by neutron > getting rid of the liberty alembic branch and so we need to squash these on > these projects too. > > > this thing? https://review.opendev.org/#/c/749866/ > > > Yeah that fixed that issue. > > > I have been working to get everything fixed in this review [1] > > The pep8 job is working but not in the gate due to neutron issues [2] > The py36/py38 jobs have 2 tests failing both relating to tap-as-a-service > which I don’t really have any idea about, never used it. [3] > > > These are failing because of this patch on tap-as-a-service > > https://opendev.org/x/tap-as-a-service/commit/8332a396b1b046eb370c0cb377d836d0c6b6d6ca > > Really have no idea how this works, does anyone use tap-as-a-service with > midonet and can help me fix it, else I’m wondering if we disable tests for > taas and make it an unsupported feature for now. > > Sam > > > > The tempest aio job is working well now, I’m not sure what tempest tests > were run before but it’s just doing what ever is the default at the moment. > The tempest multinode job isn’t working due to what I think is networking > issues between the 2 nodes. I don’t really know what I’m doing here so any > pointers would be helpful. [4] > The grenade job is also failing because I also need to put these fixes on > the stable/ussuri branch to make it work so will need to figure that out too > > Cheers, > Sam > > [1] https://review.opendev.org/#/c/749857/ > [2] > https://zuul.opendev.org/t/openstack/build/e94e873cbf0443c0a7f25ffe76b3b00b > [3] > https://b1a2669063d97482275a-410cecb8410320c66fb802e0a530979a.ssl.cf5.rackcdn.com/749857/18/check/openstack-tox-py36/0344651/testr_results.html > [4] > https://zuul.opendev.org/t/openstack/build/61f6dd3dc3d74a81b7a3f5968b4d8c72 > > > > > > > I can now start to look into the devstack zuul jobs. > > Cheers, > Sam > > > [1] https://github.com/NeCTAR-RC/networking-midonet/commits/devstack > [2] https://github.com/midonet/midonet/pull/9 > > > > > On 1 Sep 2020, at 4:03 pm, Sam Morrison wrote: > > > > On 1 Sep 2020, at 2:59 pm, Takashi Yamamoto wrote: > > hi, > > On Tue, Sep 1, 2020 at 1:39 PM Sam Morrison wrote: > > > > > On 1 Sep 2020, at 11:49 am, Takashi Yamamoto > wrote: > > Sebastian, Sam, > > thank you for speaking up. > > as Slawek said, the first (and probably the biggest) thing is to fix the > ci. > the major part for it is to make midonet itself to run on ubuntu > version used by the ci. 
(18.04, or maybe directly to 20.04) > https://midonet.atlassian.net/browse/MNA-1344 > iirc, the remaining blockers are: > * libreswan (used by vpnaas) > * vpp (used by fip64) > maybe it's the easiest to drop those features along with their > required components, if it's acceptable for your use cases. > > > We are running midonet-cluster and midolman on 18.04, we dropped those > package dependencies from our ubuntu package to get it working. > > We currently have built our own and host in our internal repo but happy to > help putting this upstream somehow. Can we upload them to the midonet apt > repo, does it still exist? > > > it still exists. but i don't think it's maintained well. > let me find and ask someone in midokura who "owns" that part of infra. > > does it also involve some package-related modifications to midonet repo, > right? > > > > Yes a couple, I will send up as as pull requests to > https://github.com/midonet/midonet today or tomorrow > > Sam > > > > > > I’m keen to do the work but might need a bit of guidance to get started, > > Sam > > > > > > > > alternatively you might want to make midonet run in a container. (so > that you can run it with older ubuntu, or even a container trimmed for > JVM) > there were a few attempts to containerize midonet. > i think this is the latest one: https://github.com/midonet/midonet-docker > > On Fri, Aug 28, 2020 at 7:10 AM Sam Morrison wrote: > > > We (Nectar Research Cloud) use midonet heavily too, it works really well > and we haven’t found another driver that works for us. We tried OVN but it > just doesn’t scale to the size of environment we have. > > I’m happy to help too. > > Cheers, > Sam > > > > On 31 Jul 2020, at 2:06 am, Slawek Kaplonski wrote: > > Hi, > > Thx Sebastian for stepping in to maintain the project. That is great news. > I think that at the beginning You should do 2 things: > - sync with Takashi Yamamoto (I added him to the loop) as he is probably > most active current maintainer of this project, > - focus on fixing networking-midonet ci which is currently broken - all > scenario jobs aren’t working fine on Ubuntu 18.04 (and we are going to move > to 20.04 in this cycle), migrate jobs to zuulv3 from the legacy ones and > finally add them to the ci again, > > I can of course help You with ci jobs if You need any help. Feel free to > ping me on IRC or email (can be off the list). > > On 29 Jul 2020, at 15:24, Sebastian Saemann > wrote: > > Hi Slawek, > > we at NETWAYS are running most of our neutron networking on top of midonet > and wouldn't be too happy if it gets deprecated and removed. So we would > like to take over the maintainer role for this part. > > Please let me know how to proceed and how we can be onboarded easily. > > Best regards, > > Sebastian > > -- > Sebastian Saemann > Head of Managed Services > > NETWAYS Managed Services GmbH | Deutschherrnstr. 15-19 | D-90429 Nuernberg > Tel: +49 911 92885-0 | Fax: +49 911 92885-77 > CEO: Julian Hein, Bernd Erk | AG Nuernberg HRB25207 > https://netways.de | sebastian.saemann at netways.de > > ** NETWAYS Web Services - https://nws.netways.de ** > > > — > Slawek Kaplonski > Principal software engineer > Red Hat > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From skaplons at redhat.com Wed Sep 9 06:56:39 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 9 Sep 2020 08:56:39 +0200 Subject: [neutron] CI meeting 09/09/2020 cancelled Message-ID: <20200909065639.45fheyj5kworzgok@skaplons-mac> Hi, I'm off today and I will not be able to chair our CI meeting this week. So lets cancel it and see You on the meeting next week. -- Slawek Kaplonski Principal software engineer Red Hat From thierry at openstack.org Wed Sep 9 07:29:23 2020 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 9 Sep 2020 09:29:23 +0200 Subject: [largescale-sig] Next meeting: September 9, 16utc Message-ID: <5983106a-fda8-2e5f-b4b3-1fe609f5843d@openstack.org> Hi everyone, Our next meeting will be a EU-US-friendly meeting, today Wednesday, September 9 at 16 UTC[1] in the #openstack-meeting-3 channel on IRC: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20200909T16 Feel free to add topics to our agenda at: https://etherpad.openstack.org/p/large-scale-sig-meeting A reminder of the TODOs we had from last meeting, in case you have time to make progress on them: - all to contact US large deployment friends to invite them to next EU-US meeting - ttx to request Forum/PTG sessions - belmoreira, ttx to push for OSops resurrection - all to describe briefly how you solved metrics/billing in your deployment in https://etherpad.openstack.org/p/large-scale-sig-documentation - masahito to push latest patches to oslo.metrics - ttx to look into a basic test framework for oslo,metrics - amorin to see if oslo.metrics could be tested at OVH Talk to you all later, -- Thierry Carrez From skaplons at redhat.com Wed Sep 9 07:50:42 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 9 Sep 2020 09:50:42 +0200 Subject: [neutron] Flow drop on agent restart with openvswitch firewall driver In-Reply-To: References: Message-ID: <20200909075042.qyxbnq7li2zm5oo4@skaplons-mac> Hi, On Tue, Sep 08, 2020 at 02:46:29PM +0000, Alexis Deberg wrote: > Hi All, > > I'm looking for ideas as we need to upgrade our Neutron deployment and it looks like it would impact workloads a bit much for now to do so and i'm no master of the neutron code... > > We're running Neutron 14.0.2 with ml2 plugin and firewall_driver set as openvswitch. drop_flows_on_start is default False. > > Reading at some old bug reports my understanding was that a restart of the neutron-openvswitch-agent should not impact existing flows and be seamless, but this is not what I'm experiencing as I see some temporary drop(s) around when ovs-fctl del-flows/add-flows is called on br-int (either east-west traffic or north-south). I tried switching to iptables_hybrid driver instead and I don't see the issue in that case. > > e.g when a wget download is happening on an instance while the agent is restarting, I see the following: 2020-09-08 14:26:09 (12.2 MB/s) - Read error at byte 146971864/7416743936 (Success). Retrying > > I'm a bit lot so i'm wondering if that's expected/known behavior, if a workaround is possible.... I don't think it is expected behaviour. All flows should be first installed with new cookie id and then old ones should be removed. And that shouldn't impact existing traffic. > > Let me know if a bug report might be a better place to dig deeper or not or if you want additional information... or if I missed a closed bug. Yes, please report bug on Neutron's launchpad. And, if that is possible, please also try to reproduce the issue on current master branch (maybe deployed from devstack simply). 
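If it helps while reproducing, one way to watch the cookie-based flow replacement during an agent restart is something like the following (a sketch; adjust or drop -O OpenFlow13 depending on the protocols configured on br-int):

  # count flows per cookie; during a clean restart an old and a new cookie
  # should briefly coexist rather than the table being flushed
  sudo ovs-ofctl -O OpenFlow13 dump-flows br-int | grep -o 'cookie=[^,]*' | sort | uniq -c

  # overall flow count over time while restarting neutron-openvswitch-agent
  watch -n1 "sudo ovs-ofctl -O OpenFlow13 dump-flows br-int | wc -l"
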
> > Thanks ! -- Slawek Kaplonski Principal software engineer Red Hat From josephine.seifert at secustack.com Wed Sep 9 07:51:00 2020 From: josephine.seifert at secustack.com (Josephine Seifert) Date: Wed, 9 Sep 2020 09:51:00 +0200 Subject: [Image Encryption] No meeting next week Message-ID: Hi, I will be on vacation next week, so there will be no meeting. Josephine(Luzi) From bdobreli at redhat.com Wed Sep 9 08:25:08 2020 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Wed, 9 Sep 2020 10:25:08 +0200 Subject: [tripleo][ansible] ensure python dependencies on hosts for modules/plugins Message-ID: <4acfd8e1-5e9e-433e-3b1f-bd3e8a159033@redhat.com> Since most of tripleo-ansible modules do 'import foobar', we should ensure that we have the corresponding python packages installed on target hosts. Some of them, like python3-dmidecode may be in base Centos8 images. But some may not, especially for custom deployed-servers provided by users for deployments. Those packages must be tracked and ensured to be installed by tripleo (preferred), or validated deploy-time (nah...), or at least documented as the modules get created or changed by devs. That also applies to adding action plugins' deps for python-tripleoclient or tripleo-ansible perhaps. Shall we write a spec for that or just address that as a bug [0]? [0] https://bugs.launchpad.net/tripleo/+bug/1894957 -- Best regards, Bogdan Dobrelya, Irc #bogdando From balazs.gibizer at est.tech Wed Sep 9 10:26:50 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Wed, 09 Sep 2020 12:26:50 +0200 Subject: [nova] Feature Freeze is coming Message-ID: Hi, Nova will hit Feature Freeze for Victoria release on Thursday. Feature patches that are approved before end of Thursday can be rechecked and rebased to get them merged. Feature authors having patches that are not approved until the freeze but are close can request Feature Freeze Exception on the ML. Granting FFE requires ready patches and a core sponsor who will review those patches. Please observer that we only have 2 weeks before RC1 so time are short for FFEs. Cheers, gibi From sorrison at gmail.com Wed Sep 9 10:49:19 2020 From: sorrison at gmail.com (Sam Morrison) Date: Wed, 9 Sep 2020 20:49:19 +1000 Subject: [neutron][networking-midonet] Maintainers needed In-Reply-To: References: <0AC5AC07-E97E-43CC-B344-A3E992B8CCA4@netways.de> <610412AF-AADF-44BD-ABA2-BA289B7C8F8A@redhat.com> <5E2F5826-559E-42E9-84C5-FA708E5A122A@gmail.com> <43C4AF2B-C5C0-40EB-B621-AC6799471D01@gmail.com> <92959221-0353-4D48-8726-8FE71AFEA652@gmail.com> <4D778DBF-F505-462F-B85D-0B372085FA72@gmail.com> <5B9D2CB0-8B81-4533-A072-9A51B4A44364@gmail.com> Message-ID: > On 9 Sep 2020, at 4:52 pm, Lajos Katona wrote: > > Hi, > Could you please point to the issue with taas? Networking-midonet unit tests [1] are failing with the addition of this patch [2] [1] https://b1a2669063d97482275a-410cecb8410320c66fb802e0a530979a.ssl.cf5.rackcdn.com/749857/18/check/openstack-tox-py36/0344651/testr_results.html [2] https://opendev.org/x/tap-as-a-service/commit/8332a396b1b046eb370c0cb377d836d0c6b6d6ca I’m not really familiar with all of this so not sure how to fix these up. Cheers, Sam > > Regards > Lajos (lajoskatona) > > Sam Morrison > ezt írta (időpont: 2020. szept. 9., Sze, 0:44): > > >> On 8 Sep 2020, at 3:13 pm, Sam Morrison > wrote: >> >> Hi Yamamoto, >> >> >>> On 4 Sep 2020, at 6:47 pm, Takashi Yamamoto > wrote: >> >>> i'm talking to our infra folks but it might take longer than i hoped. 
>>> if you or someone else can provide a public repo, it might be faster. >>> (i have looked at launchpad PPA while ago. but it didn't seem >>> straightforward given the complex build machinary in midonet.) >> >> Yeah that’s no problem, I’ve set up a repo with the latest midonet debs in it and happy to use that for the time being. >> >>> >>>> >>>> I’m not sure why the pep8 job is failing, it is complaining about pecan which makes me think this is an issue with neutron itself? Kinda stuck on this one, it’s probably something silly. >>> >>> probably. >> >> Yeah this looks like a neutron or neutron-lib issue >> >>> >>>> >>>> For the py3 unit tests they are now failing due to db migration errors in tap-as-a-service, l2-gateway and vpnaas issues I think caused by neutron getting rid of the liberty alembic branch and so we need to squash these on these projects too. >>> >>> this thing? https://review.opendev.org/#/c/749866/ >> >> Yeah that fixed that issue. >> >> >> I have been working to get everything fixed in this review [1] >> >> The pep8 job is working but not in the gate due to neutron issues [2] >> The py36/py38 jobs have 2 tests failing both relating to tap-as-a-service which I don’t really have any idea about, never used it. [3] > > These are failing because of this patch on tap-as-a-service > https://opendev.org/x/tap-as-a-service/commit/8332a396b1b046eb370c0cb377d836d0c6b6d6ca > > Really have no idea how this works, does anyone use tap-as-a-service with midonet and can help me fix it, else I’m wondering if we disable tests for taas and make it an unsupported feature for now. > > Sam > > > >> The tempest aio job is working well now, I’m not sure what tempest tests were run before but it’s just doing what ever is the default at the moment. >> The tempest multinode job isn’t working due to what I think is networking issues between the 2 nodes. I don’t really know what I’m doing here so any pointers would be helpful. [4] >> The grenade job is also failing because I also need to put these fixes on the stable/ussuri branch to make it work so will need to figure that out too >> >> Cheers, >> Sam >> >> [1] https://review.opendev.org/#/c/749857/ >> [2] https://zuul.opendev.org/t/openstack/build/e94e873cbf0443c0a7f25ffe76b3b00b >> [3] https://b1a2669063d97482275a-410cecb8410320c66fb802e0a530979a.ssl.cf5.rackcdn.com/749857/18/check/openstack-tox-py36/0344651/testr_results.html >> [4] https://zuul.opendev.org/t/openstack/build/61f6dd3dc3d74a81b7a3f5968b4d8c72 >> >> >>> >>>> >>>> >>>> >>>> I can now start to look into the devstack zuul jobs. >>>> >>>> Cheers, >>>> Sam >>>> >>>> >>>> [1] https://github.com/NeCTAR-RC/networking-midonet/commits/devstack >>>> [2] https://github.com/midonet/midonet/pull/9 >>>> >>>> >>>> >>>> >>>>> On 1 Sep 2020, at 4:03 pm, Sam Morrison > wrote: >>>>> >>>>> >>>>> >>>>>> On 1 Sep 2020, at 2:59 pm, Takashi Yamamoto > wrote: >>>>>> >>>>>> hi, >>>>>> >>>>>> On Tue, Sep 1, 2020 at 1:39 PM Sam Morrison > wrote: >>>>>>> >>>>>>> >>>>>>> >>>>>>>> On 1 Sep 2020, at 11:49 am, Takashi Yamamoto > wrote: >>>>>>>> >>>>>>>> Sebastian, Sam, >>>>>>>> >>>>>>>> thank you for speaking up. >>>>>>>> >>>>>>>> as Slawek said, the first (and probably the biggest) thing is to fix the ci. >>>>>>>> the major part for it is to make midonet itself to run on ubuntu >>>>>>>> version used by the ci. 
(18.04, or maybe directly to 20.04) >>>>>>>> https://midonet.atlassian.net/browse/MNA-1344 >>>>>>>> iirc, the remaining blockers are: >>>>>>>> * libreswan (used by vpnaas) >>>>>>>> * vpp (used by fip64) >>>>>>>> maybe it's the easiest to drop those features along with their >>>>>>>> required components, if it's acceptable for your use cases. >>>>>>> >>>>>>> We are running midonet-cluster and midolman on 18.04, we dropped those package dependencies from our ubuntu package to get it working. >>>>>>> >>>>>>> We currently have built our own and host in our internal repo but happy to help putting this upstream somehow. Can we upload them to the midonet apt repo, does it still exist? >>>>>> >>>>>> it still exists. but i don't think it's maintained well. >>>>>> let me find and ask someone in midokura who "owns" that part of infra. >>>>>> >>>>>> does it also involve some package-related modifications to midonet repo, right? >>>>> >>>>> >>>>> Yes a couple, I will send up as as pull requests to https://github.com/midonet/midonet today or tomorrow >>>>> >>>>> Sam >>>>> >>>>> >>>>> >>>>>> >>>>>>> >>>>>>> I’m keen to do the work but might need a bit of guidance to get started, >>>>>>> >>>>>>> Sam >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>>> >>>>>>>> alternatively you might want to make midonet run in a container. (so >>>>>>>> that you can run it with older ubuntu, or even a container trimmed for >>>>>>>> JVM) >>>>>>>> there were a few attempts to containerize midonet. >>>>>>>> i think this is the latest one: https://github.com/midonet/midonet-docker >>>>>>>> >>>>>>>> On Fri, Aug 28, 2020 at 7:10 AM Sam Morrison > wrote: >>>>>>>>> >>>>>>>>> We (Nectar Research Cloud) use midonet heavily too, it works really well and we haven’t found another driver that works for us. We tried OVN but it just doesn’t scale to the size of environment we have. >>>>>>>>> >>>>>>>>> I’m happy to help too. >>>>>>>>> >>>>>>>>> Cheers, >>>>>>>>> Sam >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>>> On 31 Jul 2020, at 2:06 am, Slawek Kaplonski > wrote: >>>>>>>>>> >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> Thx Sebastian for stepping in to maintain the project. That is great news. >>>>>>>>>> I think that at the beginning You should do 2 things: >>>>>>>>>> - sync with Takashi Yamamoto (I added him to the loop) as he is probably most active current maintainer of this project, >>>>>>>>>> - focus on fixing networking-midonet ci which is currently broken - all scenario jobs aren’t working fine on Ubuntu 18.04 (and we are going to move to 20.04 in this cycle), migrate jobs to zuulv3 from the legacy ones and finally add them to the ci again, >>>>>>>>>> >>>>>>>>>> I can of course help You with ci jobs if You need any help. Feel free to ping me on IRC or email (can be off the list). >>>>>>>>>> >>>>>>>>>>> On 29 Jul 2020, at 15:24, Sebastian Saemann > wrote: >>>>>>>>>>> >>>>>>>>>>> Hi Slawek, >>>>>>>>>>> >>>>>>>>>>> we at NETWAYS are running most of our neutron networking on top of midonet and wouldn't be too happy if it gets deprecated and removed. So we would like to take over the maintainer role for this part. >>>>>>>>>>> >>>>>>>>>>> Please let me know how to proceed and how we can be onboarded easily. >>>>>>>>>>> >>>>>>>>>>> Best regards, >>>>>>>>>>> >>>>>>>>>>> Sebastian >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> Sebastian Saemann >>>>>>>>>>> Head of Managed Services >>>>>>>>>>> >>>>>>>>>>> NETWAYS Managed Services GmbH | Deutschherrnstr. 
15-19 | D-90429 Nuernberg >>>>>>>>>>> Tel: +49 911 92885-0 | Fax: +49 911 92885-77 >>>>>>>>>>> CEO: Julian Hein, Bernd Erk | AG Nuernberg HRB25207 >>>>>>>>>>> https://netways.de | sebastian.saemann at netways.de >>>>>>>>>>> >>>>>>>>>>> ** NETWAYS Web Services - https://nws.netways.de ** >>>>>>>>>> >>>>>>>>>> — >>>>>>>>>> Slawek Kaplonski >>>>>>>>>> Principal software engineer >>>>>>>>>> Red Hat >>>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reza.b2008 at gmail.com Wed Sep 9 10:54:28 2020 From: reza.b2008 at gmail.com (Reza Bakhshayeshi) Date: Wed, 9 Sep 2020 15:24:28 +0430 Subject: Floating IP problem in HA OVN DVR with TripleO In-Reply-To: References: <20200908080900.efy7bs2qnkzpwbwk@skaplons-mac> Message-ID: Hi all, Thanks a lot for your guidance. I didn't have such a problem in TripleO Stein. Do you think using OVN DVR in a production environment is a wise choice? Regards, Reza On Tue, 8 Sep 2020 at 21:42, Michał Nasiadka wrote: > Hi Reza, > > Here is a related bug: > https://bugs.launchpad.net/bugs/1881041 > > I had to use ovn/ovs 2.13 builds from cbs to overcome this issue ( > https://cbs.centos.org/koji/buildinfo?buildID=30482) > > Regards, > Michal > > On Tue, 8 Sep 2020 at 18:52, Reza Bakhshayeshi > wrote: > >> Hi Roman, >> >> I'm using 'geneve' for my tenant networks. >> >> By the way, by pinging 8.8.8.8 from an instance with FIP, tcpdump on its >> Compute node shows an ARP request for every lost ping. Is it normal >> behaviour? >> >> 21:13:04.808508 ARP, Request who-has dns.google tell >> >> X.X.X.X >> >> >> >> , length 28 >> 21:13:05.808726 ARP, Request who-has dns.google tell >> >> X.X.X.X >> >> >> >> , length 28 >> 21:13:06.808900 ARP, Request who-has dns.google tell >> >> X.X.X.X >> >> >> >> , length 28 >> . >> . >> . >> X.X.X.X if FIP of VM. >> >> >> On Tue, 8 Sep 2020 at 17:21, Roman Safronov wrote: >> >>> Hi Reza, >>> >>> Are you using 'geneve' tenant networks or 'vlan' ones? I am asking >>> because with VLAN we have the following DVR issue [1] >>> >>> [1] Bug 1704596 - FIP traffix does not work on OVN-DVR setup when using >>> VLAN tenant network type >>> >>> >>> On Tue, Sep 8, 2020 at 2:04 PM Reza Bakhshayeshi >>> wrote: >>> >>>> Hi Slawek, >>>> >>>> I'm using the latest CentOS 8 Ussuri OVN packages at: >>>> https://trunk.rdoproject.org/centos8-ussuri/deps/latest/x86_64/ >>>> >>>> On both Controller and Compute I get: >>>> >>>> # rpm -qa | grep ovn >>>> ovn-host-20.03.0-4.el8.x86_64 >>>> ovn-20.03.0-4.el8.x86_64 >>>> >>>> # yum info ovn >>>> Installed Packages >>>> Name : ovn >>>> Version : 20.03.0 >>>> Release : 4.el8 >>>> Architecture : x86_64 >>>> Size : 12 M >>>> Source : ovn-20.03.0-4.el8.src.rpm >>>> Repository : @System >>>> From repo : delorean-ussuri-testing >>>> Summary : Open Virtual Network support >>>> URL : http://www.openvswitch.org/ >>>> License : ASL 2.0 and LGPLv2+ and SISSL >>>> >>>> Do you suggest installing ovn manually from source on containers? >>>> ي >>>> >>>> On Tue, 8 Sep 2020 at 12:39, Slawek Kaplonski >>>> wrote: >>>> >>>>> Hi, >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> Maybe You hit this bug [1]. Please check what ovn version do You have >>>>> and maybe >>>>> >>>>> >>>>> update it if needed. 
>>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On Mon, Sep 07, 2020 at 06:23:44PM +0430, Reza Bakhshayeshi wrote: >>>>> >>>>> >>>>> > Hi all, >>>>> >>>>> >>>>> > >>>>> >>>>> >>>>> > I deployed an environment with TripleO Ussuri with 3 HA Controllers >>>>> and >>>>> >>>>> >>>>> > some Compute nodes with neutron-ovn-dvr-ha.yaml >>>>> >>>>> >>>>> > Instances have Internet access through routers with SNAT traffic (in >>>>> this >>>>> >>>>> >>>>> > case traffic is routed via a controller node), and by assigning IP >>>>> address >>>>> >>>>> >>>>> > directly from provider network (not having a router). >>>>> >>>>> >>>>> > >>>>> >>>>> >>>>> > But in case of assigning FIP from provider to an instance, VM >>>>> Internet >>>>> >>>>> >>>>> > connection is lost. >>>>> >>>>> >>>>> > Here is the output of router nat lists, which seems OK: >>>>> >>>>> >>>>> > >>>>> >>>>> >>>>> > >>>>> >>>>> >>>>> > # ovn-nbctl lr-nat-list 587182a4-4d6b-41b0-9fd8-4c1be58811b0 >>>>> >>>>> >>>>> > TYPE EXTERNAL_IP EXTERNAL_PORT LOGICAL_IP >>>>> >>>>> >>>>> > EXTERNAL_MAC LOGICAL_PORT >>>>> >>>>> >>>>> > dnat_and_snat X.X.X.X 192.168.0.153 >>>>> >>>>> >>>>> > fa:16:3e:0a:86:4d e65bd8e9-5f95-4eb2-a316-97e86fbdb9b6 >>>>> >>>>> >>>>> > snat Y.Y.Y.Y 192.168.0.0/24 >>>>> >>>>> >>>>> > >>>>> >>>>> >>>>> > >>>>> >>>>> >>>>> > I replaced FIP with X.X.X.X and router IP with Y.Y.Y.Y >>>>> >>>>> >>>>> > >>>>> >>>>> >>>>> > When I remove * EXTERNAL_MAC* and *LOGICAL_PORT*, FIP works fine and >>>>> as it >>>>> >>>>> >>>>> > has to be, but traffic routes from a Controller node and it won't be >>>>> >>>>> >>>>> > distributed anymore. >>>>> >>>>> >>>>> > >>>>> >>>>> >>>>> > Any idea or suggestion would be grateful. >>>>> >>>>> >>>>> > Regards, >>>>> >>>>> >>>>> > Reza >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1834433 >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> >>>>> >>>>> Slawek Kaplonski >>>>> >>>>> >>>>> Principal software engineer >>>>> >>>>> >>>>> Red Hat >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>> >>>> >>> >>> -- >>> >>> ROMAN SAFRONOV >>> >>> SENIOR QE, OPENSTACK NETWORKING >>> >>> Red Hat >>> >>> Israel >>> >>> M: +972545433957 >>> >>> >>> >>> >>> >> >> -- > Michał Nasiadka > mnasiadka at gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From john at johngarbutt.com Wed Sep 9 11:28:24 2020 From: john at johngarbutt.com (John Garbutt) Date: Wed, 9 Sep 2020 12:28:24 +0100 Subject: Application Credentials with federated users In-Reply-To: <7fc8d66653064962aa458e13124bcb9d@stfc.ac.uk> References: <7fc8d66653064962aa458e13124bcb9d@stfc.ac.uk> Message-ID: Hi Alex, In my experience it worked fine, with a major limitation about groups. This, merged in ussri, should have fixed the group issues: https://bugs.launchpad.net/keystone/+bug/1809116 I had planned on testing that by now, but that work hasn't been started/agreed yet. My current workaround for not having groups is for the federation mapping to add users directly into projects: https://github.com/RSE-Cambridge/cumulus-config I planned to map from an OIDC group attribute into a specific concrete project, but the above puts everyone in a holding project and does static role assignments, due to issues with group management in the OIDC provider. 
As an aside, this the way were were configuring keystone, incase that is important to making things work: https://github.com/RSE-Cambridge/cumulus-kayobe-config/tree/train-preprod/etc/kayobe/kolla/config/keystone https://github.com/RSE-Cambridge/cumulus-kayobe-config/blob/0dc43a0f5c7b76f6913dea0fdda2b1674511c3f4/etc/kayobe/kolla.yml#L122 Horizon and the CLI tools in train didn't really agree, I think the auth url is now missing "/v3", but I believe that is fixed in latest keystoneauth client: https://bugs.launchpad.net/keystoneauth/+bug/1876317 Hopefully that helps? Thanks, John On Tue, 8 Sep 2020 at 16:33, Alexander Dibbo - UKRI STFC wrote: > > Hi All, > > > > Is it possible for a user logging in via an oidc provider to generate application credentials? > > > > When I try it I get an error about there being no role for the user in the project. > > > > We map the users to groups based on assertions in their tokens. > > > > It looks like it would work if we mapped users individually to local users in keystone and then gave those roles. I would prefer to avoid using per user mappings for this if possible as it would be a lot of extra work for my team. > > > > Regards > > > > Alexander Dibbo – Cloud Architect / Cloud Operations Group Leader > > For STFC Cloud Documentation visit https://stfc-cloud-docs.readthedocs.io > > To raise a support ticket with the cloud team please email cloud-support at gridpp.rl.ac.uk > > To receive notifications about the service please subscribe to our mailing list at: https://www.jiscmail.ac.uk/cgi-bin/webadmin?A0=STFC-CLOUD > > To receive fast notifications or to discuss usage of the cloud please join our Slack: https://stfc-cloud.slack.com/ > > > > This email and any attachments are intended solely for the use of the named recipients. If you are not the intended recipient you must not use, disclose, copy or distribute this email or any of its attachments and should notify the sender immediately and delete this email from your system. UK Research and Innovation (UKRI) has taken every reasonable precaution to minimise risk of this email or any attachments containing viruses or malware but the recipient should carry out its own virus and malware checks before opening the attachments. UKRI does not accept any liability for any losses or damages which the recipient may sustain due to presence of any viruses. Opinions, conclusions or other information in this message and attachments that are not related directly to UKRI business are solely those of the author and do not represent the views of UKRI. From katonalala at gmail.com Wed Sep 9 12:18:37 2020 From: katonalala at gmail.com (Lajos Katona) Date: Wed, 9 Sep 2020 14:18:37 +0200 Subject: [neutron][networking-midonet] Maintainers needed In-Reply-To: References: <0AC5AC07-E97E-43CC-B344-A3E992B8CCA4@netways.de> <610412AF-AADF-44BD-ABA2-BA289B7C8F8A@redhat.com> <5E2F5826-559E-42E9-84C5-FA708E5A122A@gmail.com> <43C4AF2B-C5C0-40EB-B621-AC6799471D01@gmail.com> <92959221-0353-4D48-8726-8FE71AFEA652@gmail.com> <4D778DBF-F505-462F-B85D-0B372085FA72@gmail.com> <5B9D2CB0-8B81-4533-A072-9A51B4A44364@gmail.com> Message-ID: Hi, I pushed a fix for it https://review.opendev.org/750633, I added Deepak for reviewer as he is the owner of the taas patch. Sorry for the problem. Lajos (lajoskatona) Sam Morrison ezt írta (időpont: 2020. szept. 9., Sze, 12:49): > > > On 9 Sep 2020, at 4:52 pm, Lajos Katona wrote: > > Hi, > Could you please point to the issue with taas? 
> > > Networking-midonet unit tests [1] are failing with the addition of this > patch [2] > > [1] > https://b1a2669063d97482275a-410cecb8410320c66fb802e0a530979a.ssl.cf5.rackcdn.com/749857/18/check/openstack-tox-py36/0344651/testr_results.html > [2] > https://opendev.org/x/tap-as-a-service/commit/8332a396b1b046eb370c0cb377d836d0c6b6d6ca > > I’m not really familiar with all of this so not sure how to fix these up. > > Cheers, > Sam > > > > > Regards > Lajos (lajoskatona) > > Sam Morrison ezt írta (időpont: 2020. szept. 9., > Sze, 0:44): > >> >> >> On 8 Sep 2020, at 3:13 pm, Sam Morrison wrote: >> >> Hi Yamamoto, >> >> >> On 4 Sep 2020, at 6:47 pm, Takashi Yamamoto >> wrote: >> >> >> i'm talking to our infra folks but it might take longer than i hoped. >> if you or someone else can provide a public repo, it might be faster. >> (i have looked at launchpad PPA while ago. but it didn't seem >> straightforward given the complex build machinary in midonet.) >> >> >> Yeah that’s no problem, I’ve set up a repo with the latest midonet debs >> in it and happy to use that for the time being. >> >> >> >> I’m not sure why the pep8 job is failing, it is complaining about pecan >> which makes me think this is an issue with neutron itself? Kinda stuck on >> this one, it’s probably something silly. >> >> >> probably. >> >> >> Yeah this looks like a neutron or neutron-lib issue >> >> >> >> For the py3 unit tests they are now failing due to db migration errors in >> tap-as-a-service, l2-gateway and vpnaas issues I think caused by neutron >> getting rid of the liberty alembic branch and so we need to squash these on >> these projects too. >> >> >> this thing? https://review.opendev.org/#/c/749866/ >> >> >> Yeah that fixed that issue. >> >> >> I have been working to get everything fixed in this review [1] >> >> The pep8 job is working but not in the gate due to neutron issues [2] >> The py36/py38 jobs have 2 tests failing both relating to tap-as-a-service >> which I don’t really have any idea about, never used it. [3] >> >> >> These are failing because of this patch on tap-as-a-service >> >> https://opendev.org/x/tap-as-a-service/commit/8332a396b1b046eb370c0cb377d836d0c6b6d6ca >> >> Really have no idea how this works, does anyone use tap-as-a-service with >> midonet and can help me fix it, else I’m wondering if we disable tests for >> taas and make it an unsupported feature for now. >> >> Sam >> >> >> >> The tempest aio job is working well now, I’m not sure what tempest tests >> were run before but it’s just doing what ever is the default at the moment. >> The tempest multinode job isn’t working due to what I think is networking >> issues between the 2 nodes. I don’t really know what I’m doing here so any >> pointers would be helpful. [4] >> The grenade job is also failing because I also need to put these fixes on >> the stable/ussuri branch to make it work so will need to figure that out too >> >> Cheers, >> Sam >> >> [1] https://review.opendev.org/#/c/749857/ >> [2] >> https://zuul.opendev.org/t/openstack/build/e94e873cbf0443c0a7f25ffe76b3b00b >> [3] >> https://b1a2669063d97482275a-410cecb8410320c66fb802e0a530979a.ssl.cf5.rackcdn.com/749857/18/check/openstack-tox-py36/0344651/testr_results.html >> [4] >> https://zuul.opendev.org/t/openstack/build/61f6dd3dc3d74a81b7a3f5968b4d8c72 >> >> >> >> >> >> >> I can now start to look into the devstack zuul jobs. 
>> >> Cheers, >> Sam >> >> >> [1] https://github.com/NeCTAR-RC/networking-midonet/commits/devstack >> [2] https://github.com/midonet/midonet/pull/9 >> >> >> >> >> On 1 Sep 2020, at 4:03 pm, Sam Morrison wrote: >> >> >> >> On 1 Sep 2020, at 2:59 pm, Takashi Yamamoto >> wrote: >> >> hi, >> >> On Tue, Sep 1, 2020 at 1:39 PM Sam Morrison wrote: >> >> >> >> >> On 1 Sep 2020, at 11:49 am, Takashi Yamamoto >> wrote: >> >> Sebastian, Sam, >> >> thank you for speaking up. >> >> as Slawek said, the first (and probably the biggest) thing is to fix the >> ci. >> the major part for it is to make midonet itself to run on ubuntu >> version used by the ci. (18.04, or maybe directly to 20.04) >> https://midonet.atlassian.net/browse/MNA-1344 >> iirc, the remaining blockers are: >> * libreswan (used by vpnaas) >> * vpp (used by fip64) >> maybe it's the easiest to drop those features along with their >> required components, if it's acceptable for your use cases. >> >> >> We are running midonet-cluster and midolman on 18.04, we dropped those >> package dependencies from our ubuntu package to get it working. >> >> We currently have built our own and host in our internal repo but happy >> to help putting this upstream somehow. Can we upload them to the midonet >> apt repo, does it still exist? >> >> >> it still exists. but i don't think it's maintained well. >> let me find and ask someone in midokura who "owns" that part of infra. >> >> does it also involve some package-related modifications to midonet repo, >> right? >> >> >> >> Yes a couple, I will send up as as pull requests to >> https://github.com/midonet/midonet today or tomorrow >> >> Sam >> >> >> >> >> >> I’m keen to do the work but might need a bit of guidance to get started, >> >> Sam >> >> >> >> >> >> >> >> alternatively you might want to make midonet run in a container. (so >> that you can run it with older ubuntu, or even a container trimmed for >> JVM) >> there were a few attempts to containerize midonet. >> i think this is the latest one: https://github.com/midonet/midonet-docker >> >> On Fri, Aug 28, 2020 at 7:10 AM Sam Morrison wrote: >> >> >> We (Nectar Research Cloud) use midonet heavily too, it works really well >> and we haven’t found another driver that works for us. We tried OVN but it >> just doesn’t scale to the size of environment we have. >> >> I’m happy to help too. >> >> Cheers, >> Sam >> >> >> >> On 31 Jul 2020, at 2:06 am, Slawek Kaplonski wrote: >> >> Hi, >> >> Thx Sebastian for stepping in to maintain the project. That is great news. >> I think that at the beginning You should do 2 things: >> - sync with Takashi Yamamoto (I added him to the loop) as he is probably >> most active current maintainer of this project, >> - focus on fixing networking-midonet ci which is currently broken - all >> scenario jobs aren’t working fine on Ubuntu 18.04 (and we are going to move >> to 20.04 in this cycle), migrate jobs to zuulv3 from the legacy ones and >> finally add them to the ci again, >> >> I can of course help You with ci jobs if You need any help. Feel free to >> ping me on IRC or email (can be off the list). >> >> On 29 Jul 2020, at 15:24, Sebastian Saemann >> wrote: >> >> Hi Slawek, >> >> we at NETWAYS are running most of our neutron networking on top of >> midonet and wouldn't be too happy if it gets deprecated and removed. So we >> would like to take over the maintainer role for this part. >> >> Please let me know how to proceed and how we can be onboarded easily. 
>> >> Best regards, >> >> Sebastian >> >> -- >> Sebastian Saemann >> Head of Managed Services >> >> NETWAYS Managed Services GmbH | Deutschherrnstr. 15-19 | D-90429 Nuernberg >> Tel: +49 911 92885-0 | Fax: +49 911 92885-77 >> CEO: Julian Hein, Bernd Erk | AG Nuernberg HRB25207 >> https://netways.de | sebastian.saemann at netways.de >> >> ** NETWAYS Web Services - https://nws.netways.de ** >> >> >> — >> Slawek Kaplonski >> Principal software engineer >> Red Hat >> >> >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Sep 9 13:04:09 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 09 Sep 2020 08:04:09 -0500 Subject: [neutron][networking-midonet] Maintainers needed In-Reply-To: References: <0AC5AC07-E97E-43CC-B344-A3E992B8CCA4@netways.de> <610412AF-AADF-44BD-ABA2-BA289B7C8F8A@redhat.com> <5E2F5826-559E-42E9-84C5-FA708E5A122A@gmail.com> <43C4AF2B-C5C0-40EB-B621-AC6799471D01@gmail.com> <92959221-0353-4D48-8726-8FE71AFEA652@gmail.com> <4D778DBF-F505-462F-B85D-0B372085FA72@gmail.com> <5B9D2CB0-8B81-4533-A072-9A51B4A44364@gmail.com> Message-ID: <17472f764b8.1292d333d6181.3892285235847293323@ghanshyammann.com> Also we need to merge the networking-l2gw project new location fix - https://review.opendev.org/#/c/738046/ It's leading to many errors as pointed by AJaeger - https://zuul.opendev.org/t/openstack/config-errors -gmann ---- On Wed, 09 Sep 2020 07:18:37 -0500 Lajos Katona wrote ---- > Hi,I pushed a fix for it https://review.opendev.org/750633, I added Deepak for reviewer as he is the owner of the taas patch. > Sorry for the problem.Lajos (lajoskatona) > Sam Morrison ezt írta (időpont: 2020. szept. 9., Sze, 12:49): > > > On 9 Sep 2020, at 4:52 pm, Lajos Katona wrote: > Hi,Could you please point to the issue with taas? > Networking-midonet unit tests [1] are failing with the addition of this patch [2] > [1] https://b1a2669063d97482275a-410cecb8410320c66fb802e0a530979a.ssl.cf5.rackcdn.com/749857/18/check/openstack-tox-py36/0344651/testr_results.html[2] https://opendev.org/x/tap-as-a-service/commit/8332a396b1b046eb370c0cb377d836d0c6b6d6ca > I’m not really familiar with all of this so not sure how to fix these up. > Cheers,Sam > > > > RegardsLajos (lajoskatona) > Sam Morrison ezt írta (időpont: 2020. szept. 9., Sze, 0:44): > > > On 8 Sep 2020, at 3:13 pm, Sam Morrison wrote: > Hi Yamamoto, > > On 4 Sep 2020, at 6:47 pm, Takashi Yamamoto wrote: > i'm talking to our infra folks but it might take longer than i hoped. > if you or someone else can provide a public repo, it might be faster. > (i have looked at launchpad PPA while ago. but it didn't seem > straightforward given the complex build machinary in midonet.) > > Yeah that’s no problem, I’ve set up a repo with the latest midonet debs in it and happy to use that for the time being. > > > I’m not sure why the pep8 job is failing, it is complaining about pecan which makes me think this is an issue with neutron itself? Kinda stuck on this one, it’s probably something silly. > > probably. > > Yeah this looks like a neutron or neutron-lib issue > > > For the py3 unit tests they are now failing due to db migration errors in tap-as-a-service, l2-gateway and vpnaas issues I think caused by neutron getting rid of the liberty alembic branch and so we need to squash these on these projects too. > > this thing? https://review.opendev.org/#/c/749866/ > > Yeah that fixed that issue. 
> > I have been working to get everything fixed in this review [1] > The pep8 job is working but not in the gate due to neutron issues [2]The py36/py38 jobs have 2 tests failing both relating to tap-as-a-service which I don’t really have any idea about, never used it. [3] > These are failing because of this patch on tap-as-a-service https://opendev.org/x/tap-as-a-service/commit/8332a396b1b046eb370c0cb377d836d0c6b6d6ca > Really have no idea how this works, does anyone use tap-as-a-service with midonet and can help me fix it, else I’m wondering if we disable tests for taas and make it an unsupported feature for now. > Sam > > > The tempest aio job is working well now, I’m not sure what tempest tests were run before but it’s just doing what ever is the default at the moment.The tempest multinode job isn’t working due to what I think is networking issues between the 2 nodes. I don’t really know what I’m doing here so any pointers would be helpful. [4]The grenade job is also failing because I also need to put these fixes on the stable/ussuri branch to make it work so will need to figure that out too > Cheers,Sam > [1] https://review.opendev.org/#/c/749857/[2] https://zuul.opendev.org/t/openstack/build/e94e873cbf0443c0a7f25ffe76b3b00b[3] https://b1a2669063d97482275a-410cecb8410320c66fb802e0a530979a.ssl.cf5.rackcdn.com/749857/18/check/openstack-tox-py36/0344651/testr_results.html[4] https://zuul.opendev.org/t/openstack/build/61f6dd3dc3d74a81b7a3f5968b4d8c72 > > > > > > I can now start to look into the devstack zuul jobs. > > Cheers, > Sam > > > [1] https://github.com/NeCTAR-RC/networking-midonet/commits/devstack > [2] https://github.com/midonet/midonet/pull/9 > > > > > On 1 Sep 2020, at 4:03 pm, Sam Morrison wrote: > > > > On 1 Sep 2020, at 2:59 pm, Takashi Yamamoto wrote: > > hi, > > On Tue, Sep 1, 2020 at 1:39 PM Sam Morrison wrote: > > > > On 1 Sep 2020, at 11:49 am, Takashi Yamamoto wrote: > > Sebastian, Sam, > > thank you for speaking up. > > as Slawek said, the first (and probably the biggest) thing is to fix the ci. > the major part for it is to make midonet itself to run on ubuntu > version used by the ci. (18.04, or maybe directly to 20.04) > https://midonet.atlassian.net/browse/MNA-1344 > iirc, the remaining blockers are: > * libreswan (used by vpnaas) > * vpp (used by fip64) > maybe it's the easiest to drop those features along with their > required components, if it's acceptable for your use cases. > > We are running midonet-cluster and midolman on 18.04, we dropped those package dependencies from our ubuntu package to get it working. > > We currently have built our own and host in our internal repo but happy to help putting this upstream somehow. Can we upload them to the midonet apt repo, does it still exist? > > it still exists. but i don't think it's maintained well. > let me find and ask someone in midokura who "owns" that part of infra. > > does it also involve some package-related modifications to midonet repo, right? > > > Yes a couple, I will send up as as pull requests to https://github.com/midonet/midonet today or tomorrow > > Sam > > > > > > I’m keen to do the work but might need a bit of guidance to get started, > > Sam > > > > > > > > alternatively you might want to make midonet run in a container. (so > that you can run it with older ubuntu, or even a container trimmed for > JVM) > there were a few attempts to containerize midonet. 
> i think this is the latest one: https://github.com/midonet/midonet-docker > > On Fri, Aug 28, 2020 at 7:10 AM Sam Morrison wrote: > > We (Nectar Research Cloud) use midonet heavily too, it works really well and we haven’t found another driver that works for us. We tried OVN but it just doesn’t scale to the size of environment we have. > > I’m happy to help too. > > Cheers, > Sam > > > > On 31 Jul 2020, at 2:06 am, Slawek Kaplonski wrote: > > Hi, > > Thx Sebastian for stepping in to maintain the project. That is great news. > I think that at the beginning You should do 2 things: > - sync with Takashi Yamamoto (I added him to the loop) as he is probably most active current maintainer of this project, > - focus on fixing networking-midonet ci which is currently broken - all scenario jobs aren’t working fine on Ubuntu 18.04 (and we are going to move to 20.04 in this cycle), migrate jobs to zuulv3 from the legacy ones and finally add them to the ci again, > > I can of course help You with ci jobs if You need any help. Feel free to ping me on IRC or email (can be off the list). > > On 29 Jul 2020, at 15:24, Sebastian Saemann wrote: > > Hi Slawek, > > we at NETWAYS are running most of our neutron networking on top of midonet and wouldn't be too happy if it gets deprecated and removed. So we would like to take over the maintainer role for this part. > > Please let me know how to proceed and how we can be onboarded easily. > > Best regards, > > Sebastian > > -- > Sebastian Saemann > Head of Managed Services > > NETWAYS Managed Services GmbH | Deutschherrnstr. 15-19 | D-90429 Nuernberg > Tel: +49 911 92885-0 | Fax: +49 911 92885-77 > CEO: Julian Hein, Bernd Erk | AG Nuernberg HRB25207 > https://netways.de | sebastian.saemann at netways.de > > ** NETWAYS Web Services - https://nws.netways.de ** > > — > Slawek Kaplonski > Principal software engineer > Red Hat > > > > > From cjeanner at redhat.com Wed Sep 9 14:33:38 2020 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Wed, 9 Sep 2020 16:33:38 +0200 Subject: [tripleo][ansible] ensure python dependencies on hosts for modules/plugins In-Reply-To: <4acfd8e1-5e9e-433e-3b1f-bd3e8a159033@redhat.com> References: <4acfd8e1-5e9e-433e-3b1f-bd3e8a159033@redhat.com> Message-ID: On 9/9/20 10:25 AM, Bogdan Dobrelya wrote: > Since most of tripleo-ansible modules do 'import foobar', we should > ensure that we have the corresponding python packages installed on > target hosts. Some of them, like python3-dmidecode may be in base > Centos8 images. But some may not, especially for custom deployed-servers > provided by users for deployments. > > Those packages must be tracked and ensured to be installed by tripleo > (preferred), or validated deploy-time (nah...), or at least documented > as the modules get created or changed by devs. > > That also applies to adding action plugins' deps for > python-tripleoclient or tripleo-ansible perhaps. > > Shall we write a spec for that or just address that as a bug [0]? > > [0] https://bugs.launchpad.net/tripleo/+bug/1894957 > If we're talking only about tripleo-ansible, we might "just" add the new dependencies in the spec file in tripleo-ansible-distgit. If we're talking more broadly, as said in the LP, a meta-package (tripleo-dependencies for instance) might be a nice thing, since it would allow to take care of: - package pinning (staring at YOU, podman) - package dependencies As for a proper spec, it might be interesting for future reference. Not really sure if it's really needed, but... Cheers, C. 
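For reference, the usual way an Ansible module copes with a third-party python import that may be missing on the target host is to guard the import and report the missing library explicitly. The following is a minimal sketch only (the 'foobar' dependency is the placeholder from the quoted mail, not a real package); it is not an existing tripleo-ansible module and it does not install anything by itself:

    #!/usr/bin/python
    # Minimal sketch: make a missing python dependency on the target host fail
    # with a clear message instead of a raw ImportError traceback.
    from ansible.module_utils.basic import AnsibleModule, missing_required_lib

    try:
        import foobar  # placeholder third-party dependency
        HAS_FOOBAR = True
    except ImportError:
        HAS_FOOBAR = False

    def main():
        module = AnsibleModule(argument_spec={}, supports_check_mode=True)
        if not HAS_FOOBAR:
            module.fail_json(msg=missing_required_lib('foobar'))
        # ... actual module logic using foobar would go here ...
        module.exit_json(changed=False)

    if __name__ == '__main__':
        main()

However the package itself ends up on the host (baked into the image or installed by a bootstrap step), guarding the import keeps the dependency visible in the module and the failure easy to diagnose.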
-- Cédric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From aschultz at redhat.com Wed Sep 9 14:46:58 2020 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 9 Sep 2020 08:46:58 -0600 Subject: [tripleo][ansible] ensure python dependencies on hosts for modules/plugins In-Reply-To: References: <4acfd8e1-5e9e-433e-3b1f-bd3e8a159033@redhat.com> Message-ID: On Wed, Sep 9, 2020 at 8:43 AM Cédric Jeanneret wrote: > > > On 9/9/20 10:25 AM, Bogdan Dobrelya wrote: > > Since most of tripleo-ansible modules do 'import foobar', we should > > ensure that we have the corresponding python packages installed on > > target hosts. Some of them, like python3-dmidecode may be in base > > Centos8 images. But some may not, especially for custom deployed-servers > > provided by users for deployments. > > > > Those packages must be tracked and ensured to be installed by tripleo > > (preferred), or validated deploy-time (nah...), or at least documented > > as the modules get created or changed by devs. > > > > That also applies to adding action plugins' deps for > > python-tripleoclient or tripleo-ansible perhaps. > > > > Shall we write a spec for that or just address that as a bug [0]? > > > > [0] https://bugs.launchpad.net/tripleo/+bug/1894957 > > > > If we're talking only about tripleo-ansible, we might "just" add the new > dependencies in the spec file in tripleo-ansible-distgit. > If we're talking more broadly, as said in the LP, a meta-package > (tripleo-dependencies for instance) might be a nice thing, since it > would allow to take care of: > - package pinning (staring at YOU, podman) > - package dependencies > So the problem here is tripleo-ansible is not installed on the remote system. So modules need packages installed there. Currently this is solved in two ways: 1) overcloud-full contains all the packages 2) use tripleo_bootstrap role to install dependencies While having a meta-package that we install that lists all the deps would be nice you still have the same issue to ensure it ends up where it needs to be. I don't think we need to over engineer this as we already have two ways that have been in place for many releases now. Thanks, -Alex > As for a proper spec, it might be interesting for future reference. Not > really sure if it's really needed, but... > > Cheers, > > C. > > > -- > Cédric Jeanneret (He/Him/His) > Sr. Software Engineer - OpenStack Platform > Deployment Framework TC > Red Hat EMEA > https://www.redhat.com/ > From mnaser at vexxhost.com Wed Sep 9 14:58:06 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 9 Sep 2020 10:58:06 -0400 Subject: [tc] weekly update Message-ID: Hi everyone, Here's an update for what happened in the OpenStack TC this week. You can get more information by checking for changes in openstack/governance repository. We've also included a few references to some important mailing list threads that you should check out. 
# Patches

## Open Reviews

- Add openstack/osops to Ops Docs and Tooling SIG https://review.opendev.org/749835
- Reinstate weekly meetings https://review.opendev.org/749279
- Create starter-kit:kubernetes-in-virt tag https://review.opendev.org/736369
- Resolution to define distributed leadership for projects https://review.opendev.org/744995
- Add assert:supports-standalone https://review.opendev.org/722399
- Retire devstack-plugin-pika project https://review.opendev.org/748730
- Remove tc:approved-release tag https://review.opendev.org/749363
- Retire the devstack-plugin-zmq project https://review.opendev.org/748731

## Project Updates

- Add openstack-helm-deployments to openstack-helm https://review.opendev.org/748302
- Add openstack-ansible/os_senlin role https://review.opendev.org/748677
- kolla-cli: deprecation - Mark as deprecated https://review.opendev.org/749694
- Move ansible-role-XXX-hsm projects to Barbican team https://review.opendev.org/748027

## General Changes

- Drop all exceptions for legacy validation https://review.opendev.org/745403

## Abandoned Changes

- kolla-cli: deprecation - Mark kolla-cli as Deprecated https://review.opendev.org/749746

# Email Threads

- Focal Goal Update #4: http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017078.html
- Forum Submissions Open: http://lists.openstack.org/pipermail/openstack-discuss/2020-September/016933.html

# Other Reminders

- Forum CFP closes Sept 14th!
- PTG Signup closes Sept 11th!

Thanks for reading!
Mohammed & Kendall

--
Mohammed Naser
VEXXHOST, Inc.

From thierry at openstack.org Wed Sep 9 15:05:40 2020
From: thierry at openstack.org (Thierry Carrez)
Date: Wed, 9 Sep 2020 17:05:40 +0200
Subject: [summit] Schedule is Live for the 2020 Virtual Open Infrastructure Summit
Message-ID: <1211c678-52fb-8509-5e67-0fc4e93c8eb2@openstack.org>

Hi everyone,

We are excited to announce that the schedule[1] for the virtual 2020 Open Infrastructure Summit is now live, featuring keynotes and sessions from users like Volvo, Workday, Société Générale and Ant Group:
[1] https://www.openstack.org/summit/2020/summit-schedule

The virtual event takes place October 19-23 and includes more than 100 sessions, and thousands of attendees are expected to participate, representing 30+ open source communities from more than 100 countries. Sessions for the virtual summit are led by users from global enterprises and research institutions building and operating open infrastructure at scale.

The Summit includes:
- Sessions spanning 30+ open source projects from technical community leaders and organizations including Alibaba Cloud, AT&T, China Mobile, CERN, European Weather Cloud, GE Digital and Volvo Cars and many more.
- Collaborative sessions with project leaders and open source communities, including Airship, Ansible, Ceph, Docker, Kata Containers, Kubernetes, ONAP, OpenStack, Open vSwitch, OPNFV, StarlingX, and Zuul.
- Hands-on workshops around open source technologies directly from the developers and operators building the software.
- A chance to familiarize yourself with the updated Certified OpenStack Administrator (COA) exam and test your OpenStack knowledge at the COA sessions, sponsored by the OpenStack Foundation, in collaboration with Mirantis.

Now what? Register[2] for your free virtual Summit pass and meet the users, developers, and vendors who are building and operating open infrastructure on October 19-23!
[2] https://www.eventbrite.com/e/open-infrastructure-summit-2020-tickets-96967218561 Thank you to our Summit Headline, Premier and Exhibitor sponsors: Huawei, Cisco, InMotion Hosting, Trilio and ZTE. Event sponsors gain visibility with a wide array of open source infrastructure developers, operators and decision makers. Download the Open Infrastructure Summit sponsor prospectus[3] for more information. [3] https://www.openstack.org/summit/2020/sponsors/ Questions? Reach out to summit at openstack.org “See you” in October! Cheers, -- Thierry Carrez (ttx) From stephenfin at redhat.com Wed Sep 9 15:48:14 2020 From: stephenfin at redhat.com (Stephen Finucane) Date: Wed, 09 Sep 2020 16:48:14 +0100 Subject: [nova] Changes for out-of-tree drivers Message-ID: <1dea07eda965f2a2f9a38b59d885fe905c62205b.camel@redhat.com> We're aiming to remove the long-deprecated XenAPI driver before Victoria ends. With that removed, there are a number of XenAPI-specific virt driver APIs that we plan to remove in follow-ups. These are noted at [1]. If you maintain an out-of-tree driver, you will need to account for these changes. Cheers, Stephen [1] https://review.opendev.org/#/c/749300/1/nova/virt/driver.py From mordred at inaugust.com Wed Sep 9 16:10:52 2020 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 9 Sep 2020 11:10:52 -0500 Subject: Moving on Message-ID: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> Hi everybody, After 10 years of OpenStack, the time has come for me to move on to the next challenge. Actually, the time came a few weeks ago, but writing farewells has always been something I’m particularly bad at. My last day at Red Hat was actually July 31 and I’m now working in a non-OpenStack job. I’m at a loss for words as to what more to say. I’ve never done anything for 10 years before, and I’ll be very surprised if I do anything else for 10 years again. While I’m excited about the new things on my plate, I’ll obviously miss everyone. As I am no longer being paid by an OpenStack employer, I will not be doing any OpenStack things as part of my day job. I’m not sure how much spare time I’ll have to be able to contribute. I’m going to hold off on resigning core memberships pending a better understanding of that. I think it’s safe to assume I won’t be able to continue on as SDK PTL though. I wish everyone all the best, and I hope life conspires to keep us all connected. Thank you to everyone for an amazing 10 years. Monty From amy at demarco.com Wed Sep 9 16:17:31 2020 From: amy at demarco.com (Amy Marrich) Date: Wed, 9 Sep 2020 11:17:31 -0500 Subject: Moving on In-Reply-To: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> References: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> Message-ID: Monty, It has been a pleasure being in the community with you all these years and I can't even begin to count or describe all you've done for it. Thank you for being a part of the journey and best wishes on your future endeavours. Amy (spotz) On Wed, Sep 9, 2020 at 11:11 AM Monty Taylor wrote: > Hi everybody, > > After 10 years of OpenStack, the time has come for me to move on to the > next challenge. Actually, the time came a few weeks ago, but writing > farewells has always been something I’m particularly bad at. My last day at > Red Hat was actually July 31 and I’m now working in a non-OpenStack job. > > I’m at a loss for words as to what more to say. I’ve never done anything > for 10 years before, and I’ll be very surprised if I do anything else for > 10 years again. 
While I’m excited about the new things on my plate, I’ll > obviously miss everyone. > > As I am no longer being paid by an OpenStack employer, I will not be doing > any OpenStack things as part of my day job. I’m not sure how much spare > time I’ll have to be able to contribute. I’m going to hold off on resigning > core memberships pending a better understanding of that. I think it’s safe > to assume I won’t be able to continue on as SDK PTL though. > > I wish everyone all the best, and I hope life conspires to keep us all > connected. > > Thank you to everyone for an amazing 10 years. > > Monty > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Sep 9 16:19:30 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 9 Sep 2020 16:19:30 +0000 Subject: Moving on In-Reply-To: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> References: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> Message-ID: <20200909161930.m2xichrlfmhu4tmv@yuggoth.org> On 2020-09-09 11:10:52 -0500 (-0500), Monty Taylor wrote: > After 10 years of OpenStack, the time has come for me to move on > to the next challenge. [...] Don't think this gets you out of coming to visit once the current crisis blows over! Also, you won't be missed because I'll make sure to continue to pester you with absurd questions. After all, you know where all the bodies are buried. Best of luck on your new gig! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From artem.goncharov at gmail.com Wed Sep 9 16:24:19 2020 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Wed, 9 Sep 2020 18:24:19 +0200 Subject: Moving on In-Reply-To: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> References: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> Message-ID: <4248B7F8-BD6F-48F0-B510-5BBDD70D7EE6@gmail.com> Thanks a lot, Monty, for doing such a great job in and for the community. We will terribly miss chatting with you on a daily basis, but the life goes on, as you said. Your heritage is so huge, that taking it over is not something easy :-) Definitely all of us would be glad to keep you core as long as you want and as long as at all possible (even agains your wish). Best regards in your new life section and hope to still have chance to have a beer with you in person. Artem > On 9. Sep 2020, at 18:10, Monty Taylor wrote: > > Hi everybody, > > After 10 years of OpenStack, the time has come for me to move on to the next challenge. Actually, the time came a few weeks ago, but writing farewells has always been something I’m particularly bad at. My last day at Red Hat was actually July 31 and I’m now working in a non-OpenStack job. > > I’m at a loss for words as to what more to say. I’ve never done anything for 10 years before, and I’ll be very surprised if I do anything else for 10 years again. While I’m excited about the new things on my plate, I’ll obviously miss everyone. > > As I am no longer being paid by an OpenStack employer, I will not be doing any OpenStack things as part of my day job. I’m not sure how much spare time I’ll have to be able to contribute. I’m going to hold off on resigning core memberships pending a better understanding of that. 
I think it’s safe to assume I won’t be able to continue on as SDK PTL though. > > I wish everyone all the best, and I hope life conspires to keep us all connected. > > Thank you to everyone for an amazing 10 years. > > Monty > _______________________________________________ > Zuul-discuss mailing list > Zuul-discuss at lists.zuul-ci.org > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss From mnaser at vexxhost.com Wed Sep 9 16:26:50 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 9 Sep 2020 12:26:50 -0400 Subject: Moving on In-Reply-To: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> References: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> Message-ID: Thanks for everything Monty. It's been a pleasure working alongside you since first meeting in person at PyCon in Montreal, quite a long time ago. :) Good luck with everything, and hope to see you in some way in the future :) On Wed, Sep 9, 2020 at 12:11 PM Monty Taylor wrote: > > Hi everybody, > > After 10 years of OpenStack, the time has come for me to move on to the next challenge. Actually, the time came a few weeks ago, but writing farewells has always been something I’m particularly bad at. My last day at Red Hat was actually July 31 and I’m now working in a non-OpenStack job. > > I’m at a loss for words as to what more to say. I’ve never done anything for 10 years before, and I’ll be very surprised if I do anything else for 10 years again. While I’m excited about the new things on my plate, I’ll obviously miss everyone. > > As I am no longer being paid by an OpenStack employer, I will not be doing any OpenStack things as part of my day job. I’m not sure how much spare time I’ll have to be able to contribute. I’m going to hold off on resigning core memberships pending a better understanding of that. I think it’s safe to assume I won’t be able to continue on as SDK PTL though. > > I wish everyone all the best, and I hope life conspires to keep us all connected. > > Thank you to everyone for an amazing 10 years. > > Monty -- Mohammed Naser VEXXHOST, Inc. From juliaashleykreger at gmail.com Wed Sep 9 16:31:12 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 9 Sep 2020 09:31:12 -0700 Subject: Moving on In-Reply-To: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> References: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> Message-ID: Monty, It has been an absolute pleasure working with you, and I'm sure paths will cross again in the future. Even if it is for just a good cup of coffee or just a reunion of stackers. -Julia On Wed, Sep 9, 2020 at 9:12 AM Monty Taylor wrote: > > Hi everybody, > > After 10 years of OpenStack, the time has come for me to move on to the next challenge. Actually, the time came a few weeks ago, but writing farewells has always been something I’m particularly bad at. My last day at Red Hat was actually July 31 and I’m now working in a non-OpenStack job. > > I’m at a loss for words as to what more to say. I’ve never done anything for 10 years before, and I’ll be very surprised if I do anything else for 10 years again. While I’m excited about the new things on my plate, I’ll obviously miss everyone. > > As I am no longer being paid by an OpenStack employer, I will not be doing any OpenStack things as part of my day job. I’m not sure how much spare time I’ll have to be able to contribute. I’m going to hold off on resigning core memberships pending a better understanding of that. I think it’s safe to assume I won’t be able to continue on as SDK PTL though. 
> > I wish everyone all the best, and I hope life conspires to keep us all connected. > > Thank you to everyone for an amazing 10 years. > > Monty From gmann at ghanshyammann.com Wed Sep 9 16:46:13 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 09 Sep 2020 11:46:13 -0500 Subject: Moving on In-Reply-To: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> References: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> Message-ID: <17473c2b34c.fd061e2319472.3266951661606969893@ghanshyammann.com> Thank you Monty for everything and being a learning model in the community. You are one of the inspiring personalities for me in OSS world and motivate me to learn & do more. -gmann ---- On Wed, 09 Sep 2020 11:10:52 -0500 Monty Taylor wrote ---- > Hi everybody, > > After 10 years of OpenStack, the time has come for me to move on to the next challenge. Actually, the time came a few weeks ago, but writing farewells has always been something I’m particularly bad at. My last day at Red Hat was actually July 31 and I’m now working in a non-OpenStack job. > > I’m at a loss for words as to what more to say. I’ve never done anything for 10 years before, and I’ll be very surprised if I do anything else for 10 years again. While I’m excited about the new things on my plate, I’ll obviously miss everyone. > > As I am no longer being paid by an OpenStack employer, I will not be doing any OpenStack things as part of my day job. I’m not sure how much spare time I’ll have to be able to contribute. I’m going to hold off on resigning core memberships pending a better understanding of that. I think it’s safe to assume I won’t be able to continue on as SDK PTL though. > > I wish everyone all the best, and I hope life conspires to keep us all connected. > > Thank you to everyone for an amazing 10 years. > > Monty > From kevin at cloudnull.com Wed Sep 9 16:55:18 2020 From: kevin at cloudnull.com (Carter, Kevin) Date: Wed, 9 Sep 2020 11:55:18 -0500 Subject: Moving on In-Reply-To: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> References: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> Message-ID: It's been an absolute pleasure working with you. Thank you for everything you've done for the community, in and outside of OpenStack. Please keep in touch. I'd love to know more about your new endeavours, which I'm sure will be wildly successful. -- Kevin Carter IRC: Cloudnull On Wed, Sep 9, 2020 at 11:16 AM Monty Taylor wrote: > Hi everybody, > > After 10 years of OpenStack, the time has come for me to move on to the > next challenge. Actually, the time came a few weeks ago, but writing > farewells has always been something I’m particularly bad at. My last day at > Red Hat was actually July 31 and I’m now working in a non-OpenStack job. > > I’m at a loss for words as to what more to say. I’ve never done anything > for 10 years before, and I’ll be very surprised if I do anything else for > 10 years again. While I’m excited about the new things on my plate, I’ll > obviously miss everyone. > > As I am no longer being paid by an OpenStack employer, I will not be doing > any OpenStack things as part of my day job. I’m not sure how much spare > time I’ll have to be able to contribute. I’m going to hold off on resigning > core memberships pending a better understanding of that. I think it’s safe > to assume I won’t be able to continue on as SDK PTL though. > > I wish everyone all the best, and I hope life conspires to keep us all > connected. > > Thank you to everyone for an amazing 10 years. 
> > Monty > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Wed Sep 9 17:04:51 2020 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 9 Sep 2020 12:04:51 -0500 Subject: [oslo][release][requirement] FFE request for Oslo lib In-Reply-To: <1746e64d702.ee80b0bc1249.5426348472779199647@ghanshyammann.com> References: <1746e64d702.ee80b0bc1249.5426348472779199647@ghanshyammann.com> Message-ID: On 9/8/20 10:45 AM, Ghanshyam Mann wrote: > Hello Team, > > This is regarding FFE for Focal migration work. As planned, we have to move the Victoria testing to Focal and > base job switch is planned to be switched by today[1]. > > There are few oslo lib need work (especially tox job-based testing not user-facing changes) to pass on Focal > - https://review.opendev.org/#/q/topic:migrate-to-focal-oslo+(status:open+OR+status:merged) > > If we move the base tox jobs to Focal then these lib victoria gates (especially lower-constraint job) will be failing. > We can either do these as FFE or backport (as this is lib own CI fixes only) later once the victoria branch is open. > > Opinion? As I noted in the meeting, if we have to do this to keep the gates working then I'd rather do it as an FFE than have to backport all of the relevant patches. IMHO we should only decline this FFE if we are going to also change our statement of support for Python/Ubuntu in Victoria. > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017060.html > > -gmann > > From jungleboyj at gmail.com Wed Sep 9 17:32:39 2020 From: jungleboyj at gmail.com (Jay Bryant) Date: Wed, 9 Sep 2020 12:32:39 -0500 Subject: Moving on In-Reply-To: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> References: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> Message-ID: <2b723849-a83e-16e0-c920-2eeecfd382fa@gmail.com> Monty, It has been great to work with you through OpenStack.  Thank you for all you have done to support the community and keep it innovative. Best of luck at your next endeavor and I look forward to crossing paths with you in the future! Best wishes, Jay On 9/9/2020 11:10 AM, Monty Taylor wrote: > Hi everybody, > > After 10 years of OpenStack, the time has come for me to move on to the next challenge. Actually, the time came a few weeks ago, but writing farewells has always been something I’m particularly bad at. My last day at Red Hat was actually July 31 and I’m now working in a non-OpenStack job. > > I’m at a loss for words as to what more to say. I’ve never done anything for 10 years before, and I’ll be very surprised if I do anything else for 10 years again. While I’m excited about the new things on my plate, I’ll obviously miss everyone. > > As I am no longer being paid by an OpenStack employer, I will not be doing any OpenStack things as part of my day job. I’m not sure how much spare time I’ll have to be able to contribute. I’m going to hold off on resigning core memberships pending a better understanding of that. I think it’s safe to assume I won’t be able to continue on as SDK PTL though. > > I wish everyone all the best, and I hope life conspires to keep us all connected. > > Thank you to everyone for an amazing 10 years. 
> > Monty From yan.y.zhao at intel.com Wed Sep 9 02:13:09 2020 From: yan.y.zhao at intel.com (Yan Zhao) Date: Wed, 9 Sep 2020 10:13:09 +0800 Subject: device compatibility interface for live migration with assigned devices In-Reply-To: <20200908164130.2fe0d106.cohuck@redhat.com> References: <20200818113652.5d81a392.cohuck@redhat.com> <20200820003922.GE21172@joy-OptiPlex-7040> <20200819212234.223667b3@x1.home> <20200820031621.GA24997@joy-OptiPlex-7040> <20200825163925.1c19b0f0.cohuck@redhat.com> <20200826064117.GA22243@joy-OptiPlex-7040> <20200828154741.30cfc1a3.cohuck@redhat.com> <8f5345be73ebf4f8f7f51d6cdc9c2a0d8e0aa45e.camel@redhat.com> <20200831044344.GB13784@joy-OptiPlex-7040> <20200908164130.2fe0d106.cohuck@redhat.com> Message-ID: <20200909021308.GA1277@joy-OptiPlex-7040> > > still, I'd like to put it more explicitly to make ensure it's not missed: > > the reason we want to specify compatible_type as a trait and check > > whether target compatible_type is the superset of source > > compatible_type is for the consideration of backward compatibility. > > e.g. > > an old generation device may have a mdev type xxx-v4-yyy, while a newer > > generation device may be of mdev type xxx-v5-yyy. > > with the compatible_type traits, the old generation device is still > > able to be regarded as compatible to newer generation device even their > > mdev types are not equal. > > If you want to support migration from v4 to v5, can't the (presumably > newer) driver that supports v5 simply register the v4 type as well, so > that the mdev can be created as v4? (Just like QEMU versioned machine > types work.) yes, it should work in some conditions. but it may not be that good in some cases when v5 and v4 in the name string of mdev type identify hardware generation (e.g. v4 for gen8, and v5 for gen9) e.g. (1). when src mdev type is v4 and target mdev type is v5 as software does not support it initially, and v4 and v5 identify hardware differences. then after software upgrade, v5 is now compatible to v4, should the software now downgrade mdev type from v5 to v4? not sure if moving hardware generation info into a separate attribute from mdev type name is better. e.g. remove v4, v5 in mdev type, while use compatible_pci_ids to identify compatibility. (2) name string of mdev type is composed by "driver_name + type_name". in some devices, e.g. qat, different generations of devices are binding to drivers of different names, e.g. "qat-v4", "qat-v5". then though type_name is equal, mdev type is not equal. e.g. "qat-v4-type1", "qat-v5-type1". Thanks Yan From yan.y.zhao at intel.com Wed Sep 9 05:37:56 2020 From: yan.y.zhao at intel.com (Yan Zhao) Date: Wed, 9 Sep 2020 13:37:56 +0800 Subject: device compatibility interface for live migration with assigned devices In-Reply-To: <20200831044344.GB13784@joy-OptiPlex-7040> References: <20200818091628.GC20215@redhat.com> <20200818113652.5d81a392.cohuck@redhat.com> <20200820003922.GE21172@joy-OptiPlex-7040> <20200819212234.223667b3@x1.home> <20200820031621.GA24997@joy-OptiPlex-7040> <20200825163925.1c19b0f0.cohuck@redhat.com> <20200826064117.GA22243@joy-OptiPlex-7040> <20200828154741.30cfc1a3.cohuck@redhat.com> <8f5345be73ebf4f8f7f51d6cdc9c2a0d8e0aa45e.camel@redhat.com> <20200831044344.GB13784@joy-OptiPlex-7040> Message-ID: <20200909053755.GA721@joy-OptiPlex-7040> hi All, Per our previous discussion, there are two main concerns to the previous proposal: (1) it's currently hard for openstack to match mdev types. (2) complicated. 
so, we further propose below changes:

(1) requiring two compatible mdevs to have the same mdev type for now.
    (though the kernel still exposes compatible_type attributes for future use)
(2) requiring 1:1 match for other attributes under the sysfs type node for now
    (those attributes are specified via compatible_<attribute> but with only 1 value in it.)
(3) do not match attributes under the device instance node. rather, they are regarded as
    part of the resource claiming process, so src and dest values are ensured to be 1:1.
    A dynamic_resources attribute under the sysfs <type-id> node is added to list the
    attributes under the device instance that mgt tools need to ensure are 1:1 between
    src and dest.
    the "aggregator" attribute under the device instance node is one such attribute that
    needs to be listed.
    Those listed attributes can actually be treated as device state set by the vendor
    driver during live migration, but we still want to ask for them to be set by mgt
    tools before live migration starts, in order to reduce the chance of live migration
    failure.

do you like those changes?

after the changes, the sysfs interface would look like below:

|- [parent physical device]
|--- Vendor-specific-attributes [optional]
|--- [mdev_supported_types]
|    |--- [<type-id>]
|    |    |--- create
|    |    |--- name
|    |    |--- available_instances
|    |    |--- device_api
|    |    |--- software_version
|    |    |--- compatible_type
|    |    |--- compatible_<device_api specific attribute>
|    |    |--- compatible_<mdev type specific attribute>
|    |    |--- dynamic_resources
|    |    |--- description
|    |    |--- [devices]

- device_api: exact match between src and dest is required. its value can be one of
  "vfio-pci", "vfio-platform", "vfio-amba", "vfio-ccw", "vfio-ap"

- software_version: version of the vendor driver, in major.minor.bugfix scheme.
  dest major should be equal to src major, and dest minor should be no less than src
  minor. once migration-stream-related code changes, vendor drivers need to bump the
  version.

- compatible_type: not used by mgt tools currently. vendor drivers can provide this
  attribute, but need to know that mgt apps would ignore it. when mgt tools support this
  attribute in the future, it would allow migration across different mdev types, so that
  devices of an older generation may be able to migrate to newer generations.

- compatible_<device_api specific attribute>: for device api specific attributes, e.g.
  compatible_subchannel_type. dest values should be a superset of src values.
  vendor drivers can specify only one value in this attribute, in order to do an exact
  match between src and dest. It's ok for mgt tools to only read one value in the
  attribute so that src:dest values are 1:1.

- compatible_<mdev type specific attribute>: for mdev type specific attributes, e.g.
  compatible_pci_ids, compatible_chpid_type. dest values should be a superset of src
  values. vendor drivers can specify only one value in the attribute in order to do an
  exact match between src and dest. It's ok for mgt tools to only read one value in the
  attribute so that src:dest values are 1:1.

- dynamic_resources: though defined statically under <type-id>, this attribute lists the
  attributes under the device instance that need to be set as part of claiming dest
  resources. e.g.
  $ cat dynamic_resources
  aggregator, fps,...
  then after the dest device is created, the values of its device attributes need to be
  set to those of the src device attributes. Failure in syncing src device values to
  dest device values is treated the same as failing to claim dest resources.
  attributes under the device instance that are not listed in this attribute would not
  be part of resource checking in mgt tools.
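To make the matching rules above concrete, here is a minimal sketch (an illustration only, not part of the proposal or of any existing mgt tool) of how a management application could compare a src and a dest <type-id> node once the attribute values have been read from sysfs; the dict-of-strings input and the whitespace-separated multi-value format are assumptions of the sketch:

    # Illustrative only: apply the matching rules described above.
    def parse_version(value):
        # software_version is "major.minor.bugfix"; bugfix is not used for matching.
        major, minor, _bugfix = (int(part) for part in value.split('.'))
        return major, minor

    def is_compatible(src, dest):
        """src/dest: dicts mapping attribute name -> string read from sysfs."""
        # device_api: exact match required.
        if src['device_api'] != dest['device_api']:
            return False
        # software_version: dest major == src major, dest minor >= src minor.
        s_major, s_minor = parse_version(src['software_version'])
        d_major, d_minor = parse_version(dest['software_version'])
        if d_major != s_major or d_minor < s_minor:
            return False
        # compatible_<...> attributes: dest values must be a superset of src values
        # (with single-valued attributes this degenerates to an exact match).
        for name, value in src.items():
            if not name.startswith('compatible_') or name == 'compatible_type':
                continue  # compatible_type is ignored by mgt tools for now
            src_values = set(value.split())
            dest_values = set(dest.get(name, '').split())
            if not src_values <= dest_values:
                return False
        return True

With requirement (1) above (same mdev type on both sides) and single-valued compatible_* attributes, this reduces to an exact match plus the major/minor version rule.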
Thanks Yan From jungleboyj at gmail.com Wed Sep 9 18:23:14 2020 From: jungleboyj at gmail.com (Jay Bryant) Date: Wed, 9 Sep 2020 13:23:14 -0500 Subject: [tc] monthly meeting In-Reply-To: References: Message-ID: All, Apologies for not making the meeting and for not getting caught up on e-mail until now.    Was on vacation last week. Jay On 9/2/2020 2:06 PM, Mohammed Naser wrote: > Hi everyone, > > Here’s the agenda for our monthly TC meeting. It will happen tomorrow > (Thursday the 3rd) at 1400 UTC in #openstack-tc and I will be your > chair. > > If you can’t attend, please put your name in the “Apologies for > Absence” section. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > ## ACTIVE INITIATIVES > * Follow up on past action items > * OpenStack User-facing APIs and CLIs (belmoreira) > * W cycle goal selection start > * Completion of retirement cleanup (gmann): > https://etherpad.opendev.org/p/tc-retirement-cleanup > * Audit and clean-up tags (gmann) > + Remove tc:approved-release tag https://review.opendev.org/#/c/749363 > > Thank you, > Mohammed > From kennelson11 at gmail.com Wed Sep 9 18:28:18 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 9 Sep 2020 11:28:18 -0700 Subject: vPTG October 2020 Team Signup Reminder In-Reply-To: <5F13B10F-C0C5-4761-8AD2-9B3A55F67441@openstack.org> References: <5F13B10F-C0C5-4761-8AD2-9B3A55F67441@openstack.org> Message-ID: Hello Everyone! This is your final reminder! You have until *September 11th at 7:00 UTC to sign up your team for the PTG! You must complete **BOTH** the survey[1] AND reserve time in the ethercalc[2] to sign up your team. * And don't forget to register! [3] - TheKendalls (diablo_rojo & wendallkaters) [1] Team Survey: https://openstackfoundation.formstack.com/forms/oct2020_vptg_survey [2] Ethercalc Signup: https://ethercalc.openstack.org/7xp2pcbh1ncb [3] PTG Registration: https://october2020ptg.eventbrite.com On Mon, Aug 31, 2020 at 10:39 AM Kendall Waters wrote: > Hello Everyone! > > Wanted to give you all a reminder that the deadline for signing up teams > for the PTG is approaching! > > The virtual PTG will be held from Monday October 26th to Friday October > 30th, 2020. > > *To signup your team, you must complete **BOTH** the survey[1] AND > reserve time in the ethercalc[2] by September 11th at 7:00 UTC.* > > We ask that the PTL/SIG Chair/Team lead sign up for time to have their > discussions in with 4 rules/guidelines. > > 1. Cross project discussions (like SIGs or support project teams) should > be scheduled towards the start of the week so that any discussions that > might shape those of other teams happen first. > 2. No team should sign up for more than 4 hours per UTC day to help keep > participants actively engaged. > 3. No team should sign up for more than 16 hours across all time slots to > avoid burning out our contributors and to enable participation in multiple > teams discussions. > > Once your team is signed up, please register[3]! And remind your team to > register! Registration is free, but since it will be how we contact you > with passwords, event details, etc. it is still important! > > If you have any questions, please let us know. 
> > -The Kendalls (diablo_rojo & wendallkaters) > > [1] Team Survey: > https://openstackfoundation.formstack.com/forms/oct2020_vptg_survey > [2] Ethercalc Signup: https://ethercalc.openstack.org/7xp2pcbh1ncb > [3] PTG Registration: https://october2020ptg.eventbrite.com > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Sep 9 19:05:17 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 09 Sep 2020 14:05:17 -0500 Subject: [all][tc][goals] Migrate CI/CD jobs to new Ubuntu LTS Focal: Week R-4 Update In-Reply-To: <1746feef403.c41c848411058.2421030943223377212@ghanshyammann.com> References: <17468f8f785.1123a07e6307801.3844845549431858647@ghanshyammann.com> <1746feef403.c41c848411058.2421030943223377212@ghanshyammann.com> Message-ID: <1747442027f.bb3f3d1823475.212800600164097649@ghanshyammann.com> ---- On Tue, 08 Sep 2020 17:56:05 -0500 Ghanshyam Mann wrote ---- > Updates: > After working more on failing one today and listing the blocking one, I think we are good to switch tox based testing today > and discuss the integration testing switch tomorrow in TC office hours. > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > I have checked it again and fixed many repos that are up for review and merge. Most python clients are already fixed > or their fixes are up for merge so they can make it before the feature freeze on 10th. If any repo is broken then it will be pretty quick > to fix by lower constraint bump (see the example under https://review.opendev.org/#/q/topic:migrate-to-focal) > > Even if any of the fixes miss the victoria release then those can be backported easily. I am opening the tox base jobs migration to merge: > - All patches in this series https://review.opendev.org/#/c/738328/ All these tox base jobs are merged now and running on Focal. If any of your repo is failing, please fix on priority or ping me on IRC if failure not clear. You can find most of the fixes for possible failure in this topic: - https://review.opendev.org/#/q/topic:migrate-to-focal+(status:open+OR+status:merged) -gmann > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > We have three blocking open bugs here so I would like to discuss it in tomorrow's TC office hour also about how to proceed on this. > > 1. Nova: https://bugs.launchpad.net/nova/+bug/1882521 (https://bugs.launchpad.net/qemu/+bug/1894804) > 2. Barbican: https://storyboard.openstack.org/#!/story/2007732 > 3. Ceilometer: https://storyboard.openstack.org/#!/story/2008121 > > > -gmann > > > ---- On Mon, 07 Sep 2020 09:29:40 -0500 Ghanshyam Mann wrote ---- > > Hello Everyone, > > > > Please find the week R-4 updates on 'Ubuntu Focal migration' community goal. Its time to force the base jobs migration which can > > break the projects gate if not yet taken care of. Read below for the plan. > > > > Tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > Progress: > > ======= > > * We are close to V-3 release and this is time we have to complete this migration otherwise doing it in RC period can add > > unnecessary and last min delay. I am going to plan this migration in two-part. This will surely break some projects gate > > which is not yet finished the migration but we have to do at some time. Please let me know if any objection to the below > > plan. 
> > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > > > ** I am going to open tox base jobs migration (doc, unit, functional, lower-constraints etc) to merge by tomorrow. which is this > > series (all base patches of this): https://review.opendev.org/#/c/738328/ . > > > > **There are few repos still failing on requirements lower-constraints job specifically which I tried my best to fix as many as possible. > > Many are ready to merge also. Please merge or work on your projects repo testing before that or fix on priority if failing. > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > > > * We have few open bugs for this which are not yet resolved, we will see how it goes but the current plan is to migrate by 10th Sept. > > > > ** Bug#1882521 > > ** DB migration issues, > > *** alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 > > > > > > Testing Till now: > > ============ > > * ~200 repos gate have been tested or fixed till now. > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+(status:abandoned+OR+status:merged) > > > > * ~100 repos are under test and failing. Debugging and fixing are in progress (If you would like to help, please check your > > project repos if I am late to fix them): > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open > > > > * ~30repos fixes ready to merge: > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open+label%3AVerified%3E%3D1%2Czuul+NOT+label%3AWorkflow%3C%3D-1 > > > > > > Bugs Report: > > ========== > > > > 1. Bug#1882521. (IN-PROGRESS) > > There is open bug for nova/cinder where three tempest tests are failing for > > volume detach operation. There is no clear root cause found yet > > -https://bugs.launchpad.net/cinder/+bug/1882521 > > We have skipped the tests in tempest base patch to proceed with the other > > projects testing but this is blocking things for the migration. > > > > 2. DB migration issues (IN-PROGRESS) > > * alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 > > > > 3. We encountered the nodeset name conflict with x/tobiko. (FIXED) > > nodeset conflict is resolved now and devstack provides all focal nodes now. > > > > 4. Bug#1886296. (IN-PROGRESS) > > pyflakes till 2.1.0 is not compatible with python 3.8 which is the default python version > > on ubuntu focal[1]. With pep8 job running on focal faces the issue and fail. We need to bump > > the pyflakes to 2.1.1 as min version to run pep8 jobs on py3.8. > > As of now, many projects are using old hacking version so I am explicitly adding pyflakes>=2.1.1 > > on the project side[2] but for the long term easy maintenance, I am doing it in 'hacking' requirements.txt[3] > > nd will release a new hacking version. After that project can move to new hacking and do not need > > to maintain pyflakes version compatibility. > > > > 5. Bug#1886298. (IN-PROGRESS) > > 'Markupsafe' 1.0 is not compatible with the latest version of setuptools[4], > > We need to bump the lower-constraint for Markupsafe to 1.1.1 to make it work. > > There are a few more issues[5] with lower-constraint jobs which I am debugging. 
> > > > > > What work to be done on the project side: > > ================================ > > This goal is more of testing the jobs on focal and fixing bugs if any otherwise > > migrate jobs by switching the nodeset to focal node sets defined in devstack. > > > > 1. Start a patch in your repo by making depends-on on either of below: > > devstack base patch if you are using only devstack base jobs not tempest: > > > > Depends-on: https://review.opendev.org/#/c/731207/ > > OR > > tempest base patch if you are using the tempest base job (like devstack-tempest): > > Depends-on: https://review.opendev.org/#/c/734700/ > > > > Both have depends-on on the series where I am moving unit/functional/doc/cover/nodejs tox jobs to focal. So > > you can test the complete gate jobs(unit/functional/doc/integration) together. > > This and its base patches - https://review.opendev.org/#/c/738328/ > > > > Example: https://review.opendev.org/#/c/738126/ > > > > 2. If none of your project jobs override the nodeset then above patch will be > > testing patch(do not merge) otherwise change the nodeset to focal. > > Example: https://review.opendev.org/#/c/737370/ > > > > 3. If the jobs are defined in branchless repo and override the nodeset then you need to override the branches > > variant to adjust the nodeset so that those jobs run on Focal on victoria onwards only. If no nodeset > > is overridden then devstack being branched and stable base job using bionic/xenial will take care of > > this. > > Example: https://review.opendev.org/#/c/744056/2 > > > > 4. If no updates need you can abandon the testing patch (https://review.opendev.org/#/c/744341/). If it need > > updates then modify the same patch with proper commit msg, once it pass the gate then remove the Depends-On > > so that you can merge your patch before base jobs are switched to focal. This way we make sure no gate downtime in > > this migration. > > Example: https://review.opendev.org/#/c/744056/1..2//COMMIT_MSG > > > > Once we finish the testing on projects side and no failure then we will merge the devstack and tempest > > base patches. > > > > > > Important things to note: > > =================== > > * Do not forgot to add the story and task link to your patch so that we can track it smoothly. > > * Use gerrit topic 'migrate-to-focal' > > * Do not backport any of the patches. > > > > > > References: > > ========= > > Goal doc: https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal.html > > Storyboard tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > [1] https://github.com/PyCQA/pyflakes/issues/367 > > [2] https://review.opendev.org/#/c/739315/ > > [3] https://review.opendev.org/#/c/739334/ > > [4] https://github.com/pallets/markupsafe/issues/116 > > [5] https://zuul.opendev.org/t/openstack/build/7ecd9cf100194bc99b3b70fa1e6de032 > > > > -gmann > > > > > > From kennelson11 at gmail.com Wed Sep 9 19:21:52 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 9 Sep 2020 12:21:52 -0700 Subject: [tc] [all] Topics for Cross Community Discussion with Kubernetes ... In-Reply-To: References: Message-ID: Hello :) Wanted to revive this thread and see if we maybe wanted to add it to our PTG topics? We could maybe see if any of them are available/want to join our PTG sessions to chat about things. 
-Kendall (diablo_rojo) On Fri, Jul 10, 2020 at 5:33 AM Jay Bryant wrote: > All, > > Recently, the OpenStack TC has reached out to the Kubernetes Steering > Committee for input as we have proposed adding a > starter-kit:kubernetes-in-virt tag for projects in OpenStack. This > request was received positively and as a result the TC has started > brainstorming other topics that we could approach with the k8s community > in this [1] etherpad. > > If you have topics that may be appropriate for this discussion please > see the etherpad and add your ideas. > > Thanks! > > Jay > > IRC: jungleboyj > > [1] https://etherpad.opendev.org/p/kubernetes-cross-community-topics > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Sep 9 19:24:55 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 9 Sep 2020 19:24:55 +0000 Subject: [infra][tact-sig] October 2020 vPTG space for TaCT SIG work Message-ID: <20200909183945.sw2mvkhoqlqb4nql@yuggoth.org> Just one last quick check, does anyone think we need dedicated TaCT SIG space in the vPTG schedule? I can snag some for us if so. There seemed to be some consensus on IRC that the OpenStack Testing and Collaboration Tools SIG can just glom onto OpenDev and OpenStack QA team vPTG sessions where relevant, which seems entirely prudent to me, and means we don't need our own separate times to talk about things which are neither relevant to OpenDev nor QA (which I estimate to be roughly nil). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mthode at mthode.org Wed Sep 9 19:25:43 2020 From: mthode at mthode.org (Matthew Thode) Date: Wed, 9 Sep 2020 14:25:43 -0500 Subject: [oslo][release][requirement] FFE request for Oslo lib In-Reply-To: References: <1746e64d702.ee80b0bc1249.5426348472779199647@ghanshyammann.com> Message-ID: <20200909192543.b2d2ksruoqtbgcfy@mthode.org> On 20-09-09 12:04:51, Ben Nemec wrote: > > On 9/8/20 10:45 AM, Ghanshyam Mann wrote: > > Hello Team, > > > > This is regarding FFE for Focal migration work. As planned, we have to move the Victoria testing to Focal and > > base job switch is planned to be switched by today[1]. > > > > There are few oslo lib need work (especially tox job-based testing not user-facing changes) to pass on Focal > > - https://review.opendev.org/#/q/topic:migrate-to-focal-oslo+(status:open+OR+status:merged) > > > > If we move the base tox jobs to Focal then these lib victoria gates (especially lower-constraint job) will be failing. > > We can either do these as FFE or backport (as this is lib own CI fixes only) later once the victoria branch is open. > > Opinion? > > As I noted in the meeting, if we have to do this to keep the gates working > then I'd rather do it as an FFE than have to backport all of the relevant > patches. IMHO we should only decline this FFE if we are going to also change > our statement of support for Python/Ubuntu in Victoria. > > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017060.html > > > > -gmann > > > > > https://review.opendev.org/#/c/750089 seems like the only functional change. It has my ACK with my requirements hat on. -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jungleboyj at gmail.com Wed Sep 9 19:26:21 2020 From: jungleboyj at gmail.com (Jay Bryant) Date: Wed, 9 Sep 2020 14:26:21 -0500 Subject: [tc] [all] Topics for Cross Community Discussion with Kubernetes ... In-Reply-To: References: Message-ID: <8ed717c5-aa88-fe07-ae58-ca0fdf3c35d8@gmail.com> On 9/9/2020 2:21 PM, Kendall Nelson wrote: > Hello :) > > Wanted to revive this thread and see if we maybe wanted to add it to > our PTG topics? We could maybe see if any of them are available/want > to join our PTG sessions to chat about things. > > -Kendall (diablo_rojo) > Kendall, Thank you for reviving this.  I think this would be good to add to the agenda for our PTG topics and to invite k8s people to join if possible. Jay > On Fri, Jul 10, 2020 at 5:33 AM Jay Bryant > wrote: > > All, > > Recently, the OpenStack TC has reached out to the Kubernetes Steering > Committee for input as we have proposed adding a > starter-kit:kubernetes-in-virt tag for projects in OpenStack. This > request was received positively and as a result the TC has started > brainstorming other topics that we could approach with the k8s > community > in this [1] etherpad. > > If you have topics that may be appropriate for this discussion please > see the etherpad and add your ideas. > > Thanks! > > Jay > > IRC: jungleboyj > > [1] https://etherpad.opendev.org/p/kubernetes-cross-community-topics > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tpb at dyncloud.net Wed Sep 9 19:33:09 2020 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 9 Sep 2020 15:33:09 -0400 Subject: [tc] [all] Topics for Cross Community Discussion with Kubernetes ... In-Reply-To: References: Message-ID: <20200909193309.cpjuda2y2jqsht5a@barron.net> On 09/09/20 12:21 -0700, Kendall Nelson wrote: >Hello :) > >Wanted to revive this thread and see if we maybe wanted to add it to our >PTG topics? We could maybe see if any of them are available/want to join >our PTG sessions to chat about things. > >-Kendall (diablo_rojo) +1, and I'd love to join the sessions. > >On Fri, Jul 10, 2020 at 5:33 AM Jay Bryant wrote: > >> All, >> >> Recently, the OpenStack TC has reached out to the Kubernetes Steering >> Committee for input as we have proposed adding a >> starter-kit:kubernetes-in-virt tag for projects in OpenStack. This >> request was received positively and as a result the TC has started >> brainstorming other topics that we could approach with the k8s community >> in this [1] etherpad. >> >> If you have topics that may be appropriate for this discussion please >> see the etherpad and add your ideas. >> >> Thanks! >> >> Jay >> >> IRC: jungleboyj >> >> [1] https://etherpad.opendev.org/p/kubernetes-cross-community-topics >> >> >> From alexis.deberg at ubisoft.com Wed Sep 9 18:30:55 2020 From: alexis.deberg at ubisoft.com (Alexis Deberg) Date: Wed, 9 Sep 2020 18:30:55 +0000 Subject: [neutron] Flow drop on agent restart with openvswitch firewall driver In-Reply-To: <20200909075042.qyxbnq7li2zm5oo4@skaplons-mac> References: , <20200909075042.qyxbnq7li2zm5oo4@skaplons-mac> Message-ID: Sure, opened https://bugs.launchpad.net/neutron/+bug/1895038 with all the details I got at hand. As I said in the bug report, I'll try to reproduce with a up to date devstack asap. Thanks -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Wed Sep 9 19:54:05 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 09 Sep 2020 14:54:05 -0500 Subject: [oslo][release][requirement] FFE request for Oslo lib In-Reply-To: <20200909192543.b2d2ksruoqtbgcfy@mthode.org> References: <1746e64d702.ee80b0bc1249.5426348472779199647@ghanshyammann.com> <20200909192543.b2d2ksruoqtbgcfy@mthode.org> Message-ID: <174746eb120.1229d07d224552.356509349559116522@ghanshyammann.com> ---- On Wed, 09 Sep 2020 14:25:43 -0500 Matthew Thode wrote ---- > On 20-09-09 12:04:51, Ben Nemec wrote: > > > > On 9/8/20 10:45 AM, Ghanshyam Mann wrote: > > > Hello Team, > > > > > > This is regarding FFE for Focal migration work. As planned, we have to move the Victoria testing to Focal and > > > base job switch is planned to be switched by today[1]. > > > > > > There are few oslo lib need work (especially tox job-based testing not user-facing changes) to pass on Focal > > > - https://review.opendev.org/#/q/topic:migrate-to-focal-oslo+(status:open+OR+status:merged) > > > > > > If we move the base tox jobs to Focal then these lib victoria gates (especially lower-constraint job) will be failing. > > > We can either do these as FFE or backport (as this is lib own CI fixes only) later once the victoria branch is open. > > > Opinion? > > > > As I noted in the meeting, if we have to do this to keep the gates working > > then I'd rather do it as an FFE than have to backport all of the relevant > > patches. IMHO we should only decline this FFE if we are going to also change > > our statement of support for Python/Ubuntu in Victoria. > > > > > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017060.html > > > > > > -gmann > > > > > > > > > > https://review.opendev.org/#/c/750089 seems like the only functional > change. It has my ACK with my requirements hat on. yeah, and this one changing one test with #noqa - https://review.opendev.org/#/c/744323/5 The rest all are l-c bump. Also all the tox base jobs are migrated to Focal now. - http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017136.html > > -- > Matthew Thode > From rosmaita.fossdev at gmail.com Wed Sep 9 20:18:36 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 9 Sep 2020 16:18:36 -0400 Subject: [cinder][requirements] FFE request for os-brick Message-ID: <15d7208b-f006-0e06-3502-55309de3f8f4@gmail.com> Hello Requirements Team, The victoria release of os-brick 4.0.0 that we did last week contained a bugfix that requires a heavyweight binary dependency. Unfortunately, we did not realize this until yesterday. The cinder team reverted that patch [0] and replaced it with a lightweight fix [1] that does not require any new dependencies and has proposed the release of os-brick 4.0.1 [2]. We are asking for a requirements freeze exception so that os-brick 4.0.1 can be included in the victoria openstack release. [0] https://review.opendev.org/750649 [1] https://review.opendev.org/750655 [2] https://review.opendev.org/#/c/750808/ thank you, brian From skaplons at redhat.com Wed Sep 9 20:43:16 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 9 Sep 2020 22:43:16 +0200 Subject: Moving on In-Reply-To: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> References: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> Message-ID: <20200909204316.tfuvkvc6rcih6akq@skaplons-mac> Thank You Monty for all what You have done for OpenStack and SDK especially. It was big pleasure to work with You. 
All the best in Your new role! On Wed, Sep 09, 2020 at 11:10:52AM -0500, Monty Taylor wrote: > Hi everybody, > > After 10 years of OpenStack, the time has come for me to move on to the next challenge. Actually, the time came a few weeks ago, but writing farewells has always been something I’m particularly bad at. My last day at Red Hat was actually July 31 and I’m now working in a non-OpenStack job. > > I’m at a loss for words as to what more to say. I’ve never done anything for 10 years before, and I’ll be very surprised if I do anything else for 10 years again. While I’m excited about the new things on my plate, I’ll obviously miss everyone. > > As I am no longer being paid by an OpenStack employer, I will not be doing any OpenStack things as part of my day job. I’m not sure how much spare time I’ll have to be able to contribute. I’m going to hold off on resigning core memberships pending a better understanding of that. I think it’s safe to assume I won’t be able to continue on as SDK PTL though. > > I wish everyone all the best, and I hope life conspires to keep us all connected. > > Thank you to everyone for an amazing 10 years. > > Monty > -- Slawek Kaplonski Principal software engineer Red Hat From mthode at mthode.org Wed Sep 9 21:09:59 2020 From: mthode at mthode.org (Matthew Thode) Date: Wed, 9 Sep 2020 16:09:59 -0500 Subject: [cinder][requirements] FFE request for os-brick In-Reply-To: <15d7208b-f006-0e06-3502-55309de3f8f4@gmail.com> References: <15d7208b-f006-0e06-3502-55309de3f8f4@gmail.com> Message-ID: <20200909210959.ijzgfw672rxwi7lu@mthode.org> On 20-09-09 16:18:36, Brian Rosmaita wrote: > Hello Requirements Team, > > The victoria release of os-brick 4.0.0 that we did last week contained a > bugfix that requires a heavyweight binary dependency. Unfortunately, we did > not realize this until yesterday. The cinder team reverted that patch [0] > and replaced it with a lightweight fix [1] that does not require any new > dependencies and has proposed the release of os-brick 4.0.1 [2]. We are > asking for a requirements freeze exception so that os-brick 4.0.1 can be > included in the victoria openstack release. > > [0] https://review.opendev.org/750649 > [1] https://review.opendev.org/750655 > [2] https://review.opendev.org/#/c/750808/ > > > thank you, > brian > This looks fine to me. -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From kennelson11 at gmail.com Wed Sep 9 21:19:16 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 9 Sep 2020 14:19:16 -0700 Subject: [release][heat][karbor][patrole][requirements][swift][tempest][vitrage] Cycle With Intermediary Unreleased Deliverables Message-ID: Hello! Quick reminder that we'll need a release very soon for a number of deliverables following a cycle-with-intermediary release model but which have not done *any* release yet in the Victoria cycle: heat-agents karbor-dashboard karbor patrole requirements swift tempest vitrage-dashboard vitrage Those should be released ASAP, and in all cases before $rc1-deadline, so that we have a release to include in the final $series release. - Kendall Nelson (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kennelson11 at gmail.com Wed Sep 9 21:36:35 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 9 Sep 2020 14:36:35 -0700 Subject: [all][PTL][release] Victoria Cycle Highlights In-Reply-To: References: Message-ID: Hello Everyone! Wanted to give you another reminder! Looking forward to see your highlights by the end of the week! -Kendall (diablo_rojo) On Tue, Sep 1, 2020 at 2:34 PM Kendall Nelson wrote: > Hello Everyone! > > It's time to start thinking about calling out 'cycle-highlights' in your > deliverables! > > As PTLs, you probably get many pings and emails from various parties > (marketing, management, journalists, etc) asking for highlights of what > is new and what significant changes are coming in the new release. By > putting them all in the same place it makes them easy to reference because > they get compiled into a pretty website like this from the last few > releases: Rocky[1], Stein[2], Train[3], Ussuri[4]. > > As usual, we don't need a fully fledged marketing message, just a few highlights > (3-4 ideally), from each project team. Looking through your release notes > is a good place to start. > > *The deadline for cycle highlights is the end of the R-5 week [5] on Sept > 11th.* > > How To Reminder: > ------------------------- > > Simply add them to the deliverables/train/$PROJECT.yaml in the > openstack/releases repo like this: > > cycle-highlights: > - Introduced new service to use unused host to mine bitcoin. > > The formatting options for this tag are the same as what you are probably > used to with Reno release notes. > > Also, you can check on the formatting of the output by either running > locally: > > tox -e docs > > And then checking the resulting doc/build/html/train/highlights.html file > or the output of the build-openstack-sphinx-docs job under html/train/ > highlights.html. > > Can't wait to see what you've accomplished! > > -Kendall Nelson (diablo_rojo) > > [1] https://releases.openstack.org/rocky/highlights.html > [2] https://releases.openstack.org/stein/highlights.html > [3] https://releases.openstack.org/train/highlights.html > [4] https://releases.openstack.org/ussuri/highlights.html > [5] htt > > https://releases.openstack.org/victoria/schedule.html > -------------- next part -------------- An HTML attachment was scrubbed... URL: From feilong at catalyst.net.nz Wed Sep 9 22:07:42 2020 From: feilong at catalyst.net.nz (feilong) Date: Thu, 10 Sep 2020 10:07:42 +1200 Subject: Moving on In-Reply-To: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> References: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> Message-ID: <1218aab0-3c6f-3ad3-81b0-114752aba5d3@catalyst.net.nz> Thank you for all you have done for OpenStack and the community, Monty. On 10/09/20 4:10 am, Monty Taylor wrote: > Hi everybody, > > After 10 years of OpenStack, the time has come for me to move on to the next challenge. Actually, the time came a few weeks ago, but writing farewells has always been something I’m particularly bad at. My last day at Red Hat was actually July 31 and I’m now working in a non-OpenStack job. > > I’m at a loss for words as to what more to say. I’ve never done anything for 10 years before, and I’ll be very surprised if I do anything else for 10 years again. While I’m excited about the new things on my plate, I’ll obviously miss everyone. > > As I am no longer being paid by an OpenStack employer, I will not be doing any OpenStack things as part of my day job. I’m not sure how much spare time I’ll have to be able to contribute. 
I’m going to hold off on resigning core memberships pending a better understanding of that. I think it’s safe to assume I won’t be able to continue on as SDK PTL though. > > I wish everyone all the best, and I hope life conspires to keep us all connected. > > Thank you to everyone for an amazing 10 years. > > Monty -- Cheers & Best regards, Feilong Wang (王飞龙) ------------------------------------------------------ Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington ------------------------------------------------------ From Arkady.Kanevsky at dell.com Wed Sep 9 22:24:27 2020 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Wed, 9 Sep 2020 22:24:27 +0000 Subject: Moving on In-Reply-To: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> References: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> Message-ID: Monty, Thank you very very much for all you time and dedication to OpenStack and Zuul. It was a pleasure working with you. And best of luck on your new endeavor. We will miss you. Arkady -----Original Message----- From: Monty Taylor Sent: Wednesday, September 9, 2020 11:11 AM To: openstack-discuss; service-discuss at lists.opendev.org; Zuul-discuss at lists.zuul-ci.org Subject: Moving on [EXTERNAL EMAIL] Hi everybody, After 10 years of OpenStack, the time has come for me to move on to the next challenge. Actually, the time came a few weeks ago, but writing farewells has always been something I’m particularly bad at. My last day at Red Hat was actually July 31 and I’m now working in a non-OpenStack job. I’m at a loss for words as to what more to say. I’ve never done anything for 10 years before, and I’ll be very surprised if I do anything else for 10 years again. While I’m excited about the new things on my plate, I’ll obviously miss everyone. As I am no longer being paid by an OpenStack employer, I will not be doing any OpenStack things as part of my day job. I’m not sure how much spare time I’ll have to be able to contribute. I’m going to hold off on resigning core memberships pending a better understanding of that. I think it’s safe to assume I won’t be able to continue on as SDK PTL though. I wish everyone all the best, and I hope life conspires to keep us all connected. Thank you to everyone for an amazing 10 years. Monty From jimmy at openstack.org Wed Sep 9 22:57:00 2020 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 9 Sep 2020 17:57:00 -0500 Subject: REMINDER: 2020 Virtual Summit: Forum Submissions Now Accepted Message-ID: Hello Everyone! We are now accepting Forum [1] submissions for the 2020 Virtual Open Infrastructure Summit [2]. Please submit your ideas through the Summit CFP tool [3] through September 14th. Don't forget to put your brainstorming etherpad up on the Virtual Forum page [4]. This is not a classic conference track with speakers and presentations. OSF community members (participants in development teams, operators, working groups, SIGs, and other interested individuals) discuss the topics they want to cover and get alignment on and we welcome your participation. The Forum is your opportunity to help shape the development of future project releases. More information about the Forum [1]. The timeline for submissions is as follows: Aug 31st | Formal topic submission tool opens: https://cfp.openstack.org. Sep 14th | Deadline for proposing Forum topics. Scheduling committee meeting to make draft agenda. 
Sep 21st | Draft Forum schedule published. Crowd sourced session conflict detection. Forum promotion begins. Sept 28th | Forum schedule final Oct 19th | Forum begins! If you have questions or concerns, please reach out to speakersupport at openstack.org (mailto:speakersupport at openstack.org). Cheers, Jimmy [1] https://wiki.openstack.org/wiki/Forum [2] https://www.openstack.org/summit/2020/ [3] https://cfp.openstack.org [4]https://wiki.openstack.org/wiki/Forum/Virtual2020 -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Wed Sep 9 23:15:14 2020 From: zigo at debian.org (Thomas Goirand) Date: Thu, 10 Sep 2020 01:15:14 +0200 Subject: Moving on In-Reply-To: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> References: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> Message-ID: <4965dddc-9480-ecff-1004-6aeb2fc39619@debian.org> On 9/9/20 6:10 PM, Monty Taylor wrote: > Hi everybody, > > After 10 years of OpenStack, the time has come for me to move on to the next challenge. Actually, the time came a few weeks ago, but writing farewells has always been something I’m particularly bad at. My last day at Red Hat was actually July 31 and I’m now working in a non-OpenStack job. > > I’m at a loss for words as to what more to say. I’ve never done anything for 10 years before, and I’ll be very surprised if I do anything else for 10 years again. While I’m excited about the new things on my plate, I’ll obviously miss everyone. > > As I am no longer being paid by an OpenStack employer, I will not be doing any OpenStack things as part of my day job. I’m not sure how much spare time I’ll have to be able to contribute. I’m going to hold off on resigning core memberships pending a better understanding of that. I think it’s safe to assume I won’t be able to continue on as SDK PTL though. > > I wish everyone all the best, and I hope life conspires to keep us all connected. > > Thank you to everyone for an amazing 10 years. > > Monty > Monty, We'll miss you! Both for your technical excellence, and for the nicer person that you are. Good luck for whatever's next. Cheers, Thomas Goirand (zigo) From sorrison at gmail.com Thu Sep 10 00:58:43 2020 From: sorrison at gmail.com (Sam Morrison) Date: Thu, 10 Sep 2020 10:58:43 +1000 Subject: [neutron][networking-midonet] Maintainers needed In-Reply-To: <17472f764b8.1292d333d6181.3892285235847293323@ghanshyammann.com> References: <0AC5AC07-E97E-43CC-B344-A3E992B8CCA4@netways.de> <610412AF-AADF-44BD-ABA2-BA289B7C8F8A@redhat.com> <5E2F5826-559E-42E9-84C5-FA708E5A122A@gmail.com> <43C4AF2B-C5C0-40EB-B621-AC6799471D01@gmail.com> <92959221-0353-4D48-8726-8FE71AFEA652@gmail.com> <4D778DBF-F505-462F-B85D-0B372085FA72@gmail.com> <5B9D2CB0-8B81-4533-A072-9A51B4A44364@gmail.com> <17472f764b8.1292d333d6181.3892285235847293323@ghanshyammann.com> Message-ID: <2AB30A6D-9B6C-4D18-8FAB-C1022965657A@gmail.com> OK thanks for the fix for TaaS, https://review.opendev.org/#/c/750633/4 should be good to be merged (even though its failing) Also https://review.opendev.org/#/c/749641/3 should be good to go. This will get all the unit tests working. The pep8 tests are broken due to the pecan 1.4.0 issue being discussed at https://review.opendev.org/#/c/747419/ My zuul v3 aio tempest devstack job is working well now, still having some issues with the multinode one which I’m working on now. 
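For anyone not familiar with the zuul v3 devstack jobs, the multinode ones are typically built on a two-node nodeset with a "subnode" group, roughly like the sketch below (the nodeset name here is purely illustrative, not the actual networking-midonet definition):

  - nodeset:
      name: two-node-bionic-example      # illustrative name only
      nodes:
        - name: controller
          label: ubuntu-bionic
        - name: compute1
          label: ubuntu-bionic
      groups:
        - name: subnode
          nodes:
            - compute1

The wiring between the two nodes is then handled by the devstack and zuul-jobs multinode roles rather than by the nodeset itself.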
Sam > On 9 Sep 2020, at 11:04 pm, Ghanshyam Mann wrote: > > Also we need to merge the networking-l2gw project new location fix > > - https://review.opendev.org/#/c/738046/ > > It's leading to many errors as pointed by AJaeger - https://zuul.opendev.org/t/openstack/config-errors > > > -gmann > > ---- On Wed, 09 Sep 2020 07:18:37 -0500 Lajos Katona wrote ---- >> Hi,I pushed a fix for it https://review.opendev.org/750633, I added Deepak for reviewer as he is the owner of the taas patch. >> Sorry for the problem.Lajos (lajoskatona) >> Sam Morrison ezt írta (időpont: 2020. szept. 9., Sze, 12:49): >> >> >> On 9 Sep 2020, at 4:52 pm, Lajos Katona wrote: >> Hi,Could you please point to the issue with taas? >> Networking-midonet unit tests [1] are failing with the addition of this patch [2] >> [1] https://b1a2669063d97482275a-410cecb8410320c66fb802e0a530979a.ssl.cf5.rackcdn.com/749857/18/check/openstack-tox-py36/0344651/testr_results.html[2] https://opendev.org/x/tap-as-a-service/commit/8332a396b1b046eb370c0cb377d836d0c6b6d6ca >> I’m not really familiar with all of this so not sure how to fix these up. >> Cheers,Sam >> >> >> >> RegardsLajos (lajoskatona) >> Sam Morrison ezt írta (időpont: 2020. szept. 9., Sze, 0:44): >> >> >> On 8 Sep 2020, at 3:13 pm, Sam Morrison wrote: >> Hi Yamamoto, >> >> On 4 Sep 2020, at 6:47 pm, Takashi Yamamoto wrote: >> i'm talking to our infra folks but it might take longer than i hoped. >> if you or someone else can provide a public repo, it might be faster. >> (i have looked at launchpad PPA while ago. but it didn't seem >> straightforward given the complex build machinary in midonet.) >> >> Yeah that’s no problem, I’ve set up a repo with the latest midonet debs in it and happy to use that for the time being. >> >> >> I’m not sure why the pep8 job is failing, it is complaining about pecan which makes me think this is an issue with neutron itself? Kinda stuck on this one, it’s probably something silly. >> >> probably. >> >> Yeah this looks like a neutron or neutron-lib issue >> >> >> For the py3 unit tests they are now failing due to db migration errors in tap-as-a-service, l2-gateway and vpnaas issues I think caused by neutron getting rid of the liberty alembic branch and so we need to squash these on these projects too. >> >> this thing? https://review.opendev.org/#/c/749866/ >> >> Yeah that fixed that issue. >> >> I have been working to get everything fixed in this review [1] >> The pep8 job is working but not in the gate due to neutron issues [2]The py36/py38 jobs have 2 tests failing both relating to tap-as-a-service which I don’t really have any idea about, never used it. [3] >> These are failing because of this patch on tap-as-a-service https://opendev.org/x/tap-as-a-service/commit/8332a396b1b046eb370c0cb377d836d0c6b6d6ca >> Really have no idea how this works, does anyone use tap-as-a-service with midonet and can help me fix it, else I’m wondering if we disable tests for taas and make it an unsupported feature for now. >> Sam >> >> >> The tempest aio job is working well now, I’m not sure what tempest tests were run before but it’s just doing what ever is the default at the moment.The tempest multinode job isn’t working due to what I think is networking issues between the 2 nodes. I don’t really know what I’m doing here so any pointers would be helpful. 
[4]The grenade job is also failing because I also need to put these fixes on the stable/ussuri branch to make it work so will need to figure that out too >> Cheers,Sam >> [1] https://review.opendev.org/#/c/749857/[2] https://zuul.opendev.org/t/openstack/build/e94e873cbf0443c0a7f25ffe76b3b00b[3] https://b1a2669063d97482275a-410cecb8410320c66fb802e0a530979a.ssl.cf5.rackcdn.com/749857/18/check/openstack-tox-py36/0344651/testr_results.html[4] https://zuul.opendev.org/t/openstack/build/61f6dd3dc3d74a81b7a3f5968b4d8c72 >> >> >> >> >> >> I can now start to look into the devstack zuul jobs. >> >> Cheers, >> Sam >> >> >> [1] https://github.com/NeCTAR-RC/networking-midonet/commits/devstack >> [2] https://github.com/midonet/midonet/pull/9 >> >> >> >> >> On 1 Sep 2020, at 4:03 pm, Sam Morrison wrote: >> >> >> >> On 1 Sep 2020, at 2:59 pm, Takashi Yamamoto wrote: >> >> hi, >> >> On Tue, Sep 1, 2020 at 1:39 PM Sam Morrison wrote: >> >> >> >> On 1 Sep 2020, at 11:49 am, Takashi Yamamoto wrote: >> >> Sebastian, Sam, >> >> thank you for speaking up. >> >> as Slawek said, the first (and probably the biggest) thing is to fix the ci. >> the major part for it is to make midonet itself to run on ubuntu >> version used by the ci. (18.04, or maybe directly to 20.04) >> https://midonet.atlassian.net/browse/MNA-1344 >> iirc, the remaining blockers are: >> * libreswan (used by vpnaas) >> * vpp (used by fip64) >> maybe it's the easiest to drop those features along with their >> required components, if it's acceptable for your use cases. >> >> We are running midonet-cluster and midolman on 18.04, we dropped those package dependencies from our ubuntu package to get it working. >> >> We currently have built our own and host in our internal repo but happy to help putting this upstream somehow. Can we upload them to the midonet apt repo, does it still exist? >> >> it still exists. but i don't think it's maintained well. >> let me find and ask someone in midokura who "owns" that part of infra. >> >> does it also involve some package-related modifications to midonet repo, right? >> >> >> Yes a couple, I will send up as as pull requests to https://github.com/midonet/midonet today or tomorrow >> >> Sam >> >> >> >> >> >> I’m keen to do the work but might need a bit of guidance to get started, >> >> Sam >> >> >> >> >> >> >> >> alternatively you might want to make midonet run in a container. (so >> that you can run it with older ubuntu, or even a container trimmed for >> JVM) >> there were a few attempts to containerize midonet. >> i think this is the latest one: https://github.com/midonet/midonet-docker >> >> On Fri, Aug 28, 2020 at 7:10 AM Sam Morrison wrote: >> >> We (Nectar Research Cloud) use midonet heavily too, it works really well and we haven’t found another driver that works for us. We tried OVN but it just doesn’t scale to the size of environment we have. >> >> I’m happy to help too. >> >> Cheers, >> Sam >> >> >> >> On 31 Jul 2020, at 2:06 am, Slawek Kaplonski wrote: >> >> Hi, >> >> Thx Sebastian for stepping in to maintain the project. That is great news. 
>> I think that at the beginning You should do 2 things: >> - sync with Takashi Yamamoto (I added him to the loop) as he is probably most active current maintainer of this project, >> - focus on fixing networking-midonet ci which is currently broken - all scenario jobs aren’t working fine on Ubuntu 18.04 (and we are going to move to 20.04 in this cycle), migrate jobs to zuulv3 from the legacy ones and finally add them to the ci again, >> >> I can of course help You with ci jobs if You need any help. Feel free to ping me on IRC or email (can be off the list). >> >> On 29 Jul 2020, at 15:24, Sebastian Saemann wrote: >> >> Hi Slawek, >> >> we at NETWAYS are running most of our neutron networking on top of midonet and wouldn't be too happy if it gets deprecated and removed. So we would like to take over the maintainer role for this part. >> >> Please let me know how to proceed and how we can be onboarded easily. >> >> Best regards, >> >> Sebastian >> >> -- >> Sebastian Saemann >> Head of Managed Services >> >> NETWAYS Managed Services GmbH | Deutschherrnstr. 15-19 | D-90429 Nuernberg >> Tel: +49 911 92885-0 | Fax: +49 911 92885-77 >> CEO: Julian Hein, Bernd Erk | AG Nuernberg HRB25207 >> https://netways.de | sebastian.saemann at netways.de >> >> ** NETWAYS Web Services - https://nws.netways.de ** >> >> — >> Slawek Kaplonski >> Principal software engineer >> Red Hat >> >> >> >> >> From gmann at ghanshyammann.com Thu Sep 10 03:22:13 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 09 Sep 2020 22:22:13 -0500 Subject: [oslo][release][requirement] FFE request for Oslo lib In-Reply-To: <174746eb120.1229d07d224552.356509349559116522@ghanshyammann.com> References: <1746e64d702.ee80b0bc1249.5426348472779199647@ghanshyammann.com> <20200909192543.b2d2ksruoqtbgcfy@mthode.org> <174746eb120.1229d07d224552.356509349559116522@ghanshyammann.com> Message-ID: <1747608f93e.d7ec085f28282.4985395579846058200@ghanshyammann.com> ---- On Wed, 09 Sep 2020 14:54:05 -0500 Ghanshyam Mann wrote ---- > > ---- On Wed, 09 Sep 2020 14:25:43 -0500 Matthew Thode wrote ---- > > On 20-09-09 12:04:51, Ben Nemec wrote: > > > > > > On 9/8/20 10:45 AM, Ghanshyam Mann wrote: > > > > Hello Team, > > > > > > > > This is regarding FFE for Focal migration work. As planned, we have to move the Victoria testing to Focal and > > > > base job switch is planned to be switched by today[1]. > > > > > > > > There are few oslo lib need work (especially tox job-based testing not user-facing changes) to pass on Focal > > > > - https://review.opendev.org/#/q/topic:migrate-to-focal-oslo+(status:open+OR+status:merged) > > > > > > > > If we move the base tox jobs to Focal then these lib victoria gates (especially lower-constraint job) will be failing. > > > > We can either do these as FFE or backport (as this is lib own CI fixes only) later once the victoria branch is open. > > > > Opinion? > > > > > > As I noted in the meeting, if we have to do this to keep the gates working > > > then I'd rather do it as an FFE than have to backport all of the relevant > > > patches. IMHO we should only decline this FFE if we are going to also change > > > our statement of support for Python/Ubuntu in Victoria. > > > > > > > > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017060.html > > > > > > > > -gmann > > > > > > > > > > > > > > > https://review.opendev.org/#/c/750089 seems like the only functional > > change. It has my ACK with my requirements hat on. 
NOTE: There is one py3.8 bug fix also merged in oslo.uitls which is not yet released. This made py3.8 job voting in oslo.utils gate. - https://review.opendev.org/#/c/750216/ Rest all l-c bump are now passing on Focal - https://review.opendev.org/#/q/topic:migrate-to-focal-oslo+(status:open+OR+status:merged) -gmann > > yeah, and this one changing one test with #noqa - https://review.opendev.org/#/c/744323/5 > The rest all are l-c bump. > > Also all the tox base jobs are migrated to Focal now. > - http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017136.html > > > > > -- > > Matthew Thode > > > > From gmann at ghanshyammann.com Thu Sep 10 03:32:38 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 09 Sep 2020 22:32:38 -0500 Subject: [all][tc][goals] Migrate CI/CD jobs to new Ubuntu LTS Focal: Week R-4 Update In-Reply-To: <1747442027f.bb3f3d1823475.212800600164097649@ghanshyammann.com> References: <17468f8f785.1123a07e6307801.3844845549431858647@ghanshyammann.com> <1746feef403.c41c848411058.2421030943223377212@ghanshyammann.com> <1747442027f.bb3f3d1823475.212800600164097649@ghanshyammann.com> Message-ID: <17476128313.f7b6001e28321.7088729119972703547@ghanshyammann.com> Updates: Fixed a few more projects today which I found failing on Focal: - OpenStack SDKs repos : ready to merge - All remaining Oslo lib fixes: we are discussing FFE on these in separate ML thread. - Keystone: Fix is up, it should pass now. - Manila: Fix is up, it should pass gate. - Tacker: Ready to merge - neutron-dynamic-routing: Ready to merge - Cinder- it seems l-c job still failing. I will dig into it tomorrow or it will be appreciated if anyone can take a look before my morning. this is the patch -https://review.opendev.org/#/c/743080/ Note: all tox based jobs (Except py36/3.7) are running on Focal now so If any of you gate failing, feel free to ping me on #openstack-qa No more energy left for today, I will continue the remaining work tomorrow. -gmann ---- On Wed, 09 Sep 2020 14:05:17 -0500 Ghanshyam Mann wrote ---- > ---- On Tue, 08 Sep 2020 17:56:05 -0500 Ghanshyam Mann wrote ---- > > Updates: > > After working more on failing one today and listing the blocking one, I think we are good to switch tox based testing today > > and discuss the integration testing switch tomorrow in TC office hours. > > > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > > > I have checked it again and fixed many repos that are up for review and merge. Most python clients are already fixed > > or their fixes are up for merge so they can make it before the feature freeze on 10th. If any repo is broken then it will be pretty quick > > to fix by lower constraint bump (see the example under https://review.opendev.org/#/q/topic:migrate-to-focal) > > > > Even if any of the fixes miss the victoria release then those can be backported easily. I am opening the tox base jobs migration to merge: > > - All patches in this series https://review.opendev.org/#/c/738328/ > > All these tox base jobs are merged now and running on Focal. If any of your repo is failing, please fix on priority or ping me on IRC if failure not clear. > You can find most of the fixes for possible failure in this topic: > - https://review.opendev.org/#/q/topic:migrate-to-focal+(status:open+OR+status:merged) > > -gmann > > > > > > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > > > We have three blocking open bugs here so I would like to discuss it in tomorrow's TC office hour also about how to proceed on this. 
> > > > 1. Nova: https://bugs.launchpad.net/nova/+bug/1882521 (https://bugs.launchpad.net/qemu/+bug/1894804) > > 2. Barbican: https://storyboard.openstack.org/#!/story/2007732 > > 3. Ceilometer: https://storyboard.openstack.org/#!/story/2008121 > > > > > > -gmann > > > > > > ---- On Mon, 07 Sep 2020 09:29:40 -0500 Ghanshyam Mann wrote ---- > > > Hello Everyone, > > > > > > Please find the week R-4 updates on 'Ubuntu Focal migration' community goal. Its time to force the base jobs migration which can > > > break the projects gate if not yet taken care of. Read below for the plan. > > > > > > Tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > > > Progress: > > > ======= > > > * We are close to V-3 release and this is time we have to complete this migration otherwise doing it in RC period can add > > > unnecessary and last min delay. I am going to plan this migration in two-part. This will surely break some projects gate > > > which is not yet finished the migration but we have to do at some time. Please let me know if any objection to the below > > > plan. > > > > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > > > > > ** I am going to open tox base jobs migration (doc, unit, functional, lower-constraints etc) to merge by tomorrow. which is this > > > series (all base patches of this): https://review.opendev.org/#/c/738328/ . > > > > > > **There are few repos still failing on requirements lower-constraints job specifically which I tried my best to fix as many as possible. > > > Many are ready to merge also. Please merge or work on your projects repo testing before that or fix on priority if failing. > > > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > > > > > * We have few open bugs for this which are not yet resolved, we will see how it goes but the current plan is to migrate by 10th Sept. > > > > > > ** Bug#1882521 > > > ** DB migration issues, > > > *** alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 > > > > > > > > > Testing Till now: > > > ============ > > > * ~200 repos gate have been tested or fixed till now. > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+(status:abandoned+OR+status:merged) > > > > > > * ~100 repos are under test and failing. Debugging and fixing are in progress (If you would like to help, please check your > > > project repos if I am late to fix them): > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open > > > > > > * ~30repos fixes ready to merge: > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open+label%3AVerified%3E%3D1%2Czuul+NOT+label%3AWorkflow%3C%3D-1 > > > > > > > > > Bugs Report: > > > ========== > > > > > > 1. Bug#1882521. (IN-PROGRESS) > > > There is open bug for nova/cinder where three tempest tests are failing for > > > volume detach operation. There is no clear root cause found yet > > > -https://bugs.launchpad.net/cinder/+bug/1882521 > > > We have skipped the tests in tempest base patch to proceed with the other > > > projects testing but this is blocking things for the migration. > > > > > > 2. DB migration issues (IN-PROGRESS) > > > * alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 > > > > > > 3. We encountered the nodeset name conflict with x/tobiko. (FIXED) > > > nodeset conflict is resolved now and devstack provides all focal nodes now. > > > > > > 4. 
Bug#1886296. (IN-PROGRESS) > > > pyflakes till 2.1.0 is not compatible with python 3.8 which is the default python version > > > on ubuntu focal[1]. With pep8 job running on focal faces the issue and fail. We need to bump > > > the pyflakes to 2.1.1 as min version to run pep8 jobs on py3.8. > > > As of now, many projects are using old hacking version so I am explicitly adding pyflakes>=2.1.1 > > > on the project side[2] but for the long term easy maintenance, I am doing it in 'hacking' requirements.txt[3] > > > nd will release a new hacking version. After that project can move to new hacking and do not need > > > to maintain pyflakes version compatibility. > > > > > > 5. Bug#1886298. (IN-PROGRESS) > > > 'Markupsafe' 1.0 is not compatible with the latest version of setuptools[4], > > > We need to bump the lower-constraint for Markupsafe to 1.1.1 to make it work. > > > There are a few more issues[5] with lower-constraint jobs which I am debugging. > > > > > > > > > What work to be done on the project side: > > > ================================ > > > This goal is more of testing the jobs on focal and fixing bugs if any otherwise > > > migrate jobs by switching the nodeset to focal node sets defined in devstack. > > > > > > 1. Start a patch in your repo by making depends-on on either of below: > > > devstack base patch if you are using only devstack base jobs not tempest: > > > > > > Depends-on: https://review.opendev.org/#/c/731207/ > > > OR > > > tempest base patch if you are using the tempest base job (like devstack-tempest): > > > Depends-on: https://review.opendev.org/#/c/734700/ > > > > > > Both have depends-on on the series where I am moving unit/functional/doc/cover/nodejs tox jobs to focal. So > > > you can test the complete gate jobs(unit/functional/doc/integration) together. > > > This and its base patches - https://review.opendev.org/#/c/738328/ > > > > > > Example: https://review.opendev.org/#/c/738126/ > > > > > > 2. If none of your project jobs override the nodeset then above patch will be > > > testing patch(do not merge) otherwise change the nodeset to focal. > > > Example: https://review.opendev.org/#/c/737370/ > > > > > > 3. If the jobs are defined in branchless repo and override the nodeset then you need to override the branches > > > variant to adjust the nodeset so that those jobs run on Focal on victoria onwards only. If no nodeset > > > is overridden then devstack being branched and stable base job using bionic/xenial will take care of > > > this. > > > Example: https://review.opendev.org/#/c/744056/2 > > > > > > 4. If no updates need you can abandon the testing patch (https://review.opendev.org/#/c/744341/). If it need > > > updates then modify the same patch with proper commit msg, once it pass the gate then remove the Depends-On > > > so that you can merge your patch before base jobs are switched to focal. This way we make sure no gate downtime in > > > this migration. > > > Example: https://review.opendev.org/#/c/744056/1..2//COMMIT_MSG > > > > > > Once we finish the testing on projects side and no failure then we will merge the devstack and tempest > > > base patches. > > > > > > > > > Important things to note: > > > =================== > > > * Do not forgot to add the story and task link to your patch so that we can track it smoothly. > > > * Use gerrit topic 'migrate-to-focal' > > > * Do not backport any of the patches. 
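To make steps 2 and 3 above a bit more concrete, the project-side change is normally just a nodeset switch in the repo's .zuul.yaml, along the lines of the sketch below (the job name is made up for illustration; the focal nodesets are the ones defined in devstack, e.g. openstack-single-node-focal):

  - job:
      name: my-project-devstack-job          # illustrative name only
      parent: devstack-tempest
      nodeset: openstack-single-node-focal

and, for a branchless repo that overrides the nodeset, an extra branch variant keeps the older stable branches on Bionic while victoria onwards picks up Focal:

  - job:
      name: my-project-devstack-job          # same illustrative job
      nodeset: openstack-single-node-bionic
      branches:
        - stable/train
        - stable/ussuri

The Example review links above show real patches of this shape.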
> > > > > > > > > References: > > > ========= > > > Goal doc: https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal.html > > > Storyboard tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > > > [1] https://github.com/PyCQA/pyflakes/issues/367 > > > [2] https://review.opendev.org/#/c/739315/ > > > [3] https://review.opendev.org/#/c/739334/ > > > [4] https://github.com/pallets/markupsafe/issues/116 > > > [5] https://zuul.opendev.org/t/openstack/build/7ecd9cf100194bc99b3b70fa1e6de032 > > > > > > -gmann > > > > > > > > > > > > From tonyppe at gmail.com Thu Sep 10 04:35:32 2020 From: tonyppe at gmail.com (Tony Pearce) Date: Thu, 10 Sep 2020 12:35:32 +0800 Subject: [Magnum][kolla-ansible][kayobe] Information gathering for 2 blocking issues Message-ID: Hi all, hope you are all keeping safe and well. I am looking for information on the following two issues that I have which surrounds Magnum project: 1. Magnum does not support Openstack API with HTTPS 2. Magnum forces compute nodes to consume disk capacity for instance data My environment: Openstack Train deployed using Kayobe (Kolla-ansible). With regards to the HTTPS issue, Magnum stops working after enabling HTTPS because the certificate / CA certificate is not trusted by Magnum. The certificate which I am using is one that was purchased from GoDaddy and is trusted in web browsers (and is valid), just not trusted by the Magnum component. Regarding compute node disk consumption issue - I'm at a loss with regards to this and so I'm looking for more information about why this is being done and is there any way that I could avoid it? I have storage provided by a Cinder integration and so the consumption of compute node disk for instance data I need to avoid. Any information the community could provide to me with regards to the above would be much appreciated. I would very much like to use the Magnum project in this deployment for Kubernetes deployment within projects. Thanks in advance, Regards, Tony -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Thu Sep 10 07:18:02 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 10 Sep 2020 09:18:02 +0200 Subject: [all][tc][goals] Migrate CI/CD jobs to new Ubuntu LTS Focal: Week R-4 Update In-Reply-To: <17476128313.f7b6001e28321.7088729119972703547@ghanshyammann.com> References: <17468f8f785.1123a07e6307801.3844845549431858647@ghanshyammann.com> <1746feef403.c41c848411058.2421030943223377212@ghanshyammann.com> <1747442027f.bb3f3d1823475.212800600164097649@ghanshyammann.com> <17476128313.f7b6001e28321.7088729119972703547@ghanshyammann.com> Message-ID: I've triaged this for Kolla and Kolla-Ansible too. Bifrost is also affected (but it's on Storyboard). -yoctozepto On Thu, Sep 10, 2020 at 5:42 AM Ghanshyam Mann wrote: > > Updates: > > Fixed a few more projects today which I found failing on Focal: > > - OpenStack SDKs repos : ready to merge > - All remaining Oslo lib fixes: we are discussing FFE on these in separate ML thread. > - Keystone: Fix is up, it should pass now. > - Manila: Fix is up, it should pass gate. > - Tacker: Ready to merge > - neutron-dynamic-routing: Ready to merge > - Cinder- it seems l-c job still failing. I will dig into it tomorrow or it will be appreciated if anyone can take a look before my morning. 
> this is the patch -https://review.opendev.org/#/c/743080/ > > Note: all tox based jobs (Except py36/3.7) are running on Focal now so If any of you gate failing, feel free to ping me on #openstack-qa > > No more energy left for today, I will continue the remaining work tomorrow. > > -gmann > > ---- On Wed, 09 Sep 2020 14:05:17 -0500 Ghanshyam Mann wrote ---- > > ---- On Tue, 08 Sep 2020 17:56:05 -0500 Ghanshyam Mann wrote ---- > > > Updates: > > > After working more on failing one today and listing the blocking one, I think we are good to switch tox based testing today > > > and discuss the integration testing switch tomorrow in TC office hours. > > > > > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > > > > > I have checked it again and fixed many repos that are up for review and merge. Most python clients are already fixed > > > or their fixes are up for merge so they can make it before the feature freeze on 10th. If any repo is broken then it will be pretty quick > > > to fix by lower constraint bump (see the example under https://review.opendev.org/#/q/topic:migrate-to-focal) > > > > > > Even if any of the fixes miss the victoria release then those can be backported easily. I am opening the tox base jobs migration to merge: > > > - All patches in this series https://review.opendev.org/#/c/738328/ > > > > All these tox base jobs are merged now and running on Focal. If any of your repo is failing, please fix on priority or ping me on IRC if failure not clear. > > You can find most of the fixes for possible failure in this topic: > > - https://review.opendev.org/#/q/topic:migrate-to-focal+(status:open+OR+status:merged) > > > > -gmann > > > > > > > > > > > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > > > > > We have three blocking open bugs here so I would like to discuss it in tomorrow's TC office hour also about how to proceed on this. > > > > > > 1. Nova: https://bugs.launchpad.net/nova/+bug/1882521 (https://bugs.launchpad.net/qemu/+bug/1894804) > > > 2. Barbican: https://storyboard.openstack.org/#!/story/2007732 > > > 3. Ceilometer: https://storyboard.openstack.org/#!/story/2008121 > > > > > > > > > -gmann > > > > > > > > > ---- On Mon, 07 Sep 2020 09:29:40 -0500 Ghanshyam Mann wrote ---- > > > > Hello Everyone, > > > > > > > > Please find the week R-4 updates on 'Ubuntu Focal migration' community goal. Its time to force the base jobs migration which can > > > > break the projects gate if not yet taken care of. Read below for the plan. > > > > > > > > Tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > > > > > Progress: > > > > ======= > > > > * We are close to V-3 release and this is time we have to complete this migration otherwise doing it in RC period can add > > > > unnecessary and last min delay. I am going to plan this migration in two-part. This will surely break some projects gate > > > > which is not yet finished the migration but we have to do at some time. Please let me know if any objection to the below > > > > plan. > > > > > > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > > > > > > > ** I am going to open tox base jobs migration (doc, unit, functional, lower-constraints etc) to merge by tomorrow. which is this > > > > series (all base patches of this): https://review.opendev.org/#/c/738328/ . > > > > > > > > **There are few repos still failing on requirements lower-constraints job specifically which I tried my best to fix as many as possible. > > > > Many are ready to merge also. 
Please merge or work on your projects repo testing before that or fix on priority if failing. > > > > > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > > > > > > > * We have few open bugs for this which are not yet resolved, we will see how it goes but the current plan is to migrate by 10th Sept. > > > > > > > > ** Bug#1882521 > > > > ** DB migration issues, > > > > *** alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 > > > > > > > > > > > > Testing Till now: > > > > ============ > > > > * ~200 repos gate have been tested or fixed till now. > > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+(status:abandoned+OR+status:merged) > > > > > > > > * ~100 repos are under test and failing. Debugging and fixing are in progress (If you would like to help, please check your > > > > project repos if I am late to fix them): > > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open > > > > > > > > * ~30repos fixes ready to merge: > > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open+label%3AVerified%3E%3D1%2Czuul+NOT+label%3AWorkflow%3C%3D-1 > > > > > > > > > > > > Bugs Report: > > > > ========== > > > > > > > > 1. Bug#1882521. (IN-PROGRESS) > > > > There is open bug for nova/cinder where three tempest tests are failing for > > > > volume detach operation. There is no clear root cause found yet > > > > -https://bugs.launchpad.net/cinder/+bug/1882521 > > > > We have skipped the tests in tempest base patch to proceed with the other > > > > projects testing but this is blocking things for the migration. > > > > > > > > 2. DB migration issues (IN-PROGRESS) > > > > * alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 > > > > > > > > 3. We encountered the nodeset name conflict with x/tobiko. (FIXED) > > > > nodeset conflict is resolved now and devstack provides all focal nodes now. > > > > > > > > 4. Bug#1886296. (IN-PROGRESS) > > > > pyflakes till 2.1.0 is not compatible with python 3.8 which is the default python version > > > > on ubuntu focal[1]. With pep8 job running on focal faces the issue and fail. We need to bump > > > > the pyflakes to 2.1.1 as min version to run pep8 jobs on py3.8. > > > > As of now, many projects are using old hacking version so I am explicitly adding pyflakes>=2.1.1 > > > > on the project side[2] but for the long term easy maintenance, I am doing it in 'hacking' requirements.txt[3] > > > > nd will release a new hacking version. After that project can move to new hacking and do not need > > > > to maintain pyflakes version compatibility. > > > > > > > > 5. Bug#1886298. (IN-PROGRESS) > > > > 'Markupsafe' 1.0 is not compatible with the latest version of setuptools[4], > > > > We need to bump the lower-constraint for Markupsafe to 1.1.1 to make it work. > > > > There are a few more issues[5] with lower-constraint jobs which I am debugging. > > > > > > > > > > > > What work to be done on the project side: > > > > ================================ > > > > This goal is more of testing the jobs on focal and fixing bugs if any otherwise > > > > migrate jobs by switching the nodeset to focal node sets defined in devstack. > > > > > > > > 1. 
Start a patch in your repo by making depends-on on either of below: > > > > devstack base patch if you are using only devstack base jobs not tempest: > > > > > > > > Depends-on: https://review.opendev.org/#/c/731207/ > > > > OR > > > > tempest base patch if you are using the tempest base job (like devstack-tempest): > > > > Depends-on: https://review.opendev.org/#/c/734700/ > > > > > > > > Both have depends-on on the series where I am moving unit/functional/doc/cover/nodejs tox jobs to focal. So > > > > you can test the complete gate jobs(unit/functional/doc/integration) together. > > > > This and its base patches - https://review.opendev.org/#/c/738328/ > > > > > > > > Example: https://review.opendev.org/#/c/738126/ > > > > > > > > 2. If none of your project jobs override the nodeset then above patch will be > > > > testing patch(do not merge) otherwise change the nodeset to focal. > > > > Example: https://review.opendev.org/#/c/737370/ > > > > > > > > 3. If the jobs are defined in branchless repo and override the nodeset then you need to override the branches > > > > variant to adjust the nodeset so that those jobs run on Focal on victoria onwards only. If no nodeset > > > > is overridden then devstack being branched and stable base job using bionic/xenial will take care of > > > > this. > > > > Example: https://review.opendev.org/#/c/744056/2 > > > > > > > > 4. If no updates need you can abandon the testing patch (https://review.opendev.org/#/c/744341/). If it need > > > > updates then modify the same patch with proper commit msg, once it pass the gate then remove the Depends-On > > > > so that you can merge your patch before base jobs are switched to focal. This way we make sure no gate downtime in > > > > this migration. > > > > Example: https://review.opendev.org/#/c/744056/1..2//COMMIT_MSG > > > > > > > > Once we finish the testing on projects side and no failure then we will merge the devstack and tempest > > > > base patches. > > > > > > > > > > > > Important things to note: > > > > =================== > > > > * Do not forgot to add the story and task link to your patch so that we can track it smoothly. > > > > * Use gerrit topic 'migrate-to-focal' > > > > * Do not backport any of the patches. > > > > > > > > > > > > References: > > > > ========= > > > > Goal doc: https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal.html > > > > Storyboard tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > > > > > [1] https://github.com/PyCQA/pyflakes/issues/367 > > > > [2] https://review.opendev.org/#/c/739315/ > > > > [3] https://review.opendev.org/#/c/739334/ > > > > [4] https://github.com/pallets/markupsafe/issues/116 > > > > [5] https://zuul.opendev.org/t/openstack/build/7ecd9cf100194bc99b3b70fa1e6de032 > > > > > > > > -gmann > > > > > > > > > > > > > > > > > > > From feilong at catalyst.net.nz Thu Sep 10 08:01:38 2020 From: feilong at catalyst.net.nz (feilong) Date: Thu, 10 Sep 2020 20:01:38 +1200 Subject: [Magnum][kolla-ansible][kayobe] Information gathering for 2 blocking issues In-Reply-To: References: Message-ID: Hi Tony, Sorry for the late response for your thread. For you HTTPS issue, we (Catalyst Cloud) are using Magnum with HTTPS and it works. For the 2nd issue, I think we were misunderstanding the nodes disk capacity. I was assuming you're talking about the k8s nodes, but seems you're talking about the physical compute host. 
I still don't think it's a Magnum issue because a k8s master/worker nodes are just normal Nova instances and managed by Heat. So I would suggest you use a simple HOT to test it, you can use this https://gist.github.com/openstacker/26e31c9715d52cc502397b65d3cebab6 Most of the cloud providers or organizations who have adopted Magnum are using Ceph as far as I know, just FYI. On 10/09/20 4:35 pm, Tony Pearce wrote: > Hi all, hope you are all keeping safe and well. I am looking for > information on the following two issues that I have which surrounds > Magnum project: > > 1. Magnum does not support Openstack API with HTTPS > 2. Magnum forces compute nodes to consume disk capacity for instance data > > My environment: Openstack Train deployed using Kayobe (Kolla-ansible).  > > With regards to the HTTPS issue, Magnum stops working after enabling > HTTPS because the certificate / CA certificate is not trusted by > Magnum. The certificate which I am using is one that was purchased > from GoDaddy and is trusted in web browsers (and is valid), just not > trusted by the Magnum component.  > > Regarding compute node disk consumption issue - I'm at a loss with > regards to this and so I'm looking for more information about why this > is being done and is there any way that I could avoid it?  I have > storage provided by a Cinder integration and so the consumption of > compute node disk for instance data I need to avoid.  > > Any information the community could provide to me with regards to the > above would be much appreciated. I would very much like to use the > Magnum project in this deployment for Kubernetes deployment within > projects.  > > Thanks in advance,  > > Regards, > > Tony -- Cheers & Best regards, Feilong Wang (王飞龙) ------------------------------------------------------ Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From renat.akhmerov at gmail.com Thu Sep 10 08:04:51 2020 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Thu, 10 Sep 2020 15:04:51 +0700 Subject: Moving on In-Reply-To: <4965dddc-9480-ecff-1004-6aeb2fc39619@debian.org> References: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> <4965dddc-9480-ecff-1004-6aeb2fc39619@debian.org> Message-ID: Oooh, wooow! You seemed a person who were going to be with OpenStack like… forever! Really sorry to hear your OpenStack time is about to end. Such a huge loss for us. Anyway, I wish you all the best at whatever you’re going to do! Thanks Renat Akhmerov @Nokia On 10 Sep 2020, 06:16 +0700, Thomas Goirand , wrote: > On 9/9/20 6:10 PM, Monty Taylor wrote: > > Hi everybody, > > > > After 10 years of OpenStack, the time has come for me to move on to the next challenge. Actually, the time came a few weeks ago, but writing farewells has always been something I’m particularly bad at. My last day at Red Hat was actually July 31 and I’m now working in a non-OpenStack job. > > > > I’m at a loss for words as to what more to say. I’ve never done anything for 10 years before, and I’ll be very surprised if I do anything else for 10 years again. While I’m excited about the new things on my plate, I’ll obviously miss everyone. > > > > As I am no longer being paid by an OpenStack employer, I will not be doing any OpenStack things as part of my day job. 
I’m not sure how much spare time I’ll have to be able to contribute. I’m going to hold off on resigning core memberships pending a better understanding of that. I think it’s safe to assume I won’t be able to continue on as SDK PTL though. > > > > I wish everyone all the best, and I hope life conspires to keep us all connected. > > > > Thank you to everyone for an amazing 10 years. > > > > Monty > > > > Monty, > > We'll miss you! Both for your technical excellence, and for the nicer > person that you are. Good luck for whatever's next. > > Cheers, > > Thomas Goirand (zigo) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Sep 10 08:37:38 2020 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 10 Sep 2020 10:37:38 +0200 Subject: Moving on In-Reply-To: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> References: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> Message-ID: <9d6ea1dd-440a-45be-8d74-add6db3222a2@openstack.org> Monty Taylor wrote: > [...] > Thank you to everyone for an amazing 10 years. > [...] Good luck in your future endeavors, Monty. I traveled the world and the seven seas with you, from walking the streets of Jerusalem, to eating fries by the Ixelles ponds in Brussels, to drinking cocktails in a Dallas pool. Looking forward to when we can do that again! Cheers, -- Thierry Carrez (ttx) From vijarad1 at in.ibm.com Thu Sep 10 09:22:08 2020 From: vijarad1 at in.ibm.com (Vijayendra R Radhakrishna) Date: Thu, 10 Sep 2020 09:22:08 +0000 Subject: Cloudinit resets network scripts to default configuration DHCP once Config Drive is removed after sometime on Openstack platform Message-ID: An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Sep 10 09:29:33 2020 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 10 Sep 2020 11:29:33 +0200 Subject: [largescale-sig] Next meeting: September 9, 16utc In-Reply-To: <5983106a-fda8-2e5f-b4b3-1fe609f5843d@openstack.org> References: <5983106a-fda8-2e5f-b4b3-1fe609f5843d@openstack.org> Message-ID: Two new US-based participants joined the meeting, James Penick and Erik Andersson. We identified a new work area that the SIG should work on: meaningful monitoring. 
Meeting logs at: http://eavesdrop.openstack.org/meetings/large_scale_sig/2020/large_scale_sig.2020-09-09-16.00.html TODOs: - all to describe briefly how you solved metrics/billing in your deployment in https://etherpad.openstack.org/p/large-scale-sig-documentation - ttx to look into a basic test framework for oslo.metrics - masahito to push latest patches to oslo.metrics - amorin to see if oslo.metrics could be tested at OVH - ttx to file Scaling Stories forum session, with amorin and someone from penick's team to help get it off the ground Next meetings: Sep 23, 8:00UTC; Oct 7, 16:00UTC (#openstack-meeting-3) -- Thierry Carrez (ttx) From yasufum.o at gmail.com Thu Sep 10 09:31:17 2020 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Thu, 10 Sep 2020 18:31:17 +0900 Subject: [all][tc][goals] Migrate CI/CD jobs to new Ubuntu LTS Focal: Week R-4 Update In-Reply-To: <17476128313.f7b6001e28321.7088729119972703547@ghanshyammann.com> References: <17468f8f785.1123a07e6307801.3844845549431858647@ghanshyammann.com> <1746feef403.c41c848411058.2421030943223377212@ghanshyammann.com> <1747442027f.bb3f3d1823475.212800600164097649@ghanshyammann.com> <17476128313.f7b6001e28321.7088729119972703547@ghanshyammann.com> Message-ID: <236b2c69-530a-2266-08e3-170b86c16a9d@gmail.com> Hi gmann, Sorry for that we've not merged your patch to Tacker because devstack on Focal fails in functional test. It seems gnocchi installation on Focal has some problems. Anyway, although this issue isn't fixed yet, we'll proceed to merge the patch immediately. Thanks, Yasufumi On 2020/09/10 12:32, Ghanshyam Mann wrote: > Updates: > > Fixed a few more projects today which I found failing on Focal: > > - OpenStack SDKs repos : ready to merge > - All remaining Oslo lib fixes: we are discussing FFE on these in separate ML thread. > - Keystone: Fix is up, it should pass now. > - Manila: Fix is up, it should pass gate. > - Tacker: Ready to merge > - neutron-dynamic-routing: Ready to merge > - Cinder- it seems l-c job still failing. I will dig into it tomorrow or it will be appreciated if anyone can take a look before my morning. > this is the patch -https://review.opendev.org/#/c/743080/ > > Note: all tox based jobs (Except py36/3.7) are running on Focal now so If any of you gate failing, feel free to ping me on #openstack-qa > > No more energy left for today, I will continue the remaining work tomorrow. > > -gmann > > ---- On Wed, 09 Sep 2020 14:05:17 -0500 Ghanshyam Mann wrote ---- > > ---- On Tue, 08 Sep 2020 17:56:05 -0500 Ghanshyam Mann wrote ---- > > > Updates: > > > After working more on failing one today and listing the blocking one, I think we are good to switch tox based testing today > > > and discuss the integration testing switch tomorrow in TC office hours. > > > > > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > > > > > I have checked it again and fixed many repos that are up for review and merge. Most python clients are already fixed > > > or their fixes are up for merge so they can make it before the feature freeze on 10th. If any repo is broken then it will be pretty quick > > > to fix by lower constraint bump (see the example under https://review.opendev.org/#/q/topic:migrate-to-focal) > > > > > > Even if any of the fixes miss the victoria release then those can be backported easily. I am opening the tox base jobs migration to merge: > > > - All patches in this series https://review.opendev.org/#/c/738328/ > > > > All these tox base jobs are merged now and running on Focal. 
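In practice the switch described above is usually just a nodeset change in the repo's own Zuul configuration. A minimal sketch follows, assuming the Focal nodesets defined in devstack; it is not taken from the patches linked in this thread, the job names are hypothetical, and the "openstack-single-node-focal" nodeset name should be verified against devstack's .zuul.yaml before copying:

    # Illustrative only: run a devstack-based job on Ubuntu Focal.
    - job:
        name: my-project-tempest              # hypothetical job name
        parent: devstack-tempest
        nodeset: openstack-single-node-focal  # assumed devstack-provided nodeset

    # For a branchless repo, add a branch variant so only Victoria-and-later
    # changes pick up Focal (the regex is an assumption; adjust to your branches).
    - job:
        name: my-project-tempest
        branches: ^(?!stable/(ocata|pike|queens|rocky|stein|train|ussuri)).*$
        nodeset: openstack-single-node-focal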
If any of your repo is failing, please fix on priority or ping me on IRC if failure not clear. > > You can find most of the fixes for possible failure in this topic: > > - https://review.opendev.org/#/q/topic:migrate-to-focal+(status:open+OR+status:merged) > > > > -gmann > > > > > > > > > > > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > > > > > We have three blocking open bugs here so I would like to discuss it in tomorrow's TC office hour also about how to proceed on this. > > > > > > 1. Nova: https://bugs.launchpad.net/nova/+bug/1882521 (https://bugs.launchpad.net/qemu/+bug/1894804) > > > 2. Barbican: https://storyboard.openstack.org/#!/story/2007732 > > > 3. Ceilometer: https://storyboard.openstack.org/#!/story/2008121 > > > > > > > > > -gmann > > > > > > > > > ---- On Mon, 07 Sep 2020 09:29:40 -0500 Ghanshyam Mann wrote ---- > > > > Hello Everyone, > > > > > > > > Please find the week R-4 updates on 'Ubuntu Focal migration' community goal. Its time to force the base jobs migration which can > > > > break the projects gate if not yet taken care of. Read below for the plan. > > > > > > > > Tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > > > > > Progress: > > > > ======= > > > > * We are close to V-3 release and this is time we have to complete this migration otherwise doing it in RC period can add > > > > unnecessary and last min delay. I am going to plan this migration in two-part. This will surely break some projects gate > > > > which is not yet finished the migration but we have to do at some time. Please let me know if any objection to the below > > > > plan. > > > > > > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > > > > > > > ** I am going to open tox base jobs migration (doc, unit, functional, lower-constraints etc) to merge by tomorrow. which is this > > > > series (all base patches of this): https://review.opendev.org/#/c/738328/ . > > > > > > > > **There are few repos still failing on requirements lower-constraints job specifically which I tried my best to fix as many as possible. > > > > Many are ready to merge also. Please merge or work on your projects repo testing before that or fix on priority if failing. > > > > > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > > > > > > > * We have few open bugs for this which are not yet resolved, we will see how it goes but the current plan is to migrate by 10th Sept. > > > > > > > > ** Bug#1882521 > > > > ** DB migration issues, > > > > *** alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 > > > > > > > > > > > > Testing Till now: > > > > ============ > > > > * ~200 repos gate have been tested or fixed till now. > > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+(status:abandoned+OR+status:merged) > > > > > > > > * ~100 repos are under test and failing. Debugging and fixing are in progress (If you would like to help, please check your > > > > project repos if I am late to fix them): > > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open > > > > > > > > * ~30repos fixes ready to merge: > > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open+label%3AVerified%3E%3D1%2Czuul+NOT+label%3AWorkflow%3C%3D-1 > > > > > > > > > > > > Bugs Report: > > > > ========== > > > > > > > > 1. Bug#1882521. 
(IN-PROGRESS) > > > > There is open bug for nova/cinder where three tempest tests are failing for > > > > volume detach operation. There is no clear root cause found yet > > > > -https://bugs.launchpad.net/cinder/+bug/1882521 > > > > We have skipped the tests in tempest base patch to proceed with the other > > > > projects testing but this is blocking things for the migration. > > > > > > > > 2. DB migration issues (IN-PROGRESS) > > > > * alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 > > > > > > > > 3. We encountered the nodeset name conflict with x/tobiko. (FIXED) > > > > nodeset conflict is resolved now and devstack provides all focal nodes now. > > > > > > > > 4. Bug#1886296. (IN-PROGRESS) > > > > pyflakes till 2.1.0 is not compatible with python 3.8 which is the default python version > > > > on ubuntu focal[1]. With pep8 job running on focal faces the issue and fail. We need to bump > > > > the pyflakes to 2.1.1 as min version to run pep8 jobs on py3.8. > > > > As of now, many projects are using old hacking version so I am explicitly adding pyflakes>=2.1.1 > > > > on the project side[2] but for the long term easy maintenance, I am doing it in 'hacking' requirements.txt[3] > > > > nd will release a new hacking version. After that project can move to new hacking and do not need > > > > to maintain pyflakes version compatibility. > > > > > > > > 5. Bug#1886298. (IN-PROGRESS) > > > > 'Markupsafe' 1.0 is not compatible with the latest version of setuptools[4], > > > > We need to bump the lower-constraint for Markupsafe to 1.1.1 to make it work. > > > > There are a few more issues[5] with lower-constraint jobs which I am debugging. > > > > > > > > > > > > What work to be done on the project side: > > > > ================================ > > > > This goal is more of testing the jobs on focal and fixing bugs if any otherwise > > > > migrate jobs by switching the nodeset to focal node sets defined in devstack. > > > > > > > > 1. Start a patch in your repo by making depends-on on either of below: > > > > devstack base patch if you are using only devstack base jobs not tempest: > > > > > > > > Depends-on: https://review.opendev.org/#/c/731207/ > > > > OR > > > > tempest base patch if you are using the tempest base job (like devstack-tempest): > > > > Depends-on: https://review.opendev.org/#/c/734700/ > > > > > > > > Both have depends-on on the series where I am moving unit/functional/doc/cover/nodejs tox jobs to focal. So > > > > you can test the complete gate jobs(unit/functional/doc/integration) together. > > > > This and its base patches - https://review.opendev.org/#/c/738328/ > > > > > > > > Example: https://review.opendev.org/#/c/738126/ > > > > > > > > 2. If none of your project jobs override the nodeset then above patch will be > > > > testing patch(do not merge) otherwise change the nodeset to focal. > > > > Example: https://review.opendev.org/#/c/737370/ > > > > > > > > 3. If the jobs are defined in branchless repo and override the nodeset then you need to override the branches > > > > variant to adjust the nodeset so that those jobs run on Focal on victoria onwards only. If no nodeset > > > > is overridden then devstack being branched and stable base job using bionic/xenial will take care of > > > > this. > > > > Example: https://review.opendev.org/#/c/744056/2 > > > > > > > > 4. If no updates need you can abandon the testing patch (https://review.opendev.org/#/c/744341/). 
If it need > > > > updates then modify the same patch with proper commit msg, once it pass the gate then remove the Depends-On > > > > so that you can merge your patch before base jobs are switched to focal. This way we make sure no gate downtime in > > > > this migration. > > > > Example: https://review.opendev.org/#/c/744056/1..2//COMMIT_MSG > > > > > > > > Once we finish the testing on projects side and no failure then we will merge the devstack and tempest > > > > base patches. > > > > > > > > > > > > Important things to note: > > > > =================== > > > > * Do not forgot to add the story and task link to your patch so that we can track it smoothly. > > > > * Use gerrit topic 'migrate-to-focal' > > > > * Do not backport any of the patches. > > > > > > > > > > > > References: > > > > ========= > > > > Goal doc: https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal.html > > > > Storyboard tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > > > > > [1] https://github.com/PyCQA/pyflakes/issues/367 > > > > [2] https://review.opendev.org/#/c/739315/ > > > > [3] https://review.opendev.org/#/c/739334/ > > > > [4] https://github.com/pallets/markupsafe/issues/116 > > > > [5] https://zuul.opendev.org/t/openstack/build/7ecd9cf100194bc99b3b70fa1e6de032 > > > > > > > > -gmann > > > > > > > > > > > > > > > > > > > From radoslaw.piliszek at gmail.com Thu Sep 10 11:10:44 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 10 Sep 2020 13:10:44 +0200 Subject: Is Storyboard really the future? Message-ID: Hi fellow OpenStackers, The subject is the question I am posing. The general recommendation is to use Storyboard for new projects [1]. However, Storyboard does not seem to be receiving enough love recently [2] [3]. It's generally deemed as slow [4] and is missing quite a few usability enhancements [2]. Considering the alternative, and the actual previous recommendation, is Launchpad, I find it no-brainer to revert to recommending Launchpad still, even paving a way for projects wishing to escape the Storyboard nightmare. :-) Don't get me wrong, I really like the Story/Task orientation but Storyboard results in a frustrating experience. In addition to being slow, it has issues with search/filter, sorting and pagination which make issue browsing an utter pain. I know Launchpad is not without issues but it's certainly a much better platform at the moment. And many projects are still there. The Launchpad-Storyboard split is also introducing confusion for users [5] and coordinability issues for teams as we need to cross-link manually to get proper visibility. All in all, I ask you to consider recommending Launchpad again and encourage OpenStack projects to move to Launchpad. Extra note: I find it in a similar spot as ask.o.o - nice it has been tried, but unfortunately it did not stand the test of time. [1] https://docs.opendev.org/opendev/infra-manual/latest/creators.html [2] https://storyboard.openstack.org/#!/project/opendev/storyboard [3] https://opendev.org/opendev/storyboard/commits/branch/master [4] https://storyboard.openstack.org/#!/story/2007829 [5] https://storyboard.openstack.org/#!/story/2000890 -yoctozepto From mnaser at vexxhost.com Thu Sep 10 11:55:32 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 10 Sep 2020 07:55:32 -0400 Subject: [tc] meeting summary Message-ID: Hi everyone, Here's a summary of what happened in our TC monthly meeting last Thursday, September 3. 
# ATTENDEES (LINES SAID) - mnaser (72) - gmann (35) - ttx (22) - diablo_rojo (19) - belmoreira (18) - knikolla (16) - njohnston (11) - fungi (7) - smcginnis (2) - ricolin (2) # MEETING SUMMARY 1. Rollcall (mnaser, 14:01:13) 2. Follow up on past action items (mnaser, 14:04:27) - http://eavesdrop.openstack.org/meetings/tc/2020/tc.2020-08-06-14.00.html (mnaser, 14:04:59) 3. OpenStack User-facing APIs and CLIs (belmoreira, 14:11:33) 4. W cycle goal selection start (mnaser, 14:33:43) 5. Completion of retirement cleanup (gmann, 14:34:45) - https://etherpad.opendev.org/p/tc-retirement-cleanup (mnaser, 14:34:52) - https://review.opendev.org/#/c/745403/ (mnaser, 14:35:06) 6. Audit and clean-up tags (gmann, 14:37:25) - https://review.opendev.org/#/c/749363/ (mnaser, 14:37:35) 7. open discussion (mnaser, 14:43:54) # ACTION ITEMS - tc-members to follow up and review "Resolution to define distributed leadership for projects". - mnaser schedule session with sig-arch and k8s steering committee. - njohnston to find someone to work with on getting goals groomed/proposed for W cycle. - belmoreira/knikolla figure out logistics of a document with gaps within osc. - diablo_rojo help schedule forum session for OSC gaps. To read the full logs of the meeting, please refer to http://eavesdrop.openstack.org/meetings/tc/2020/tc.2020-09-03-14.01.log.html Thanks, Mohammed -- Mohammed Naser VEXXHOST, Inc. From smooney at redhat.com Thu Sep 10 12:06:52 2020 From: smooney at redhat.com (Sean Mooney) Date: Thu, 10 Sep 2020 13:06:52 +0100 Subject: Is Storyboard really the future? In-Reply-To: References: Message-ID: <0116e7bfa3c5b0dc81b5f6086c96f4e4d51b7627.camel@redhat.com> On Thu, 2020-09-10 at 13:10 +0200, Radosław Piliszek wrote: > Hi fellow OpenStackers, > > The subject is the question I am posing. > The general recommendation is to use Storyboard for new projects [1]. > However, Storyboard does not seem to be receiving enough love recently [2] [3]. > It's generally deemed as slow [4] and is missing quite a few usability > enhancements [2]. > Considering the alternative, and the actual previous recommendation, > is Launchpad, I find it no-brainer to revert to recommending Launchpad > still, even paving a way for projects wishing to escape the Storyboard > nightmare. :-) > > Don't get me wrong, I really like the Story/Task orientation but > Storyboard results in a frustrating experience. In addition to being > slow, it has issues with search/filter, sorting and pagination which > make issue browsing an utter pain. > I know Launchpad is not without issues but it's certainly a much > better platform at the moment. > And many projects are still there. > > The Launchpad-Storyboard split is also introducing confusion for users > [5] and coordinability issues for teams as we need to cross-link > manually to get proper visibility. > > All in all, I ask you to consider recommending Launchpad again and > encourage OpenStack projects to move to Launchpad. Some projects like nova and neutron never left Launchpad, and personally I had hoped they never would, so that is still my preference; but as far as I know there is nothing preventing any project from moving to or from Launchpad as it stands. If kolla* want to move they can without needing to change any other policies; the cookiecutter template for setting up a new repo also supports both, and you can select it when you create the repo. I'm not sure what the default is currently, but you are given the choice if I recall correctly.
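To make that choice concrete, here is a hedged sketch of what a new-repo entry in openstack/project-config (gerrit/projects.yaml) can look like; the key names are recalled from memory and should be treated as assumptions to verify against the OpenDev infra manual before use:

    # Illustrative entries only; verify the exact keys against the infra manual.
    - project: openstack/example-storyboard-repo      # hypothetical repo name
      description: Example deliverable tracked in StoryBoard.
      use-storyboard: true      # assumed flag: create/track the project in StoryBoard

    - project: openstack/example-launchpad-repo       # hypothetical repo name
      description: Example deliverable tracked in Launchpad.
      # no use-storyboard flag; bugs stay on the matching Launchpad project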
> > Extra note: I find it in a similar spot as ask.o.o - nice it has been > tried, but unfortunately it did not stand the test of time. > > [1] https://docs.opendev.org/opendev/infra-manual/latest/creators.html > [2] https://storyboard.openstack.org/#!/project/opendev/storyboard > [3] https://opendev.org/opendev/storyboard/commits/branch/master > [4] https://storyboard.openstack.org/#!/story/2007829 > [5] https://storyboard.openstack.org/#!/story/2000890 > > -yoctozepto > From elod.illes at est.tech Thu Sep 10 12:40:53 2020 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Thu, 10 Sep 2020 14:40:53 +0200 Subject: [cinder][stable][infra] branch freeze for ocata, pike In-Reply-To: References: Message-ID: <56e12f48-f2f8-0a26-1830-093c9fe9d9db@est.tech> Hi Infra Team, While reviewing Andreas' patch [1], I have realized, that Cinder's stable/ocata and stable/pike branches were not deleted yet, however those branches were EOL'd already (see mail below). According to the process [2], since the EOL patches have merged already, if @Brian doesn't object, can you please delete - cinder stable/ocata - cinder stable/pike Thanks in advance, Előd [1] https://review.opendev.org/#/c/750887/ [2] https://docs.openstack.org/project-team-guide/stable-branches.html#end-of-life On 2020. 07. 28. 16:23, Brian Rosmaita wrote: > tl;dr - do not approve any backports to stable/ocata or stable/pike in > any Cinder project deliverable > > stable/ocata has been tagged with ocata-eol in cinder, os-brick, > python-cinderclient, and python-brick-cinderclient-ext.  Nothing > should be merged into stable/ocata in any of these repositories during > the interim period before the branches are deleted. > > stable/pike: the changes discussed in [0] have merged, and I've > proposed the pike-eol tags [1].  Nothing should be merged into > stable/pike in any of our code repositories from now until the > branches are deleted. > > [0] > http://lists.openstack.org/pipermail/openstack-discuss/2020-July/016076.html > [1] https://review.opendev.org/#/c/742523/ > From elmiko at redhat.com Thu Sep 10 12:41:37 2020 From: elmiko at redhat.com (Michael McCune) Date: Thu, 10 Sep 2020 08:41:37 -0400 Subject: Moving on In-Reply-To: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> References: <4625D3D3-CDE8-4E4C-9318-013DC2895F26@inaugust.com> Message-ID: Monty, it was a true pleasure having the opportunity to cross paths and collaborate with you. 10 years is a great run and we are richer for your contributions, best wishes for your next adventure =) peace o/ On Wed, Sep 9, 2020 at 12:16 PM Monty Taylor wrote: > Hi everybody, > > After 10 years of OpenStack, the time has come for me to move on to the > next challenge. Actually, the time came a few weeks ago, but writing > farewells has always been something I’m particularly bad at. My last day at > Red Hat was actually July 31 and I’m now working in a non-OpenStack job. > > I’m at a loss for words as to what more to say. I’ve never done anything > for 10 years before, and I’ll be very surprised if I do anything else for > 10 years again. While I’m excited about the new things on my plate, I’ll > obviously miss everyone. > > As I am no longer being paid by an OpenStack employer, I will not be doing > any OpenStack things as part of my day job. I’m not sure how much spare > time I’ll have to be able to contribute. I’m going to hold off on resigning > core memberships pending a better understanding of that. 
I think it’s safe > to assume I won’t be able to continue on as SDK PTL though. > > I wish everyone all the best, and I hope life conspires to keep us all > connected. > > Thank you to everyone for an amazing 10 years. > > Monty > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Thu Sep 10 12:42:39 2020 From: aj at suse.com (Andreas Jaeger) Date: Thu, 10 Sep 2020 14:42:39 +0200 Subject: [cinder][stable] branch freeze for ocata, pike In-Reply-To: References: Message-ID: <5af2fc9b-dfeb-f7cf-a491-fb4eab14f76f@suse.com> On 28.07.20 16:23, Brian Rosmaita wrote: > tl;dr - do not approve any backports to stable/ocata or stable/pike in > any Cinder project deliverable > > stable/ocata has been tagged with ocata-eol in cinder, os-brick, > python-cinderclient, and python-brick-cinderclient-ext.  Nothing should > be merged into stable/ocata in any of these repositories during the > interim period before the branches are deleted. When do you plan to delete those branches? We have Zuul jobs that are broken, for example due to removal of devstack-plugin-zmq and we either should remove these from the branch or delete the branch. Currently Zuul complains about broken jobs. The two changes I talk about are: https://review.opendev.org/750887 https://review.opendev.org/750886 Andreas > > stable/pike: the changes discussed in [0] have merged, and I've > proposed the pike-eol tags [1].  Nothing should be merged into > stable/pike in any of our code repositories from now until the branches > are deleted. > > [0] > http://lists.openstack.org/pipermail/openstack-discuss/2020-July/016076.html > > [1] https://review.opendev.org/#/c/742523/ > -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB From radoslaw.piliszek at gmail.com Thu Sep 10 13:22:44 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 10 Sep 2020 15:22:44 +0200 Subject: Is Storyboard really the future? In-Reply-To: <0116e7bfa3c5b0dc81b5f6086c96f4e4d51b7627.camel@redhat.com> References: <0116e7bfa3c5b0dc81b5f6086c96f4e4d51b7627.camel@redhat.com> Message-ID: Hi Sean, On Thu, Sep 10, 2020 at 2:06 PM Sean Mooney wrote: > > On Thu, 2020-09-10 at 13:10 +0200, Radosław Piliszek wrote: > > Hi fellow OpenStackers, > > > > The subject is the question I am posing. > > The general recommendation is to use Storyboard for new projects [1]. > > However, Storyboard does not seem to be receiving enough love recently [2] [3]. > > It's generally deemed as slow [4] and is missing quite a few usability > > enhancements [2]. > > Considering the alternative, and the actual previous recommendation, > > is Launchpad, I find it no-brainer to revert to recommending Launchpad > > still, even paving a way for projects wishing to escape the Storyboard > > nightmare. :-) > > > > Don't get me wrong, I really like the Story/Task orientation but > > Storyboard results in a frustrating experience. In addition to being > > slow, it has issues with search/filter, sorting and pagination which > > make issue browsing an utter pain. > > I know Launchpad is not without issues but it's certainly a much > > better platform at the moment. > > And many projects are still there. 
> > > > The Launchpad-Storyboard split is also introducing confusion for users > > [5] and coordinability issues for teams as we need to cross-link > > manually to get proper visibility. > > > > All in all, I ask you to consider recommending Launchpad again and > > encourage OpenStack projects to move to Launchpad. > some porjects like nova and neutron never left launchpad and personally > i had hopped they never would so that is still my preference but as far > as i know there is notihgn preventing any project form move too or moving > from launchpad as it stands. Kolla did neither. We only have Kayobe that's on Storyboard (due to recommendation). I did not want to sound like it was enforced. It is not - as far as I understand it. The thing is: recommending a perceivably worse solution does not seem like a good idea to me. It also does not benefit the scene to split it between two worlds. -yoctozepto > if kolla* waht move they can without needing to change any other policies > the cookie cutter templeate for seting up new repo also support both you can > select it when you create the repo. im not sure what the default is currently > but you are given the choice if i recall correctly. > > > > Extra note: I find it in a similar spot as ask.o.o - nice it has been > > tried, but unfortunately it did not stand the test of time. > > > > [1] https://docs.opendev.org/opendev/infra-manual/latest/creators.html > > [2] https://storyboard.openstack.org/#!/project/opendev/storyboard > > [3] https://opendev.org/opendev/storyboard/commits/branch/master > > [4] https://storyboard.openstack.org/#!/story/2007829 > > [5] https://storyboard.openstack.org/#!/story/2000890 > > > > -yoctozepto > > > From rosmaita.fossdev at gmail.com Thu Sep 10 13:30:20 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 10 Sep 2020 09:30:20 -0400 Subject: [cinder][stable][infra] branch freeze for ocata, pike In-Reply-To: <56e12f48-f2f8-0a26-1830-093c9fe9d9db@est.tech> References: <56e12f48-f2f8-0a26-1830-093c9fe9d9db@est.tech> Message-ID: <2013dd9e-386a-1190-8e5b-33338cc44d59@gmail.com> On 9/10/20 8:40 AM, Előd Illés wrote: > Hi Infra Team, > > While reviewing Andreas' patch [1], I have realized, that Cinder's > stable/ocata and stable/pike branches were not deleted yet, however > those branches were EOL'd already (see mail below). > > According to the process [2], since the EOL patches have merged already, > if @Brian doesn't object, can you please delete > > - cinder stable/ocata > - cinder stable/pike I have no objection, but I haven't pushed the infra team about the actual branch deletion because as far as I know, cinder is the first project to actually request removal, and I suspect all sorts of stuff will break. I suggest we wait until at least after RC-time to give us all one less thing to worry about. As far as avoiding breakage goes, I put up two patches to devstack so that it will check out the -eol tag of cinder/brick/cinderclient instead of the stable branch, but I suspect these only scratch the surface of what can be broken once the cinder project branches are deleted. https://review.opendev.org/#/c/742953/ https://review.opendev.org/#/c/742952/ Sean suggested in an earlier thread on this topic [0] that instead of deleting very old EM branches that some projects have EOL'd project-by-project, we should just delete them wholesale across openstack. That makes a lot of sense to me. 
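The -eol checkout mentioned above can also be pinned per job; the following is a hedged sketch only, not the content of the two reviews linked above, using the devstack_localrc variable of the devstack base job with an illustrative job name:

    # Illustrative only: make a devstack-based job use the ocata-eol tag
    # instead of the soon-to-be-deleted stable/ocata branch of cinder.
    - job:
        name: example-ocata-dsvm            # hypothetical job name
        parent: devstack
        vars:
          devstack_localrc:
            CINDER_BRANCH: ocata-eol        # devstack checks out this ref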
[0] http://lists.openstack.org/pipermail/openstack-discuss/2020-May/015115.html cheers, brian > > Thanks in advance, > > Előd > > [1] https://review.opendev.org/#/c/750887/ > [2] > https://docs.openstack.org/project-team-guide/stable-branches.html#end-of-life > > > > On 2020. 07. 28. 16:23, Brian Rosmaita wrote: >> tl;dr - do not approve any backports to stable/ocata or stable/pike in >> any Cinder project deliverable >> >> stable/ocata has been tagged with ocata-eol in cinder, os-brick, >> python-cinderclient, and python-brick-cinderclient-ext.  Nothing >> should be merged into stable/ocata in any of these repositories during >> the interim period before the branches are deleted. >> >> stable/pike: the changes discussed in [0] have merged, and I've >> proposed the pike-eol tags [1].  Nothing should be merged into >> stable/pike in any of our code repositories from now until the >> branches are deleted. >> >> [0] >> http://lists.openstack.org/pipermail/openstack-discuss/2020-July/016076.html >> >> [1] https://review.opendev.org/#/c/742523/ >> > From gmann at ghanshyammann.com Thu Sep 10 13:43:15 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 10 Sep 2020 08:43:15 -0500 Subject: [all][tc][goals] Migrate CI/CD jobs to new Ubuntu LTS Focal: Week R-4 Update In-Reply-To: <236b2c69-530a-2266-08e3-170b86c16a9d@gmail.com> References: <17468f8f785.1123a07e6307801.3844845549431858647@ghanshyammann.com> <1746feef403.c41c848411058.2421030943223377212@ghanshyammann.com> <1747442027f.bb3f3d1823475.212800600164097649@ghanshyammann.com> <17476128313.f7b6001e28321.7088729119972703547@ghanshyammann.com> <236b2c69-530a-2266-08e3-170b86c16a9d@gmail.com> Message-ID: <17478418b45.c9cb264d62678.8113988885859095234@ghanshyammann.com> ---- On Thu, 10 Sep 2020 04:31:17 -0500 Yasufumi Ogawa wrote ---- > Hi gmann, > > Sorry for that we've not merged your patch to Tacker because devstack on > Focal fails in functional test. It seems gnocchi installation on Focal > has some problems. > > Anyway, although this issue isn't fixed yet, we'll proceed to merge the > patch immediately. Thanks Yasufumi. I reported the gnoochi issue in the below storyboard and tried to reach out to the ceilometer team also but found not get any response. I will check what to do on this blocker. https://storyboard.openstack.org/#!/story/2008121 -gmann > > Thanks, > Yasufumi > > On 2020/09/10 12:32, Ghanshyam Mann wrote: > > Updates: > > > > Fixed a few more projects today which I found failing on Focal: > > > > - OpenStack SDKs repos : ready to merge > > - All remaining Oslo lib fixes: we are discussing FFE on these in separate ML thread. > > - Keystone: Fix is up, it should pass now. > > - Manila: Fix is up, it should pass gate. > > - Tacker: Ready to merge > > - neutron-dynamic-routing: Ready to merge > > - Cinder- it seems l-c job still failing. I will dig into it tomorrow or it will be appreciated if anyone can take a look before my morning. > > this is the patch -https://review.opendev.org/#/c/743080/ > > > > Note: all tox based jobs (Except py36/3.7) are running on Focal now so If any of you gate failing, feel free to ping me on #openstack-qa > > > > No more energy left for today, I will continue the remaining work tomorrow. 
> > > > -gmann > > > > ---- On Wed, 09 Sep 2020 14:05:17 -0500 Ghanshyam Mann wrote ---- > > > ---- On Tue, 08 Sep 2020 17:56:05 -0500 Ghanshyam Mann wrote ---- > > > > Updates: > > > > After working more on failing one today and listing the blocking one, I think we are good to switch tox based testing today > > > > and discuss the integration testing switch tomorrow in TC office hours. > > > > > > > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > > > > > > > I have checked it again and fixed many repos that are up for review and merge. Most python clients are already fixed > > > > or their fixes are up for merge so they can make it before the feature freeze on 10th. If any repo is broken then it will be pretty quick > > > > to fix by lower constraint bump (see the example under https://review.opendev.org/#/q/topic:migrate-to-focal) > > > > > > > > Even if any of the fixes miss the victoria release then those can be backported easily. I am opening the tox base jobs migration to merge: > > > > - All patches in this series https://review.opendev.org/#/c/738328/ > > > > > > All these tox base jobs are merged now and running on Focal. If any of your repo is failing, please fix on priority or ping me on IRC if failure not clear. > > > You can find most of the fixes for possible failure in this topic: > > > - https://review.opendev.org/#/q/topic:migrate-to-focal+(status:open+OR+status:merged) > > > > > > -gmann > > > > > > > > > > > > > > > > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > > > > > > > We have three blocking open bugs here so I would like to discuss it in tomorrow's TC office hour also about how to proceed on this. > > > > > > > > 1. Nova: https://bugs.launchpad.net/nova/+bug/1882521 (https://bugs.launchpad.net/qemu/+bug/1894804) > > > > 2. Barbican: https://storyboard.openstack.org/#!/story/2007732 > > > > 3. Ceilometer: https://storyboard.openstack.org/#!/story/2008121 > > > > > > > > > > > > -gmann > > > > > > > > > > > > ---- On Mon, 07 Sep 2020 09:29:40 -0500 Ghanshyam Mann wrote ---- > > > > > Hello Everyone, > > > > > > > > > > Please find the week R-4 updates on 'Ubuntu Focal migration' community goal. Its time to force the base jobs migration which can > > > > > break the projects gate if not yet taken care of. Read below for the plan. > > > > > > > > > > Tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > > > > > > > Progress: > > > > > ======= > > > > > * We are close to V-3 release and this is time we have to complete this migration otherwise doing it in RC period can add > > > > > unnecessary and last min delay. I am going to plan this migration in two-part. This will surely break some projects gate > > > > > which is not yet finished the migration but we have to do at some time. Please let me know if any objection to the below > > > > > plan. > > > > > > > > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > > > > > > > > > ** I am going to open tox base jobs migration (doc, unit, functional, lower-constraints etc) to merge by tomorrow. which is this > > > > > series (all base patches of this): https://review.opendev.org/#/c/738328/ . > > > > > > > > > > **There are few repos still failing on requirements lower-constraints job specifically which I tried my best to fix as many as possible. > > > > > Many are ready to merge also. Please merge or work on your projects repo testing before that or fix on priority if failing. 
> > > > > > > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > > > > > > > > > * We have few open bugs for this which are not yet resolved, we will see how it goes but the current plan is to migrate by 10th Sept. > > > > > > > > > > ** Bug#1882521 > > > > > ** DB migration issues, > > > > > *** alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 > > > > > > > > > > > > > > > Testing Till now: > > > > > ============ > > > > > * ~200 repos gate have been tested or fixed till now. > > > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+(status:abandoned+OR+status:merged) > > > > > > > > > > * ~100 repos are under test and failing. Debugging and fixing are in progress (If you would like to help, please check your > > > > > project repos if I am late to fix them): > > > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open > > > > > > > > > > * ~30repos fixes ready to merge: > > > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open+label%3AVerified%3E%3D1%2Czuul+NOT+label%3AWorkflow%3C%3D-1 > > > > > > > > > > > > > > > Bugs Report: > > > > > ========== > > > > > > > > > > 1. Bug#1882521. (IN-PROGRESS) > > > > > There is open bug for nova/cinder where three tempest tests are failing for > > > > > volume detach operation. There is no clear root cause found yet > > > > > -https://bugs.launchpad.net/cinder/+bug/1882521 > > > > > We have skipped the tests in tempest base patch to proceed with the other > > > > > projects testing but this is blocking things for the migration. > > > > > > > > > > 2. DB migration issues (IN-PROGRESS) > > > > > * alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 > > > > > > > > > > 3. We encountered the nodeset name conflict with x/tobiko. (FIXED) > > > > > nodeset conflict is resolved now and devstack provides all focal nodes now. > > > > > > > > > > 4. Bug#1886296. (IN-PROGRESS) > > > > > pyflakes till 2.1.0 is not compatible with python 3.8 which is the default python version > > > > > on ubuntu focal[1]. With pep8 job running on focal faces the issue and fail. We need to bump > > > > > the pyflakes to 2.1.1 as min version to run pep8 jobs on py3.8. > > > > > As of now, many projects are using old hacking version so I am explicitly adding pyflakes>=2.1.1 > > > > > on the project side[2] but for the long term easy maintenance, I am doing it in 'hacking' requirements.txt[3] > > > > > nd will release a new hacking version. After that project can move to new hacking and do not need > > > > > to maintain pyflakes version compatibility. > > > > > > > > > > 5. Bug#1886298. (IN-PROGRESS) > > > > > 'Markupsafe' 1.0 is not compatible with the latest version of setuptools[4], > > > > > We need to bump the lower-constraint for Markupsafe to 1.1.1 to make it work. > > > > > There are a few more issues[5] with lower-constraint jobs which I am debugging. > > > > > > > > > > > > > > > What work to be done on the project side: > > > > > ================================ > > > > > This goal is more of testing the jobs on focal and fixing bugs if any otherwise > > > > > migrate jobs by switching the nodeset to focal node sets defined in devstack. > > > > > > > > > > 1. 
Start a patch in your repo by making depends-on on either of below: > > > > > devstack base patch if you are using only devstack base jobs not tempest: > > > > > > > > > > Depends-on: https://review.opendev.org/#/c/731207/ > > > > > OR > > > > > tempest base patch if you are using the tempest base job (like devstack-tempest): > > > > > Depends-on: https://review.opendev.org/#/c/734700/ > > > > > > > > > > Both have depends-on on the series where I am moving unit/functional/doc/cover/nodejs tox jobs to focal. So > > > > > you can test the complete gate jobs(unit/functional/doc/integration) together. > > > > > This and its base patches - https://review.opendev.org/#/c/738328/ > > > > > > > > > > Example: https://review.opendev.org/#/c/738126/ > > > > > > > > > > 2. If none of your project jobs override the nodeset then above patch will be > > > > > testing patch(do not merge) otherwise change the nodeset to focal. > > > > > Example: https://review.opendev.org/#/c/737370/ > > > > > > > > > > 3. If the jobs are defined in branchless repo and override the nodeset then you need to override the branches > > > > > variant to adjust the nodeset so that those jobs run on Focal on victoria onwards only. If no nodeset > > > > > is overridden then devstack being branched and stable base job using bionic/xenial will take care of > > > > > this. > > > > > Example: https://review.opendev.org/#/c/744056/2 > > > > > > > > > > 4. If no updates need you can abandon the testing patch (https://review.opendev.org/#/c/744341/). If it need > > > > > updates then modify the same patch with proper commit msg, once it pass the gate then remove the Depends-On > > > > > so that you can merge your patch before base jobs are switched to focal. This way we make sure no gate downtime in > > > > > this migration. > > > > > Example: https://review.opendev.org/#/c/744056/1..2//COMMIT_MSG > > > > > > > > > > Once we finish the testing on projects side and no failure then we will merge the devstack and tempest > > > > > base patches. > > > > > > > > > > > > > > > Important things to note: > > > > > =================== > > > > > * Do not forgot to add the story and task link to your patch so that we can track it smoothly. > > > > > * Use gerrit topic 'migrate-to-focal' > > > > > * Do not backport any of the patches. 
> > > > > > > > > > > > > > > References: > > > > > ========= > > > > > Goal doc: https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal.html > > > > > Storyboard tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > > > > > > > [1] https://github.com/PyCQA/pyflakes/issues/367 > > > > > [2] https://review.opendev.org/#/c/739315/ > > > > > [3] https://review.opendev.org/#/c/739334/ > > > > > [4] https://github.com/pallets/markupsafe/issues/116 > > > > > [5] https://zuul.opendev.org/t/openstack/build/7ecd9cf100194bc99b3b70fa1e6de032 > > > > > > > > > > -gmann > > > > > > > > > > > > > > > > > > > > > > > > > > > From gmann at ghanshyammann.com Thu Sep 10 13:46:19 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 10 Sep 2020 08:46:19 -0500 Subject: [all][tc][goals] Migrate CI/CD jobs to new Ubuntu LTS Focal: Week R-4 Update In-Reply-To: References: <17468f8f785.1123a07e6307801.3844845549431858647@ghanshyammann.com> <1746feef403.c41c848411058.2421030943223377212@ghanshyammann.com> <1747442027f.bb3f3d1823475.212800600164097649@ghanshyammann.com> <17476128313.f7b6001e28321.7088729119972703547@ghanshyammann.com> Message-ID: <17478445846.1153c6daf62908.4233237804379232164@ghanshyammann.com> ---- On Thu, 10 Sep 2020 02:18:02 -0500 Radosław Piliszek wrote ---- > I've triaged this for Kolla and Kolla-Ansible too. > > Bifrost is also affected (but it's on Storyboard). Thanks yoctozepto for fixing these. -gmann > > -yoctozepto > > On Thu, Sep 10, 2020 at 5:42 AM Ghanshyam Mann wrote: > > > > Updates: > > > > Fixed a few more projects today which I found failing on Focal: > > > > - OpenStack SDKs repos : ready to merge > > - All remaining Oslo lib fixes: we are discussing FFE on these in separate ML thread. > > - Keystone: Fix is up, it should pass now. > > - Manila: Fix is up, it should pass gate. > > - Tacker: Ready to merge > > - neutron-dynamic-routing: Ready to merge > > - Cinder- it seems l-c job still failing. I will dig into it tomorrow or it will be appreciated if anyone can take a look before my morning. > > this is the patch -https://review.opendev.org/#/c/743080/ > > > > Note: all tox based jobs (Except py36/3.7) are running on Focal now so If any of you gate failing, feel free to ping me on #openstack-qa > > > > No more energy left for today, I will continue the remaining work tomorrow. > > > > -gmann > > > > ---- On Wed, 09 Sep 2020 14:05:17 -0500 Ghanshyam Mann wrote ---- > > > ---- On Tue, 08 Sep 2020 17:56:05 -0500 Ghanshyam Mann wrote ---- > > > > Updates: > > > > After working more on failing one today and listing the blocking one, I think we are good to switch tox based testing today > > > > and discuss the integration testing switch tomorrow in TC office hours. > > > > > > > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > > > > > > > I have checked it again and fixed many repos that are up for review and merge. Most python clients are already fixed > > > > or their fixes are up for merge so they can make it before the feature freeze on 10th. If any repo is broken then it will be pretty quick > > > > to fix by lower constraint bump (see the example under https://review.opendev.org/#/q/topic:migrate-to-focal) > > > > > > > > Even if any of the fixes miss the victoria release then those can be backported easily. 
I am opening the tox base jobs migration to merge: > > > > - All patches in this series https://review.opendev.org/#/c/738328/ > > > > > > All these tox base jobs are merged now and running on Focal. If any of your repo is failing, please fix on priority or ping me on IRC if failure not clear. > > > You can find most of the fixes for possible failure in this topic: > > > - https://review.opendev.org/#/q/topic:migrate-to-focal+(status:open+OR+status:merged) > > > > > > -gmann > > > > > > > > > > > > > > > > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > > > > > > > We have three blocking open bugs here so I would like to discuss it in tomorrow's TC office hour also about how to proceed on this. > > > > > > > > 1. Nova: https://bugs.launchpad.net/nova/+bug/1882521 (https://bugs.launchpad.net/qemu/+bug/1894804) > > > > 2. Barbican: https://storyboard.openstack.org/#!/story/2007732 > > > > 3. Ceilometer: https://storyboard.openstack.org/#!/story/2008121 > > > > > > > > > > > > -gmann > > > > > > > > > > > > ---- On Mon, 07 Sep 2020 09:29:40 -0500 Ghanshyam Mann wrote ---- > > > > > Hello Everyone, > > > > > > > > > > Please find the week R-4 updates on 'Ubuntu Focal migration' community goal. Its time to force the base jobs migration which can > > > > > break the projects gate if not yet taken care of. Read below for the plan. > > > > > > > > > > Tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > > > > > > > Progress: > > > > > ======= > > > > > * We are close to V-3 release and this is time we have to complete this migration otherwise doing it in RC period can add > > > > > unnecessary and last min delay. I am going to plan this migration in two-part. This will surely break some projects gate > > > > > which is not yet finished the migration but we have to do at some time. Please let me know if any objection to the below > > > > > plan. > > > > > > > > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > > > > > > > > > ** I am going to open tox base jobs migration (doc, unit, functional, lower-constraints etc) to merge by tomorrow. which is this > > > > > series (all base patches of this): https://review.opendev.org/#/c/738328/ . > > > > > > > > > > **There are few repos still failing on requirements lower-constraints job specifically which I tried my best to fix as many as possible. > > > > > Many are ready to merge also. Please merge or work on your projects repo testing before that or fix on priority if failing. > > > > > > > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > > > > > > > > > * We have few open bugs for this which are not yet resolved, we will see how it goes but the current plan is to migrate by 10th Sept. > > > > > > > > > > ** Bug#1882521 > > > > > ** DB migration issues, > > > > > *** alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 > > > > > > > > > > > > > > > Testing Till now: > > > > > ============ > > > > > * ~200 repos gate have been tested or fixed till now. > > > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+(status:abandoned+OR+status:merged) > > > > > > > > > > * ~100 repos are under test and failing. 
Debugging and fixing are in progress (If you would like to help, please check your > > > > > project repos if I am late to fix them): > > > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open > > > > > > > > > > * ~30repos fixes ready to merge: > > > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open+label%3AVerified%3E%3D1%2Czuul+NOT+label%3AWorkflow%3C%3D-1 > > > > > > > > > > > > > > > Bugs Report: > > > > > ========== > > > > > > > > > > 1. Bug#1882521. (IN-PROGRESS) > > > > > There is open bug for nova/cinder where three tempest tests are failing for > > > > > volume detach operation. There is no clear root cause found yet > > > > > -https://bugs.launchpad.net/cinder/+bug/1882521 > > > > > We have skipped the tests in tempest base patch to proceed with the other > > > > > projects testing but this is blocking things for the migration. > > > > > > > > > > 2. DB migration issues (IN-PROGRESS) > > > > > * alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 > > > > > > > > > > 3. We encountered the nodeset name conflict with x/tobiko. (FIXED) > > > > > nodeset conflict is resolved now and devstack provides all focal nodes now. > > > > > > > > > > 4. Bug#1886296. (IN-PROGRESS) > > > > > pyflakes till 2.1.0 is not compatible with python 3.8 which is the default python version > > > > > on ubuntu focal[1]. With pep8 job running on focal faces the issue and fail. We need to bump > > > > > the pyflakes to 2.1.1 as min version to run pep8 jobs on py3.8. > > > > > As of now, many projects are using old hacking version so I am explicitly adding pyflakes>=2.1.1 > > > > > on the project side[2] but for the long term easy maintenance, I am doing it in 'hacking' requirements.txt[3] > > > > > nd will release a new hacking version. After that project can move to new hacking and do not need > > > > > to maintain pyflakes version compatibility. > > > > > > > > > > 5. Bug#1886298. (IN-PROGRESS) > > > > > 'Markupsafe' 1.0 is not compatible with the latest version of setuptools[4], > > > > > We need to bump the lower-constraint for Markupsafe to 1.1.1 to make it work. > > > > > There are a few more issues[5] with lower-constraint jobs which I am debugging. > > > > > > > > > > > > > > > What work to be done on the project side: > > > > > ================================ > > > > > This goal is more of testing the jobs on focal and fixing bugs if any otherwise > > > > > migrate jobs by switching the nodeset to focal node sets defined in devstack. > > > > > > > > > > 1. Start a patch in your repo by making depends-on on either of below: > > > > > devstack base patch if you are using only devstack base jobs not tempest: > > > > > > > > > > Depends-on: https://review.opendev.org/#/c/731207/ > > > > > OR > > > > > tempest base patch if you are using the tempest base job (like devstack-tempest): > > > > > Depends-on: https://review.opendev.org/#/c/734700/ > > > > > > > > > > Both have depends-on on the series where I am moving unit/functional/doc/cover/nodejs tox jobs to focal. So > > > > > you can test the complete gate jobs(unit/functional/doc/integration) together. > > > > > This and its base patches - https://review.opendev.org/#/c/738328/ > > > > > > > > > > Example: https://review.opendev.org/#/c/738126/ > > > > > > > > > > 2. If none of your project jobs override the nodeset then above patch will be > > > > > testing patch(do not merge) otherwise change the nodeset to focal. 
> > > > > Example: https://review.opendev.org/#/c/737370/ > > > > > > > > > > 3. If the jobs are defined in branchless repo and override the nodeset then you need to override the branches > > > > > variant to adjust the nodeset so that those jobs run on Focal on victoria onwards only. If no nodeset > > > > > is overridden then devstack being branched and stable base job using bionic/xenial will take care of > > > > > this. > > > > > Example: https://review.opendev.org/#/c/744056/2 > > > > > > > > > > 4. If no updates need you can abandon the testing patch (https://review.opendev.org/#/c/744341/). If it need > > > > > updates then modify the same patch with proper commit msg, once it pass the gate then remove the Depends-On > > > > > so that you can merge your patch before base jobs are switched to focal. This way we make sure no gate downtime in > > > > > this migration. > > > > > Example: https://review.opendev.org/#/c/744056/1..2//COMMIT_MSG > > > > > > > > > > Once we finish the testing on projects side and no failure then we will merge the devstack and tempest > > > > > base patches. > > > > > > > > > > > > > > > Important things to note: > > > > > =================== > > > > > * Do not forgot to add the story and task link to your patch so that we can track it smoothly. > > > > > * Use gerrit topic 'migrate-to-focal' > > > > > * Do not backport any of the patches. > > > > > > > > > > > > > > > References: > > > > > ========= > > > > > Goal doc: https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal.html > > > > > Storyboard tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > > > > > > > [1] https://github.com/PyCQA/pyflakes/issues/367 > > > > > [2] https://review.opendev.org/#/c/739315/ > > > > > [3] https://review.opendev.org/#/c/739334/ > > > > > [4] https://github.com/pallets/markupsafe/issues/116 > > > > > [5] https://zuul.opendev.org/t/openstack/build/7ecd9cf100194bc99b3b70fa1e6de032 > > > > > > > > > > -gmann > > > > > > > > > > > > > > > > > > > > > > > > > > > > From elod.illes at est.tech Thu Sep 10 13:59:24 2020 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Thu, 10 Sep 2020 15:59:24 +0200 Subject: [cinder][stable][infra] branch freeze for ocata, pike In-Reply-To: <2013dd9e-386a-1190-8e5b-33338cc44d59@gmail.com> References: <56e12f48-f2f8-0a26-1830-093c9fe9d9db@est.tech> <2013dd9e-386a-1190-8e5b-33338cc44d59@gmail.com> Message-ID: On 2020. 09. 10. 15:30, Brian Rosmaita wrote: > On 9/10/20 8:40 AM, Előd Illés wrote: >> Hi Infra Team, >> >> While reviewing Andreas' patch [1], I have realized, that Cinder's >> stable/ocata and stable/pike branches were not deleted yet, however >> those branches were EOL'd already (see mail below). >> >> According to the process [2], since the EOL patches have merged >> already, if @Brian doesn't object, can you please delete >> >> - cinder stable/ocata >> - cinder stable/pike > > I have no objection, but I haven't pushed the infra team about the > actual branch deletion because as far as I know, cinder is the first > project to actually request removal, and I suspect all sorts of stuff > will break.  I suggest we wait until at least after RC-time to give us > all one less thing to worry about. Sound good to me, thanks Brian! 
> > As far as avoiding breakage goes, I put up two patches to devstack so > that it will check out the -eol tag of cinder/brick/cinderclient > instead of the stable branch, but I suspect these only scratch the > surface of what can be broken once the cinder project branches are > deleted. > > https://review.opendev.org/#/c/742953/ > https://review.opendev.org/#/c/742952/ > > Sean suggested in an earlier thread on this topic [0] that instead of > deleting very old EM branches that some projects have EOL'd > project-by-project, we should just delete them wholesale across > openstack.  That makes a lot of sense to me. Ocata is quite abandoned nowadays so that makes sense. However, Pike has been active in the past months [3] more or less, so mass-deletion is not an option for Pike, I think. Thanks, Előd [3] https://review.opendev.org/#/q/branch:stable/pike+status:merged > > [0] > http://lists.openstack.org/pipermail/openstack-discuss/2020-May/015115.html > > > cheers, > brian > >> >> Thanks in advance, >> >> Előd >> >> [1] https://review.opendev.org/#/c/750887/ >> [2] >> https://docs.openstack.org/project-team-guide/stable-branches.html#end-of-life >> >> >> >> On 2020. 07. 28. 16:23, Brian Rosmaita wrote: >>> tl;dr - do not approve any backports to stable/ocata or stable/pike >>> in any Cinder project deliverable >>> >>> stable/ocata has been tagged with ocata-eol in cinder, os-brick, >>> python-cinderclient, and python-brick-cinderclient-ext.  Nothing >>> should be merged into stable/ocata in any of these repositories >>> during the interim period before the branches are deleted. >>> >>> stable/pike: the changes discussed in [0] have merged, and I've >>> proposed the pike-eol tags [1].  Nothing should be merged into >>> stable/pike in any of our code repositories from now until the >>> branches are deleted. >>> >>> [0] >>> http://lists.openstack.org/pipermail/openstack-discuss/2020-July/016076.html >>> >>> [1] https://review.opendev.org/#/c/742523/ >>> >> > From akekane at redhat.com Thu Sep 10 14:37:45 2020 From: akekane at redhat.com (Abhishek Kekane) Date: Thu, 10 Sep 2020 20:07:45 +0530 Subject: [requirements][FFE] Cinder multiple stores support In-Reply-To: References: Message-ID: Hi Team, FFE is not needed as the failures were related to tests only and does not impact actual functionality. Thanks & Best Regards, Abhishek Kekane On Tue, Sep 8, 2020 at 8:53 PM Abhishek Kekane wrote: > Hi Team, > > The reason for failure is we are suppressing Deprecation warning into > error in glance [1] and we are using those deprecated parameters in > glance_store. > This is the reason why it is only failing in functional tests [2] and not > in actual scenarios. > > [1] > https://opendev.org/openstack/glance/src/branch/master/glance/tests/unit/fixtures.py#L133-L136 > [2]https://review.opendev.org/#/c/750144/ > > Thanks & Best Regards, > > Abhishek Kekane > > > On Tue, Sep 8, 2020 at 8:48 PM Rajat Dhasmana wrote: > >> Hi Team, >> >> Last week we released glance_store 2.3.0 which adds support for configuring cinder multiple stores as glance backend. >> While adding functional tests in glance for the same [1], we have noticed that it is failing with some hard requirements from oslo side to use project_id instead of tenant and user_id instead of user. >> It is really strange behavior as this failure occurs only in functional tests but works properly in the actual environment without any issue. The fix is proposed in glance_store [2] to resolve this issue. 
>> >> I would like to apply for FFE with this glance_store patch [2] to be approved and release a new version of glance_store 2.3.1. >> >> Kindly provide approval for the same. >> >> [1] https://review.opendev.org/#/c/750144/ >> [2] https://review.opendev.org/#/c/750131/ >> >> Thanks and Regards, >> Rajat Dhasmana >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Thu Sep 10 15:07:25 2020 From: corvus at inaugust.com (James E. Blair) Date: Thu, 10 Sep 2020 08:07:25 -0700 Subject: Farewell Party for Monty Message-ID: <878sdhy8k2.fsf@meyer.lemoncheese.net> Hi, Monty is starting a new gig and won't be spending as much time with us. Since we all haven't seen each other in a while, let's have one more beer[1] together and say farewell. Join us for a virtual going-away party on meetpad tomorrow (Friday) at 21:00 UTC at this URL: https://meetpad.opendev.org/farewell-mordred Stop by and chat for old time's sake. -Jim [1] Bring your own beer. From rdhasman at redhat.com Thu Sep 10 15:21:26 2020 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Thu, 10 Sep 2020 20:51:26 +0530 Subject: [gance_store][FFE] Cinder multiple stores support In-Reply-To: References: Message-ID: Hi Glance Team, The cinder multiple store feature has merged and the glance store dependency is only on the functional tests so we don't need an FFE anymore. Thanks and Regards Rajat Dhasmana On Mon, Sep 7, 2020 at 9:56 PM Abhishek Kekane wrote: > +1 from me, glance_store 2.3.0 contains the actual functionality and > glance functionality patch [1] is also in good shape. > > [1] https://review.opendev.org/#/c/748039/11 > > Thanks & Best Regards, > > Abhishek Kekane > > > On Mon, Sep 7, 2020 at 9:40 PM Rajat Dhasmana wrote: > >> Hi Team, >> >> Last week we released glance_store 2.3.0 which adds support for configuring cinder multiple stores as glance backend. >> While adding functional tests in glance for the same [1], we have noticed that it is failing with some hard requirements from oslo side to use project_id instead of tenant and user_id instead of user. >> It is really strange behavior as this failure occurs only in functional tests but works properly in the actual environment without any issue. The fix is proposed in glance_store [2] to resolve this issue. >> >> I would like to apply for FFE with this glance_store patch [2] to be approved and release a new version of glance_store 2.3.1. >> >> Kindly provide approval for the same. >> >> [1] https://review.opendev.org/#/c/750144/ >> [2] https://review.opendev.org/#/c/750131/ >> >> Thanks and Regards, >> Rajat Dhasmana >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Sep 10 15:47:04 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 10 Sep 2020 15:47:04 +0000 Subject: Is Storyboard really the future? In-Reply-To: References: Message-ID: <20200910154704.3erw242ynqldlq63@yuggoth.org> On 2020-09-10 13:10:44 +0200 (+0200), Radosław Piliszek wrote: > The subject is the question I am posing. The general > recommendation is to use Storyboard for new projects [1]. I agree that recommending a service without context is likely to cause problems. StoryBoard is a service provided within OpenDev, and I don't think we anticipate stopping to provide that service to projects who wish to make use of it. Its use is not at all mandatory. 
The deployment of it in OpenDev is at least minimally functional and sufficient for light duty, though I understand people who are in a situation of needing to interact with their defect and task trackers constantly likely find some aspects of it frustrating. > However, Storyboard does not seem to be receiving enough love > recently [2] [3]. Yep, nobody works on it full-time and it could use some additional developers, reviewers and sysadmins to help regain momentum. For example, I could use some help figuring out fine-grained user permissions for Rackspace's Swift-like object store, which is currently blocking more effective vetting of the proposed client-side support for Adam's story attachments work. We would also love assistance getting the current Puppet module we're managing our deployment with replaced by Ansible/docker-compose orchestration of the container images we've started publishing to DockerHub. Even just helping us triage and tag new stories for opendev/storyboard and opendev/storyboard-webclient would be appreciated. > It's generally deemed as slow [4] Preliminary testing suggests https://review.opendev.org/742046 will increase performance for the queries behind most common views by an order of magnitude or more. > and is missing quite a few usability enhancements [2]. Considering > the alternative, and the actual previous recommendation, is > Launchpad, I find it no-brainer to revert to recommending > Launchpad still, even paving a way for projects wishing to escape > the Storyboard nightmare. :-) The smiley doesn't particularly soften the fact that you just rudely referred to the product of someone's hard work and volunteered time as a "nightmare." One problem we were hoping to solve which Launchpad doesn't help us with is that we have a number of potential contributors and users who have balked at collaborating through OpenDev because our services require them to have an "Ubuntu" login even though they're not users of (and perhaps work for rivals/competitors of) that distro. Once our Central Auth/SSO spec reaches implementation, being able to offer some sort of cross-project task and defect tracking integrated with our Gerrit code reviews, and using the same authentication, gives projects who want to not require members of their communities to have an UbuntuOne account that option. > Don't get me wrong, I really like the Story/Task orientation but > Storyboard results in a frustrating experience. In addition to > being slow, it has issues with search/filter, sorting and > pagination which make issue browsing an utter pain. I know > Launchpad is not without issues but it's certainly a much better > platform at the moment. And many projects are still there. Again, I think Launchpad is a fine platform for some projects. It's designed around bug tracking for packaging work targeting various Ubuntu releases, but that doesn't mean it can't also be used effectively for other sorts of activities (as evidenced by the many software projects who do). They've recently improved their development environment setup and build instructions too, so working on a patch to fix something there isn't nearly as challenging as it once was. If you use Launchpad and want to improve some aspect of it, I wholeheartedly encourage you to try to collaborate with its maintainers on that. And if projects want to move to (or back to) Launchpad, I don't have a problem with that and am happy to get them database exports of their SB stories and tasks... 
I think we can just set the corresponding project entries to inactive so they can't be selected for new tasks, though that will need a bit of testing to confirm. > The Launchpad-Storyboard split is also introducing confusion for > users [5] and coordinability issues for teams as we need to > cross-link manually to get proper visibility. I'm not entirely convinced. Users are going to be confused and sometimes open bugs in the wrong places regardless. Back when the OpenStack Infra team existed and had a catch-all LP project for tracking infrastructure-related issues and incidents, users often got equally confused and opened Nova bugs under that. They also still constantly wander into the #openstack-infra IRC channel asking us how to run OpenStack. Turning off StoryBoard won't solve that. Honestly, I doubt anything will (or even can) solve that. As for cross-linking, you have to do that today if someone mistakenly opens a Nova bug which turns out to be a Qemu or KVM issue instead. It's unrealistic to expect all F/LOSS projects to use one common tracker. > All in all, I ask you to consider recommending Launchpad again and > encourage OpenStack projects to move to Launchpad. I agree we shouldn't be recommending StoryBoard over other platforms without providing some context as to when projects might consider using it. I also won't attempt to dissuade anyone who wants to move their tracking to other open source based services like (but not necessarily limited to) Launchpad. Different projects have different needs and no one work management tool is going to satisfy everyone. > Extra note: I find it in a similar spot as ask.o.o - nice it has been > tried, but unfortunately it did not stand the test of time. > > [1] https://docs.opendev.org/opendev/infra-manual/latest/creators.html > [2] https://storyboard.openstack.org/#!/project/opendev/storyboard > [3] https://opendev.org/opendev/storyboard/commits/branch/master > [4] https://storyboard.openstack.org/#!/story/2007829 > [5] https://storyboard.openstack.org/#!/story/2000890 I don't personally think it's quite the same situation as Ask OpenStack, though I can see where you might draw parallels. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From tonyliu0592 at hotmail.com Thu Sep 10 16:42:01 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Thu, 10 Sep 2020 16:42:01 +0000 Subject: [Keystone] 'list' object has no attribute 'get' Message-ID: Is this known issue with openstack-keystone-17.0.0-1.el8.noarch? 
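(Not a diagnosis, but the traceback below fails while the token model looks up the project's domain, so before digging into keystone internals it can help to confirm that the project and domain records themselves look sane when queried directly; the IDs below are placeholders.)

  $ openstack project show <project-id> -c name -c domain_id -c enabled
  $ openstack domain show <domain-id> -c name -c enabled
  $ openstack token issue

If those look normal, note that the lookup in the traceback goes through the dogpile/memcached cache, so stale or mixed-version cache entries are also worth ruling out.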
2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context [req-3bcdd315-1975-4d8a-969d-166dd3e8a3b6 113ee63a9ed0466794e24d069efc302c 4c142a681d884010ab36a7ac687d910c - default default] 'list' object has no attribute 'get': AttributeError: 'list' object has no attribute 'get' 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context Traceback (most recent call last): 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py", line 103, in _inner 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context return method(self, request) 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py", line 353, in process_request 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context resp = super(AuthContextMiddleware, self).process_request(request) 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystonemiddleware/auth_token/__init__.py", line 411, in process_request 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context allow_expired=allow_expired) 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystonemiddleware/auth_token/__init__.py", line 445, in _do_fetch_token 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context data = self.fetch_token(token, **kwargs) 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py", line 248, in fetch_token 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context token, access_rules_support=ACCESS_RULES_MIN_VERSION) 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/common/manager.py", line 115, in wrapped 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context __ret_val = __f(*args, **kwargs) 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/token/provider.py", line 145, in validate_token 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context token = self._validate_token(token_id) 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "", line 2, in _validate_token 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 1360, in get_or_create_for_user_func 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context key, user_func, timeout, should_cache_fn, (arg, kw) 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File 
"/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 962, in get_or_create 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context async_creator, 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/lock.py", line 187, in __enter__ 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context return self._enter() 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/lock.py", line 94, in _enter 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context generated = self._enter_create(value, createdtime) 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/lock.py", line 180, in _enter_create 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context return self.creator() 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 916, in gen_value 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context *creator_args[0], **creator_args[1] 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/token/provider.py", line 179, in _validate_token 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context token.mint(token_id, issued_at) 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/models/token_model.py", line 579, in mint 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context self._validate_token_resources() 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/models/token_model.py", line 471, in _validate_token_resources 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context if self.project and not self.project_domain.get('enabled'): 2020-09-10 09:35:53.638 28 ERROR keystone.server.flask.request_processing.middleware.auth_context AttributeError: 'list' object has no attribute 'get' Thanks! Tony From radoslaw.piliszek at gmail.com Thu Sep 10 16:45:20 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 10 Sep 2020 18:45:20 +0200 Subject: Is Storyboard really the future? In-Reply-To: <20200910154704.3erw242ynqldlq63@yuggoth.org> References: <20200910154704.3erw242ynqldlq63@yuggoth.org> Message-ID: Hi Jeremy, First of all, thank you for the writeup, it is really helpful and contributes a lot to the discussion. On Thu, Sep 10, 2020 at 5:59 PM Jeremy Stanley wrote: > > On 2020-09-10 13:10:44 +0200 (+0200), Radosław Piliszek wrote: > > The subject is the question I am posing. The general > > recommendation is to use Storyboard for new projects [1]. > > I agree that recommending a service without context is likely to > cause problems. 
StoryBoard is a service provided within OpenDev, and > I don't think we anticipate stopping to provide that service to > projects who wish to make use of it. Its use is not at all > mandatory. The deployment of it in OpenDev is at least minimally > functional and sufficient for light duty, though I understand people > who are in a situation of needing to interact with their defect and > task trackers constantly likely find some aspects of it frustrating. > > > However, Storyboard does not seem to be receiving enough love > > recently [2] [3]. > > Yep, nobody works on it full-time and it could use some additional > developers, reviewers and sysadmins to help regain momentum. For > example, I could use some help figuring out fine-grained user > permissions for Rackspace's Swift-like object store, which is > currently blocking more effective vetting of the proposed > client-side support for Adam's story attachments work. We would also > love assistance getting the current Puppet module we're managing our > deployment with replaced by Ansible/docker-compose orchestration of > the container images we've started publishing to DockerHub. Even > just helping us triage and tag new stories for opendev/storyboard > and opendev/storyboard-webclient would be appreciated. I feel you. I could not so far convince anyone to support me to work on it, mostly because Jira/GitHub/GitLab/Launchpad exists. Not to mention many small internal projects are happy with just Trello. :-) I feel this might be due to the audience Storyboard tries to cater to (larger cross-project work) which is not a common requirement (or rather just hard to organise for oneself). > > It's generally deemed as slow [4] > > Preliminary testing suggests https://review.opendev.org/742046 will > increase performance for the queries behind most common views by an > order of magnitude or more. That would be awesome. > > and is missing quite a few usability enhancements [2]. Considering > > the alternative, and the actual previous recommendation, is > > Launchpad, I find it no-brainer to revert to recommending > > Launchpad still, even paving a way for projects wishing to escape > > the Storyboard nightmare. :-) > > The smiley doesn't particularly soften the fact that you just rudely > referred to the product of someone's hard work and volunteered time > as a "nightmare." Ouch, you are right. I should have picked my words more responsibly. I guess this was caused by one of longer sessions on Storyboard. I sincerely apologise and hope nobody was offended. As I wrote further below that line, I really appreciate Storyboard otherwise. It's just that it does not really shine compared to these days. > One problem we were hoping to solve which Launchpad doesn't help us > with is that we have a number of potential contributors and users > who have balked at collaborating through OpenDev because our > services require them to have an "Ubuntu" login even though they're > not users of (and perhaps work for rivals/competitors of) that > distro. Once our Central Auth/SSO spec reaches implementation, being > able to offer some sort of cross-project task and defect tracking > integrated with our Gerrit code reviews, and using the same > authentication, gives projects who want to not require members of > their communities to have an UbuntuOne account that option. I know this issue a bit. Hard to make everyone like each other. As for Storyboard, since it still uses Ubuntu One for now, I could not obviously see that as counting in favour of Storyboard. 
:-) > > Don't get me wrong, I really like the Story/Task orientation but > > Storyboard results in a frustrating experience. In addition to > > being slow, it has issues with search/filter, sorting and > > pagination which make issue browsing an utter pain. I know > > Launchpad is not without issues but it's certainly a much better > > platform at the moment. And many projects are still there. > > Again, I think Launchpad is a fine platform for some projects. It's > designed around bug tracking for packaging work targeting various > Ubuntu releases, but that doesn't mean it can't also be used > effectively for other sorts of activities (as evidenced by the many > software projects who do). They've recently improved their > development environment setup and build instructions too, so working > on a patch to fix something there isn't nearly as challenging as it > once was. If you use Launchpad and want to improve some aspect of > it, I wholeheartedly encourage you to try to collaborate with its > maintainers on that. And if projects want to move to (or back to) > Launchpad, I don't have a problem with that and am happy to get them > database exports of their SB stories and tasks... I think we can > just set the corresponding project entries to inactive so they can't > be selected for new tasks, though that will need a bit of testing to > confirm. > > > The Launchpad-Storyboard split is also introducing confusion for > > users [5] and coordinability issues for teams as we need to > > cross-link manually to get proper visibility. > > I'm not entirely convinced. Users are going to be confused and > sometimes open bugs in the wrong places regardless. Back when the > OpenStack Infra team existed and had a catch-all LP project for > tracking infrastructure-related issues and incidents, users often > got equally confused and opened Nova bugs under that. They also > still constantly wander into the #openstack-infra IRC channel asking > us how to run OpenStack. Turning off StoryBoard won't solve that. > Honestly, I doubt anything will (or even can) solve that. As for > cross-linking, you have to do that today if someone mistakenly opens > a Nova bug which turns out to be a Qemu or KVM issue instead. It's > unrealistic to expect all F/LOSS projects to use one common tracker. You are right there that we can't necessarily solve this for everyone. But at the moment it's confusing in that projects are partially there and partially elsewhere *because of the recommendation*. Obviously one can't do anything about escalation to libvirt/qemu/kernel bugzillas but those are external projects. For OpenStack projects we can have better guidelines. > > All in all, I ask you to consider recommending Launchpad again and > > encourage OpenStack projects to move to Launchpad. > > I agree we shouldn't be recommending StoryBoard over other platforms > without providing some context as to when projects might consider > using it. I also won't attempt to dissuade anyone who wants to move > their tracking to other open source based services like (but not > necessarily limited to) Launchpad. Different projects have different > needs and no one work management tool is going to satisfy everyone. I could not express it better. > > Extra note: I find it in a similar spot as ask.o.o - nice it has been > > tried, but unfortunately it did not stand the test of time. 
> > > > [1] https://docs.opendev.org/opendev/infra-manual/latest/creators.html > > [2] https://storyboard.openstack.org/#!/project/opendev/storyboard > > [3] https://opendev.org/opendev/storyboard/commits/branch/master > > [4] https://storyboard.openstack.org/#!/story/2007829 > > [5] https://storyboard.openstack.org/#!/story/2000890 > > I don't personally think it's quite the same situation as Ask > OpenStack, though I can see where you might draw parallels. I see how that could sound harsh as well. I meant its confusing effect rather than the need to disable Storyboard altogether and make it disappear, no. All in all, I'd love to see Storyboard flourish as the approach appeals to me, just the UX is far from ideal at the moment. I meant this thread to be against the recommendation, not the software/instance itself. The recommendation introduced a feeling that Storyboard *should* be used, Launchpad is not really mentioned any longer either. To reiterate, I don't think it sounds like it *must* be used (and surely is not a requirement) but *should* is enough to cause bad experience for both sides (users trying to report and teams trying to keep track of reported issues). > -- > Jeremy Stanley From cohuck at redhat.com Thu Sep 10 12:38:22 2020 From: cohuck at redhat.com (Cornelia Huck) Date: Thu, 10 Sep 2020 14:38:22 +0200 Subject: device compatibility interface for live migration with assigned devices In-Reply-To: <20200909021308.GA1277@joy-OptiPlex-7040> References: <20200818113652.5d81a392.cohuck@redhat.com> <20200820003922.GE21172@joy-OptiPlex-7040> <20200819212234.223667b3@x1.home> <20200820031621.GA24997@joy-OptiPlex-7040> <20200825163925.1c19b0f0.cohuck@redhat.com> <20200826064117.GA22243@joy-OptiPlex-7040> <20200828154741.30cfc1a3.cohuck@redhat.com> <8f5345be73ebf4f8f7f51d6cdc9c2a0d8e0aa45e.camel@redhat.com> <20200831044344.GB13784@joy-OptiPlex-7040> <20200908164130.2fe0d106.cohuck@redhat.com> <20200909021308.GA1277@joy-OptiPlex-7040> Message-ID: <20200910143822.2071eca4.cohuck@redhat.com> On Wed, 9 Sep 2020 10:13:09 +0800 Yan Zhao wrote: > > > still, I'd like to put it more explicitly to make ensure it's not missed: > > > the reason we want to specify compatible_type as a trait and check > > > whether target compatible_type is the superset of source > > > compatible_type is for the consideration of backward compatibility. > > > e.g. > > > an old generation device may have a mdev type xxx-v4-yyy, while a newer > > > generation device may be of mdev type xxx-v5-yyy. > > > with the compatible_type traits, the old generation device is still > > > able to be regarded as compatible to newer generation device even their > > > mdev types are not equal. > > > > If you want to support migration from v4 to v5, can't the (presumably > > newer) driver that supports v5 simply register the v4 type as well, so > > that the mdev can be created as v4? (Just like QEMU versioned machine > > types work.) > yes, it should work in some conditions. > but it may not be that good in some cases when v5 and v4 in the name string > of mdev type identify hardware generation (e.g. v4 for gen8, and v5 for > gen9) > > e.g. > (1). when src mdev type is v4 and target mdev type is v5 as > software does not support it initially, and v4 and v5 identify hardware > differences. My first hunch here is: Don't introduce types that may be compatible later. Either make them compatible, or make them distinct by design, and possibly add a different, compatible type later. 
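To make the stakes concrete: management software discovers these types purely by the directory name under mdev_supported_types, so "compatible versus distinct" has to be expressed in the type id itself. A rough sketch of that sysfs view, where the parent PCI address and the type names are only illustrative (borrowed from i915/GVT-g):

  $ ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/
  i915-GVTg_V5_4  i915-GVTg_V5_8
  $ cat /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_8/available_instances
  2

Nothing in that layout carries compatibility information today beyond the name, which is why baking a hardware generation into the type string forces every consumer to grow special-case knowledge later.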
> then after software upgrade, v5 is now compatible to v4, should the > software now downgrade mdev type from v5 to v4? > not sure if moving hardware generation info into a separate attribute > from mdev type name is better. e.g. remove v4, v5 in mdev type, while use > compatible_pci_ids to identify compatibility. If the generations are compatible, don't mention it in the mdev type. If they aren't, use distinct types, so that management software doesn't have to guess. At least that would be my naive approach here. > > (2) name string of mdev type is composed by "driver_name + type_name". > in some devices, e.g. qat, different generations of devices are binding to > drivers of different names, e.g. "qat-v4", "qat-v5". > then though type_name is equal, mdev type is not equal. e.g. > "qat-v4-type1", "qat-v5-type1". I guess that shows a shortcoming of that "driver_name + type_name" approach? Or maybe I'm just confused. From smooney at redhat.com Thu Sep 10 12:50:11 2020 From: smooney at redhat.com (Sean Mooney) Date: Thu, 10 Sep 2020 13:50:11 +0100 Subject: device compatibility interface for live migration with assigned devices In-Reply-To: <20200910143822.2071eca4.cohuck@redhat.com> References: <20200818113652.5d81a392.cohuck@redhat.com> <20200820003922.GE21172@joy-OptiPlex-7040> <20200819212234.223667b3@x1.home> <20200820031621.GA24997@joy-OptiPlex-7040> <20200825163925.1c19b0f0.cohuck@redhat.com> <20200826064117.GA22243@joy-OptiPlex-7040> <20200828154741.30cfc1a3.cohuck@redhat.com> <8f5345be73ebf4f8f7f51d6cdc9c2a0d8e0aa45e.camel@redhat.com> <20200831044344.GB13784@joy-OptiPlex-7040> <20200908164130.2fe0d106.cohuck@redhat.com> <20200909021308.GA1277@joy-OptiPlex-7040> <20200910143822.2071eca4.cohuck@redhat.com> Message-ID: <7cebcb6c8d1a1452b43e8358ee6ee18a150a0238.camel@redhat.com> On Thu, 2020-09-10 at 14:38 +0200, Cornelia Huck wrote: > On Wed, 9 Sep 2020 10:13:09 +0800 > Yan Zhao wrote: > > > > > still, I'd like to put it more explicitly to make ensure it's not missed: > > > > the reason we want to specify compatible_type as a trait and check > > > > whether target compatible_type is the superset of source > > > > compatible_type is for the consideration of backward compatibility. > > > > e.g. > > > > an old generation device may have a mdev type xxx-v4-yyy, while a newer > > > > generation device may be of mdev type xxx-v5-yyy. > > > > with the compatible_type traits, the old generation device is still > > > > able to be regarded as compatible to newer generation device even their > > > > mdev types are not equal. > > > > > > If you want to support migration from v4 to v5, can't the (presumably > > > newer) driver that supports v5 simply register the v4 type as well, so > > > that the mdev can be created as v4? (Just like QEMU versioned machine > > > types work.) > > > > yes, it should work in some conditions. > > but it may not be that good in some cases when v5 and v4 in the name string > > of mdev type identify hardware generation (e.g. v4 for gen8, and v5 for > > gen9) > > > > e.g. > > (1). when src mdev type is v4 and target mdev type is v5 as > > software does not support it initially, and v4 and v5 identify hardware > > differences. > > My first hunch here is: Don't introduce types that may be compatible > later. Either make them compatible, or make them distinct by design, > and possibly add a different, compatible type later. > > > then after software upgrade, v5 is now compatible to v4, should the > > software now downgrade mdev type from v5 to v4? 
> > not sure if moving hardware generation info into a separate attribute > > from mdev type name is better. e.g. remove v4, v5 in mdev type, while use > > compatible_pci_ids to identify compatibility. > > If the generations are compatible, don't mention it in the mdev type. > If they aren't, use distinct types, so that management software doesn't > have to guess. At least that would be my naive approach here. yep that is what i would prefer to see too. > > > > > (2) name string of mdev type is composed by "driver_name + type_name". > > in some devices, e.g. qat, different generations of devices are binding to > > drivers of different names, e.g. "qat-v4", "qat-v5". > > then though type_name is equal, mdev type is not equal. e.g. > > "qat-v4-type1", "qat-v5-type1". > > I guess that shows a shortcoming of that "driver_name + type_name" > approach? Or maybe I'm just confused. yes i really dont like haveing the version in the mdev-type name i would stongly perfger just qat-type-1 wehere qat is just there as a way of namespacing. although symmetric-cryto, asymmetric-cryto and compression woudl be a better name then type-1, type-2, type-3 if that is what they would end up mapping too. e.g. qat-compression or qat-aes is a much better name then type-1 higher layers of software are unlikely to parse the mdev names but as a human looking at them its much eaiser to understand if the names are meaningful. the qat prefix i think is important however to make sure that your mdev-types dont colide with other vendeors mdev types. so i woudl encurage all vendors to prefix there mdev types with etiher the device name or the vendor. > From cboylan at sapwetik.org Thu Sep 10 16:49:08 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 10 Sep 2020 09:49:08 -0700 Subject: [cinder][stable] branch freeze for ocata, pike In-Reply-To: <5af2fc9b-dfeb-f7cf-a491-fb4eab14f76f@suse.com> References: <5af2fc9b-dfeb-f7cf-a491-fb4eab14f76f@suse.com> Message-ID: <28b72a5a-e136-4233-854e-8eebbfd25933@www.fastmail.com> On Thu, Sep 10, 2020, at 5:42 AM, Andreas Jaeger wrote: > On 28.07.20 16:23, Brian Rosmaita wrote: > > tl;dr - do not approve any backports to stable/ocata or stable/pike in > > any Cinder project deliverable > > > > stable/ocata has been tagged with ocata-eol in cinder, os-brick, > > python-cinderclient, and python-brick-cinderclient-ext.  Nothing should > > be merged into stable/ocata in any of these repositories during the > > interim period before the branches are deleted. > > When do you plan to delete those branches? We have Zuul jobs that are > broken, for example due to removal of devstack-plugin-zmq and we either > should remove these from the branch or delete the branch. Currently Zuul > complains about broken jobs. > > The two changes I talk about are: > https://review.opendev.org/750887 > https://review.opendev.org/750886 I think we should go ahead and land those if we are waiting for a coordinated branch deletion. The zuul configs are branch specific and should be adjustable outside of normal backport procedures, particularly if they are causing problems like global zuul config errors. We've force merged some of these changes on stable branches in other projects if CI is generally unstable. Let us know if that is appropriate for this situation as well. > > Andreas > > > > > stable/pike: the changes discussed in [0] have merged, and I've > > proposed the pike-eol tags [1].  
Nothing should be merged into > > stable/pike in any of our code repositories from now until the branches > > are deleted. > > > > [0] > > http://lists.openstack.org/pipermail/openstack-discuss/2020-July/016076.html > > > > [1] https://review.opendev.org/#/c/742523/ > > From tonyliu0592 at hotmail.com Thu Sep 10 17:19:47 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Thu, 10 Sep 2020 17:19:47 +0000 Subject: [Keystone] KeyError: 'domain_id' Message-ID: Here is another exception. Any clues? 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context [req-534d9855-8113-450d-8f9f-d93c0d961d24 113ee63a9ed0466794e24d069efc302c 4c142a681d884010ab36a7ac687d910c - default default] 'domain_id': KeyError: 'domain_id' 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context Traceback (most recent call last): 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py", line 103, in _inner 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context return method(self, request) 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py", line 353, in process_request 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context resp = super(AuthContextMiddleware, self).process_request(request) 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystonemiddleware/auth_token/__init__.py", line 411, in process_request 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context allow_expired=allow_expired) 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystonemiddleware/auth_token/__init__.py", line 445, in _do_fetch_token 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context data = self.fetch_token(token, **kwargs) 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py", line 248, in fetch_token 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context token, access_rules_support=ACCESS_RULES_MIN_VERSION) 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/common/manager.py", line 115, in wrapped 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context __ret_val = __f(*args, **kwargs) 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/token/provider.py", line 145, in validate_token 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context token = self._validate_token(token_id) 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "", line 2, in _validate_token 2020-09-10 10:16:45.050 28 ERROR 
keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 1360, in get_or_create_for_user_func 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context key, user_func, timeout, should_cache_fn, (arg, kw) 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 962, in get_or_create 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context async_creator, 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/lock.py", line 187, in __enter__ 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context return self._enter() 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/lock.py", line 94, in _enter 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context generated = self._enter_create(value, createdtime) 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/lock.py", line 180, in _enter_create 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context return self.creator() 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 916, in gen_value 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context *creator_args[0], **creator_args[1] 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/token/provider.py", line 179, in _validate_token 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context token.mint(token_id, issued_at) 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/models/token_model.py", line 580, in mint 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context self._validate_token_user() 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/models/token_model.py", line 503, in _validate_token_user 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context if not self.user_domain.get('enabled'): 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/models/token_model.py", line 141, in user_domain 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context self.user['domain_id'] 2020-09-10 10:16:45.050 28 ERROR keystone.server.flask.request_processing.middleware.auth_context KeyError: 'domain_id' Thanks! Tony From tonyliu0592 at hotmail.com Thu Sep 10 17:23:06 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Thu, 10 Sep 2020 17:23:06 +0000 Subject: [Keystone] socket.timeout: timed out Message-ID: Any clues on this timeout exception? 
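(Again not a root cause, but the timeout below is raised while the dogpile cache layer talks to memcached, so the first things worth checking are that every server configured in keystone.conf is actually reachable from the keystone hosts and that the socket timeouts suit your environment. A sketch of the relevant knobs; the addresses and values are examples, not recommendations:)

  # keystone.conf
  [cache]
  enabled = true
  backend = oslo_cache.memcache_pool
  memcache_servers = 192.0.2.11:11211,192.0.2.12:11211
  memcache_socket_timeout = 3.0
  memcache_dead_retry = 60

  # quick reachability check against one of the configured servers
  $ echo stats | nc 192.0.2.11 11211 | head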
2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context [req-534d9855-8113-450d-8f9f-d93c0d961d24 113ee63a9ed0466794e24d069efc302c 4c142a681d884010ab36a7ac687d910c - default default] timed out: socket.timeout: timed out 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context Traceback (most recent call last): 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py", line 103, in _inner 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context return method(self, request) 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py", line 353, in process_request 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context resp = super(AuthContextMiddleware, self).process_request(request) 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystonemiddleware/auth_token/__init__.py", line 411, in process_request 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context allow_expired=allow_expired) 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystonemiddleware/auth_token/__init__.py", line 445, in _do_fetch_token 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context data = self.fetch_token(token, **kwargs) 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py", line 248, in fetch_token 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context token, access_rules_support=ACCESS_RULES_MIN_VERSION) 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/common/manager.py", line 115, in wrapped 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context __ret_val = __f(*args, **kwargs) 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/token/provider.py", line 145, in validate_token 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context token = self._validate_token(token_id) 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "", line 2, in _validate_token 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 1360, in get_or_create_for_user_func 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context key, user_func, timeout, should_cache_fn, (arg, kw) 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 962, in get_or_create 
2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context async_creator, 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/lock.py", line 187, in __enter__ 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context return self._enter() 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/lock.py", line 94, in _enter 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context generated = self._enter_create(value, createdtime) 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/lock.py", line 180, in _enter_create 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context return self.creator() 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 916, in gen_value 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context *creator_args[0], **creator_args[1] 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/token/provider.py", line 179, in _validate_token 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context token.mint(token_id, issued_at) 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/models/token_model.py", line 579, in mint 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context self._validate_token_resources() 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/models/token_model.py", line 471, in _validate_token_resources 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context if self.project and not self.project_domain.get('enabled'): 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/models/token_model.py", line 176, in project_domain 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context self.project['domain_id'] 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/common/manager.py", line 115, in wrapped 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context __ret_val = __f(*args, **kwargs) 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "", line 2, in get_domain 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 1360, in get_or_create_for_user_func 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context key, user_func, timeout, should_cache_fn, (arg, kw) 2020-09-10 10:10:33.981 28 ERROR 
keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 962, in get_or_create 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context async_creator, 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/lock.py", line 187, in __enter__ 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context return self._enter() 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/lock.py", line 87, in _enter 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context value = value_fn() 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 902, in get_value 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context value = self.backend.get(key) 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/common/cache/_context_cache.py", line 74, in get 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context value = self.proxied.get(key) 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/cache/backends/memcached.py", line 168, in get 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context value = self.client.get(key) 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/oslo_cache/backends/memcache_pool.py", line 32, in _run_method 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context return getattr(client, __name)(*args, **kwargs) 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/memcache.py", line 1129, in get 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context return self._get('get', key) 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/memcache.py", line 1074, in _get 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context server, key = self._get_server(key) 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/memcache.py", line 446, in _get_server 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context if server.connect(): 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/memcache.py", line 1391, in connect 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context if self._get_socket(): 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/memcache.py", line 1423, in _get_socket 2020-09-10 10:10:33.981 28 
ERROR keystone.server.flask.request_processing.middleware.auth_context self.flush() 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/memcache.py", line 1498, in flush 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context self.expect(b'OK') 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/memcache.py", line 1473, in expect 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context line = self.readline(raise_exception) 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/memcache.py", line 1459, in readline 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context data = recv(4096) 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context socket.timeout: timed out Thanks! Tony From elod.illes at est.tech Thu Sep 10 17:28:10 2020 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Thu, 10 Sep 2020 19:28:10 +0200 Subject: [cinder][stable] branch freeze for ocata, pike In-Reply-To: <28b72a5a-e136-4233-854e-8eebbfd25933@www.fastmail.com> References: <5af2fc9b-dfeb-f7cf-a491-fb4eab14f76f@suse.com> <28b72a5a-e136-4233-854e-8eebbfd25933@www.fastmail.com> Message-ID: <1ff00e0c-0819-710e-b984-c09e5861fba5@est.tech> Just discussed with Clark and Fungi on IRC, that since the branches are already tagged (*-eol), merging the patches could cause some confusions. So it's easier to just wait until RC, as Brian suggested. Thanks, Előd On 2020. 09. 10. 18:49, Clark Boylan wrote: > On Thu, Sep 10, 2020, at 5:42 AM, Andreas Jaeger wrote: >> On 28.07.20 16:23, Brian Rosmaita wrote: >>> tl;dr - do not approve any backports to stable/ocata or stable/pike in >>> any Cinder project deliverable >>> >>> stable/ocata has been tagged with ocata-eol in cinder, os-brick, >>> python-cinderclient, and python-brick-cinderclient-ext.  Nothing should >>> be merged into stable/ocata in any of these repositories during the >>> interim period before the branches are deleted. >> When do you plan to delete those branches? We have Zuul jobs that are >> broken, for example due to removal of devstack-plugin-zmq and we either >> should remove these from the branch or delete the branch. Currently Zuul >> complains about broken jobs. >> >> The two changes I talk about are: >> https://review.opendev.org/750887 >> https://review.opendev.org/750886 > I think we should go ahead and land those if we are waiting for a coordinated branch deletion. The zuul configs are branch specific and should be adjustable outside of normal backport procedures, particularly if they are causing problems like global zuul config errors. We've force merged some of these changes on stable branches in other projects if CI is generally unstable. Let us know if that is appropriate for this situation as well. > >> Andreas >> >>> stable/pike: the changes discussed in [0] have merged, and I've >>> proposed the pike-eol tags [1].  Nothing should be merged into >>> stable/pike in any of our code repositories from now until the branches >>> are deleted. 
>>> >>> [0] >>> http://lists.openstack.org/pipermail/openstack-discuss/2020-July/016076.html >>> >>> [1] https://review.opendev.org/#/c/742523/ >>> From fungi at yuggoth.org Thu Sep 10 17:55:27 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 10 Sep 2020 17:55:27 +0000 Subject: Is Storyboard really the future? In-Reply-To: References: <20200910154704.3erw242ynqldlq63@yuggoth.org> Message-ID: <20200910175526.7phhyrdv3fw3trmf@yuggoth.org> On 2020-09-10 18:45:20 +0200 (+0200), Radosław Piliszek wrote: [...] > You are right there that we can't necessarily solve this for > everyone. But at the moment it's confusing in that projects are > partially there and partially elsewhere *because of the > recommendation*. Obviously one can't do anything about escalation > to libvirt/qemu/kernel bugzillas but those are external projects. > For OpenStack projects we can have better guidelines. Also, while maybe not the perfect solution, the code browser at https://opendev.org/openstack/nova has a prominent Issues link which takes you directly to their https://bugs.launchpad.net/nova page (for example). [...] > I meant this thread to be against the recommendation, not the > software/instance itself. The recommendation introduced a feeling > that Storyboard *should* be used, Launchpad is not really > mentioned any longer either. To reiterate, I don't think it sounds > like it *must* be used (and surely is not a requirement) but > *should* is enough to cause bad experience for both sides (users > trying to report and teams trying to keep track of reported > issues). Yes, perhaps part of the disconnect here is that StoryBoard is one of the services provided by OpenDev so the OpenDev Manual is of course going to describe how to make use of it. We do also provide some Launchpad integration which warrants documenting, but as we don't actually run Launchpad we aren't going to maintain extensive documentation for the platform itself. On the other hand, the OpenStack Contributor Guide, OpenStack Project Teams Guide, or similar OpenStack-specific documentation certainly *can* document it in much greater detail if that's useful to the OpenStack community at large. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From tonyliu0592 at hotmail.com Thu Sep 10 18:06:46 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Thu, 10 Sep 2020 18:06:46 +0000 Subject: [Keystone] TypeError: list indices must be integers or slices, not str Message-ID: Any clues to this error? 
2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context Traceback (most recent call last): 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py", line 103, in _inner 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context return method(self, request) 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py", line 353, in process_request 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context resp = super(AuthContextMiddleware, self).process_request(request) 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystonemiddleware/auth_token/__init__.py", line 411, in process_request 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context allow_expired=allow_expired) 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystonemiddleware/auth_token/__init__.py", line 445, in _do_fetch_token 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context data = self.fetch_token(token, **kwargs) 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py", line 248, in fetch_token 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context token, access_rules_support=ACCESS_RULES_MIN_VERSION) 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/common/manager.py", line 115, in wrapped 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context __ret_val = __f(*args, **kwargs) 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/token/provider.py", line 146, in validate_token 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context self._is_valid_token(token, window_seconds=window_seconds) 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/token/provider.py", line 199, in _is_valid_token 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context self.check_revocation(token) 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/common/manager.py", line 115, in wrapped 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context __ret_val = __f(*args, **kwargs) 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/token/provider.py", line 137, in check_revocation 2020-09-10 11:03:44.913 30 ERROR 
keystone.server.flask.request_processing.middleware.auth_context return self.check_revocation_v3(token) 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/common/manager.py", line 115, in wrapped 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context __ret_val = __f(*args, **kwargs) 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/token/provider.py", line 133, in check_revocation_v3 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context token_values = self.revoke_api.model.build_token_values(token) 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/models/revoke_model.py", line 245, in build_token_values 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context if token.roles is not None: 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/models/token_model.py", line 458, in roles 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context roles = self._get_project_roles() 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/models/token_model.py", line 423, in _get_project_roles 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context roles.append({'id': r['id'], 'name': r['name']}) 2020-09-10 11:03:44.913 30 ERROR keystone.server.flask.request_processing.middleware.auth_context TypeError: list indices must be integers or slices, not str Thanks! Tony From skaplons at redhat.com Thu Sep 10 19:34:13 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 10 Sep 2020 21:34:13 +0200 Subject: [neutron] Drivers meeting 11.09.2020 cancelled Message-ID: <20200910193413.rxfqazr3zfhr5bst@skaplons-mac> Hi, There is no any RFE for tomorrow drivers meeting so lets cancel it and focus on review of the opened patches during that time. See You all next week on the meetings. Have a great weekend :) -- Slawek Kaplonski Principal software engineer Red Hat From mrunge at matthias-runge.de Thu Sep 10 20:13:45 2020 From: mrunge at matthias-runge.de (Matthias Runge) Date: Thu, 10 Sep 2020 22:13:45 +0200 Subject: [all][tc][goals] Migrate CI/CD jobs to new Ubuntu LTS Focal: Week R-4 Update In-Reply-To: <17478418b45.c9cb264d62678.8113988885859095234@ghanshyammann.com> References: <17468f8f785.1123a07e6307801.3844845549431858647@ghanshyammann.com> <1746feef403.c41c848411058.2421030943223377212@ghanshyammann.com> <1747442027f.bb3f3d1823475.212800600164097649@ghanshyammann.com> <17476128313.f7b6001e28321.7088729119972703547@ghanshyammann.com> <236b2c69-530a-2266-08e3-170b86c16a9d@gmail.com> <17478418b45.c9cb264d62678.8113988885859095234@ghanshyammann.com> Message-ID: <67dc8bf5-ec1c-92c8-bc8e-e4aa3c855dfe@matthias-runge.de> On 10/09/2020 15:43, Ghanshyam Mann wrote: > ---- On Thu, 10 Sep 2020 04:31:17 -0500 Yasufumi Ogawa wrote ---- > > Hi gmann, > > > > Sorry for that we've not merged your patch to Tacker because devstack on > > Focal fails in functional test. It seems gnocchi installation on Focal > > has some problems. 
> > > > Anyway, although this issue isn't fixed yet, we'll proceed to merge the > > patch immediately. > > Thanks Yasufumi. > > I reported the gnoochi issue in the below storyboard and tried to reach out to the ceilometer team > also but found not get any response. I will check what to do on this blocker. > > https://storyboard.openstack.org/#!/story/2008121 > > So, how did you reach out, or who did you contact? Since gnocchi is a separate project outside of OpenStack, you should report these issues on https://github.com/gnocchixyz/gnocchi/issues. Especially, one should use the usual way to report issues for a project. Thank you for your patch for ceilometer, I did a review on it but did not get an answer to my question. Matthias > -gmann > > > > > Thanks, > > Yasufumi > > > > On 2020/09/10 12:32, Ghanshyam Mann wrote: > > > Updates: > > > > > > Fixed a few more projects today which I found failing on Focal: > > > > > > - OpenStack SDKs repos : ready to merge > > > - All remaining Oslo lib fixes: we are discussing FFE on these in separate ML thread. > > > - Keystone: Fix is up, it should pass now. > > > - Manila: Fix is up, it should pass gate. > > > - Tacker: Ready to merge > > > - neutron-dynamic-routing: Ready to merge > > > - Cinder- it seems l-c job still failing. I will dig into it tomorrow or it will be appreciated if anyone can take a look before my morning. > > > this is the patch -https://review.opendev.org/#/c/743080/ > > > > > > Note: all tox based jobs (Except py36/3.7) are running on Focal now so If any of you gate failing, feel free to ping me on #openstack-qa > > > > > > No more energy left for today, I will continue the remaining work tomorrow. > > > > > > -gmann > > > > > > ---- On Wed, 09 Sep 2020 14:05:17 -0500 Ghanshyam Mann wrote ---- > > > > ---- On Tue, 08 Sep 2020 17:56:05 -0500 Ghanshyam Mann wrote ---- > > > > > Updates: > > > > > After working more on failing one today and listing the blocking one, I think we are good to switch tox based testing today > > > > > and discuss the integration testing switch tomorrow in TC office hours. > > > > > > > > > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > > > > > > > > > I have checked it again and fixed many repos that are up for review and merge. Most python clients are already fixed > > > > > or their fixes are up for merge so they can make it before the feature freeze on 10th. If any repo is broken then it will be pretty quick > > > > > to fix by lower constraint bump (see the example under https://review.opendev.org/#/q/topic:migrate-to-focal) > > > > > > > > > > Even if any of the fixes miss the victoria release then those can be backported easily. I am opening the tox base jobs migration to merge: > > > > > - All patches in this series https://review.opendev.org/#/c/738328/ > > > > > > > > All these tox base jobs are merged now and running on Focal. If any of your repo is failing, please fix on priority or ping me on IRC if failure not clear. > > > > You can find most of the fixes for possible failure in this topic: > > > > - https://review.opendev.org/#/q/topic:migrate-to-focal+(status:open+OR+status:merged) > > > > > > > > -gmann > > > > > > > > > > > > > > > > > > > > > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > > > > > > > > > We have three blocking open bugs here so I would like to discuss it in tomorrow's TC office hour also about how to proceed on this. > > > > > > > > > > 1. 
Nova: https://bugs.launchpad.net/nova/+bug/1882521 (https://bugs.launchpad.net/qemu/+bug/1894804) > > > > > 2. Barbican: https://storyboard.openstack.org/#!/story/2007732 > > > > > 3. Ceilometer: https://storyboard.openstack.org/#!/story/2008121 > > > > > > > > > > > > > > > -gmann > > > > > > > > > > > > > > > ---- On Mon, 07 Sep 2020 09:29:40 -0500 Ghanshyam Mann wrote ---- > > > > > > Hello Everyone, > > > > > > > > > > > > Please find the week R-4 updates on 'Ubuntu Focal migration' community goal. Its time to force the base jobs migration which can > > > > > > break the projects gate if not yet taken care of. Read below for the plan. > > > > > > > > > > > > Tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > > > > > > > > > Progress: > > > > > > ======= > > > > > > * We are close to V-3 release and this is time we have to complete this migration otherwise doing it in RC period can add > > > > > > unnecessary and last min delay. I am going to plan this migration in two-part. This will surely break some projects gate > > > > > > which is not yet finished the migration but we have to do at some time. Please let me know if any objection to the below > > > > > > plan. > > > > > > > > > > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > > > > > > > > > > > ** I am going to open tox base jobs migration (doc, unit, functional, lower-constraints etc) to merge by tomorrow. which is this > > > > > > series (all base patches of this): https://review.opendev.org/#/c/738328/ . > > > > > > > > > > > > **There are few repos still failing on requirements lower-constraints job specifically which I tried my best to fix as many as possible. > > > > > > Many are ready to merge also. Please merge or work on your projects repo testing before that or fix on priority if failing. > > > > > > > > > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > > > > > > > > > > > * We have few open bugs for this which are not yet resolved, we will see how it goes but the current plan is to migrate by 10th Sept. > > > > > > > > > > > > ** Bug#1882521 > > > > > > ** DB migration issues, > > > > > > *** alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 > > > > > > > > > > > > > > > > > > Testing Till now: > > > > > > ============ > > > > > > * ~200 repos gate have been tested or fixed till now. > > > > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+(status:abandoned+OR+status:merged) > > > > > > > > > > > > * ~100 repos are under test and failing. Debugging and fixing are in progress (If you would like to help, please check your > > > > > > project repos if I am late to fix them): > > > > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open > > > > > > > > > > > > * ~30repos fixes ready to merge: > > > > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open+label%3AVerified%3E%3D1%2Czuul+NOT+label%3AWorkflow%3C%3D-1 > > > > > > > > > > > > > > > > > > Bugs Report: > > > > > > ========== > > > > > > > > > > > > 1. Bug#1882521. (IN-PROGRESS) > > > > > > There is open bug for nova/cinder where three tempest tests are failing for > > > > > > volume detach operation. There is no clear root cause found yet > > > > > > -https://bugs.launchpad.net/cinder/+bug/1882521 > > > > > > We have skipped the tests in tempest base patch to proceed with the other > > > > > > projects testing but this is blocking things for the migration. 
> > > > > > > > > > > > 2. DB migration issues (IN-PROGRESS) > > > > > > * alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 > > > > > > > > > > > > 3. We encountered the nodeset name conflict with x/tobiko. (FIXED) > > > > > > nodeset conflict is resolved now and devstack provides all focal nodes now. > > > > > > > > > > > > 4. Bug#1886296. (IN-PROGRESS) > > > > > > pyflakes till 2.1.0 is not compatible with python 3.8 which is the default python version > > > > > > on ubuntu focal[1]. With pep8 job running on focal faces the issue and fail. We need to bump > > > > > > the pyflakes to 2.1.1 as min version to run pep8 jobs on py3.8. > > > > > > As of now, many projects are using old hacking version so I am explicitly adding pyflakes>=2.1.1 > > > > > > on the project side[2] but for the long term easy maintenance, I am doing it in 'hacking' requirements.txt[3] > > > > > > nd will release a new hacking version. After that project can move to new hacking and do not need > > > > > > to maintain pyflakes version compatibility. > > > > > > > > > > > > 5. Bug#1886298. (IN-PROGRESS) > > > > > > 'Markupsafe' 1.0 is not compatible with the latest version of setuptools[4], > > > > > > We need to bump the lower-constraint for Markupsafe to 1.1.1 to make it work. > > > > > > There are a few more issues[5] with lower-constraint jobs which I am debugging. > > > > > > > > > > > > > > > > > > What work to be done on the project side: > > > > > > ================================ > > > > > > This goal is more of testing the jobs on focal and fixing bugs if any otherwise > > > > > > migrate jobs by switching the nodeset to focal node sets defined in devstack. > > > > > > > > > > > > 1. Start a patch in your repo by making depends-on on either of below: > > > > > > devstack base patch if you are using only devstack base jobs not tempest: > > > > > > > > > > > > Depends-on: https://review.opendev.org/#/c/731207/ > > > > > > OR > > > > > > tempest base patch if you are using the tempest base job (like devstack-tempest): > > > > > > Depends-on: https://review.opendev.org/#/c/734700/ > > > > > > > > > > > > Both have depends-on on the series where I am moving unit/functional/doc/cover/nodejs tox jobs to focal. So > > > > > > you can test the complete gate jobs(unit/functional/doc/integration) together. > > > > > > This and its base patches - https://review.opendev.org/#/c/738328/ > > > > > > > > > > > > Example: https://review.opendev.org/#/c/738126/ > > > > > > > > > > > > 2. If none of your project jobs override the nodeset then above patch will be > > > > > > testing patch(do not merge) otherwise change the nodeset to focal. > > > > > > Example: https://review.opendev.org/#/c/737370/ > > > > > > > > > > > > 3. If the jobs are defined in branchless repo and override the nodeset then you need to override the branches > > > > > > variant to adjust the nodeset so that those jobs run on Focal on victoria onwards only. If no nodeset > > > > > > is overridden then devstack being branched and stable base job using bionic/xenial will take care of > > > > > > this. > > > > > > Example: https://review.opendev.org/#/c/744056/2 > > > > > > > > > > > > 4. If no updates need you can abandon the testing patch (https://review.opendev.org/#/c/744341/). 
If it need > > > > > > updates then modify the same patch with proper commit msg, once it pass the gate then remove the Depends-On > > > > > > so that you can merge your patch before base jobs are switched to focal. This way we make sure no gate downtime in > > > > > > this migration. > > > > > > Example: https://review.opendev.org/#/c/744056/1..2//COMMIT_MSG > > > > > > > > > > > > Once we finish the testing on projects side and no failure then we will merge the devstack and tempest > > > > > > base patches. > > > > > > > > > > > > > > > > > > Important things to note: > > > > > > =================== > > > > > > * Do not forgot to add the story and task link to your patch so that we can track it smoothly. > > > > > > * Use gerrit topic 'migrate-to-focal' > > > > > > * Do not backport any of the patches. > > > > > > > > > > > > > > > > > > References: > > > > > > ========= > > > > > > Goal doc: https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal.html > > > > > > Storyboard tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > > > > > > > > > [1] https://github.com/PyCQA/pyflakes/issues/367 > > > > > > [2] https://review.opendev.org/#/c/739315/ > > > > > > [3] https://review.opendev.org/#/c/739334/ > > > > > > [4] https://github.com/pallets/markupsafe/issues/116 > > > > > > [5] https://zuul.opendev.org/t/openstack/build/7ecd9cf100194bc99b3b70fa1e6de032 > > > > > > > > > > > > -gmann > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From gmann at ghanshyammann.com Thu Sep 10 23:05:21 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 10 Sep 2020 18:05:21 -0500 Subject: [all][tc][goals] Migrate CI/CD jobs to new Ubuntu LTS Focal: Week R-4 Update In-Reply-To: <67dc8bf5-ec1c-92c8-bc8e-e4aa3c855dfe@matthias-runge.de> References: <17468f8f785.1123a07e6307801.3844845549431858647@ghanshyammann.com> <1746feef403.c41c848411058.2421030943223377212@ghanshyammann.com> <1747442027f.bb3f3d1823475.212800600164097649@ghanshyammann.com> <17476128313.f7b6001e28321.7088729119972703547@ghanshyammann.com> <236b2c69-530a-2266-08e3-170b86c16a9d@gmail.com> <17478418b45.c9cb264d62678.8113988885859095234@ghanshyammann.com> <67dc8bf5-ec1c-92c8-bc8e-e4aa3c855dfe@matthias-runge.de> Message-ID: <1747a4427d1.b47bcff480039.5625062444763280406@ghanshyammann.com> ---- On Thu, 10 Sep 2020 15:13:45 -0500 Matthias Runge wrote ---- > On 10/09/2020 15:43, Ghanshyam Mann wrote: > > ---- On Thu, 10 Sep 2020 04:31:17 -0500 Yasufumi Ogawa wrote ---- > > > Hi gmann, > > > > > > Sorry for that we've not merged your patch to Tacker because devstack on > > > Focal fails in functional test. It seems gnocchi installation on Focal > > > has some problems. > > > > > > Anyway, although this issue isn't fixed yet, we'll proceed to merge the > > > patch immediately. > > > > Thanks Yasufumi. > > > > I reported the gnoochi issue in the below storyboard and tried to reach out to the ceilometer team > > also but found not get any response. I will check what to do on this blocker. > > > > https://storyboard.openstack.org/#!/story/2008121 > > > > > > > So, how did you reach out, or who did you contact? > > Since gnocchi is a separate project outside of OpenStack, you should > report these issues on https://github.com/gnocchixyz/gnocchi/issues. > Especially, one should use the usual way to report issues for a project. > > Thank you for your patch for ceilometer, I did a review on it but did > not get an answer to my question. 
Hi Matthias, I posted about this on the telemetry IRC channel. I have reported it on gnoochi github also - https://github.com/gnocchixyz/gnocchi/issues/1069 For the ceilometer patch, I updated it with lxml==4.2.3 which worked fine, hope it is ok now. -gmann > > Matthias > > > -gmann > > > > > > > > Thanks, > > > Yasufumi > > > > > > On 2020/09/10 12:32, Ghanshyam Mann wrote: > > > > Updates: > > > > > > > > Fixed a few more projects today which I found failing on Focal: > > > > > > > > - OpenStack SDKs repos : ready to merge > > > > - All remaining Oslo lib fixes: we are discussing FFE on these in separate ML thread. > > > > - Keystone: Fix is up, it should pass now. > > > > - Manila: Fix is up, it should pass gate. > > > > - Tacker: Ready to merge > > > > - neutron-dynamic-routing: Ready to merge > > > > - Cinder- it seems l-c job still failing. I will dig into it tomorrow or it will be appreciated if anyone can take a look before my morning. > > > > this is the patch -https://review.opendev.org/#/c/743080/ > > > > > > > > Note: all tox based jobs (Except py36/3.7) are running on Focal now so If any of you gate failing, feel free to ping me on #openstack-qa > > > > > > > > No more energy left for today, I will continue the remaining work tomorrow. > > > > > > > > -gmann > > > > > > > > ---- On Wed, 09 Sep 2020 14:05:17 -0500 Ghanshyam Mann wrote ---- > > > > > ---- On Tue, 08 Sep 2020 17:56:05 -0500 Ghanshyam Mann wrote ---- > > > > > > Updates: > > > > > > After working more on failing one today and listing the blocking one, I think we are good to switch tox based testing today > > > > > > and discuss the integration testing switch tomorrow in TC office hours. > > > > > > > > > > > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > > > > > > > > > > > I have checked it again and fixed many repos that are up for review and merge. Most python clients are already fixed > > > > > > or their fixes are up for merge so they can make it before the feature freeze on 10th. If any repo is broken then it will be pretty quick > > > > > > to fix by lower constraint bump (see the example under https://review.opendev.org/#/q/topic:migrate-to-focal) > > > > > > > > > > > > Even if any of the fixes miss the victoria release then those can be backported easily. I am opening the tox base jobs migration to merge: > > > > > > - All patches in this series https://review.opendev.org/#/c/738328/ > > > > > > > > > > All these tox base jobs are merged now and running on Focal. If any of your repo is failing, please fix on priority or ping me on IRC if failure not clear. > > > > > You can find most of the fixes for possible failure in this topic: > > > > > - https://review.opendev.org/#/q/topic:migrate-to-focal+(status:open+OR+status:merged) > > > > > > > > > > -gmann > > > > > > > > > > > > > > > > > > > > > > > > > > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > > > > > > > > > > > We have three blocking open bugs here so I would like to discuss it in tomorrow's TC office hour also about how to proceed on this. > > > > > > > > > > > > 1. Nova: https://bugs.launchpad.net/nova/+bug/1882521 (https://bugs.launchpad.net/qemu/+bug/1894804) > > > > > > 2. Barbican: https://storyboard.openstack.org/#!/story/2007732 > > > > > > 3. 
Ceilometer: https://storyboard.openstack.org/#!/story/2008121 > > > > > > > > > > > > > > > > > > -gmann > > > > > > > > > > > > > > > > > > ---- On Mon, 07 Sep 2020 09:29:40 -0500 Ghanshyam Mann wrote ---- > > > > > > > Hello Everyone, > > > > > > > > > > > > > > Please find the week R-4 updates on 'Ubuntu Focal migration' community goal. Its time to force the base jobs migration which can > > > > > > > break the projects gate if not yet taken care of. Read below for the plan. > > > > > > > > > > > > > > Tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > > > > > > > > > > > Progress: > > > > > > > ======= > > > > > > > * We are close to V-3 release and this is time we have to complete this migration otherwise doing it in RC period can add > > > > > > > unnecessary and last min delay. I am going to plan this migration in two-part. This will surely break some projects gate > > > > > > > which is not yet finished the migration but we have to do at some time. Please let me know if any objection to the below > > > > > > > plan. > > > > > > > > > > > > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > > > > > > > > > > > > > ** I am going to open tox base jobs migration (doc, unit, functional, lower-constraints etc) to merge by tomorrow. which is this > > > > > > > series (all base patches of this): https://review.opendev.org/#/c/738328/ . > > > > > > > > > > > > > > **There are few repos still failing on requirements lower-constraints job specifically which I tried my best to fix as many as possible. > > > > > > > Many are ready to merge also. Please merge or work on your projects repo testing before that or fix on priority if failing. > > > > > > > > > > > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > > > > > > > > > > > > > * We have few open bugs for this which are not yet resolved, we will see how it goes but the current plan is to migrate by 10th Sept. > > > > > > > > > > > > > > ** Bug#1882521 > > > > > > > ** DB migration issues, > > > > > > > *** alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 > > > > > > > > > > > > > > > > > > > > > Testing Till now: > > > > > > > ============ > > > > > > > * ~200 repos gate have been tested or fixed till now. > > > > > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+(status:abandoned+OR+status:merged) > > > > > > > > > > > > > > * ~100 repos are under test and failing. Debugging and fixing are in progress (If you would like to help, please check your > > > > > > > project repos if I am late to fix them): > > > > > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open > > > > > > > > > > > > > > * ~30repos fixes ready to merge: > > > > > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open+label%3AVerified%3E%3D1%2Czuul+NOT+label%3AWorkflow%3C%3D-1 > > > > > > > > > > > > > > > > > > > > > Bugs Report: > > > > > > > ========== > > > > > > > > > > > > > > 1. Bug#1882521. (IN-PROGRESS) > > > > > > > There is open bug for nova/cinder where three tempest tests are failing for > > > > > > > volume detach operation. There is no clear root cause found yet > > > > > > > -https://bugs.launchpad.net/cinder/+bug/1882521 > > > > > > > We have skipped the tests in tempest base patch to proceed with the other > > > > > > > projects testing but this is blocking things for the migration. > > > > > > > > > > > > > > 2. 
DB migration issues (IN-PROGRESS) > > > > > > > * alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 > > > > > > > > > > > > > > 3. We encountered the nodeset name conflict with x/tobiko. (FIXED) > > > > > > > nodeset conflict is resolved now and devstack provides all focal nodes now. > > > > > > > > > > > > > > 4. Bug#1886296. (IN-PROGRESS) > > > > > > > pyflakes till 2.1.0 is not compatible with python 3.8 which is the default python version > > > > > > > on ubuntu focal[1]. With pep8 job running on focal faces the issue and fail. We need to bump > > > > > > > the pyflakes to 2.1.1 as min version to run pep8 jobs on py3.8. > > > > > > > As of now, many projects are using old hacking version so I am explicitly adding pyflakes>=2.1.1 > > > > > > > on the project side[2] but for the long term easy maintenance, I am doing it in 'hacking' requirements.txt[3] > > > > > > > nd will release a new hacking version. After that project can move to new hacking and do not need > > > > > > > to maintain pyflakes version compatibility. > > > > > > > > > > > > > > 5. Bug#1886298. (IN-PROGRESS) > > > > > > > 'Markupsafe' 1.0 is not compatible with the latest version of setuptools[4], > > > > > > > We need to bump the lower-constraint for Markupsafe to 1.1.1 to make it work. > > > > > > > There are a few more issues[5] with lower-constraint jobs which I am debugging. > > > > > > > > > > > > > > > > > > > > > What work to be done on the project side: > > > > > > > ================================ > > > > > > > This goal is more of testing the jobs on focal and fixing bugs if any otherwise > > > > > > > migrate jobs by switching the nodeset to focal node sets defined in devstack. > > > > > > > > > > > > > > 1. Start a patch in your repo by making depends-on on either of below: > > > > > > > devstack base patch if you are using only devstack base jobs not tempest: > > > > > > > > > > > > > > Depends-on: https://review.opendev.org/#/c/731207/ > > > > > > > OR > > > > > > > tempest base patch if you are using the tempest base job (like devstack-tempest): > > > > > > > Depends-on: https://review.opendev.org/#/c/734700/ > > > > > > > > > > > > > > Both have depends-on on the series where I am moving unit/functional/doc/cover/nodejs tox jobs to focal. So > > > > > > > you can test the complete gate jobs(unit/functional/doc/integration) together. > > > > > > > This and its base patches - https://review.opendev.org/#/c/738328/ > > > > > > > > > > > > > > Example: https://review.opendev.org/#/c/738126/ > > > > > > > > > > > > > > 2. If none of your project jobs override the nodeset then above patch will be > > > > > > > testing patch(do not merge) otherwise change the nodeset to focal. > > > > > > > Example: https://review.opendev.org/#/c/737370/ > > > > > > > > > > > > > > 3. If the jobs are defined in branchless repo and override the nodeset then you need to override the branches > > > > > > > variant to adjust the nodeset so that those jobs run on Focal on victoria onwards only. If no nodeset > > > > > > > is overridden then devstack being branched and stable base job using bionic/xenial will take care of > > > > > > > this. > > > > > > > Example: https://review.opendev.org/#/c/744056/2 > > > > > > > > > > > > > > 4. If no updates need you can abandon the testing patch (https://review.opendev.org/#/c/744341/). 
If it need > > > > > > > updates then modify the same patch with proper commit msg, once it pass the gate then remove the Depends-On > > > > > > > so that you can merge your patch before base jobs are switched to focal. This way we make sure no gate downtime in > > > > > > > this migration. > > > > > > > Example: https://review.opendev.org/#/c/744056/1..2//COMMIT_MSG > > > > > > > > > > > > > > Once we finish the testing on projects side and no failure then we will merge the devstack and tempest > > > > > > > base patches. > > > > > > > > > > > > > > > > > > > > > Important things to note: > > > > > > > =================== > > > > > > > * Do not forgot to add the story and task link to your patch so that we can track it smoothly. > > > > > > > * Use gerrit topic 'migrate-to-focal' > > > > > > > * Do not backport any of the patches. > > > > > > > > > > > > > > > > > > > > > References: > > > > > > > ========= > > > > > > > Goal doc: https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal.html > > > > > > > Storyboard tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > > > > > > > > > > > [1] https://github.com/PyCQA/pyflakes/issues/367 > > > > > > > [2] https://review.opendev.org/#/c/739315/ > > > > > > > [3] https://review.opendev.org/#/c/739334/ > > > > > > > [4] https://github.com/pallets/markupsafe/issues/116 > > > > > > > [5] https://zuul.opendev.org/t/openstack/build/7ecd9cf100194bc99b3b70fa1e6de032 > > > > > > > > > > > > > > -gmann > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From laurentfdumont at gmail.com Fri Sep 11 00:19:57 2020 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Thu, 10 Sep 2020 20:19:57 -0400 Subject: [neutron] Flow drop on agent restart with openvswitch firewall driver In-Reply-To: References: <20200909075042.qyxbnq7li2zm5oo4@skaplons-mac> Message-ID: I'll see if I can reproduce this as well. We are running OVS as well in a RH env. (it would be nice to know because we are also restarting the agent sometimes :pray:) On Wed, Sep 9, 2020 at 3:39 PM Alexis Deberg wrote: > Sure, opened https://bugs.launchpad.net/neutron/+bug/1895038 with all the > details I got at hand. > As I said in the bug report, I'll try to reproduce with a up to date > devstack asap. > > Thanks > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Fri Sep 11 01:51:42 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 10 Sep 2020 21:51:42 -0400 Subject: [cinder] propose Lucio Seki for cinder core Message-ID: <2f42f092-5037-5f9d-48d6-cc52097f2479@gmail.com> Lucio Seki (lseki on IRC) has been very active this cycle doing reviews, answering questions in IRC, and participating in the Cinder weekly meetings and at the midcycles. He's been particularly thorough and helpful in his reviews of backend drivers, and has also been helpful in giving pointers to new driver maintainers who are setting up third party CI for their drivers. Having Lucio as a core reviewer will help improve the team's review bandwidth without sacrificing review quality. In the absence of objections, I'll add Lucio to the core team just before the next Cinder team meeting (Wednesday, 16 September at 1400 UTC in #openstack-meeting-alt). Please communicate any concerns to me before that time. 
cheers, brian From sorrison at gmail.com Fri Sep 11 03:15:51 2020 From: sorrison at gmail.com (Sam Morrison) Date: Fri, 11 Sep 2020 13:15:51 +1000 Subject: [neutron][networking-midonet] Maintainers needed In-Reply-To: <2AB30A6D-9B6C-4D18-8FAB-C1022965657A@gmail.com> References: <0AC5AC07-E97E-43CC-B344-A3E992B8CCA4@netways.de> <610412AF-AADF-44BD-ABA2-BA289B7C8F8A@redhat.com> <5E2F5826-559E-42E9-84C5-FA708E5A122A@gmail.com> <43C4AF2B-C5C0-40EB-B621-AC6799471D01@gmail.com> <92959221-0353-4D48-8726-8FE71AFEA652@gmail.com> <4D778DBF-F505-462F-B85D-0B372085FA72@gmail.com> <5B9D2CB0-8B81-4533-A072-9A51B4A44364@gmail.com> <17472f764b8.1292d333d6181.3892285235847293323@ghanshyammann.com> <2AB30A6D-9B6C-4D18-8FAB-C1022965657A@gmail.com> Message-ID: <7E82CBA5-8352-4C16-B726-1ADCAA925163@gmail.com> Made some more progress, got single and multinode working on bionic. I’ve added a centos8 which is failing because it can’t find yum or yum_install to install the packages, needs more investigation. Will have a look into that next week. The grenade job also won’t work until these changes get merged and back ported to ussuri I think. I made these 2 jobs non-voting for now. So now the only thing preventing the lucrative green +1 is the pep8 job which is failing because of neutron :-( So I think https://review.opendev.org/#/c/749857/ is now ready for review. Thanks, Sam > On 10 Sep 2020, at 10:58 am, Sam Morrison wrote: > > OK thanks for the fix for TaaS, https://review.opendev.org/#/c/750633/4 should be good to be merged (even though its failing) > > Also https://review.opendev.org/#/c/749641/3 should be good to go. This will get all the unit tests working. > > The pep8 tests are broken due to the pecan 1.4.0 issue being discussed at https://review.opendev.org/#/c/747419/ > > My zuul v3 aio tempest devstack job is working well now, still having some issues with the multinode one which I’m working on now. > > Sam > > > >> On 9 Sep 2020, at 11:04 pm, Ghanshyam Mann wrote: >> >> Also we need to merge the networking-l2gw project new location fix >> >> - https://review.opendev.org/#/c/738046/ >> >> It's leading to many errors as pointed by AJaeger - https://zuul.opendev.org/t/openstack/config-errors >> >> >> -gmann >> >> ---- On Wed, 09 Sep 2020 07:18:37 -0500 Lajos Katona wrote ---- >>> Hi,I pushed a fix for it https://review.opendev.org/750633, I added Deepak for reviewer as he is the owner of the taas patch. >>> Sorry for the problem.Lajos (lajoskatona) >>> Sam Morrison ezt írta (időpont: 2020. szept. 9., Sze, 12:49): >>> >>> >>> On 9 Sep 2020, at 4:52 pm, Lajos Katona wrote: >>> Hi,Could you please point to the issue with taas? >>> Networking-midonet unit tests [1] are failing with the addition of this patch [2] >>> [1] https://b1a2669063d97482275a-410cecb8410320c66fb802e0a530979a.ssl.cf5.rackcdn.com/749857/18/check/openstack-tox-py36/0344651/testr_results.html[2] https://opendev.org/x/tap-as-a-service/commit/8332a396b1b046eb370c0cb377d836d0c6b6d6ca >>> I’m not really familiar with all of this so not sure how to fix these up. >>> Cheers,Sam >>> >>> >>> >>> RegardsLajos (lajoskatona) >>> Sam Morrison ezt írta (időpont: 2020. szept. 9., Sze, 0:44): >>> >>> >>> On 8 Sep 2020, at 3:13 pm, Sam Morrison wrote: >>> Hi Yamamoto, >>> >>> On 4 Sep 2020, at 6:47 pm, Takashi Yamamoto wrote: >>> i'm talking to our infra folks but it might take longer than i hoped. >>> if you or someone else can provide a public repo, it might be faster. >>> (i have looked at launchpad PPA while ago. 
but it didn't seem >>> straightforward given the complex build machinary in midonet.) >>> >>> Yeah that’s no problem, I’ve set up a repo with the latest midonet debs in it and happy to use that for the time being. >>> >>> >>> I’m not sure why the pep8 job is failing, it is complaining about pecan which makes me think this is an issue with neutron itself? Kinda stuck on this one, it’s probably something silly. >>> >>> probably. >>> >>> Yeah this looks like a neutron or neutron-lib issue >>> >>> >>> For the py3 unit tests they are now failing due to db migration errors in tap-as-a-service, l2-gateway and vpnaas issues I think caused by neutron getting rid of the liberty alembic branch and so we need to squash these on these projects too. >>> >>> this thing? https://review.opendev.org/#/c/749866/ >>> >>> Yeah that fixed that issue. >>> >>> I have been working to get everything fixed in this review [1] >>> The pep8 job is working but not in the gate due to neutron issues [2]The py36/py38 jobs have 2 tests failing both relating to tap-as-a-service which I don’t really have any idea about, never used it. [3] >>> These are failing because of this patch on tap-as-a-service https://opendev.org/x/tap-as-a-service/commit/8332a396b1b046eb370c0cb377d836d0c6b6d6ca >>> Really have no idea how this works, does anyone use tap-as-a-service with midonet and can help me fix it, else I’m wondering if we disable tests for taas and make it an unsupported feature for now. >>> Sam >>> >>> >>> The tempest aio job is working well now, I’m not sure what tempest tests were run before but it’s just doing what ever is the default at the moment.The tempest multinode job isn’t working due to what I think is networking issues between the 2 nodes. I don’t really know what I’m doing here so any pointers would be helpful. [4]The grenade job is also failing because I also need to put these fixes on the stable/ussuri branch to make it work so will need to figure that out too >>> Cheers,Sam >>> [1] https://review.opendev.org/#/c/749857/[2] https://zuul.opendev.org/t/openstack/build/e94e873cbf0443c0a7f25ffe76b3b00b[3] https://b1a2669063d97482275a-410cecb8410320c66fb802e0a530979a.ssl.cf5.rackcdn.com/749857/18/check/openstack-tox-py36/0344651/testr_results.html[4] https://zuul.opendev.org/t/openstack/build/61f6dd3dc3d74a81b7a3f5968b4d8c72 >>> >>> >>> >>> >>> >>> I can now start to look into the devstack zuul jobs. >>> >>> Cheers, >>> Sam >>> >>> >>> [1] https://github.com/NeCTAR-RC/networking-midonet/commits/devstack >>> [2] https://github.com/midonet/midonet/pull/9 >>> >>> >>> >>> >>> On 1 Sep 2020, at 4:03 pm, Sam Morrison wrote: >>> >>> >>> >>> On 1 Sep 2020, at 2:59 pm, Takashi Yamamoto wrote: >>> >>> hi, >>> >>> On Tue, Sep 1, 2020 at 1:39 PM Sam Morrison wrote: >>> >>> >>> >>> On 1 Sep 2020, at 11:49 am, Takashi Yamamoto wrote: >>> >>> Sebastian, Sam, >>> >>> thank you for speaking up. >>> >>> as Slawek said, the first (and probably the biggest) thing is to fix the ci. >>> the major part for it is to make midonet itself to run on ubuntu >>> version used by the ci. (18.04, or maybe directly to 20.04) >>> https://midonet.atlassian.net/browse/MNA-1344 >>> iirc, the remaining blockers are: >>> * libreswan (used by vpnaas) >>> * vpp (used by fip64) >>> maybe it's the easiest to drop those features along with their >>> required components, if it's acceptable for your use cases. >>> >>> We are running midonet-cluster and midolman on 18.04, we dropped those package dependencies from our ubuntu package to get it working. 
>>> >>> We currently have built our own and host in our internal repo but happy to help putting this upstream somehow. Can we upload them to the midonet apt repo, does it still exist? >>> >>> it still exists. but i don't think it's maintained well. >>> let me find and ask someone in midokura who "owns" that part of infra. >>> >>> does it also involve some package-related modifications to midonet repo, right? >>> >>> >>> Yes a couple, I will send up as as pull requests to https://github.com/midonet/midonet today or tomorrow >>> >>> Sam >>> >>> >>> >>> >>> >>> I’m keen to do the work but might need a bit of guidance to get started, >>> >>> Sam >>> >>> >>> >>> >>> >>> >>> >>> alternatively you might want to make midonet run in a container. (so >>> that you can run it with older ubuntu, or even a container trimmed for >>> JVM) >>> there were a few attempts to containerize midonet. >>> i think this is the latest one: https://github.com/midonet/midonet-docker >>> >>> On Fri, Aug 28, 2020 at 7:10 AM Sam Morrison wrote: >>> >>> We (Nectar Research Cloud) use midonet heavily too, it works really well and we haven’t found another driver that works for us. We tried OVN but it just doesn’t scale to the size of environment we have. >>> >>> I’m happy to help too. >>> >>> Cheers, >>> Sam >>> >>> >>> >>> On 31 Jul 2020, at 2:06 am, Slawek Kaplonski wrote: >>> >>> Hi, >>> >>> Thx Sebastian for stepping in to maintain the project. That is great news. >>> I think that at the beginning You should do 2 things: >>> - sync with Takashi Yamamoto (I added him to the loop) as he is probably most active current maintainer of this project, >>> - focus on fixing networking-midonet ci which is currently broken - all scenario jobs aren’t working fine on Ubuntu 18.04 (and we are going to move to 20.04 in this cycle), migrate jobs to zuulv3 from the legacy ones and finally add them to the ci again, >>> >>> I can of course help You with ci jobs if You need any help. Feel free to ping me on IRC or email (can be off the list). >>> >>> On 29 Jul 2020, at 15:24, Sebastian Saemann wrote: >>> >>> Hi Slawek, >>> >>> we at NETWAYS are running most of our neutron networking on top of midonet and wouldn't be too happy if it gets deprecated and removed. So we would like to take over the maintainer role for this part. >>> >>> Please let me know how to proceed and how we can be onboarded easily. >>> >>> Best regards, >>> >>> Sebastian >>> >>> -- >>> Sebastian Saemann >>> Head of Managed Services >>> >>> NETWAYS Managed Services GmbH | Deutschherrnstr. 15-19 | D-90429 Nuernberg >>> Tel: +49 911 92885-0 | Fax: +49 911 92885-77 >>> CEO: Julian Hein, Bernd Erk | AG Nuernberg HRB25207 >>> https://netways.de | sebastian.saemann at netways.de >>> >>> ** NETWAYS Web Services - https://nws.netways.de ** >>> >>> — >>> Slawek Kaplonski >>> Principal software engineer >>> Red Hat >>> >>> >>> >>> >>> > From tonyliu0592 at hotmail.com Fri Sep 11 03:49:09 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Fri, 11 Sep 2020 03:49:09 +0000 Subject: [Neutron] number of subnets in a network Message-ID: Hi, Is there any hard limit to the number of subnets in a network? In theory, would it be ok to put, like 5000 subnets, in a network? Thanks! Tony From tonyliu0592 at hotmail.com Fri Sep 11 04:15:54 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Fri, 11 Sep 2020 04:15:54 +0000 Subject: "openstack server list" takes 30s Message-ID: Hi, I built a Ussuri cluster with 3 controllers and 5 compute nodes. 
OpenStack CLI ran pretty fast at the beginning, but gets slower over time along with increased workloads. Right now, it takes about 30s to list 10 VMs. The CPU, memory and disk usage are on those 3 controllers are all in the range. I understand there are many API calls happening behind CLI. I'd like to figure out how this 30s is consumed, which call is the killer. Any guidance or hint would be helpful. Thanks! Tony From dev.faz at gmail.com Fri Sep 11 05:38:03 2020 From: dev.faz at gmail.com (Fabian Zimmermann) Date: Fri, 11 Sep 2020 07:38:03 +0200 Subject: "openstack server list" takes 30s In-Reply-To: References: Message-ID: Hi, You could try to use osProfiler. Fabian Tony Liu schrieb am Fr., 11. Sept. 2020, 06:24: > Hi, > > I built a Ussuri cluster with 3 controllers and 5 compute nodes. > OpenStack CLI ran pretty fast at the beginning, but gets slower > over time along with increased workloads. Right now, it takes > about 30s to list 10 VMs. The CPU, memory and disk usage are on > those 3 controllers are all in the range. I understand there are > many API calls happening behind CLI. I'd like to figure out how > this 30s is consumed, which call is the killer. > Any guidance or hint would be helpful. > > > Thanks! > Tony > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Fri Sep 11 07:07:39 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 11 Sep 2020 09:07:39 +0200 Subject: [Keystone] socket.timeout: timed out In-Reply-To: References: Message-ID: Hi Tony, Well, it looks like memcached just timed out. I'd check the load on it. -yoctozepto On Thu, Sep 10, 2020 at 7:24 PM Tony Liu wrote: > > Any clues on this timeout exception? > > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context [req-534d9855-8113-450d-8f9f-d93c0d961d24 113ee63a9ed0466794e24d069efc302c 4c142a681d884010ab36a7ac687d910c - default default] timed out: socket.timeout: timed out > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context Traceback (most recent call last): > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py", line 103, in _inner > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context return method(self, request) > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py", line 353, in process_request > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context resp = super(AuthContextMiddleware, self).process_request(request) > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystonemiddleware/auth_token/__init__.py", line 411, in process_request > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context allow_expired=allow_expired) > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystonemiddleware/auth_token/__init__.py", line 445, in _do_fetch_token > 2020-09-10 10:10:33.981 28 ERROR 
keystone.server.flask.request_processing.middleware.auth_context data = self.fetch_token(token, **kwargs) > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/server/flask/request_processing/middleware/auth_context.py", line 248, in fetch_token > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context token, access_rules_support=ACCESS_RULES_MIN_VERSION) > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/common/manager.py", line 115, in wrapped > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context __ret_val = __f(*args, **kwargs) > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/token/provider.py", line 145, in validate_token > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context token = self._validate_token(token_id) > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "", line 2, in _validate_token > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 1360, in get_or_create_for_user_func > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context key, user_func, timeout, should_cache_fn, (arg, kw) > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 962, in get_or_create > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context async_creator, > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/lock.py", line 187, in __enter__ > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context return self._enter() > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/lock.py", line 94, in _enter > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context generated = self._enter_create(value, createdtime) > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/lock.py", line 180, in _enter_create > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context return self.creator() > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 916, in gen_value > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context *creator_args[0], **creator_args[1] > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/token/provider.py", line 179, in _validate_token > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context token.mint(token_id, issued_at) > 2020-09-10 
10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/models/token_model.py", line 579, in mint > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context self._validate_token_resources() > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/models/token_model.py", line 471, in _validate_token_resources > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context if self.project and not self.project_domain.get('enabled'): > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/models/token_model.py", line 176, in project_domain > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context self.project['domain_id'] > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/common/manager.py", line 115, in wrapped > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context __ret_val = __f(*args, **kwargs) > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "", line 2, in get_domain > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 1360, in get_or_create_for_user_func > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context key, user_func, timeout, should_cache_fn, (arg, kw) > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 962, in get_or_create > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context async_creator, > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/lock.py", line 187, in __enter__ > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context return self._enter() > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/lock.py", line 87, in _enter > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context value = value_fn() > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 902, in get_value > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context value = self.backend.get(key) > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/keystone/common/cache/_context_cache.py", line 74, in get > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context value = self.proxied.get(key) > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/dogpile/cache/backends/memcached.py", line 168, 
in get > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context value = self.client.get(key) > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/oslo_cache/backends/memcache_pool.py", line 32, in _run_method > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context return getattr(client, __name)(*args, **kwargs) > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/memcache.py", line 1129, in get > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context return self._get('get', key) > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/memcache.py", line 1074, in _get > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context server, key = self._get_server(key) > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/memcache.py", line 446, in _get_server > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context if server.connect(): > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/memcache.py", line 1391, in connect > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context if self._get_socket(): > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/memcache.py", line 1423, in _get_socket > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context self.flush() > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/memcache.py", line 1498, in flush > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context self.expect(b'OK') > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/memcache.py", line 1473, in expect > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context line = self.readline(raise_exception) > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context File "/usr/lib/python3.6/site-packages/memcache.py", line 1459, in readline > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context data = recv(4096) > 2020-09-10 10:10:33.981 28 ERROR keystone.server.flask.request_processing.middleware.auth_context socket.timeout: timed out > > > Thanks! > Tony > > From oliver.weinmann at me.com Fri Sep 11 07:53:40 2020 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Fri, 11 Sep 2020 07:53:40 -0000 Subject: =?utf-8?B?VXNzdXJpIC0gbWFrZSBhZGRlZCBtcHRzYXMgZHJpdmVyIHRvIGludHJvc3Bl?= =?utf-8?B?Y3Rpb24gaW5pdHJhbWZzIGxvYWQgYXV0b21hdGljYWxseQ==?= Message-ID: <873eeade-c2f4-423d-81bb-c0be5976b0a0@me.com> Hi, I already asked this question on serverfault. But I guess here is a better place. I have a very ancient hardware with a MPTSAS controller. 
I use this for TripleO deployment testing. With the release of Ussuri which is running CentOS8, I can no longer provision my overcloud nodes as the MPTSAS driver has been removed in CentOS8: https://www.reddit.com/r/CentOS/comments/d93unk/centos8_and_removal_mpt2sas_dell_sas_drivers/ I managed to include the driver provided from ELrepo in the introspection image but It is not loaded automatically: All commands are run as user "stack". Extract the introspection image: cd ~ mkdir imagesnew cd imagesnew tar xvf ../ironic-python-agent.tar mkdir ~/ipa-tmp cd ~/ipa-tmp /usr/lib/dracut/skipcpio ~/imagesnew/ironic-python-agent.initramfs | zcat | cpio -ivd | pax -r Extract the contents of the mptsas driver rpm: rpm2cpio ~/kmod-mptsas-3.04.20-3.el8_2.elrepo.x86_64.rpm | pax -r Put the kernel module in the right places. To figure out where the module has to reside I installed the rpm on a already deployed node and used find to locate it. xz -c ./usr/lib/modules/4.18.0-193.el8.x86_64/extra/mptsas/mptsas.ko > ./usr/lib/modules/4.18.0-193.6.3.el8_2.x86_64/kernel/drivers/message/fusion/mptsas.ko.xz mkdir ./usr/lib/modules/4.18.0-193.6.3.el8_2.x86_64/weak-updates/mptsas sudo ln -sf /lib/modules/4.18.0-193.el8.x86_64/extra/mptsas/mptsas.ko lib/modules/4.18.0-193.6.3.el8_2.x86_64/weak-updates/mptsas.ko sudo chown root . -R find . 2>/dev/null | sudo cpio --quiet -c -o | gzip -8  > ~/images/ironic-python-agent.initramfs Upload the new image cd ~/images openstack overcloud image upload --update-existing --image-path /home/stack/images/ Now when I start the introspection and ssh into the host I see no disks: [root at localhost ~]# fdisk -l [root at localhost ~]# lsmod | grep mptsas Once i manually load the driver, I can see the disks: [root at localhost ~]# modprobe mptsas [root at localhost ~]# lsmod | grep mptsas mptsas                 69632  0 mptscsih               45056  1 mptsas mptbase                98304  2 mptsas,mptscsih scsi_transport_sas     45056  1 mptsas [root at localhost ~]# fdisk -l Disk /dev/sda: 67.1 GiB, 71999422464 bytes, 140623872 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes But how can I make it so that it will automatically load on boot? Best Regards, Oliver -------------- next part -------------- An HTML attachment was scrubbed... URL: From e0ne at e0ne.info Fri Sep 11 09:12:28 2020 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Fri, 11 Sep 2020 12:12:28 +0300 Subject: [cinder] propose Lucio Seki for cinder core In-Reply-To: <2f42f092-5037-5f9d-48d6-cc52097f2479@gmail.com> References: <2f42f092-5037-5f9d-48d6-cc52097f2479@gmail.com> Message-ID: +1 from me. Lucio does a lot of good contributions to Cinder. Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Fri, Sep 11, 2020 at 4:53 AM Brian Rosmaita wrote: > Lucio Seki (lseki on IRC) has been very active this cycle doing reviews, > answering questions in IRC, and participating in the Cinder weekly > meetings and at the midcycles. He's been particularly thorough and > helpful in his reviews of backend drivers, and has also been helpful in > giving pointers to new driver maintainers who are setting up third party > CI for their drivers. Having Lucio as a core reviewer will help improve > the team's review bandwidth without sacrificing review quality. > > In the absence of objections, I'll add Lucio to the core team just > before the next Cinder team meeting (Wednesday, 16 September at 1400 UTC > in #openstack-meeting-alt). 
Please communicate any concerns to me > before that time. > > cheers, > brian > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Fri Sep 11 10:50:39 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 11 Sep 2020 12:50:39 +0200 Subject: [Neutron] number of subnets in a network In-Reply-To: References: Message-ID: <20200911105039.ehcdlb66iezfj3dh@skaplons-mac> Hi, I'm not aware about any such limit. There shouldn't be any IMHO. On Fri, Sep 11, 2020 at 03:49:09AM +0000, Tony Liu wrote: > Hi, > > Is there any hard limit to the number of subnets in a network? > In theory, would it be ok to put, like 5000 subnets, in a network? > > Thanks! > Tony > > -- Slawek Kaplonski Principal software engineer Red Hat From klemen at psi-net.si Fri Sep 11 10:51:33 2020 From: klemen at psi-net.si (Klemen Pogacnik) Date: Fri, 11 Sep 2020 12:51:33 +0200 Subject: [kolla-ansible] Ceph in Ussuri Message-ID: I've done Ansible playbook to simplify Ceph integration with Openstack. It's based on cephadm-ansible project ( https://github.com/jcmdln/cephadm-ansible) Check: https://gitlab.com/kemopq/it_addmodule-ceph Any suggestions and/or help are appreciated! Klemen -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Fri Sep 11 11:12:58 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 11 Sep 2020 13:12:58 +0200 Subject: [Neutron] number of subnets in a network In-Reply-To: <20200911105039.ehcdlb66iezfj3dh@skaplons-mac> References: <20200911105039.ehcdlb66iezfj3dh@skaplons-mac> Message-ID: Hello, I know there is an issue with dhcp agent on queens when there are a lot of subnets. The issue is solved in stein. Ignazio Il Ven 11 Set 2020, 12:58 Slawek Kaplonski ha scritto: > Hi, > > I'm not aware about any such limit. There shouldn't be any IMHO. > > On Fri, Sep 11, 2020 at 03:49:09AM +0000, Tony Liu wrote: > > Hi, > > > > Is there any hard limit to the number of subnets in a network? > > In theory, would it be ok to put, like 5000 subnets, in a network? > > > > Thanks! > > Tony > > > > > > -- > Slawek Kaplonski > Principal software engineer > Red Hat > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdulko at redhat.com Fri Sep 11 11:50:57 2020 From: mdulko at redhat.com (=?UTF-8?Q?Micha=C5=82?= Dulko) Date: Fri, 11 Sep 2020 13:50:57 +0200 Subject: [all][tc][goals] Migrate CI/CD jobs to new Ubuntu LTS Focal: Week R-4 Update In-Reply-To: <1747442027f.bb3f3d1823475.212800600164097649@ghanshyammann.com> References: <17468f8f785.1123a07e6307801.3844845549431858647@ghanshyammann.com> <1746feef403.c41c848411058.2421030943223377212@ghanshyammann.com> <1747442027f.bb3f3d1823475.212800600164097649@ghanshyammann.com> Message-ID: <447841d8d8560f96475cb0a275e34464ece6352b.camel@redhat.com> On Wed, 2020-09-09 at 14:05 -0500, Ghanshyam Mann wrote: > ---- On Tue, 08 Sep 2020 17:56:05 -0500 Ghanshyam Mann < > gmann at ghanshyammann.com> wrote ---- > > Updates: > > After working more on failing one today and listing the blocking > one, I think we are good to switch tox based testing today > > and discuss the integration testing switch tomorrow in TC office > hours. > > > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > > > I have checked it again and fixed many repos that are up for > review and merge. Most python clients are already fixed > > or their fixes are up for merge so they can make it before the > feature freeze on 10th. 
If any repo is broken then it will be pretty > quick > > to fix by lower constraint bump (see the example under > https://review.opendev.org/#/q/topic:migrate-to-focal) > > > > Even if any of the fixes miss the victoria release then those can > be backported easily. I am opening the tox base jobs migration to > merge: > > - All patches in this series > https://review.opendev.org/#/c/738328/ > > All these tox base jobs are merged now and running on Focal. If any > of your repo is failing, please fix on priority or ping me on IRC if > failure not clear. > You can find most of the fixes for possible failure in this topic: > - > https://review.opendev.org/#/q/topic:migrate-to-focal+(status:open+OR+status:merged) > > -gmann We're in a bit of a pickle here. So with kuryr-kubernetes we aim to keep lower-constraints on the versions that can be found in CentOS/RHEL8 and seems like cffi 1.11.5 won't compile with Python 3.8. What should we do here? Is such assumption even possible given broader OpenStack assumptions? > > > > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > > > We have three blocking open bugs here so I would like to discuss > it in tomorrow's TC office hour also about how to proceed on this. > > > > 1. Nova: https://bugs.launchpad.net/nova/+bug/1882521 ( > https://bugs.launchpad.net/qemu/+bug/1894804) > > 2. Barbican: https://storyboard.openstack.org/#!/story/2007732 > > 3. Ceilometer: https://storyboard.openstack.org/#!/story/2008121 > > > > > > -gmann > > > > > > ---- On Mon, 07 Sep 2020 09:29:40 -0500 Ghanshyam Mann < > gmann at ghanshyammann.com> wrote ---- > > > Hello Everyone, > > > > > > Please find the week R-4 updates on 'Ubuntu Focal migration' > community goal. Its time to force the base jobs migration which can > > > break the projects gate if not yet taken care of. Read below > for the plan. > > > > > > Tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > > > Progress: > > > ======= > > > * We are close to V-3 release and this is time we have to > complete this migration otherwise doing it in RC period can add > > > unnecessary and last min delay. I am going to plan this > migration in two-part. This will surely break some projects gate > > > which is not yet finished the migration but we have to do at > some time. Please let me know if any objection to the below > > > plan. > > > > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > > > > > ** I am going to open tox base jobs migration (doc, unit, > functional, lower-constraints etc) to merge by tomorrow. which is > this > > > series (all base patches of this): > https://review.opendev.org/#/c/738328/ . > > > > > > **There are few repos still failing on requirements lower- > constraints job specifically which I tried my best to fix as many as > possible. > > > Many are ready to merge also. Please merge or work on your > projects repo testing before that or fix on priority if failing. > > > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > > > > > * We have few open bugs for this which are not yet resolved, we > will see how it goes but the current plan is to migrate by 10th Sept. > > > > > > ** Bug#1882521 > > > ** DB migration issues, > > > *** alembic and few on telemetry/gnocchi side > https://github.com/sqlalchemy/alembic/issues/699, > https://storyboard.openstack.org/#!/story/2008003 > > > > > > > > > Testing Till now: > > > ============ > > > * ~200 repos gate have been tested or fixed till now. 
> > > ** > https://review.opendev.org/#/q/topic:migrate-to-focal+(status:abandoned+OR+status:merged) > > > > > > * ~100 repos are under test and failing. Debugging and fixing > are in progress (If you would like to help, please check your > > > project repos if I am late to fix them): > > > ** > https://review.opendev.org/#/q/topic:migrate-to-focal+status:open > > > > > > * ~30repos fixes ready to merge: > > > ** > https://review.opendev.org/#/q/topic:migrate-to-focal+status:open+label%3AVerified%3E%3D1%2Czuul+NOT+label%3AWorkflow%3C%3D-1 > > > > > > > > > Bugs Report: > > > ========== > > > > > > 1. Bug#1882521. (IN-PROGRESS) > > > There is open bug for nova/cinder where three tempest tests are > failing for > > > volume detach operation. There is no clear root cause found yet > > > -https://bugs.launchpad.net/cinder/+bug/1882521 > > > We have skipped the tests in tempest base patch to proceed with > the other > > > projects testing but this is blocking things for the migration. > > > > > > 2. DB migration issues (IN-PROGRESS) > > > * alembic and few on telemetry/gnocchi side > https://github.com/sqlalchemy/alembic/issues/699, > https://storyboard.openstack.org/#!/story/2008003 > > > > > > 3. We encountered the nodeset name conflict with x/tobiko. > (FIXED) > > > nodeset conflict is resolved now and devstack provides all > focal nodes now. > > > > > > 4. Bug#1886296. (IN-PROGRESS) > > > pyflakes till 2.1.0 is not compatible with python 3.8 which is > the default python version > > > on ubuntu focal[1]. With pep8 job running on focal faces the > issue and fail. We need to bump > > > the pyflakes to 2.1.1 as min version to run pep8 jobs on py3.8. > > > As of now, many projects are using old hacking version so I am > explicitly adding pyflakes>=2.1.1 > > > on the project side[2] but for the long term easy maintenance, > I am doing it in 'hacking' requirements.txt[3] > > > nd will release a new hacking version. After that project can > move to new hacking and do not need > > > to maintain pyflakes version compatibility. > > > > > > 5. Bug#1886298. (IN-PROGRESS) > > > 'Markupsafe' 1.0 is not compatible with the latest version of > setuptools[4], > > > We need to bump the lower-constraint for Markupsafe to 1.1.1 to > make it work. > > > There are a few more issues[5] with lower-constraint jobs which > I am debugging. > > > > > > > > > What work to be done on the project side: > > > ================================ > > > This goal is more of testing the jobs on focal and fixing bugs > if any otherwise > > > migrate jobs by switching the nodeset to focal node sets > defined in devstack. > > > > > > 1. Start a patch in your repo by making depends-on on either of > below: > > > devstack base patch if you are using only devstack base jobs > not tempest: > > > > > > Depends-on: https://review.opendev.org/#/c/731207/ > > > OR > > > tempest base patch if you are using the tempest base job (like > devstack-tempest): > > > Depends-on: https://review.opendev.org/#/c/734700/ > > > > > > Both have depends-on on the series where I am moving > unit/functional/doc/cover/nodejs tox jobs to focal. So > > > you can test the complete gate > jobs(unit/functional/doc/integration) together. > > > This and its base patches - > https://review.opendev.org/#/c/738328/ > > > > > > Example: https://review.opendev.org/#/c/738126/ > > > > > > 2. If none of your project jobs override the nodeset then above > patch will be > > > testing patch(do not merge) otherwise change the nodeset to > focal. 
> > > Example: https://review.opendev.org/#/c/737370/ > > > > > > 3. If the jobs are defined in branchless repo and override the > nodeset then you need to override the branches > > > variant to adjust the nodeset so that those jobs run on Focal > on victoria onwards only. If no nodeset > > > is overridden then devstack being branched and stable base job > using bionic/xenial will take care of > > > this. > > > Example: https://review.opendev.org/#/c/744056/2 > > > > > > 4. If no updates need you can abandon the testing patch ( > https://review.opendev.org/#/c/744341/). If it need > > > updates then modify the same patch with proper commit msg, once > it pass the gate then remove the Depends-On > > > so that you can merge your patch before base jobs are switched > to focal. This way we make sure no gate downtime in > > > this migration. > > > Example: https://review.opendev.org/#/c/744056/1..2//COMMIT_MSG > > > > > > Once we finish the testing on projects side and no failure then > we will merge the devstack and tempest > > > base patches. > > > > > > > > > Important things to note: > > > =================== > > > * Do not forgot to add the story and task link to your patch so > that we can track it smoothly. > > > * Use gerrit topic 'migrate-to-focal' > > > * Do not backport any of the patches. > > > > > > > > > References: > > > ========= > > > Goal doc: > https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal.html > > > Storyboard tracking: > https://storyboard.openstack.org/#!/story/2007865 > > > > > > [1] https://github.com/PyCQA/pyflakes/issues/367 > > > [2] https://review.opendev.org/#/c/739315/ > > > [3] https://review.opendev.org/#/c/739334/ > > > [4] https://github.com/pallets/markupsafe/issues/116 > > > [5] > https://zuul.opendev.org/t/openstack/build/7ecd9cf100194bc99b3b70fa1e6de032 > > > > > > -gmann > > > > > > > > > > > From smooney at redhat.com Fri Sep 11 11:53:03 2020 From: smooney at redhat.com (Sean Mooney) Date: Fri, 11 Sep 2020 12:53:03 +0100 Subject: [Neutron] number of subnets in a network In-Reply-To: References: <20200911105039.ehcdlb66iezfj3dh@skaplons-mac> Message-ID: <95cfcb95500a6da897732c1a6d835ae1e42af6fa.camel@redhat.com> is this request related to routed networks by anychance im just interested in why you would need 5000 subnets in one network in a non routed case you would like have issue with broadcast domains and that many networks but with routed network that would not be an issue. On Fri, 2020-09-11 at 13:12 +0200, Ignazio Cassano wrote: > Hello, I know there is an issue with dhcp agent on queens when there are a > lot of subnets. The issue is solved in stein. > Ignazio > > Il Ven 11 Set 2020, 12:58 Slawek Kaplonski ha scritto: > > > Hi, > > > > I'm not aware about any such limit. There shouldn't be any IMHO. > > > > On Fri, Sep 11, 2020 at 03:49:09AM +0000, Tony Liu wrote: > > > Hi, > > > > > > Is there any hard limit to the number of subnets in a network? > > > In theory, would it be ok to put, like 5000 subnets, in a network? > > > > > > Thanks! 
> > > Tony > > > > > > > > > > -- > > Slawek Kaplonski > > Principal software engineer > > Red Hat > > > > > > From radoslaw.piliszek at gmail.com Fri Sep 11 12:13:14 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 11 Sep 2020 14:13:14 +0200 Subject: [all][tc][goals] Migrate CI/CD jobs to new Ubuntu LTS Focal: Week R-4 Update In-Reply-To: <447841d8d8560f96475cb0a275e34464ece6352b.camel@redhat.com> References: <17468f8f785.1123a07e6307801.3844845549431858647@ghanshyammann.com> <1746feef403.c41c848411058.2421030943223377212@ghanshyammann.com> <1747442027f.bb3f3d1823475.212800600164097649@ghanshyammann.com> <447841d8d8560f96475cb0a275e34464ece6352b.camel@redhat.com> Message-ID: I agree with Michał that it kind of breaks the purpose of lower-constraints. Supposedly lower-constraints should just be tested with the lowest supported python version? WDYT, Folks? (That said, lots of projects already made lower-constraints break on RDO due to these bumps.) -yoctozepto On Fri, Sep 11, 2020 at 2:02 PM Michał Dulko wrote: > > On Wed, 2020-09-09 at 14:05 -0500, Ghanshyam Mann wrote: > > ---- On Tue, 08 Sep 2020 17:56:05 -0500 Ghanshyam Mann < > > gmann at ghanshyammann.com> wrote ---- > > > Updates: > > > After working more on failing one today and listing the blocking > > one, I think we are good to switch tox based testing today > > > and discuss the integration testing switch tomorrow in TC office > > hours. > > > > > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > > > > > I have checked it again and fixed many repos that are up for > > review and merge. Most python clients are already fixed > > > or their fixes are up for merge so they can make it before the > > feature freeze on 10th. If any repo is broken then it will be pretty > > quick > > > to fix by lower constraint bump (see the example under > > https://review.opendev.org/#/q/topic:migrate-to-focal) > > > > > > Even if any of the fixes miss the victoria release then those can > > be backported easily. I am opening the tox base jobs migration to > > merge: > > > - All patches in this series > > https://review.opendev.org/#/c/738328/ > > > > All these tox base jobs are merged now and running on Focal. If any > > of your repo is failing, please fix on priority or ping me on IRC if > > failure not clear. > > You can find most of the fixes for possible failure in this topic: > > - > > https://review.opendev.org/#/q/topic:migrate-to-focal+(status:open+OR+status:merged) > > > > -gmann > > We're in a bit of a pickle here. So with kuryr-kubernetes we aim to > keep lower-constraints on the versions that can be found in > CentOS/RHEL8 and seems like cffi 1.11.5 won't compile with Python 3.8. > What should we do here? Is such assumption even possible given broader > OpenStack assumptions? > > > > > > > > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > > > > > We have three blocking open bugs here so I would like to discuss > > it in tomorrow's TC office hour also about how to proceed on this. > > > > > > 1. Nova: https://bugs.launchpad.net/nova/+bug/1882521 ( > > https://bugs.launchpad.net/qemu/+bug/1894804) > > > 2. Barbican: https://storyboard.openstack.org/#!/story/2007732 > > > 3. 
Ceilometer: https://storyboard.openstack.org/#!/story/2008121 > > > > > > > > > -gmann > > > > > > > > > ---- On Mon, 07 Sep 2020 09:29:40 -0500 Ghanshyam Mann < > > gmann at ghanshyammann.com> wrote ---- > > > > Hello Everyone, > > > > > > > > Please find the week R-4 updates on 'Ubuntu Focal migration' > > community goal. Its time to force the base jobs migration which can > > > > break the projects gate if not yet taken care of. Read below > > for the plan. > > > > > > > > Tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > > > > > Progress: > > > > ======= > > > > * We are close to V-3 release and this is time we have to > > complete this migration otherwise doing it in RC period can add > > > > unnecessary and last min delay. I am going to plan this > > migration in two-part. This will surely break some projects gate > > > > which is not yet finished the migration but we have to do at > > some time. Please let me know if any objection to the below > > > > plan. > > > > > > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > > > > > > > ** I am going to open tox base jobs migration (doc, unit, > > functional, lower-constraints etc) to merge by tomorrow. which is > > this > > > > series (all base patches of this): > > https://review.opendev.org/#/c/738328/ . > > > > > > > > **There are few repos still failing on requirements lower- > > constraints job specifically which I tried my best to fix as many as > > possible. > > > > Many are ready to merge also. Please merge or work on your > > projects repo testing before that or fix on priority if failing. > > > > > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > > > > > > > * We have few open bugs for this which are not yet resolved, we > > will see how it goes but the current plan is to migrate by 10th Sept. > > > > > > > > ** Bug#1882521 > > > > ** DB migration issues, > > > > *** alembic and few on telemetry/gnocchi side > > https://github.com/sqlalchemy/alembic/issues/699, > > https://storyboard.openstack.org/#!/story/2008003 > > > > > > > > > > > > Testing Till now: > > > > ============ > > > > * ~200 repos gate have been tested or fixed till now. > > > > ** > > https://review.opendev.org/#/q/topic:migrate-to-focal+(status:abandoned+OR+status:merged) > > > > > > > > * ~100 repos are under test and failing. Debugging and fixing > > are in progress (If you would like to help, please check your > > > > project repos if I am late to fix them): > > > > ** > > https://review.opendev.org/#/q/topic:migrate-to-focal+status:open > > > > > > > > * ~30repos fixes ready to merge: > > > > ** > > https://review.opendev.org/#/q/topic:migrate-to-focal+status:open+label%3AVerified%3E%3D1%2Czuul+NOT+label%3AWorkflow%3C%3D-1 > > > > > > > > > > > > Bugs Report: > > > > ========== > > > > > > > > 1. Bug#1882521. (IN-PROGRESS) > > > > There is open bug for nova/cinder where three tempest tests are > > failing for > > > > volume detach operation. There is no clear root cause found yet > > > > -https://bugs.launchpad.net/cinder/+bug/1882521 > > > > We have skipped the tests in tempest base patch to proceed with > > the other > > > > projects testing but this is blocking things for the migration. > > > > > > > > 2. DB migration issues (IN-PROGRESS) > > > > * alembic and few on telemetry/gnocchi side > > https://github.com/sqlalchemy/alembic/issues/699, > > https://storyboard.openstack.org/#!/story/2008003 > > > > > > > > 3. We encountered the nodeset name conflict with x/tobiko. 
> > (FIXED) > > > > nodeset conflict is resolved now and devstack provides all > > focal nodes now. > > > > > > > > 4. Bug#1886296. (IN-PROGRESS) > > > > pyflakes till 2.1.0 is not compatible with python 3.8 which is > > the default python version > > > > on ubuntu focal[1]. With pep8 job running on focal faces the > > issue and fail. We need to bump > > > > the pyflakes to 2.1.1 as min version to run pep8 jobs on py3.8. > > > > As of now, many projects are using old hacking version so I am > > explicitly adding pyflakes>=2.1.1 > > > > on the project side[2] but for the long term easy maintenance, > > I am doing it in 'hacking' requirements.txt[3] > > > > nd will release a new hacking version. After that project can > > move to new hacking and do not need > > > > to maintain pyflakes version compatibility. > > > > > > > > 5. Bug#1886298. (IN-PROGRESS) > > > > 'Markupsafe' 1.0 is not compatible with the latest version of > > setuptools[4], > > > > We need to bump the lower-constraint for Markupsafe to 1.1.1 to > > make it work. > > > > There are a few more issues[5] with lower-constraint jobs which > > I am debugging. > > > > > > > > > > > > What work to be done on the project side: > > > > ================================ > > > > This goal is more of testing the jobs on focal and fixing bugs > > if any otherwise > > > > migrate jobs by switching the nodeset to focal node sets > > defined in devstack. > > > > > > > > 1. Start a patch in your repo by making depends-on on either of > > below: > > > > devstack base patch if you are using only devstack base jobs > > not tempest: > > > > > > > > Depends-on: https://review.opendev.org/#/c/731207/ > > > > OR > > > > tempest base patch if you are using the tempest base job (like > > devstack-tempest): > > > > Depends-on: https://review.opendev.org/#/c/734700/ > > > > > > > > Both have depends-on on the series where I am moving > > unit/functional/doc/cover/nodejs tox jobs to focal. So > > > > you can test the complete gate > > jobs(unit/functional/doc/integration) together. > > > > This and its base patches - > > https://review.opendev.org/#/c/738328/ > > > > > > > > Example: https://review.opendev.org/#/c/738126/ > > > > > > > > 2. If none of your project jobs override the nodeset then above > > patch will be > > > > testing patch(do not merge) otherwise change the nodeset to > > focal. > > > > Example: https://review.opendev.org/#/c/737370/ > > > > > > > > 3. If the jobs are defined in branchless repo and override the > > nodeset then you need to override the branches > > > > variant to adjust the nodeset so that those jobs run on Focal > > on victoria onwards only. If no nodeset > > > > is overridden then devstack being branched and stable base job > > using bionic/xenial will take care of > > > > this. > > > > Example: https://review.opendev.org/#/c/744056/2 > > > > > > > > 4. If no updates need you can abandon the testing patch ( > > https://review.opendev.org/#/c/744341/). If it need > > > > updates then modify the same patch with proper commit msg, once > > it pass the gate then remove the Depends-On > > > > so that you can merge your patch before base jobs are switched > > to focal. This way we make sure no gate downtime in > > > > this migration. > > > > Example: https://review.opendev.org/#/c/744056/1..2//COMMIT_MSG > > > > > > > > Once we finish the testing on projects side and no failure then > > we will merge the devstack and tempest > > > > base patches. 
> > > > > > > > > > > > Important things to note: > > > > =================== > > > > * Do not forgot to add the story and task link to your patch so > > that we can track it smoothly. > > > > * Use gerrit topic 'migrate-to-focal' > > > > * Do not backport any of the patches. > > > > > > > > > > > > References: > > > > ========= > > > > Goal doc: > > https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal.html > > > > Storyboard tracking: > > https://storyboard.openstack.org/#!/story/2007865 > > > > > > > > [1] https://github.com/PyCQA/pyflakes/issues/367 > > > > [2] https://review.opendev.org/#/c/739315/ > > > > [3] https://review.opendev.org/#/c/739334/ > > > > [4] https://github.com/pallets/markupsafe/issues/116 > > > > [5] > > https://zuul.opendev.org/t/openstack/build/7ecd9cf100194bc99b3b70fa1e6de032 > > > > > > > > -gmann > > > > > > > > > > > > > > > > > > > From gmann at ghanshyammann.com Fri Sep 11 12:30:28 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 11 Sep 2020 07:30:28 -0500 Subject: [all][tc][goals] Migrate CI/CD jobs to new Ubuntu LTS Focal: Week R-4 Update In-Reply-To: References: <17468f8f785.1123a07e6307801.3844845549431858647@ghanshyammann.com> <1746feef403.c41c848411058.2421030943223377212@ghanshyammann.com> <1747442027f.bb3f3d1823475.212800600164097649@ghanshyammann.com> <447841d8d8560f96475cb0a275e34464ece6352b.camel@redhat.com> Message-ID: <1747d254391.1106413e3106578.3208130175292980261@ghanshyammann.com> ---- On Fri, 11 Sep 2020 07:13:14 -0500 Radosław Piliszek wrote ---- > I agree with Michał that it kind of breaks the purpose of lower-constraints. > Supposedly lower-constraints should just be tested with the lowest > supported python version? > WDYT, Folks? > > (That said, lots of projects already made lower-constraints break on > RDO due to these bumps.) This is something we discussed yesterday, there are both way we can argue whether we should test l-c on lower supported python or available python on tested Disro which we do with Ubuntu Focal for all tox based jobs. And believe be lower constraints are not only python version things. But It is fine for projects say kuryr-kubernetes to keep running the l-c job on Bionic or centos I can provide patch for that. and for RHEL compatible version, it can be adjusted as I did in ceilometer for lxml but I am not sure if we can test all of them for RHEL also. - https://review.opendev.org/#/c/744612/ I am going to submit a Forum session to discuss on this topic so that we have an agreed way of testing in future, -gman > > -yoctozepto > > On Fri, Sep 11, 2020 at 2:02 PM Michał Dulko wrote: > > > > On Wed, 2020-09-09 at 14:05 -0500, Ghanshyam Mann wrote: > > > ---- On Tue, 08 Sep 2020 17:56:05 -0500 Ghanshyam Mann < > > > gmann at ghanshyammann.com> wrote ---- > > > > Updates: > > > > After working more on failing one today and listing the blocking > > > one, I think we are good to switch tox based testing today > > > > and discuss the integration testing switch tomorrow in TC office > > > hours. > > > > > > > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > > > > > > > I have checked it again and fixed many repos that are up for > > > review and merge. Most python clients are already fixed > > > > or their fixes are up for merge so they can make it before the > > > feature freeze on 10th. 
If any repo is broken then it will be pretty > > > quick > > > > to fix by lower constraint bump (see the example under > > > https://review.opendev.org/#/q/topic:migrate-to-focal) > > > > > > > > Even if any of the fixes miss the victoria release then those can > > > be backported easily. I am opening the tox base jobs migration to > > > merge: > > > > - All patches in this series > > > https://review.opendev.org/#/c/738328/ > > > > > > All these tox base jobs are merged now and running on Focal. If any > > > of your repo is failing, please fix on priority or ping me on IRC if > > > failure not clear. > > > You can find most of the fixes for possible failure in this topic: > > > - > > > https://review.opendev.org/#/q/topic:migrate-to-focal+(status:open+OR+status:merged) > > > > > > -gmann > > > > We're in a bit of a pickle here. So with kuryr-kubernetes we aim to > > keep lower-constraints on the versions that can be found in > > CentOS/RHEL8 and seems like cffi 1.11.5 won't compile with Python 3.8. > > What should we do here? Is such assumption even possible given broader > > OpenStack assumptions? > > > > > > > > > > > > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > > > > > > > We have three blocking open bugs here so I would like to discuss > > > it in tomorrow's TC office hour also about how to proceed on this. > > > > > > > > 1. Nova: https://bugs.launchpad.net/nova/+bug/1882521 ( > > > https://bugs.launchpad.net/qemu/+bug/1894804) > > > > 2. Barbican: https://storyboard.openstack.org/#!/story/2007732 > > > > 3. Ceilometer: https://storyboard.openstack.org/#!/story/2008121 > > > > > > > > > > > > -gmann > > > > > > > > > > > > ---- On Mon, 07 Sep 2020 09:29:40 -0500 Ghanshyam Mann < > > > gmann at ghanshyammann.com> wrote ---- > > > > > Hello Everyone, > > > > > > > > > > Please find the week R-4 updates on 'Ubuntu Focal migration' > > > community goal. Its time to force the base jobs migration which can > > > > > break the projects gate if not yet taken care of. Read below > > > for the plan. > > > > > > > > > > Tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > > > > > > > Progress: > > > > > ======= > > > > > * We are close to V-3 release and this is time we have to > > > complete this migration otherwise doing it in RC period can add > > > > > unnecessary and last min delay. I am going to plan this > > > migration in two-part. This will surely break some projects gate > > > > > which is not yet finished the migration but we have to do at > > > some time. Please let me know if any objection to the below > > > > > plan. > > > > > > > > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > > > > > > > > > ** I am going to open tox base jobs migration (doc, unit, > > > functional, lower-constraints etc) to merge by tomorrow. which is > > > this > > > > > series (all base patches of this): > > > https://review.opendev.org/#/c/738328/ . > > > > > > > > > > **There are few repos still failing on requirements lower- > > > constraints job specifically which I tried my best to fix as many as > > > possible. > > > > > Many are ready to merge also. Please merge or work on your > > > projects repo testing before that or fix on priority if failing. > > > > > > > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > > > > > > > > > * We have few open bugs for this which are not yet resolved, we > > > will see how it goes but the current plan is to migrate by 10th Sept. 
> > > > > > > > > > ** Bug#1882521 > > > > > ** DB migration issues, > > > > > *** alembic and few on telemetry/gnocchi side > > > https://github.com/sqlalchemy/alembic/issues/699, > > > https://storyboard.openstack.org/#!/story/2008003 > > > > > > > > > > > > > > > Testing Till now: > > > > > ============ > > > > > * ~200 repos gate have been tested or fixed till now. > > > > > ** > > > https://review.opendev.org/#/q/topic:migrate-to-focal+(status:abandoned+OR+status:merged) > > > > > > > > > > * ~100 repos are under test and failing. Debugging and fixing > > > are in progress (If you would like to help, please check your > > > > > project repos if I am late to fix them): > > > > > ** > > > https://review.opendev.org/#/q/topic:migrate-to-focal+status:open > > > > > > > > > > * ~30repos fixes ready to merge: > > > > > ** > > > https://review.opendev.org/#/q/topic:migrate-to-focal+status:open+label%3AVerified%3E%3D1%2Czuul+NOT+label%3AWorkflow%3C%3D-1 > > > > > > > > > > > > > > > Bugs Report: > > > > > ========== > > > > > > > > > > 1. Bug#1882521. (IN-PROGRESS) > > > > > There is open bug for nova/cinder where three tempest tests are > > > failing for > > > > > volume detach operation. There is no clear root cause found yet > > > > > -https://bugs.launchpad.net/cinder/+bug/1882521 > > > > > We have skipped the tests in tempest base patch to proceed with > > > the other > > > > > projects testing but this is blocking things for the migration. > > > > > > > > > > 2. DB migration issues (IN-PROGRESS) > > > > > * alembic and few on telemetry/gnocchi side > > > https://github.com/sqlalchemy/alembic/issues/699, > > > https://storyboard.openstack.org/#!/story/2008003 > > > > > > > > > > 3. We encountered the nodeset name conflict with x/tobiko. > > > (FIXED) > > > > > nodeset conflict is resolved now and devstack provides all > > > focal nodes now. > > > > > > > > > > 4. Bug#1886296. (IN-PROGRESS) > > > > > pyflakes till 2.1.0 is not compatible with python 3.8 which is > > > the default python version > > > > > on ubuntu focal[1]. With pep8 job running on focal faces the > > > issue and fail. We need to bump > > > > > the pyflakes to 2.1.1 as min version to run pep8 jobs on py3.8. > > > > > As of now, many projects are using old hacking version so I am > > > explicitly adding pyflakes>=2.1.1 > > > > > on the project side[2] but for the long term easy maintenance, > > > I am doing it in 'hacking' requirements.txt[3] > > > > > nd will release a new hacking version. After that project can > > > move to new hacking and do not need > > > > > to maintain pyflakes version compatibility. > > > > > > > > > > 5. Bug#1886298. (IN-PROGRESS) > > > > > 'Markupsafe' 1.0 is not compatible with the latest version of > > > setuptools[4], > > > > > We need to bump the lower-constraint for Markupsafe to 1.1.1 to > > > make it work. > > > > > There are a few more issues[5] with lower-constraint jobs which > > > I am debugging. > > > > > > > > > > > > > > > What work to be done on the project side: > > > > > ================================ > > > > > This goal is more of testing the jobs on focal and fixing bugs > > > if any otherwise > > > > > migrate jobs by switching the nodeset to focal node sets > > > defined in devstack. > > > > > > > > > > 1. 
Start a patch in your repo by making depends-on on either of > > > below: > > > > > devstack base patch if you are using only devstack base jobs > > > not tempest: > > > > > > > > > > Depends-on: https://review.opendev.org/#/c/731207/ > > > > > OR > > > > > tempest base patch if you are using the tempest base job (like > > > devstack-tempest): > > > > > Depends-on: https://review.opendev.org/#/c/734700/ > > > > > > > > > > Both have depends-on on the series where I am moving > > > unit/functional/doc/cover/nodejs tox jobs to focal. So > > > > > you can test the complete gate > > > jobs(unit/functional/doc/integration) together. > > > > > This and its base patches - > > > https://review.opendev.org/#/c/738328/ > > > > > > > > > > Example: https://review.opendev.org/#/c/738126/ > > > > > > > > > > 2. If none of your project jobs override the nodeset then above > > > patch will be > > > > > testing patch(do not merge) otherwise change the nodeset to > > > focal. > > > > > Example: https://review.opendev.org/#/c/737370/ > > > > > > > > > > 3. If the jobs are defined in branchless repo and override the > > > nodeset then you need to override the branches > > > > > variant to adjust the nodeset so that those jobs run on Focal > > > on victoria onwards only. If no nodeset > > > > > is overridden then devstack being branched and stable base job > > > using bionic/xenial will take care of > > > > > this. > > > > > Example: https://review.opendev.org/#/c/744056/2 > > > > > > > > > > 4. If no updates need you can abandon the testing patch ( > > > https://review.opendev.org/#/c/744341/). If it need > > > > > updates then modify the same patch with proper commit msg, once > > > it pass the gate then remove the Depends-On > > > > > so that you can merge your patch before base jobs are switched > > > to focal. This way we make sure no gate downtime in > > > > > this migration. > > > > > Example: https://review.opendev.org/#/c/744056/1..2//COMMIT_MSG > > > > > > > > > > Once we finish the testing on projects side and no failure then > > > we will merge the devstack and tempest > > > > > base patches. > > > > > > > > > > > > > > > Important things to note: > > > > > =================== > > > > > * Do not forgot to add the story and task link to your patch so > > > that we can track it smoothly. > > > > > * Use gerrit topic 'migrate-to-focal' > > > > > * Do not backport any of the patches. 
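To make step 1 above concrete: the do-not-merge testing patch can be just an empty commit in the project repo that carries the Depends-On footer and the gerrit topic mentioned in the notes. A minimal sketch, with a made-up branch name and commit subject; the review URL is the tempest base patch quoted in step 1:

# hypothetical do-not-merge patch to exercise the project gate on Focal
git checkout -b dnm-test-focal
# the second -m adds the footer as its own paragraph so zuul picks up the cross-repo dependency
git commit --allow-empty -m "DNM: test gate jobs on Ubuntu Focal" -m "Depends-On: https://review.opendev.org/#/c/734700/"
# -t sets the gerrit topic suggested above
git review -t migrate-to-focal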
> > > > > > > > > > > > > > > References: > > > > > ========= > > > > > Goal doc: > > > https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal.html > > > > > Storyboard tracking: > > > https://storyboard.openstack.org/#!/story/2007865 > > > > > > > > > > [1] https://github.com/PyCQA/pyflakes/issues/367 > > > > > [2] https://review.opendev.org/#/c/739315/ > > > > > [3] https://review.opendev.org/#/c/739334/ > > > > > [4] https://github.com/pallets/markupsafe/issues/116 > > > > > [5] > > > https://zuul.opendev.org/t/openstack/build/7ecd9cf100194bc99b3b70fa1e6de032 > > > > > > > > > > -gmann > > > > > > > > > > > > > > > > > > > > > > > > > > > > From sean.mcginnis at gmx.com Fri Sep 11 12:43:24 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 11 Sep 2020 07:43:24 -0500 Subject: [cinder] propose Lucio Seki for cinder core In-Reply-To: <2f42f092-5037-5f9d-48d6-cc52097f2479@gmail.com> References: <2f42f092-5037-5f9d-48d6-cc52097f2479@gmail.com> Message-ID: > Lucio Seki (lseki on IRC) has been very active this cycle doing > reviews, answering questions in IRC, and participating in the Cinder > weekly meetings and at the midcycles.  He's been particularly thorough > and helpful in his reviews of backend drivers, and has also been > helpful in giving pointers to new driver maintainers who are setting > up third party CI for their drivers.  Having Lucio as a core reviewer > will help improve the team's review bandwidth without sacrificing > review quality. > > In the absence of objections, I'll add Lucio to the core team just > before the next Cinder team meeting (Wednesday, 16 September at 1400 > UTC in #openstack-meeting-alt).  Please communicate any concerns to me > before that time. > > cheers, > brian > +1 From rajatdhasmana at gmail.com Fri Sep 11 12:58:22 2020 From: rajatdhasmana at gmail.com (Rajat Dhasmana) Date: Fri, 11 Sep 2020 18:28:22 +0530 Subject: [cinder] propose Lucio Seki for cinder core In-Reply-To: References: <2f42f092-5037-5f9d-48d6-cc52097f2479@gmail.com> Message-ID: +1 On Fri, Sep 11, 2020, 6:17 PM Sean McGinnis wrote: > > Lucio Seki (lseki on IRC) has been very active this cycle doing > > reviews, answering questions in IRC, and participating in the Cinder > > weekly meetings and at the midcycles. He's been particularly thorough > > and helpful in his reviews of backend drivers, and has also been > > helpful in giving pointers to new driver maintainers who are setting > > up third party CI for their drivers. Having Lucio as a core reviewer > > will help improve the team's review bandwidth without sacrificing > > review quality. > > > > In the absence of objections, I'll add Lucio to the core team just > > before the next Cinder team meeting (Wednesday, 16 September at 1400 > > UTC in #openstack-meeting-alt). Please communicate any concerns to me > > before that time. > > > > cheers, > > brian > > > +1 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Fri Sep 11 13:03:33 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 11 Sep 2020 08:03:33 -0500 Subject: [release] Release countdown for week R-4 Sept 14 - 18 Message-ID: <20200911130333.GB1087594@sm-workstation> Development Focus ----------------- We just passed feature freeze! Until release branches are cut, you should stop accepting featureful changes to deliverables following the cycle-with-rc release model, or to libraries. 
Exceptions should be discussed on separate threads on the mailing-list, and feature freeze exceptions approved by the team's PTL. Focus should be on finding and fixing release-critical bugs, so that release candidates and final versions of the victoria deliverables can be proposed, well ahead of the final victoria release date. General Information ------------------- We are still finishing up processing a few release requests, but the victoria release requirements are now frozen. If new library releases are needed to fix release-critical bugs in victoria, you must request a Requirements Freeze Exception (RFE) from the requirements team before we can do a new release to avoid having something released in victoria that is not actually usable. This is done by posting to the openstack-discuss mailing list with a subject line similar to: [$PROJECT][requirements] RFE requested for $PROJECT_LIB Include justification/reasoning for why a RFE is needed for this lib. If/when the requirements team OKs the post-freeze update, we can then process a new release. A soft String freeze is now in effect, in order to let the I18N team do the translation work in good conditions. In Horizon and the various dashboard plugins, you should stop accepting changes that modify user-visible strings. Exceptions should be discussed on the mailing-list. By September 24 this will become a hard string freeze, with no changes in user-visible strings allowed. Actions ------- stable/victoria branches should be created soon for all not-already-branched libraries. You should expect 2-3 changes to be proposed for each: a .gitreview update, a reno update (skipped for projects not using reno), and a tox.ini constraints URL update. Please review those in priority so that the branch can be functional ASAP. The Prelude section of reno release notes is rendered as the top level overview for the release. Any important overall messaging for victoria changes should be added there to make sure the consumers of your release notes see them. Finally, if you haven't proposed victoria cycle-highlights yet, you are already late to the party. Please see http://lists.openstack.org/pipermail/openstack-discuss/2020-September/016941.html for details. Upcoming Deadlines & Dates -------------------------- RC1 deadline: September 24 (R-3) Final Victoria release: October 14 Open Infra Summit: October 19-23 Wallaby PTG: October 26-30 From gmann at ghanshyammann.com Fri Sep 11 13:08:39 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 11 Sep 2020 08:08:39 -0500 Subject: [oslo][release][requirement] FFE request for Oslo lib In-Reply-To: <1747608f93e.d7ec085f28282.4985395579846058200@ghanshyammann.com> References: <1746e64d702.ee80b0bc1249.5426348472779199647@ghanshyammann.com> <20200909192543.b2d2ksruoqtbgcfy@mthode.org> <174746eb120.1229d07d224552.356509349559116522@ghanshyammann.com> <1747608f93e.d7ec085f28282.4985395579846058200@ghanshyammann.com> Message-ID: <1747d4839a4.10505a96f108873.8961642381266139796@ghanshyammann.com> ---- On Wed, 09 Sep 2020 22:22:13 -0500 Ghanshyam Mann wrote ---- > ---- On Wed, 09 Sep 2020 14:54:05 -0500 Ghanshyam Mann wrote ---- > > > > ---- On Wed, 09 Sep 2020 14:25:43 -0500 Matthew Thode wrote ---- > > > On 20-09-09 12:04:51, Ben Nemec wrote: > > > > > > > > On 9/8/20 10:45 AM, Ghanshyam Mann wrote: > > > > > Hello Team, > > > > > > > > > > This is regarding FFE for Focal migration work. 
As planned, we have to move the Victoria testing to Focal and > > > > > base job switch is planned to be switched by today[1]. > > > > > > > > > > There are few oslo lib need work (especially tox job-based testing not user-facing changes) to pass on Focal > > > > > - https://review.opendev.org/#/q/topic:migrate-to-focal-oslo+(status:open+OR+status:merged) > > > > > > > > > > If we move the base tox jobs to Focal then these lib victoria gates (especially lower-constraint job) will be failing. > > > > > We can either do these as FFE or backport (as this is lib own CI fixes only) later once the victoria branch is open. > > > > > Opinion? > > > > > > > > As I noted in the meeting, if we have to do this to keep the gates working > > > > then I'd rather do it as an FFE than have to backport all of the relevant > > > > patches. IMHO we should only decline this FFE if we are going to also change > > > > our statement of support for Python/Ubuntu in Victoria. > > > > > > > > > > > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017060.html > > > > > > > > > > -gmann > > > > > > > > > > > > > > > > > > > > https://review.opendev.org/#/c/750089 seems like the only functional > > > change. It has my ACK with my requirements hat on. > > NOTE: There is one py3.8 bug fix also merged in oslo.uitls which is not yet released. This made py3.8 job voting in oslo.utils gate. > - https://review.opendev.org/#/c/750216/ > > Rest all l-c bump are now passing on Focal > - https://review.opendev.org/#/q/topic:migrate-to-focal-oslo+(status:open+OR+status:merged) All the patches are merged now, requesting to reconsider this FEE so that we can avoid more delay in this. -gmann > > -gmann > > > > > yeah, and this one changing one test with #noqa - https://review.opendev.org/#/c/744323/5 > > The rest all are l-c bump. > > > > Also all the tox base jobs are migrated to Focal now. > > - http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017136.html > > > > > > > > -- > > > Matthew Thode > > > > > > > > From senrique at redhat.com Fri Sep 11 13:11:38 2020 From: senrique at redhat.com (Sofia Enriquez) Date: Fri, 11 Sep 2020 10:11:38 -0300 Subject: [cinder] propose Lucio Seki for cinder core In-Reply-To: <2f42f092-5037-5f9d-48d6-cc52097f2479@gmail.com> References: <2f42f092-5037-5f9d-48d6-cc52097f2479@gmail.com> Message-ID: Congratulations Lucio 🍻 On Thu, Sep 10, 2020 at 11:03 PM Brian Rosmaita wrote: > Lucio Seki (lseki on IRC) has been very active this cycle doing reviews, > answering questions in IRC, and participating in the Cinder weekly > meetings and at the midcycles. He's been particularly thorough and > helpful in his reviews of backend drivers, and has also been helpful in > giving pointers to new driver maintainers who are setting up third party > CI for their drivers. Having Lucio as a core reviewer will help improve > the team's review bandwidth without sacrificing review quality. > > In the absence of objections, I'll add Lucio to the core team just > before the next Cinder team meeting (Wednesday, 16 September at 1400 UTC > in #openstack-meeting-alt). Please communicate any concerns to me > before that time. > > cheers, > brian > > -- L. Sofía Enriquez she/her Associate Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From melwittt at gmail.com Fri Sep 11 14:36:10 2020 From: melwittt at gmail.com (melanie witt) Date: Fri, 11 Sep 2020 07:36:10 -0700 Subject: "openstack server list" takes 30s In-Reply-To: References: Message-ID: <36dac474-5d59-1756-ba80-32562b345e2b@gmail.com> On 9/10/20 21:15, Tony Liu wrote: > Hi, > > I built a Ussuri cluster with 3 controllers and 5 compute nodes. > OpenStack CLI ran pretty fast at the beginning, but gets slower > over time along with increased workloads. Right now, it takes > about 30s to list 10 VMs. The CPU, memory and disk usage are on > those 3 controllers are all in the range. I understand there are > many API calls happening behind CLI. I'd like to figure out how > this 30s is consumed, which call is the killer. > Any guidance or hint would be helpful. To see the individual calls made by OSC and troubleshoot the reason for the slowness, you can use the --debug option: $ openstack --debug server list Besides that, there is something that I know of that can be slow: the flavor and image name lookup. This happens if you have a Large Number of flavors and/or images. There are a couple of options you can use to make this faster: $ openstack server list --name-lookup-one-by-one This will make OSC lookup flavors and images only once per unique flavor/image in the list and uses the cached value for subsequent appearances in the list [1]. I think this should have been made the default behavior when it was introduced but it was nacked and accepted only as opt-in. The other option skips name lookup altogether and shows UUIDs only instead of names [2]: $ openstack server list --no-name-lookup Hope this helps, -melanie [1] https://docs.openstack.org/python-openstackclient/ussuri/cli/command-objects/server.html#cmdoption-openstack-server-list-name-lookup-one-by-one [2] https://docs.openstack.org/python-openstackclient/ussuri/cli/command-objects/server.html#cmdoption-openstack-server-list-n From eblock at nde.ag Fri Sep 11 14:58:42 2020 From: eblock at nde.ag (Eugen Block) Date: Fri, 11 Sep 2020 14:58:42 +0000 Subject: "openstack server list" takes 30s In-Reply-To: <36dac474-5d59-1756-ba80-32562b345e2b@gmail.com> References: <36dac474-5d59-1756-ba80-32562b345e2b@gmail.com> Message-ID: <20200911145842.Horde.i6jVJiPIiSx8MrwK2wsZPJX@webmail.nde.ag> Hi, we had this a couple of years ago in our Ocata cloud, in our case memcached was not configured correctly. Since you might be experiencing another memcached issue according to your other thread [1] it could be worth double-checking your memcache configuration. Regards, Eugen [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017196.html Zitat von melanie witt : > On 9/10/20 21:15, Tony Liu wrote: >> Hi, >> >> I built a Ussuri cluster with 3 controllers and 5 compute nodes. >> OpenStack CLI ran pretty fast at the beginning, but gets slower >> over time along with increased workloads. Right now, it takes >> about 30s to list 10 VMs. The CPU, memory and disk usage are on >> those 3 controllers are all in the range. I understand there are >> many API calls happening behind CLI. I'd like to figure out how >> this 30s is consumed, which call is the killer. >> Any guidance or hint would be helpful. > > To see the individual calls made by OSC and troubleshoot the reason > for the slowness, you can use the --debug option: > > $ openstack --debug server list > > Besides that, there is something that I know of that can be slow: > the flavor and image name lookup. 
This happens if you have a Large > Number of flavors and/or images. There are a couple of options you > can use to make this faster: > > $ openstack server list --name-lookup-one-by-one > > This will make OSC lookup flavors and images only once per unique > flavor/image in the list and uses the cached value for subsequent > appearances in the list [1]. I think this should have been made the > default behavior when it was introduced but it was nacked and > accepted only as opt-in. > > The other option skips name lookup altogether and shows UUIDs only > instead of names [2]: > > $ openstack server list --no-name-lookup > > Hope this helps, > -melanie > > [1] > https://docs.openstack.org/python-openstackclient/ussuri/cli/command-objects/server.html#cmdoption-openstack-server-list-name-lookup-one-by-one > [2] > https://docs.openstack.org/python-openstackclient/ussuri/cli/command-objects/server.html#cmdoption-openstack-server-list-n From gouthampravi at gmail.com Fri Sep 11 15:39:52 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Fri, 11 Sep 2020 08:39:52 -0700 Subject: [cinder] propose Lucio Seki for cinder core In-Reply-To: <2f42f092-5037-5f9d-48d6-cc52097f2479@gmail.com> References: <2f42f092-5037-5f9d-48d6-cc52097f2479@gmail.com> Message-ID: A +0.02 from an occasional lurker... Great job Lucio! On Thu, Sep 10, 2020 at 6:57 PM Brian Rosmaita wrote: > Lucio Seki (lseki on IRC) has been very active this cycle doing reviews, > answering questions in IRC, and participating in the Cinder weekly > meetings and at the midcycles. He's been particularly thorough and > helpful in his reviews of backend drivers, and has also been helpful in > giving pointers to new driver maintainers who are setting up third party > CI for their drivers. Having Lucio as a core reviewer will help improve > the team's review bandwidth without sacrificing review quality. > > In the absence of objections, I'll add Lucio to the core team just > before the next Cinder team meeting (Wednesday, 16 September at 1400 UTC > in #openstack-meeting-alt). Please communicate any concerns to me > before that time. > > cheers, > brian > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gouthampravi at gmail.com Fri Sep 11 15:56:55 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Fri, 11 Sep 2020 08:56:55 -0700 Subject: [manila][release] Some feature patches will merge today Message-ID: Hello Stackers, As is unusual, we have found ourselves with a few unmerged feature patches at the wrong side of the feature freeze: [Container driver] Adds share and share server migration: https://review.opendev.org/740831/ [NetApp] Enables configuring NFS transfer limits: https://review.opendev.org/746568 [NetApp] Add support for share server migration: https://review.opendev.org/747048/ [NetApp] Adding support for Adaptive QoS in NetApp driver with dhss false: https://review.opendev.org/740532/ These patches have been getting review attention late in the cycle, and were caught up in CI failures and code collisions. The code changes above are isolated within driver modules in the code that are explicitly configured/optionally enabled by deployers. These changes do not: - introduce any new library dependencies to manila, or - modify the API, DB, RPC, driver interface layers - introduce new requirements into the client library (python-manilaclient) They do introduce translatable exception strings - however, manila has not received translation updates in the past. 
We're looking to get these merged today. Hope that's okay - do let me know of any gotchas. -- Goutham Pacha Ravi -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Fri Sep 11 16:13:45 2020 From: jungleboyj at gmail.com (Jay Bryant) Date: Fri, 11 Sep 2020 11:13:45 -0500 Subject: [cinder] propose Lucio Seki for cinder core In-Reply-To: <2f42f092-5037-5f9d-48d6-cc52097f2479@gmail.com> References: <2f42f092-5037-5f9d-48d6-cc52097f2479@gmail.com> Message-ID: <46054156-fe1c-0b16-8a27-cb6495da986a@gmail.com> +2 from me.  Appreciate the energy and effort the Lucio has brought to the team lately! On 9/10/2020 8:51 PM, Brian Rosmaita wrote: > Lucio Seki (lseki on IRC) has been very active this cycle doing > reviews, answering questions in IRC, and participating in the Cinder > weekly meetings and at the midcycles.  He's been particularly thorough > and helpful in his reviews of backend drivers, and has also been > helpful in giving pointers to new driver maintainers who are setting > up third party CI for their drivers.  Having Lucio as a core reviewer > will help improve the team's review bandwidth without sacrificing > review quality. > > In the absence of objections, I'll add Lucio to the core team just > before the next Cinder team meeting (Wednesday, 16 September at 1400 > UTC in #openstack-meeting-alt).  Please communicate any concerns to me > before that time. > > cheers, > brian > From kennelson11 at gmail.com Fri Sep 11 16:40:29 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 11 Sep 2020 09:40:29 -0700 Subject: [TC][PTG] Virtual PTG Planning In-Reply-To: References: Message-ID: Reminder! We need your availability responses ASAP because the deadline to sign up for the PTG is today! -Kendall (diablo_rojo) On Fri, Sep 4, 2020 at 10:58 AM Kendall Nelson wrote: > Hello! > > So as you might have seen, the deadline to sign up for PTG time by the end > of next week. To coordinate our time to meet as the TC, please fill out the > poll[1] that Mohammed kindly put together for us. *We need responses by > EOD Thursday September 10th* so that we can book the time in the > ethercalc and fill out the survey to reserve our space before the deadline. > > Also, I created this planning etherpad [2] to start collecting ideas for > discussion topics! > > Can't wait to see you all there! > > -Kendall & Mohammed > > [1] https://doodle.com/poll/hkbg44da2udxging > [2] https://etherpad.opendev.org/p/tc-wallaby-ptg > -------------- next part -------------- An HTML attachment was scrubbed... URL: From t.schulze at tu-berlin.de Fri Sep 11 17:08:11 2020 From: t.schulze at tu-berlin.de (thoralf schulze) Date: Fri, 11 Sep 2020 19:08:11 +0200 Subject: [sdk] openstacksdk vs. server groups Message-ID: hi there, the current openstack-sdk crashes while trying to process instances that are part of a server group. to give an example: one of our instances shows up in the output of "openstack-inventory --list --debug" as […] REQ: curl -g -i -X GET https://redacted:8774/v2.1/servers/352307ba-0c06-43e5-ba21-467ec25a4a2e -H "OpenStack-API-Version: compute 2.72" -H "User-Agent: openstacksdk/0.49.0 keystoneauth1/4.2.1 python-requests/2. 
22.0 CPython/3.8.2" -H "X-Auth-Token: redacted" -H "X-OpenStack-Nova-API-Version: 2.72" RESP: [200] Connection: Keep-Alive Content-Length: 2024 Content-Type: application/json Date: Fri, 11 Sep 2020 15:22:14 GMT Keep-Alive: timeout=5, max=92 OpenStack-API-Version: compute 2.72 Server: Apache/2.4.29 (Ubuntu) Vary: Op enStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.72 x-compute-request-id: req-54078034-3b9e-42bc-afeb-c9c2288db568 x-openstack-request-id: req-54078034-3b9e-42bc-afeb-c9c2288db568 RESP BODY: {"server": {"id": "352307ba-0c06-43e5-ba21-467ec25a4a2e", "name": "dss-test-aaai2hq3gmjc-node-0", "status": "ACTIVE", "tenant_id": "56c37bc705364d34a5f423c531d9e1a7", "user_id": "120258b9dbc7831140ed30b57366fe91feaa77608c7a118da9cf0f6288f1886b", "metadata": {}, "hostId": "16e275df317b4699aa6fdb52b437295c4e3beb83b5e4d15276314660", "image": {"id": "41bb79e0-0962-4162-b9c5-ff178ab61218", "links": [{"rel": "bookmark", "href": "https://redacted/images/41bb79e0-0962-4162-b9c5-ff178ab61218"}]}, "flavor": {"vcpus": 8, "ram": 16384, "disk": 100, "ephemeral": 0, "swap": 0, "original_name": "tx.medium", "extra_specs": {"hw:mem_page_size": "large", "hw:watchdog_action": "reset", "hw_rng:allowed": "true", "hw_rng:rate_bytes": "24", "hw_rng:rate_period": "5000", "trait:CUSTOM_DC_CLASS_TEST": "required"}}, "created": "2020-07-10T07:09:48Z", "updated": "2020-07-10T07:09:59Z", "addresses": {"dss-test": [{"version": 4, "addr": "10.0.0.42", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:62:3e:69"}, {"version": 4, "addr": "10.176.1.154", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:62:3e:69"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://redacted/v2.1/servers/352307ba-0c06-43e5-ba21-467ec25a4a2e"}, {"rel": "bookmark", "href": "https://redacted/servers/352307ba-0c06-43e5-ba21-467ec25a4a2e"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "BARZ", "config_drive": "", "key_name": null, "OS-SRV-USG:launched_at": "2020-07-10T07:09:58.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "dss-test-aaai2hq3gmjc-secgroup_kube_minion-cjco6vmpgs6j"}], "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": [], "locked": false, "description": null, "tags": [], "trusted_image_certificates": null, "server_groups": ["2b9dfdd8-0a5e-45ec-af07-1a0437a7f61e"]}} due to the value of "server_groups" not being a dict, openstack/resource.py crashes while trying to process this instance: File "/home/ubuntu/.local/lib/python3.8/site-packages/openstack/resource.py", line 82, in _convert_type return data_type(value) ValueError: dictionary update sequence element #0 has length 1; 2 is required is server_groups really supposed to be just a list of uuids? having it dealt with in a manner akin to security_groups seems to be an obvious choice … thank you very much & with kind regards, t. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From artem.goncharov at gmail.com Fri Sep 11 17:23:07 2020 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Fri, 11 Sep 2020 19:23:07 +0200 Subject: [sdk] openstacksdk vs. 
server groups In-Reply-To: References: Message-ID: <8883AE9B-B1AB-4192-8CE7-B78BE70F41BF@gmail.com> Hi Thoralf, This issue is already addressed (see https://review.opendev.org/#/c/749381/ ) and will be fixed in next release of SDK (pretty soon). Regards, Artem > On 11. Sep 2020, at 19:08, thoralf schulze wrote: > > hi there, > > the current openstack-sdk crashes while trying to process instances that > are part of a server group. to give an example: > one of our instances shows up in the output of "openstack-inventory > --list --debug" as > > […] > REQ: curl -g -i -X GET > https://redacted:8774/v2.1/servers/352307ba-0c06-43e5-ba21-467ec25a4a2e > -H "OpenStack-API-Version: compute 2.72" -H "User-Agent: > openstacksdk/0.49.0 keystoneauth1/4.2.1 python-requests/2. > 22.0 CPython/3.8.2" -H "X-Auth-Token: redacted" -H > "X-OpenStack-Nova-API-Version: 2.72" > RESP: [200] Connection: Keep-Alive Content-Length: 2024 Content-Type: > application/json Date: Fri, 11 Sep 2020 15:22:14 GMT Keep-Alive: > timeout=5, max=92 OpenStack-API-Version: compute 2.72 Server: > Apache/2.4.29 (Ubuntu) Vary: Op > enStack-API-Version,X-OpenStack-Nova-API-Version > X-OpenStack-Nova-API-Version: 2.72 x-compute-request-id: > req-54078034-3b9e-42bc-afeb-c9c2288db568 x-openstack-request-id: > req-54078034-3b9e-42bc-afeb-c9c2288db568 > RESP BODY: {"server": {"id": "352307ba-0c06-43e5-ba21-467ec25a4a2e", > "name": "dss-test-aaai2hq3gmjc-node-0", "status": "ACTIVE", "tenant_id": > "56c37bc705364d34a5f423c531d9e1a7", "user_id": > "120258b9dbc7831140ed30b57366fe91feaa77608c7a118da9cf0f6288f1886b", > "metadata": {}, "hostId": > "16e275df317b4699aa6fdb52b437295c4e3beb83b5e4d15276314660", "image": > {"id": "41bb79e0-0962-4162-b9c5-ff178ab61218", "links": [{"rel": > "bookmark", "href": > "https://redacted/images/41bb79e0-0962-4162-b9c5-ff178ab61218"}]}, > "flavor": {"vcpus": 8, "ram": 16384, "disk": 100, "ephemeral": 0, > "swap": 0, "original_name": "tx.medium", "extra_specs": > {"hw:mem_page_size": "large", "hw:watchdog_action": "reset", > "hw_rng:allowed": "true", "hw_rng:rate_bytes": "24", > "hw_rng:rate_period": "5000", "trait:CUSTOM_DC_CLASS_TEST": > "required"}}, "created": "2020-07-10T07:09:48Z", "updated": > "2020-07-10T07:09:59Z", "addresses": {"dss-test": [{"version": 4, > "addr": "10.0.0.42", "OS-EXT-IPS:type": "fixed", > "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:62:3e:69"}, {"version": 4, "addr": > "10.176.1.154", "OS-EXT-IPS:type": "floating", > "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:62:3e:69"}]}, "accessIPv4": "", > "accessIPv6": "", "links": [{"rel": "self", "href": > "https://redacted/v2.1/servers/352307ba-0c06-43e5-ba21-467ec25a4a2e"}, > {"rel": "bookmark", "href": > "https://redacted/servers/352307ba-0c06-43e5-ba21-467ec25a4a2e"}], > "OS-DCF:diskConfig": "MANUAL", "progress": 0, > "OS-EXT-AZ:availability_zone": "BARZ", "config_drive": "", "key_name": > null, "OS-SRV-USG:launched_at": "2020-07-10T07:09:58.000000", > "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": > "dss-test-aaai2hq3gmjc-secgroup_kube_minion-cjco6vmpgs6j"}], > "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", > "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": [], > "locked": false, "description": null, "tags": [], > "trusted_image_certificates": null, "server_groups": > ["2b9dfdd8-0a5e-45ec-af07-1a0437a7f61e"]}} > > due to the value of "server_groups" not being a dict, > openstack/resource.py crashes while trying to process this instance: > > File > 
"/home/ubuntu/.local/lib/python3.8/site-packages/openstack/resource.py", > line 82, in _convert_type > return data_type(value) > ValueError: dictionary update sequence element #0 has length 1; 2 is > required > > is server_groups really supposed to be just a list of uuids? having it > dealt with in a manner akin to security_groups seems to be an obvious > choice … > > thank you very much & with kind regards, > t. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Sep 11 17:47:05 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 11 Sep 2020 17:47:05 +0000 Subject: [all][tc][goals] Migrate CI/CD jobs to new Ubuntu LTS Focal: Week R-4 Update In-Reply-To: <447841d8d8560f96475cb0a275e34464ece6352b.camel@redhat.com> References: <17468f8f785.1123a07e6307801.3844845549431858647@ghanshyammann.com> <1746feef403.c41c848411058.2421030943223377212@ghanshyammann.com> <1747442027f.bb3f3d1823475.212800600164097649@ghanshyammann.com> <447841d8d8560f96475cb0a275e34464ece6352b.camel@redhat.com> Message-ID: <20200911174705.eu44wamtoeybsikw@yuggoth.org> On 2020-09-11 13:50:57 +0200 (+0200), Michał Dulko wrote: [...] > We're in a bit of a pickle here. So with kuryr-kubernetes we aim > to keep lower-constraints on the versions that can be found in > CentOS/RHEL8 and seems like cffi 1.11.5 won't compile with Python > 3.8. What should we do here? Is such assumption even possible > given broader OpenStack assumptions? [...] To rehash and summarize my opinion from our lengthy IRC discussion in the #openstack-infra channel yesterday, I agree that it makes sense to run lower-constraints jobs on the lowest Python interpreter release listed in our tested runtimes for that cycle. However, you also have to take into account the platform(s) which cause that interpreter version to be included. Basically if we're going to standardize Python 3.6 jobs of any kind in Victoria those should be running on CentOS 8 (which is in our tested runtimes for Victoria and why we included Python 3.6 at all), *not* on Ubuntu Bionic (which is not included in our tested runtimes for Victoria). Otherwise we're stuck supporting ubuntu-bionic images for testing stable/victoria well into the future even though we didn't technically list it as a target platform. This gets more complicated though. We've, as a community overall, tried repeatedly to pretend we're not testing specific platforms, and instead just testing representative Python versions. Except we don't actually test with real Python versions we test with the patched Python interpreters shipped in distros. So our job names pretend they're Python version specific for the most part, when to actually meet the intent of our tested runtimes we really do need distro-specific jobs. We test Python 3.6 for stable/ussuri changes because Ubuntu Bionic is a tested runtime platform for Ussuri and that's the default Python 3 it ships, but Python 3.6 is listed in Victoria for the benefit of CentOS 8 not Ubuntu Bionic, so we need a different py36 job on each branch, running on different distros. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From pramchan at yahoo.com Fri Sep 11 18:06:09 2020 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Fri, 11 Sep 2020 18:06:09 +0000 (UTC) Subject: [InteropWG] Seeking ideas for InteropWG Branding for Open Infra References: <1398359147.1271387.1599847569117.ref@mail.yahoo.com> Message-ID: <1398359147.1271387.1599847569117@mail.yahoo.com> Hi all, Please add your ideas for discussions for Forum https://etherpad.opendev.org/p/2020-Wallaby-interop-brainstorming Since any Interop case, it's best presented by PTLs their requirments, like to invite Ironic/metal3/MaaS and Container Projects(Zun,kuryr, magnum, kolla...) to help us define the new Branding strategy for Open Infra projects like (Airship, Starlingx, Kata, Zuul, Openlabs ...) Note we see one opportunity coming out of BareMetal as a Service for Ironic routed through Metal3 which is now a CNCF Sandbox project. The other we see OpenInfa-ready-k8s-container and/or based on Zun- CRI (Docker, Containerd, Kata...), CNI (kuryr) & CSI (?) & COE as Magnum Please add your ideas and time you may need for us to go over these emerging ideas for re-imagining clusters in Open Infra with Baremetal as well VM Nodes over Private & Public clouds. ThanksFor InteropWGPrakash -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.williamson at redhat.com Thu Sep 10 18:02:44 2020 From: alex.williamson at redhat.com (Alex Williamson) Date: Thu, 10 Sep 2020 12:02:44 -0600 Subject: device compatibility interface for live migration with assigned devices In-Reply-To: <7cebcb6c8d1a1452b43e8358ee6ee18a150a0238.camel@redhat.com> References: <20200818113652.5d81a392.cohuck@redhat.com> <20200820003922.GE21172@joy-OptiPlex-7040> <20200819212234.223667b3@x1.home> <20200820031621.GA24997@joy-OptiPlex-7040> <20200825163925.1c19b0f0.cohuck@redhat.com> <20200826064117.GA22243@joy-OptiPlex-7040> <20200828154741.30cfc1a3.cohuck@redhat.com> <8f5345be73ebf4f8f7f51d6cdc9c2a0d8e0aa45e.camel@redhat.com> <20200831044344.GB13784@joy-OptiPlex-7040> <20200908164130.2fe0d106.cohuck@redhat.com> <20200909021308.GA1277@joy-OptiPlex-7040> <20200910143822.2071eca4.cohuck@redhat.com> <7cebcb6c8d1a1452b43e8358ee6ee18a150a0238.camel@redhat.com> Message-ID: <20200910120244.71e7b630@w520.home> On Thu, 10 Sep 2020 13:50:11 +0100 Sean Mooney wrote: > On Thu, 2020-09-10 at 14:38 +0200, Cornelia Huck wrote: > > On Wed, 9 Sep 2020 10:13:09 +0800 > > Yan Zhao wrote: > > > > > > > still, I'd like to put it more explicitly to make ensure it's not missed: > > > > > the reason we want to specify compatible_type as a trait and check > > > > > whether target compatible_type is the superset of source > > > > > compatible_type is for the consideration of backward compatibility. > > > > > e.g. > > > > > an old generation device may have a mdev type xxx-v4-yyy, while a newer > > > > > generation device may be of mdev type xxx-v5-yyy. > > > > > with the compatible_type traits, the old generation device is still > > > > > able to be regarded as compatible to newer generation device even their > > > > > mdev types are not equal. > > > > > > > > If you want to support migration from v4 to v5, can't the (presumably > > > > newer) driver that supports v5 simply register the v4 type as well, so > > > > that the mdev can be created as v4? (Just like QEMU versioned machine > > > > types work.) > > > > > > yes, it should work in some conditions. 
> > > but it may not be that good in some cases when v5 and v4 in the name string > > > of mdev type identify hardware generation (e.g. v4 for gen8, and v5 for > > > gen9) > > > > > > e.g. > > > (1). when src mdev type is v4 and target mdev type is v5 as > > > software does not support it initially, and v4 and v5 identify hardware > > > differences. > > > > My first hunch here is: Don't introduce types that may be compatible > > later. Either make them compatible, or make them distinct by design, > > and possibly add a different, compatible type later. > > > > > then after software upgrade, v5 is now compatible to v4, should the > > > software now downgrade mdev type from v5 to v4? > > > not sure if moving hardware generation info into a separate attribute > > > from mdev type name is better. e.g. remove v4, v5 in mdev type, while use > > > compatible_pci_ids to identify compatibility. > > > > If the generations are compatible, don't mention it in the mdev type. > > If they aren't, use distinct types, so that management software doesn't > > have to guess. At least that would be my naive approach here. > yep that is what i would prefer to see too. > > > > > > > > (2) name string of mdev type is composed by "driver_name + type_name". > > > in some devices, e.g. qat, different generations of devices are binding to > > > drivers of different names, e.g. "qat-v4", "qat-v5". > > > then though type_name is equal, mdev type is not equal. e.g. > > > "qat-v4-type1", "qat-v5-type1". > > > > I guess that shows a shortcoming of that "driver_name + type_name" > > approach? Or maybe I'm just confused. > yes i really dont like haveing the version in the mdev-type name > i would stongly perfger just qat-type-1 wehere qat is just there as a way of namespacing. > although symmetric-cryto, asymmetric-cryto and compression woudl be a better name then type-1, type-2, type-3 if > that is what they would end up mapping too. e.g. qat-compression or qat-aes is a much better name then type-1 > higher layers of software are unlikely to parse the mdev names but as a human looking at them its much eaiser to > understand if the names are meaningful. the qat prefix i think is important however to make sure that your mdev-types > dont colide with other vendeors mdev types. so i woudl encurage all vendors to prefix there mdev types with etiher the > device name or the vendor. +1 to all this, the mdev type is meant to indicate a software compatible interface, if different hardware versions can be software compatible, then don't make the job of finding a compatible device harder. The full type is a combination of the vendor driver name plus the vendor provided type name specifically in order to provide a type namespace per vendor driver. That's done at the mdev core level. Thanks, Alex From oliver.weinmann at icloud.com Fri Sep 11 07:47:45 2020 From: oliver.weinmann at icloud.com (Oliver Weinmann) Date: Fri, 11 Sep 2020 07:47:45 -0000 Subject: Ussuri CentOS 8 add mptsas driver to introspection initramfs Message-ID: <55c5b908-3d0e-4d92-8f8f-95443fbefb9f@me.com> Hi, I already asked this question on serverfault. But I guess here is a better place. I have a very ancient hardware with a MPTSAS controller. I use this for TripleO deployment testing. 
With the release of Ussuri which is running CentOS8, I can no longer provision my overcloud nodes as the MPTSAS driver has been removed in CentOS8: https://www.reddit.com/r/CentOS/comments/d93unk/centos8_and_removal_mpt2sas_dell_sas_drivers/ I managed to include the driver provided from ELrepo in the introspection image but It is not loaded automatically: All commands are run as user "stack". Extract the introspection image: cd ~ mkdir imagesnew cd imagesnew tar xvf ../ironic-python-agent.tar mkdir ~/ipa-tmp cd ~/ipa-tmp /usr/lib/dracut/skipcpio ~/imagesnew/ironic-python-agent.initramfs | zcat | cpio -ivd | pax -r Extract the contents of the mptsas driver rpm: rpm2cpio ~/kmod-mptsas-3.04.20-3.el8_2.elrepo.x86_64.rpm | pax -r Put the kernel module in the right places. To figure out where the module has to reside I installed the rpm on a already deployed node and used find to locate it. xz -c ./usr/lib/modules/4.18.0-193.el8.x86_64/extra/mptsas/mptsas.ko > ./usr/lib/modules/4.18.0-193.6.3.el8_2.x86_64/kernel/drivers/message/fusion/mptsas.ko.xz mkdir ./usr/lib/modules/4.18.0-193.6.3.el8_2.x86_64/weak-updates/mptsas sudo ln -sf /lib/modules/4.18.0-193.el8.x86_64/extra/mptsas/mptsas.ko lib/modules/4.18.0-193.6.3.el8_2.x86_64/weak-updates/mptsas.ko sudo chown root . -R find . 2>/dev/null | sudo cpio --quiet -c -o | gzip -8  > ~/images/ironic-python-agent.initramfs Upload the new image cd ~/images openstack overcloud image upload --update-existing --image-path /home/stack/images/ Now when I start the introspection and ssh into the host I see no disks: [root at localhost ~]# fdisk -l [root at localhost ~]# lsmod | grep mptsas Once i manually load the driver, I can see the disks: [root at localhost ~]# modprobe mptsas [root at localhost ~]# lsmod | grep mptsas mptsas                 69632  0 mptscsih               45056  1 mptsas mptbase                98304  2 mptsas,mptscsih scsi_transport_sas     45056  1 mptsas [root at localhost ~]# fdisk -l Disk /dev/sda: 67.1 GiB, 71999422464 bytes, 140623872 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes But how can I make it so that it will automatically load on boot? Best Regards, Oliver -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangerzonen at gmail.com Fri Sep 11 08:14:15 2020 From: dangerzonen at gmail.com (dangerzone ar) Date: Fri, 11 Sep 2020 16:14:15 +0800 Subject: Instance cannot ping external (Lan/Internet) Openstack AllinOne Packstack Virtualbox Message-ID: Hi Team, I have been struggling to get the solution. I'm testing openstack and deploying it over virtualbox. I have shared my issue in SO with help from berndbausch and try to share it here also to get more help....as I have create and redeploy few times with different openstack release.. deployment methods..parameters....still the same problem. I don't think my deployment is different with others as i follow the guideline and common practise... just stuck on this problem...please do help and assist me further. Thank you to all.... Below is my environment setup. Windos10_Virtualbox----Centos7------Openstack----VM instance [192.168.0.0/24] - external network/public-ip Pc host - 192.168.0.160 Home Lan GW - 192.168.0.1 Centos7 - 192.168.0.12 (VM virtualbox) Openstack Router GW - 192.168.0.221 Virtualbox VM setting bridge (enp0s3) and promiscuous mode all selinux permissive add rules icmp and ssh. 
create public-ip 192.168.0.0/24 (pool 220-230) create router create private subnet 10.0.0.0/24 attach router to private subnet create router GW(public-ip) >From LAN I can ping and ssh vm instance but from vm instance i cannot ping to home Lan GW, pc host. VM instance can ping up to centos7 virtualbox and openstack Router GW. Instance created with centos and cirros using direct public-ip and also floating ip. Some instance created with direct public ip and some with private subnet and using floating ip. I have tested with queens, stein, train. With allinone and also using answerfile and all end up I cannot ping external. I follows guideline below:- *hxxps://www.linuxtechi.com/single-node-openstack-liberty-installation-centos-7/ * *hxxps://www.rdoproject.org/install/packstack/ * This some of answer file parameters that i edited CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex CONFIG_NEUTRON_ML2_TYPE_DRIVERS=flat,vxlan CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=extnet:br-ex CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:enp0s3 CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ex CONFIG_PROVISION_DEMO=n ip netns list qrouter-f6967bba-986e-4bb3-838e-d035a684e2c4 (id: 2) qdhcp-dbd713cd-1af4-4e2c-9c57-d8a675a10608 (id: 1) qdhcp-fa6fb1d6-b65e-4eb2-a4a4-5552fde8bb08 (id: 0) [root at myospackanswer ~(keystone_admin)]# ip netns exec qrouter-f6967bba-986e-4bb3-838e-d035a684e2c4 arp -an ? (192.168.0.211) at on qg-0ba7da31-7f ? (192.168.0.227) at fa:16:3e:ed:19:81 [ether] on qg-0ba7da31-7f (Instance IP) ? (192.168.0.160) at d4:d2:52:73:de:80 [ether] on qg-0ba7da31-7f (host pc IP) ? (192.168.0.1) at 80:26:89:b2:98:50 [ether] on qg-0ba7da31-7f (home router GW) ? (10.0.0.4) at fa:16:3e:01:63:42 [ether] on qr-7e6f9436-40 (private subnet) ip r default via 192.168.0.1 dev br-ex169.254.0.0/16 dev enp0s3 scope link metric 1002169.254.0.0/16 dev br-ex scope link metric 1006192.168.0.0/24 dev br-ex proto kernel scope link src 192.168.0.121 qrouter-f6967bba-986e-4bb3-838e-d035a684e2c4 (id: 2) qdhcp-dbd713cd-1af4-4e2c-9c57-d8a675a10608 (id: 1) qdhcp-fa6fb1d6-b65e-4eb2-a4a4-5552fde8bb08 (id: 0) sudo ip netns exec qrouter-f6967bba-986e-4bb3-838e-d035a684e2c4 ip route default via 192.168.0.1 dev qg-0ba7da31-7f10.0.0.0/24 dev qr-7e6f9436-40 proto kernel scope link src 10.0.0.1192.168.0.0/24 dev qg-0ba7da31-7f proto kernel scope link src 192.168.0.221 ip addr 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enp0s3: mtu 1500 qdisc pfifo_fast master ovs-system state UP group default qlen 1000 link/ether 08:00:27:98:9b:a3 brd ff:ff:ff:ff:ff:ff inet6 fe80::a00:27ff:fe98:9ba3/64 scope link valid_lft forever preferred_lft forever 5: ovs-system: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 8a:17:8b:e5:dc:c2 brd ff:ff:ff:ff:ff:ff 6: br-ex: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/ether 08:00:27:98:9b:a3 brd ff:ff:ff:ff:ff:ff inet 192.168.0.121/24 brd 192.168.0.255 scope global br-ex valid_lft forever preferred_lft forever inet6 2001:e68:5435:d135:a00:27ff:fe98:9ba3/64 scope global mngtmpaddr dynamic valid_lft 86399sec preferred_lft 86399sec inet6 fe80::a00:27ff:fe98:9ba3/64 scope link valid_lft forever preferred_lft forever 7: br-int: mtu 1450 qdisc noop state DOWN group default qlen 1000 link/ether 32:ff:0f:26:18:43 brd ff:ff:ff:ff:ff:ff 8: br-tun: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 
d6:52:08:a9:68:4f brd ff:ff:ff:ff:ff:ff 29: qbr1f637f14-9c: mtu 1450 qdisc noqueue state UP group default qlen 1000 link/ether 42:95:e8:c0:a3:07 brd ff:ff:ff:ff:ff:ff 30: qvo1f637f14-9c at qvb1f637f14-9c: mtu 1450 qdisc noqueue master ovs-system state UP group default qlen 1000 link/ether 6e:2e:07:8d:79:86 brd ff:ff:ff:ff:ff:ff inet6 fe80::6c2e:7ff:fe8d:7986/64 scope link valid_lft forever preferred_lft forever 31: qvb1f637f14-9c at qvo1f637f14-9c: mtu 1450 qdisc noqueue master qbr1f637f14-9c state UP group default qlen 1000 link/ether 42:95:e8:c0:a3:07 brd ff:ff:ff:ff:ff:ff inet6 fe80::4095:e8ff:fec0:a307/64 scope link valid_lft forever preferred_lft forever 32: tap1f637f14-9c: mtu 1450 qdisc pfifo_fast master qbr1f637f14-9c state UNKNOWN group default qlen 1000 link/ether fe:16:3e:05:2d:50 brd ff:ff:ff:ff:ff:ff inet6 fe80::fc16:3eff:fe05:2d50/64 scope link valid_lft forever preferred_lft forever -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 1osp.jpg Type: image/jpeg Size: 25859 bytes Desc: not available URL: From kevin.tian at intel.com Fri Sep 11 10:18:01 2020 From: kevin.tian at intel.com (Tian, Kevin) Date: Fri, 11 Sep 2020 10:18:01 +0000 Subject: device compatibility interface for live migration with assigned devices In-Reply-To: <20200911120806.5cfe203c.cohuck@redhat.com> References: <20200825163925.1c19b0f0.cohuck@redhat.com> <20200826064117.GA22243@joy-OptiPlex-7040> <20200828154741.30cfc1a3.cohuck@redhat.com> <8f5345be73ebf4f8f7f51d6cdc9c2a0d8e0aa45e.camel@redhat.com> <20200831044344.GB13784@joy-OptiPlex-7040> <20200908164130.2fe0d106.cohuck@redhat.com> <20200909021308.GA1277@joy-OptiPlex-7040> <20200910143822.2071eca4.cohuck@redhat.com> <7cebcb6c8d1a1452b43e8358ee6ee18a150a0238.camel@redhat.com> <20200910120244.71e7b630@w520.home> <20200911005559.GA3932@joy-OptiPlex-7040> <20200911120806.5cfe203c.cohuck@redhat.com> Message-ID: > From: Cornelia Huck > Sent: Friday, September 11, 2020 6:08 PM > > On Fri, 11 Sep 2020 08:56:00 +0800 > Yan Zhao wrote: > > > On Thu, Sep 10, 2020 at 12:02:44PM -0600, Alex Williamson wrote: > > > On Thu, 10 Sep 2020 13:50:11 +0100 > > > Sean Mooney wrote: > > > > > > > On Thu, 2020-09-10 at 14:38 +0200, Cornelia Huck wrote: > > > > > On Wed, 9 Sep 2020 10:13:09 +0800 > > > > > Yan Zhao wrote: > > > > > > > > > > > > > still, I'd like to put it more explicitly to make ensure it's not > missed: > > > > > > > > the reason we want to specify compatible_type as a trait and > check > > > > > > > > whether target compatible_type is the superset of source > > > > > > > > compatible_type is for the consideration of backward > compatibility. > > > > > > > > e.g. > > > > > > > > an old generation device may have a mdev type xxx-v4-yyy, > while a newer > > > > > > > > generation device may be of mdev type xxx-v5-yyy. > > > > > > > > with the compatible_type traits, the old generation device is still > > > > > > > > able to be regarded as compatible to newer generation device > even their > > > > > > > > mdev types are not equal. > > > > > > > > > > > > > > If you want to support migration from v4 to v5, can't the > (presumably > > > > > > > newer) driver that supports v5 simply register the v4 type as well, > so > > > > > > > that the mdev can be created as v4? (Just like QEMU versioned > machine > > > > > > > types work.) > > > > > > > > > > > > yes, it should work in some conditions. 
> > > > > > but it may not be that good in some cases when v5 and v4 in the > name string > > > > > > of mdev type identify hardware generation (e.g. v4 for gen8, and v5 > for > > > > > > gen9) > > > > > > > > > > > > e.g. > > > > > > (1). when src mdev type is v4 and target mdev type is v5 as > > > > > > software does not support it initially, and v4 and v5 identify > hardware > > > > > > differences. > > > > > > > > > > My first hunch here is: Don't introduce types that may be compatible > > > > > later. Either make them compatible, or make them distinct by design, > > > > > and possibly add a different, compatible type later. > > > > > > > > > > > then after software upgrade, v5 is now compatible to v4, should the > > > > > > software now downgrade mdev type from v5 to v4? > > > > > > not sure if moving hardware generation info into a separate > attribute > > > > > > from mdev type name is better. e.g. remove v4, v5 in mdev type, > while use > > > > > > compatible_pci_ids to identify compatibility. > > > > > > > > > > If the generations are compatible, don't mention it in the mdev type. > > > > > If they aren't, use distinct types, so that management software > doesn't > > > > > have to guess. At least that would be my naive approach here. > > [*] > > > > > yep that is what i would prefer to see too. > > > > > > > > > > > > > > > > > (2) name string of mdev type is composed by "driver_name + > type_name". > > > > > > in some devices, e.g. qat, different generations of devices are > binding to > > > > > > drivers of different names, e.g. "qat-v4", "qat-v5". > > > > > > then though type_name is equal, mdev type is not equal. e.g. > > > > > > "qat-v4-type1", "qat-v5-type1". > > > > > > > > > > I guess that shows a shortcoming of that "driver_name + type_name" > > > > > approach? Or maybe I'm just confused. > > > > yes i really dont like haveing the version in the mdev-type name > > > > i would stongly perfger just qat-type-1 wehere qat is just there as a way > of namespacing. > > > > although symmetric-cryto, asymmetric-cryto and compression woudl be > a better name then type-1, type-2, type-3 if > > > > that is what they would end up mapping too. e.g. qat-compression or > qat-aes is a much better name then type-1 > > > > higher layers of software are unlikely to parse the mdev names but as a > human looking at them its much eaiser to > > > > understand if the names are meaningful. the qat prefix i think is > important however to make sure that your mdev-types > > > > dont colide with other vendeors mdev types. so i woudl encurage all > vendors to prefix there mdev types with etiher the > > > > device name or the vendor. > > > > > > +1 to all this, the mdev type is meant to indicate a software > > > compatible interface, if different hardware versions can be software > > > compatible, then don't make the job of finding a compatible device > > > harder. The full type is a combination of the vendor driver name plus > > > the vendor provided type name specifically in order to provide a type > > > namespace per vendor driver. That's done at the mdev core level. > > > Thanks, > > > > hi Alex, > > got it. so do you suggest that vendors use consistent driver name over > > generations of devices? > > for qat, they create different modules for each generation. This > > practice is not good if they want to support migration between devices > > of different generations, right? > > Even if they create different modules, I'd assume that they have some > kind of core with common functionality. 
I'd assume that as long they do > any type registrations satisfying [*] in the core, they should be good. > > > and can I understand that we don't want support of migration between > > different mdev types even in future ? > > From my point of view, I don't see anything that migration between > different mdev types would buy that is worth the complexity in finding > out which mdev types are actually compatible. Agree. Different type means different device API. as long as the device API doesn't change, different modules should expose it as the same type. If qat really wants to attach module name to the type, it essentially implies that qat has no generational compatibility. Thanks Kevin From alexis.deberg at ubisoft.com Fri Sep 11 16:03:55 2020 From: alexis.deberg at ubisoft.com (Alexis Deberg) Date: Fri, 11 Sep 2020 16:03:55 +0000 Subject: [neutron] Flow drop on agent restart with openvswitch firewall driver In-Reply-To: References: <20200909075042.qyxbnq7li2zm5oo4@skaplons-mac> , Message-ID: See my last comment in the opened bug, looks like upgrading to a more recent version brings some patches that fix the issue. Thanks everyone, and especially to slaweq and rodolfo-alonso-hernandez Cheers -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.williamson at redhat.com Fri Sep 11 16:51:55 2020 From: alex.williamson at redhat.com (Alex Williamson) Date: Fri, 11 Sep 2020 10:51:55 -0600 Subject: device compatibility interface for live migration with assigned devices In-Reply-To: <20200911005559.GA3932@joy-OptiPlex-7040> References: <20200825163925.1c19b0f0.cohuck@redhat.com> <20200826064117.GA22243@joy-OptiPlex-7040> <20200828154741.30cfc1a3.cohuck@redhat.com> <8f5345be73ebf4f8f7f51d6cdc9c2a0d8e0aa45e.camel@redhat.com> <20200831044344.GB13784@joy-OptiPlex-7040> <20200908164130.2fe0d106.cohuck@redhat.com> <20200909021308.GA1277@joy-OptiPlex-7040> <20200910143822.2071eca4.cohuck@redhat.com> <7cebcb6c8d1a1452b43e8358ee6ee18a150a0238.camel@redhat.com> <20200910120244.71e7b630@w520.home> <20200911005559.GA3932@joy-OptiPlex-7040> Message-ID: <20200911105155.184e32a0@w520.home> On Fri, 11 Sep 2020 08:56:00 +0800 Yan Zhao wrote: > On Thu, Sep 10, 2020 at 12:02:44PM -0600, Alex Williamson wrote: > > On Thu, 10 Sep 2020 13:50:11 +0100 > > Sean Mooney wrote: > > > > > On Thu, 2020-09-10 at 14:38 +0200, Cornelia Huck wrote: > > > > On Wed, 9 Sep 2020 10:13:09 +0800 > > > > Yan Zhao wrote: > > > > > > > > > > > still, I'd like to put it more explicitly to make ensure it's not missed: > > > > > > > the reason we want to specify compatible_type as a trait and check > > > > > > > whether target compatible_type is the superset of source > > > > > > > compatible_type is for the consideration of backward compatibility. > > > > > > > e.g. > > > > > > > an old generation device may have a mdev type xxx-v4-yyy, while a newer > > > > > > > generation device may be of mdev type xxx-v5-yyy. > > > > > > > with the compatible_type traits, the old generation device is still > > > > > > > able to be regarded as compatible to newer generation device even their > > > > > > > mdev types are not equal. > > > > > > > > > > > > If you want to support migration from v4 to v5, can't the (presumably > > > > > > newer) driver that supports v5 simply register the v4 type as well, so > > > > > > that the mdev can be created as v4? (Just like QEMU versioned machine > > > > > > types work.) > > > > > > > > > > yes, it should work in some conditions. 
> > > > > but it may not be that good in some cases when v5 and v4 in the name string > > > > > of mdev type identify hardware generation (e.g. v4 for gen8, and v5 for > > > > > gen9) > > > > > > > > > > e.g. > > > > > (1). when src mdev type is v4 and target mdev type is v5 as > > > > > software does not support it initially, and v4 and v5 identify hardware > > > > > differences. > > > > > > > > My first hunch here is: Don't introduce types that may be compatible > > > > later. Either make them compatible, or make them distinct by design, > > > > and possibly add a different, compatible type later. > > > > > > > > > then after software upgrade, v5 is now compatible to v4, should the > > > > > software now downgrade mdev type from v5 to v4? > > > > > not sure if moving hardware generation info into a separate attribute > > > > > from mdev type name is better. e.g. remove v4, v5 in mdev type, while use > > > > > compatible_pci_ids to identify compatibility. > > > > > > > > If the generations are compatible, don't mention it in the mdev type. > > > > If they aren't, use distinct types, so that management software doesn't > > > > have to guess. At least that would be my naive approach here. > > > yep that is what i would prefer to see too. > > > > > > > > > > > > > > (2) name string of mdev type is composed by "driver_name + type_name". > > > > > in some devices, e.g. qat, different generations of devices are binding to > > > > > drivers of different names, e.g. "qat-v4", "qat-v5". > > > > > then though type_name is equal, mdev type is not equal. e.g. > > > > > "qat-v4-type1", "qat-v5-type1". > > > > > > > > I guess that shows a shortcoming of that "driver_name + type_name" > > > > approach? Or maybe I'm just confused. > > > yes i really dont like haveing the version in the mdev-type name > > > i would stongly perfger just qat-type-1 wehere qat is just there as a way of namespacing. > > > although symmetric-cryto, asymmetric-cryto and compression woudl be a better name then type-1, type-2, type-3 if > > > that is what they would end up mapping too. e.g. qat-compression or qat-aes is a much better name then type-1 > > > higher layers of software are unlikely to parse the mdev names but as a human looking at them its much eaiser to > > > understand if the names are meaningful. the qat prefix i think is important however to make sure that your mdev-types > > > dont colide with other vendeors mdev types. so i woudl encurage all vendors to prefix there mdev types with etiher the > > > device name or the vendor. > > > > +1 to all this, the mdev type is meant to indicate a software > > compatible interface, if different hardware versions can be software > > compatible, then don't make the job of finding a compatible device > > harder. The full type is a combination of the vendor driver name plus > > the vendor provided type name specifically in order to provide a type > > namespace per vendor driver. That's done at the mdev core level. > > Thanks, > > hi Alex, > got it. so do you suggest that vendors use consistent driver name over > generations of devices? > for qat, they create different modules for each generation. This > practice is not good if they want to support migration between devices > of different generations, right? > > and can I understand that we don't want support of migration between > different mdev types even in future ? You need to balance your requirements here. 
If you're creating different drivers per generation, that suggests different device APIs, which is a legitimate use case for different mdev types. However if you're expecting migration compatibility, that must be seamless to the guest, therefore the device API must be identical. That suggests that migration between different types doesn't make much sense. If a new generation device wants to expose a new mdev type with new features or device API, yet also support migration with an older mdev type, why wouldn't it simply expose both the old and the new type? It seems much more supportable to simply instantiate an instance of the older type than to create an instance of the new type, which by the contents of the migration stream is configured to behave as the older type. The latter sounds very difficult to test. A challenge when we think about migration between different types, particularly across different vendor drivers, is that the migration stream is opaque, it's device and vendor specific. Therefore it's not only difficult for userspace to understand the compatibility matrix, but also to actually support it in software, maintaining version and bug compatibility across different drivers. It's clearly much, much easier when the same code base (and thus the same mdev type) is producing and consuming the migration data. Thanks, Alex From yan.y.zhao at intel.com Fri Sep 11 00:56:00 2020 From: yan.y.zhao at intel.com (Yan Zhao) Date: Fri, 11 Sep 2020 08:56:00 +0800 Subject: device compatibility interface for live migration with assigned devices In-Reply-To: <20200910120244.71e7b630@w520.home> References: <20200825163925.1c19b0f0.cohuck@redhat.com> <20200826064117.GA22243@joy-OptiPlex-7040> <20200828154741.30cfc1a3.cohuck@redhat.com> <8f5345be73ebf4f8f7f51d6cdc9c2a0d8e0aa45e.camel@redhat.com> <20200831044344.GB13784@joy-OptiPlex-7040> <20200908164130.2fe0d106.cohuck@redhat.com> <20200909021308.GA1277@joy-OptiPlex-7040> <20200910143822.2071eca4.cohuck@redhat.com> <7cebcb6c8d1a1452b43e8358ee6ee18a150a0238.camel@redhat.com> <20200910120244.71e7b630@w520.home> Message-ID: <20200911005559.GA3932@joy-OptiPlex-7040> On Thu, Sep 10, 2020 at 12:02:44PM -0600, Alex Williamson wrote: > On Thu, 10 Sep 2020 13:50:11 +0100 > Sean Mooney wrote: > > > On Thu, 2020-09-10 at 14:38 +0200, Cornelia Huck wrote: > > > On Wed, 9 Sep 2020 10:13:09 +0800 > > > Yan Zhao wrote: > > > > > > > > > still, I'd like to put it more explicitly to make ensure it's not missed: > > > > > > the reason we want to specify compatible_type as a trait and check > > > > > > whether target compatible_type is the superset of source > > > > > > compatible_type is for the consideration of backward compatibility. > > > > > > e.g. > > > > > > an old generation device may have a mdev type xxx-v4-yyy, while a newer > > > > > > generation device may be of mdev type xxx-v5-yyy. > > > > > > with the compatible_type traits, the old generation device is still > > > > > > able to be regarded as compatible to newer generation device even their > > > > > > mdev types are not equal. > > > > > > > > > > If you want to support migration from v4 to v5, can't the (presumably > > > > > newer) driver that supports v5 simply register the v4 type as well, so > > > > > that the mdev can be created as v4? (Just like QEMU versioned machine > > > > > types work.) > > > > > > > > yes, it should work in some conditions. > > > > but it may not be that good in some cases when v5 and v4 in the name string > > > > of mdev type identify hardware generation (e.g. 
v4 for gen8, and v5 for > > > > gen9) > > > > > > > > e.g. > > > > (1). when src mdev type is v4 and target mdev type is v5 as > > > > software does not support it initially, and v4 and v5 identify hardware > > > > differences. > > > > > > My first hunch here is: Don't introduce types that may be compatible > > > later. Either make them compatible, or make them distinct by design, > > > and possibly add a different, compatible type later. > > > > > > > then after software upgrade, v5 is now compatible to v4, should the > > > > software now downgrade mdev type from v5 to v4? > > > > not sure if moving hardware generation info into a separate attribute > > > > from mdev type name is better. e.g. remove v4, v5 in mdev type, while use > > > > compatible_pci_ids to identify compatibility. > > > > > > If the generations are compatible, don't mention it in the mdev type. > > > If they aren't, use distinct types, so that management software doesn't > > > have to guess. At least that would be my naive approach here. > > yep that is what i would prefer to see too. > > > > > > > > > > > (2) name string of mdev type is composed by "driver_name + type_name". > > > > in some devices, e.g. qat, different generations of devices are binding to > > > > drivers of different names, e.g. "qat-v4", "qat-v5". > > > > then though type_name is equal, mdev type is not equal. e.g. > > > > "qat-v4-type1", "qat-v5-type1". > > > > > > I guess that shows a shortcoming of that "driver_name + type_name" > > > approach? Or maybe I'm just confused. > > yes i really dont like haveing the version in the mdev-type name > > i would stongly perfger just qat-type-1 wehere qat is just there as a way of namespacing. > > although symmetric-cryto, asymmetric-cryto and compression woudl be a better name then type-1, type-2, type-3 if > > that is what they would end up mapping too. e.g. qat-compression or qat-aes is a much better name then type-1 > > higher layers of software are unlikely to parse the mdev names but as a human looking at them its much eaiser to > > understand if the names are meaningful. the qat prefix i think is important however to make sure that your mdev-types > > dont colide with other vendeors mdev types. so i woudl encurage all vendors to prefix there mdev types with etiher the > > device name or the vendor. > > +1 to all this, the mdev type is meant to indicate a software > compatible interface, if different hardware versions can be software > compatible, then don't make the job of finding a compatible device > harder. The full type is a combination of the vendor driver name plus > the vendor provided type name specifically in order to provide a type > namespace per vendor driver. That's done at the mdev core level. > Thanks, hi Alex, got it. so do you suggest that vendors use consistent driver name over generations of devices? for qat, they create different modules for each generation. This practice is not good if they want to support migration between devices of different generations, right? and can I understand that we don't want support of migration between different mdev types even in future ? 
Thanks Yan From cohuck at redhat.com Fri Sep 11 10:08:06 2020 From: cohuck at redhat.com (Cornelia Huck) Date: Fri, 11 Sep 2020 12:08:06 +0200 Subject: device compatibility interface for live migration with assigned devices In-Reply-To: <20200911005559.GA3932@joy-OptiPlex-7040> References: <20200825163925.1c19b0f0.cohuck@redhat.com> <20200826064117.GA22243@joy-OptiPlex-7040> <20200828154741.30cfc1a3.cohuck@redhat.com> <8f5345be73ebf4f8f7f51d6cdc9c2a0d8e0aa45e.camel@redhat.com> <20200831044344.GB13784@joy-OptiPlex-7040> <20200908164130.2fe0d106.cohuck@redhat.com> <20200909021308.GA1277@joy-OptiPlex-7040> <20200910143822.2071eca4.cohuck@redhat.com> <7cebcb6c8d1a1452b43e8358ee6ee18a150a0238.camel@redhat.com> <20200910120244.71e7b630@w520.home> <20200911005559.GA3932@joy-OptiPlex-7040> Message-ID: <20200911120806.5cfe203c.cohuck@redhat.com> On Fri, 11 Sep 2020 08:56:00 +0800 Yan Zhao wrote: > On Thu, Sep 10, 2020 at 12:02:44PM -0600, Alex Williamson wrote: > > On Thu, 10 Sep 2020 13:50:11 +0100 > > Sean Mooney wrote: > > > > > On Thu, 2020-09-10 at 14:38 +0200, Cornelia Huck wrote: > > > > On Wed, 9 Sep 2020 10:13:09 +0800 > > > > Yan Zhao wrote: > > > > > > > > > > > still, I'd like to put it more explicitly to make ensure it's not missed: > > > > > > > the reason we want to specify compatible_type as a trait and check > > > > > > > whether target compatible_type is the superset of source > > > > > > > compatible_type is for the consideration of backward compatibility. > > > > > > > e.g. > > > > > > > an old generation device may have a mdev type xxx-v4-yyy, while a newer > > > > > > > generation device may be of mdev type xxx-v5-yyy. > > > > > > > with the compatible_type traits, the old generation device is still > > > > > > > able to be regarded as compatible to newer generation device even their > > > > > > > mdev types are not equal. > > > > > > > > > > > > If you want to support migration from v4 to v5, can't the (presumably > > > > > > newer) driver that supports v5 simply register the v4 type as well, so > > > > > > that the mdev can be created as v4? (Just like QEMU versioned machine > > > > > > types work.) > > > > > > > > > > yes, it should work in some conditions. > > > > > but it may not be that good in some cases when v5 and v4 in the name string > > > > > of mdev type identify hardware generation (e.g. v4 for gen8, and v5 for > > > > > gen9) > > > > > > > > > > e.g. > > > > > (1). when src mdev type is v4 and target mdev type is v5 as > > > > > software does not support it initially, and v4 and v5 identify hardware > > > > > differences. > > > > > > > > My first hunch here is: Don't introduce types that may be compatible > > > > later. Either make them compatible, or make them distinct by design, > > > > and possibly add a different, compatible type later. > > > > > > > > > then after software upgrade, v5 is now compatible to v4, should the > > > > > software now downgrade mdev type from v5 to v4? > > > > > not sure if moving hardware generation info into a separate attribute > > > > > from mdev type name is better. e.g. remove v4, v5 in mdev type, while use > > > > > compatible_pci_ids to identify compatibility. > > > > > > > > If the generations are compatible, don't mention it in the mdev type. > > > > If they aren't, use distinct types, so that management software doesn't > > > > have to guess. At least that would be my naive approach here. [*] > > > yep that is what i would prefer to see too. 
> > > > > > > > > > > > > > (2) name string of mdev type is composed by "driver_name + type_name". > > > > > in some devices, e.g. qat, different generations of devices are binding to > > > > > drivers of different names, e.g. "qat-v4", "qat-v5". > > > > > then though type_name is equal, mdev type is not equal. e.g. > > > > > "qat-v4-type1", "qat-v5-type1". > > > > > > > > I guess that shows a shortcoming of that "driver_name + type_name" > > > > approach? Or maybe I'm just confused. > > > yes i really dont like haveing the version in the mdev-type name > > > i would stongly perfger just qat-type-1 wehere qat is just there as a way of namespacing. > > > although symmetric-cryto, asymmetric-cryto and compression woudl be a better name then type-1, type-2, type-3 if > > > that is what they would end up mapping too. e.g. qat-compression or qat-aes is a much better name then type-1 > > > higher layers of software are unlikely to parse the mdev names but as a human looking at them its much eaiser to > > > understand if the names are meaningful. the qat prefix i think is important however to make sure that your mdev-types > > > dont colide with other vendeors mdev types. so i woudl encurage all vendors to prefix there mdev types with etiher the > > > device name or the vendor. > > > > +1 to all this, the mdev type is meant to indicate a software > > compatible interface, if different hardware versions can be software > > compatible, then don't make the job of finding a compatible device > > harder. The full type is a combination of the vendor driver name plus > > the vendor provided type name specifically in order to provide a type > > namespace per vendor driver. That's done at the mdev core level. > > Thanks, > > hi Alex, > got it. so do you suggest that vendors use consistent driver name over > generations of devices? > for qat, they create different modules for each generation. This > practice is not good if they want to support migration between devices > of different generations, right? Even if they create different modules, I'd assume that they have some kind of core with common functionality. I'd assume that as long they do any type registrations satisfying [*] in the core, they should be good. > and can I understand that we don't want support of migration between > different mdev types even in future ? From my point of view, I don't see anything that migration between different mdev types would buy that is worth the complexity in finding out which mdev types are actually compatible. 
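To make the point above concrete: if "compatible" simply means "the target parent device advertises the exact same mdev type (vendor driver plus type name)", then the check a management layer has to do collapses to comparing the type directories each parent exposes in sysfs. The short Python sketch below is only an illustration of that idea under stated assumptions: the sysfs layout is the standard mdev one, but the helper names, the example parent addresses and the strict-equality rule are illustrative, not an agreed interface.

#!/usr/bin/env python3
# Illustrative sketch only: walks the standard mdev sysfs layout
# (/sys/class/mdev_bus/<parent>/mdev_supported_types/<vendor-driver>-<type>/)
# and treats exact type equality as the migration-compatibility rule argued
# for in this thread. Parent PCI addresses below are hypothetical.
import os

MDEV_BUS = "/sys/class/mdev_bus"

def supported_types(parent):
    """Set of mdev types (vendor driver + type name) a parent advertises."""
    path = os.path.join(MDEV_BUS, parent, "mdev_supported_types")
    return set(os.listdir(path)) if os.path.isdir(path) else set()

def is_migration_candidate(src_parent, mdev_type, dst_parent):
    """True if the destination parent offers the exact same mdev type."""
    return (mdev_type in supported_types(src_parent)
            and mdev_type in supported_types(dst_parent))

if __name__ == "__main__":
    # Hypothetical PCI parents and a vendor-prefixed type name.
    print(is_migration_candidate("0000:00:02.0",
                                 "i915-GVTg_V5_4",
                                 "0000:03:00.0"))

With distinct types for incompatible devices and identical types registered by whichever driver supports them, the management side needs no separate compatibility matrix or version parsing at all.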
From openstack at nemebean.com Fri Sep 11 19:43:43 2020 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 11 Sep 2020 14:43:43 -0500 Subject: [oslo][release][requirement] FFE request for Oslo lib In-Reply-To: <1747d4839a4.10505a96f108873.8961642381266139796@ghanshyammann.com> References: <1746e64d702.ee80b0bc1249.5426348472779199647@ghanshyammann.com> <20200909192543.b2d2ksruoqtbgcfy@mthode.org> <174746eb120.1229d07d224552.356509349559116522@ghanshyammann.com> <1747608f93e.d7ec085f28282.4985395579846058200@ghanshyammann.com> <1747d4839a4.10505a96f108873.8961642381266139796@ghanshyammann.com> Message-ID: <53ba067f-3c47-12ce-858c-c1de982166be@nemebean.com> On 9/11/20 8:08 AM, Ghanshyam Mann wrote: > ---- On Wed, 09 Sep 2020 22:22:13 -0500 Ghanshyam Mann wrote ---- > > ---- On Wed, 09 Sep 2020 14:54:05 -0500 Ghanshyam Mann wrote ---- > > > > > > ---- On Wed, 09 Sep 2020 14:25:43 -0500 Matthew Thode wrote ---- > > > > On 20-09-09 12:04:51, Ben Nemec wrote: > > > > > > > > > > On 9/8/20 10:45 AM, Ghanshyam Mann wrote: > > > > > > Hello Team, > > > > > > > > > > > > This is regarding FFE for Focal migration work. As planned, we have to move the Victoria testing to Focal and > > > > > > base job switch is planned to be switched by today[1]. > > > > > > > > > > > > There are few oslo lib need work (especially tox job-based testing not user-facing changes) to pass on Focal > > > > > > - https://review.opendev.org/#/q/topic:migrate-to-focal-oslo+(status:open+OR+status:merged) > > > > > > > > > > > > If we move the base tox jobs to Focal then these lib victoria gates (especially lower-constraint job) will be failing. > > > > > > We can either do these as FFE or backport (as this is lib own CI fixes only) later once the victoria branch is open. > > > > > > Opinion? > > > > > > > > > > As I noted in the meeting, if we have to do this to keep the gates working > > > > > then I'd rather do it as an FFE than have to backport all of the relevant > > > > > patches. IMHO we should only decline this FFE if we are going to also change > > > > > our statement of support for Python/Ubuntu in Victoria. > > > > > > > > > > > > > > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017060.html > > > > > > > > > > > > -gmann > > > > > > > > > > > > > > > > > > > > > > > > > https://review.opendev.org/#/c/750089 seems like the only functional > > > > change. It has my ACK with my requirements hat on. > > > > NOTE: There is one py3.8 bug fix also merged in oslo.uitls which is not yet released. This made py3.8 job voting in oslo.utils gate. > > - https://review.opendev.org/#/c/750216/ > > > > Rest all l-c bump are now passing on Focal > > - https://review.opendev.org/#/q/topic:migrate-to-focal-oslo+(status:open+OR+status:merged) > > All the patches are merged now, requesting to reconsider this FEE so that we can avoid more delay in this. The final Oslo releases with the focal changes are merged. We should be able to branch now and if anything else comes up we'll just have to backport. Thanks to everyone for working with us to get this sorted out, and sorry I dropped the ball on this goal. From tonyliu0592 at hotmail.com Fri Sep 11 21:26:21 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Fri, 11 Sep 2020 21:26:21 +0000 Subject: memchached connections Message-ID: Hi, Is there any guidance or experiences to estimate the number of memcached connections? Here is memcached connection on one of the 3 controllers. 
Connection number is the total established connections to all 3 memcached nodes.

Node 1:
10 Keystone workers have 62 connections.
11 Nova API workers have 37 connections.
6 Neutron server workers have 4304 connections.
1 memcached has 4973 connections.

Node 2:
10 Keystone workers have 62 connections.
11 Nova API workers have 30 connections.
6 Neutron server workers have 3703 connections.
1 memcached has 4973 connections.

Node 3:
10 Keystone workers have 54 connections.
11 Nova API workers have 15 connections.
6 Neutron server workers have 6541 connections.
1 memcached has 4973 connections.

Before I increase the connection limit for memcached, I'd like to understand whether all of the above is expected.

How do the Neutron server and memcached end up with so many connections?

Any elaboration is appreciated.

BTW, the problem leading me here is memcached connection timeout, which results in all services depending on memcached no longer working properly.


Thanks!
Tony

From rosmaita.fossdev at gmail.com  Fri Sep 11 21:29:19 2020
From: rosmaita.fossdev at gmail.com (Brian Rosmaita)
Date: Fri, 11 Sep 2020 17:29:19 -0400
Subject: [cinder] FEATURE FREEZE in effect
Message-ID: <8a9bebfd-850d-619a-b925-ba4abbb1958d@gmail.com>

The Victoria feature freeze is now in effect. Please do not approve patches proposing features for master until after the stable/victoria branch is cut the week of 21 September.

Due to gate shenanigans over the past week, the following reviews have a feature freeze exception, and may be merged to master during the next week:

Default type overrides - https://review.opendev.org/#/c/737707/
Adding support for Adaptive QoS in NetApp driver - https://review.opendev.org/#/c/741327/

The following have been approved and have been making their way slowly through the gate today, but technically they haven't been merged yet, so these also have a FFE:
- https://review.opendev.org/#/c/747540/
- https://review.opendev.org/#/c/746941/
- https://review.opendev.org/#/c/746813/
(Hopefully by the time you read this, they've already been merged.)

cheers,
brian

From tonyliu0592 at hotmail.com  Fri Sep 11 21:37:15 2020
From: tonyliu0592 at hotmail.com (Tony Liu)
Date: Fri, 11 Sep 2020 21:37:15 +0000
Subject: [Keystone] socket.timeout: timed out
In-Reply-To: References: Message-ID:

memcached load is heavy. I started another thread to get clarifications. This is not a Keystone-specific issue.

Thanks!
Tony

> -----Original Message-----
> From: Radosław Piliszek
> Sent: Friday, September 11, 2020 12:08 AM
> To: Tony Liu
> Cc: openstack-discuss
> Subject: Re: [Keystone] socket.timeout: timed out
>
> Hi Tony,
>
> Well, it looks like memcached just timed out.
> I'd check the load on it.
>
> -yoctozepto
>
> On Thu, Sep 10, 2020 at 7:24 PM Tony Liu wrote:
> >
> > Any clues on this timeout exception?
> > > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context [req- > 534d9855-8113-450d-8f9f-d93c0d961d24 113ee63a9ed0466794e24d069efc302c > 4c142a681d884010ab36a7ac687d910c - default default] timed out: > socket.timeout: timed out > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > Traceback (most recent call last): > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site- > packages/keystone/server/flask/request_processing/middleware/auth_contex > t.py", line 103, in _inner > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > return method(self, request) > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site- > packages/keystone/server/flask/request_processing/middleware/auth_contex > t.py", line 353, in process_request > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > resp = super(AuthContextMiddleware, self).process_request(request) > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site- > packages/keystonemiddleware/auth_token/__init__.py", line 411, in > process_request > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > allow_expired=allow_expired) > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site- > packages/keystonemiddleware/auth_token/__init__.py", line 445, in > _do_fetch_token > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > data = self.fetch_token(token, **kwargs) > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site- > packages/keystone/server/flask/request_processing/middleware/auth_contex > t.py", line 248, in fetch_token > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > token, access_rules_support=ACCESS_RULES_MIN_VERSION) > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/keystone/common/manager.py", line 115, > in wrapped > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > __ret_val = __f(*args, **kwargs) > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/keystone/token/provider.py", line 145, > in validate_token > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > token = self._validate_token(token_id) > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "", line > 2, in _validate_token > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 1360, > in get_or_create_for_user_func > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context key, > user_func, timeout, should_cache_fn, (arg, kw) 
> > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 962, in > get_or_create > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > async_creator, > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/dogpile/lock.py", line 187, in > __enter__ > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > return self._enter() > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/dogpile/lock.py", line 94, in _enter > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > generated = self._enter_create(value, createdtime) > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/dogpile/lock.py", line 180, in > _enter_create > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > return self.creator() > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 916, in > gen_value > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > *creator_args[0], **creator_args[1] > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/keystone/token/provider.py", line 179, > in _validate_token > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > token.mint(token_id, issued_at) > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/keystone/models/token_model.py", line > 579, in mint > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > self._validate_token_resources() > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/keystone/models/token_model.py", line > 471, in _validate_token_resources > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context if > self.project and not self.project_domain.get('enabled'): > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/keystone/models/token_model.py", line > 176, in project_domain > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > self.project['domain_id'] > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/keystone/common/manager.py", line 115, > in wrapped > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > __ret_val = __f(*args, **kwargs) > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "", line > 2, in get_domain > > 2020-09-10 10:10:33.981 28 ERROR > 
keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 1360, > in get_or_create_for_user_func > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context key, > user_func, timeout, should_cache_fn, (arg, kw) > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 962, in > get_or_create > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > async_creator, > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/dogpile/lock.py", line 187, in > __enter__ > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > return self._enter() > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/dogpile/lock.py", line 87, in _enter > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > value = value_fn() > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/dogpile/cache/region.py", line 902, in > get_value > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > value = self.backend.get(key) > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site- > packages/keystone/common/cache/_context_cache.py", line 74, in get > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > value = self.proxied.get(key) > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/dogpile/cache/backends/memcached.py", > line 168, in get > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > value = self.client.get(key) > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/oslo_cache/backends/memcache_pool.py", > line 32, in _run_method > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > return getattr(client, __name)(*args, **kwargs) > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/memcache.py", line 1129, in get > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > return self._get('get', key) > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/memcache.py", line 1074, in _get > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > server, key = self._get_server(key) > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/memcache.py", line 446, in _get_server > > 2020-09-10 10:10:33.981 28 ERROR > 
keystone.server.flask.request_processing.middleware.auth_context if > server.connect(): > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/memcache.py", line 1391, in connect > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context if > self._get_socket(): > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/memcache.py", line 1423, in > _get_socket > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > self.flush() > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/memcache.py", line 1498, in flush > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > self.expect(b'OK') > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/memcache.py", line 1473, in expect > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > line = self.readline(raise_exception) > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context File > "/usr/lib/python3.6/site-packages/memcache.py", line 1459, in readline > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > data = recv(4096) > > 2020-09-10 10:10:33.981 28 ERROR > keystone.server.flask.request_processing.middleware.auth_context > socket.timeout: timed out > > > > > > Thanks! > > Tony > > > > From tonyliu0592 at hotmail.com Sat Sep 12 02:20:39 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Sat, 12 Sep 2020 02:20:39 +0000 Subject: [Neutron] memchached connections Message-ID: I restarted neutron-server on all 3 nodes. Those memcached connections from neutron-server are gone. Everything is back to normal. It seems like that memcached connections are not closed properly in neutron-server. The connections pile up over time. Is there any know issue related? Could any Neutron experts comment here? Thanks! Tony > -----Original Message----- > From: Tony Liu > Sent: Friday, September 11, 2020 2:26 PM > To: openstack-discuss > Subject: memchached connections > > Hi, > > Is there any guidance or experiences to estimate the number of memcached > connections? > > Here is memcached connection on one of the 3 controllers. > Connection number is the total established connections to all 3 > memcached nodes. > > Node 1: > 10 Keystone workers have 62 connections. > 11 Nova API workers have 37 connections. > 6 Neutron server works have 4304 connections. > 1 memcached has 4973 connections. > > Node 2: > 10 Keystone workers have 62 connections. > 11 Nova API workers have 30 connections. > 6 Neutron server works have 3703 connections. > 1 memcached has 4973 connections. > > Node 3: > 10 Keystone workers have 54 connections. > 11 Nova API workers have 15 connections. > 6 Neutron server works have 6541 connections. > 1 memcached has 4973 connections. > > Before I increase the connection limit for memcached, I'd like to > understand if all the above is expected? > > How Neutron server and memcached take so many connections? > > Any elaboration is appreciated. 
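As a rough way to gather per-service counts like those quoted above, here is a minimal sketch, assuming psutil is installed on the controller, memcached listens on its default port 11211, and the script runs as root so that peer process names can be resolved:

```
#!/usr/bin/env python3
"""Count established memcached connections per local process.

A minimal sketch, assuming psutil is available, memcached listens on
the default port 11211, and the script runs as root on each controller
so other processes' sockets and names are visible.
"""
from collections import Counter

import psutil

MEMCACHED_PORT = 11211


def connections_per_process(port=MEMCACHED_PORT):
    """Return {process name: number of ESTABLISHED connections to memcached}."""
    counts = Counter()
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status != psutil.CONN_ESTABLISHED:
            continue
        # Skip listening sockets (empty raddr) and unrelated peers.
        if not conn.raddr or conn.raddr.port != port:
            continue
        try:
            name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            name = "unknown"
        counts[name] += 1
    return counts


if __name__ == "__main__":
    for name, count in sorted(connections_per_process().items()):
        print(f"{name}: {count} connection(s) to memcached")
```

The totals can be cross-checked against memcached's own stats output (curr_connections) on each node.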
> > BTW, the problem leading me here is memcached connection timeout, which > results all services depending on memcached stop working properly. > > > Thanks! > Tony From its-openstack at zohocorp.com Sat Sep 12 03:04:53 2020 From: its-openstack at zohocorp.com (its-openstack at zohocorp.com) Date: Sat, 12 Sep 2020 08:34:53 +0530 Subject: WIndows 10 instance hostname not updating Message-ID: <1748045d1f2.b9ebd2757273.2608405934809994219@zohocorp.com> Dear openstack, I have installed openstack train branch, I am facing issue with windows image. all windows 10 instance dosen't get its hostname updated from the metadata, but able to get the metadata(hostname) from inside the instance using powershell. ``` $ Invoke-WebRequest http://169.254.169.254/latest/meta-data/hostname -UseBasicParsing  ``` windows2016 instance no issue. using the stable cloudbase-init package for preparation of windows 10 (tried bot in v1909 and v2004). windows2016 server dosen't have this issue.  if you would so kindly help us with this issue Regards sysadmin -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Sep 12 07:39:19 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 12 Sep 2020 09:39:19 +0200 Subject: [Neutron] vxlan to vlan bridge Message-ID: Hello Stackers, is it possibile to create a vxlan to vlan bridge in openstack like vmware nsx does? Thanks Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Sat Sep 12 09:41:24 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sat, 12 Sep 2020 11:41:24 +0200 Subject: [Neutron] memchached connections In-Reply-To: References: Message-ID: I believe you are hitting [1]. We have recently worked around that in Kolla-Ansible [2]. We'd like to get clarifications on whether the workaround is the right choice but it seems to work. :-) [1] https://bugs.launchpad.net/keystonemiddleware/+bug/1883659 [2] https://review.opendev.org/746966 -yoctozepto On Sat, Sep 12, 2020 at 4:34 AM Tony Liu wrote: > > I restarted neutron-server on all 3 nodes. Those memcached > connections from neutron-server are gone. Everything is back > to normal. It seems like that memcached connections are not > closed properly in neutron-server. The connections pile up > over time. Is there any know issue related? Could any Neutron > experts comment here? > > Thanks! > Tony > > -----Original Message----- > > From: Tony Liu > > Sent: Friday, September 11, 2020 2:26 PM > > To: openstack-discuss > > Subject: memchached connections > > > > Hi, > > > > Is there any guidance or experiences to estimate the number of memcached > > connections? > > > > Here is memcached connection on one of the 3 controllers. > > Connection number is the total established connections to all 3 > > memcached nodes. > > > > Node 1: > > 10 Keystone workers have 62 connections. > > 11 Nova API workers have 37 connections. > > 6 Neutron server works have 4304 connections. > > 1 memcached has 4973 connections. > > > > Node 2: > > 10 Keystone workers have 62 connections. > > 11 Nova API workers have 30 connections. > > 6 Neutron server works have 3703 connections. > > 1 memcached has 4973 connections. > > > > Node 3: > > 10 Keystone workers have 54 connections. > > 11 Nova API workers have 15 connections. > > 6 Neutron server works have 6541 connections. > > 1 memcached has 4973 connections. 
> > > > Before I increase the connection limit for memcached, I'd like to > > understand if all the above is expected? > > > > How Neutron server and memcached take so many connections? > > > > Any elaboration is appreciated. > > > > BTW, the problem leading me here is memcached connection timeout, which > > results all services depending on memcached stop working properly. > > > > > > Thanks! > > Tony > > From dev.faz at gmail.com Sat Sep 12 13:34:31 2020 From: dev.faz at gmail.com (Fabian Zimmermann) Date: Sat, 12 Sep 2020 15:34:31 +0200 Subject: [Neutron] vxlan to vlan bridge In-Reply-To: References: Message-ID: Hi, something like networking-l2gw? Fabian Ignazio Cassano schrieb am Sa., 12. Sept. 2020, 09:47: > Hello Stackers, is it possibile to create a vxlan to vlan bridge in > openstack like vmware nsx does? > Thanks > Ignazio > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas at goirand.fr Sun Sep 13 18:46:38 2020 From: thomas at goirand.fr (Thomas Goirand) Date: Sun, 13 Sep 2020 20:46:38 +0200 Subject: Is Storyboard really the future? In-Reply-To: References: <20200910154704.3erw242ynqldlq63@yuggoth.org> Message-ID: <32345cfd-86fd-b60e-ed3c-baf664aa4807@goirand.fr> On 9/10/20 6:45 PM, Radosław Piliszek wrote: > I feel you. I could not so far convince anyone to support me to work > on it, mostly because > Jira/GitHub/GitLab/Launchpad exists. > Not to mention many small internal projects are happy with just Trello. :-) Did you just try to list all the non-free services you could in this thread? Seriously, don't you care a little bit? You probably don't realize it, but that's shocking, at least for me, and hopefully I'm not the only one. If the project was to take such direction as to use this kind of non-free services, I probably would reconsider my involvement (which soon will reach 10 years of Debian packaging...). Cheers, Thomas Goirand (zigo) From tonyliu0592 at hotmail.com Sun Sep 13 19:17:38 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Sun, 13 Sep 2020 19:17:38 +0000 Subject: [Neutron] memchached connections In-Reply-To: References: Message-ID: I will apply the workaround and keep watching it. Given comment #7, do we know if https://review.opendev.org/#/c/742193/ is the right fix yet? Thanks! Tony > -----Original Message----- > From: Radosław Piliszek > Sent: Saturday, September 12, 2020 2:41 AM > To: Tony Liu > Cc: openstack-discuss > Subject: Re: [Neutron] memchached connections > > I believe you are hitting [1]. > > We have recently worked around that in Kolla-Ansible [2]. > > We'd like to get clarifications on whether the workaround is the right > choice but it seems to work. :-) > > [1] https://bugs.launchpad.net/keystonemiddleware/+bug/1883659 > [2] https://review.opendev.org/746966 > > -yoctozepto > > On Sat, Sep 12, 2020 at 4:34 AM Tony Liu wrote: > > > > I restarted neutron-server on all 3 nodes. Those memcached connections > > from neutron-server are gone. Everything is back to normal. It seems > > like that memcached connections are not closed properly in > > neutron-server. The connections pile up over time. Is there any know > > issue related? Could any Neutron experts comment here? > > > > Thanks! > > Tony > > > -----Original Message----- > > > From: Tony Liu > > > Sent: Friday, September 11, 2020 2:26 PM > > > To: openstack-discuss > > > Subject: memchached connections > > > > > > Hi, > > > > > > Is there any guidance or experiences to estimate the number of > > > memcached connections? 
> > > > > > Here is memcached connection on one of the 3 controllers. > > > Connection number is the total established connections to all 3 > > > memcached nodes. > > > > > > Node 1: > > > 10 Keystone workers have 62 connections. > > > 11 Nova API workers have 37 connections. > > > 6 Neutron server works have 4304 connections. > > > 1 memcached has 4973 connections. > > > > > > Node 2: > > > 10 Keystone workers have 62 connections. > > > 11 Nova API workers have 30 connections. > > > 6 Neutron server works have 3703 connections. > > > 1 memcached has 4973 connections. > > > > > > Node 3: > > > 10 Keystone workers have 54 connections. > > > 11 Nova API workers have 15 connections. > > > 6 Neutron server works have 6541 connections. > > > 1 memcached has 4973 connections. > > > > > > Before I increase the connection limit for memcached, I'd like to > > > understand if all the above is expected? > > > > > > How Neutron server and memcached take so many connections? > > > > > > Any elaboration is appreciated. > > > > > > BTW, the problem leading me here is memcached connection timeout, > > > which results all services depending on memcached stop working > properly. > > > > > > > > > Thanks! > > > Tony > > > > From fungi at yuggoth.org Sun Sep 13 19:59:45 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 13 Sep 2020 19:59:45 +0000 Subject: Is Storyboard really the future? In-Reply-To: <32345cfd-86fd-b60e-ed3c-baf664aa4807@goirand.fr> References: <20200910154704.3erw242ynqldlq63@yuggoth.org> <32345cfd-86fd-b60e-ed3c-baf664aa4807@goirand.fr> Message-ID: <20200913195945.iwh556ygifx5u4vg@yuggoth.org> On 2020-09-13 20:46:38 +0200 (+0200), Thomas Goirand wrote: > On 9/10/20 6:45 PM, Radosław Piliszek wrote: > > I feel you. I could not so far convince anyone to support me to > > work on it, mostly because Jira/GitHub/GitLab/Launchpad exists. > > Not to mention many small internal projects are happy with just > > Trello. :-) > > Did you just try to list all the non-free services you could in > this thread? Seriously, don't you care a little bit? You probably > don't realize it, but that's shocking, at least for me, and > hopefully I'm not the only one. [...] To be fair, Launchpad is F/LOSS, just not that easy to run a copy of it yourself. Technically so is GitLab's community edition (not their enterprise edition nor their cloud SaaS), but yes the "open-core" situation there concerns me enough to not consider it. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From radoslaw.piliszek at gmail.com Mon Sep 14 06:59:25 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 14 Sep 2020 08:59:25 +0200 Subject: Is Storyboard really the future? In-Reply-To: <32345cfd-86fd-b60e-ed3c-baf664aa4807@goirand.fr> References: <20200910154704.3erw242ynqldlq63@yuggoth.org> <32345cfd-86fd-b60e-ed3c-baf664aa4807@goirand.fr> Message-ID: On Sun, Sep 13, 2020 at 8:46 PM Thomas Goirand wrote: > > On 9/10/20 6:45 PM, Radosław Piliszek wrote: > > I feel you. I could not so far convince anyone to support me to work > > on it, mostly because > > Jira/GitHub/GitLab/Launchpad exists. > > Not to mention many small internal projects are happy with just Trello. :-) > > Did you just try to list all the non-free services you could in this > thread? Seriously, don't you care a little bit? 
You probably don't > realize it, but that's shocking, at least for me, and hopefully I'm not > the only one. I feel offended by the accusations. I *do* care about open source. Jeremy has already answered regarding GitLab and Launchpad. Let's not forget GitHub actually *is* the largest, diverse open source community, even though the service itself is not. It hurts me as well so please don't just randomly attack people mentioning non-free software. It can support open source software as well. > If the project was to take such direction as to use this kind of > non-free services, I probably would reconsider my involvement (which > soon will reach 10 years of Debian packaging...). I did not propose that in any part. Launchpad is FLOSS and that is my proposal. The general idea behind my mail was to emphasise that Storyboard has great aspirations and assumptions but is far from delivering its full potential so should not be recommended without giving background and other possible solutions. -yoctozepto > Cheers, > > Thomas Goirand (zigo) From zigo at debian.org Mon Sep 14 07:22:08 2020 From: zigo at debian.org (Thomas Goirand) Date: Mon, 14 Sep 2020 09:22:08 +0200 Subject: Is Storyboard really the future? In-Reply-To: References: <20200910154704.3erw242ynqldlq63@yuggoth.org> <32345cfd-86fd-b60e-ed3c-baf664aa4807@goirand.fr> Message-ID: On 9/14/20 8:59 AM, Radosław Piliszek wrote: > On Sun, Sep 13, 2020 at 8:46 PM Thomas Goirand wrote: >> >> On 9/10/20 6:45 PM, Radosław Piliszek wrote: >>> I feel you. I could not so far convince anyone to support me to work >>> on it, mostly because >>> Jira/GitHub/GitLab/Launchpad exists. >>> Not to mention many small internal projects are happy with just Trello. :-) >> >> Did you just try to list all the non-free services you could in this >> thread? Seriously, don't you care a little bit? You probably don't >> realize it, but that's shocking, at least for me, and hopefully I'm not >> the only one. > > I feel offended by the accusations. > I *do* care about open source. > > Jeremy has already answered regarding GitLab and Launchpad. > Let's not forget GitHub actually *is* the largest, diverse open source > community, > even though the service itself is not. It hurts me as well so please don't just > randomly attack people mentioning non-free software. > It can support open source software as well. You're the one mentioning Jira, GitHub, Trello as possible solution to solve the fact that you don't like Storyboard. This is IMO very far from the spirit of free software. Sorry if you took it as a personal attack: there's nothing personal here, just a strong opposition to using this kind of services. The fact that many projects are using these non-free services to produce free software is actually a problem, not a solution. Same for Github being (probably) the largest repository of free software: that's a huge problem, as huge as the number of projects hosted. Lucky, many just think of it as just free hosting and nothing more. Gitlab being open-core, as Jeremy pointed out, is also a problem (anyone that knows about the beginning of OpenStack and Eucalyptus knows why open core is problematic). > I did not propose that in any part. Launchpad is FLOSS and that is my proposal. > The general idea behind my mail was to emphasise that Storyboard has > great aspirations > and assumptions but is far from delivering its full potential so > should not be recommended without > giving background and other possible solutions. 
Launchpad is hardly installable, and is tightly connected to Canonical/Ubuntu. It is a very good thing that the OpenStack community has made efforts to get out of it. It is IMO counter-productive to push projects to either go back to launchpad, or not migrate to Storyboard. The only viable solution is to contribute and make Storyboard better, or switching to another existing free software. There are many out there that could have done the job. Cheers, Thomas Goirand (zigo) From skaplons at redhat.com Mon Sep 14 07:46:06 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 14 Sep 2020 09:46:06 +0200 Subject: REMINDER: 2020 Virtual Summit: Forum Submissions Now Accepted In-Reply-To: References: Message-ID: <20200914074606.rrjdkf4bz5twsgzm@skaplons-mac> Hi, Is deadline for proposing forum topics already reached? I'm trying to propose something now and on https://cfp.openstack.org/app/presentations I see only info that "Submission is closed". On Wed, Sep 09, 2020 at 05:57:00PM -0500, Jimmy McArthur wrote: > Hello Everyone! > > We are now accepting Forum [1] submissions for the 2020 Virtual Open Infrastructure Summit [2]. Please submit your ideas through the Summit CFP tool [3] through September 14th. Don't forget to put your brainstorming etherpad up on the Virtual Forum page [4]. > > This is not a classic conference track with speakers and presentations. OSF community members (participants in development teams, operators, working groups, SIGs, and other interested individuals) discuss the topics they want to cover and get alignment on and we welcome your participation. The Forum is your opportunity to help shape the development of future project releases. More information about the Forum [1]. > > The timeline for submissions is as follows: > > Aug 31st | Formal topic submission tool opens: https://cfp.openstack.org. > Sep 14th | Deadline for proposing Forum topics. Scheduling committee meeting to make draft agenda. > Sep 21st | Draft Forum schedule published. Crowd sourced session conflict detection. Forum promotion begins. > Sept 28th | Forum schedule final > Oct 19th | Forum begins! > > If you have questions or concerns, please reach out to speakersupport at openstack.org (mailto:speakersupport at openstack.org). > > Cheers, > Jimmy > > [1] https://wiki.openstack.org/wiki/Forum > [2] https://www.openstack.org/summit/2020/ > [3] https://cfp.openstack.org > [4]https://wiki.openstack.org/wiki/Forum/Virtual2020 -- Slawek Kaplonski Senior software engineer Red Hat From t.schulze at tu-berlin.de Mon Sep 14 08:09:42 2020 From: t.schulze at tu-berlin.de (thoralf schulze) Date: Mon, 14 Sep 2020 10:09:42 +0200 Subject: [sdk] openstacksdk vs. server groups In-Reply-To: <8883AE9B-B1AB-4192-8CE7-B78BE70F41BF@gmail.com> References: <8883AE9B-B1AB-4192-8CE7-B78BE70F41BF@gmail.com> Message-ID: <5b6459e7-494b-2a25-b747-5480953538cf@tu-berlin.de> hi Artem, On 9/11/20 7:23 PM, Artem Goncharov wrote: > This issue is already addressed (see > https://review.opendev.org/#/c/749381/) and will be fixed in next > release of SDK (pretty soon). and will be fixed in next release of > SDK (pretty soon). that's good to know … thank you very much and also john and dmitriy for their patches. cheers, t. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From gael.therond at bitswalk.com Mon Sep 14 08:17:32 2020 From: gael.therond at bitswalk.com (=?UTF-8?Q?Ga=C3=ABl_THEROND?=) Date: Mon, 14 Sep 2020 10:17:32 +0200 Subject: Is Storyboard really the future? In-Reply-To: References: Message-ID: > > ---------- Forwarded message ---------- > From: Thomas Goirand > To: openstack-discuss > Cc: > Bcc: > Date: Mon, 14 Sep 2020 09:22:08 +0200 > Subject: Re: Is Storyboard really the future? > On 9/14/20 8:59 AM, Radosław Piliszek wrote: > > > On Sun, Sep 13, 2020 at 8:46 PM Thomas Goirand > wrote: > > >> > > >> On 9/10/20 6:45 PM, Radosław Piliszek wrote: > > >>> I feel you. I could not so far convince anyone to support me to work > > >>> on it, mostly because > > >>> Jira/GitHub/GitLab/Launchpad exists. > > >>> Not to mention many small internal projects are happy with just > Trello. :-) > > >> > > >> Did you just try to list all the non-free services you could in this > > >> thread? Seriously, don't you care a little bit? You probably don't > > >> realize it, but that's shocking, at least for me, and hopefully I'm not > > >> the only one. > > > > > > I feel offended by the accusations. > > > I *do* care about open source. > > > > > > Jeremy has already answered regarding GitLab and Launchpad. > > > Let's not forget GitHub actually *is* the largest, diverse open source > > > community, > > > even though the service itself is not. It hurts me as well so please > don't just > > > randomly attack people mentioning non-free software. > > > It can support open source software as well. > > > > You're the one mentioning Jira, GitHub, Trello as possible solution to > > solve the fact that you don't like Storyboard. This is IMO very far from > > the spirit of free software. Sorry if you took it as a personal attack: > > there's nothing personal here, just a strong opposition to using this > > kind of services. > > > > The fact that many projects are using these non-free services to produce > > free software is actually a problem, not a solution. Same for Github > > being (probably) the largest repository of free software: that's a huge > > problem, as huge as the number of projects hosted. Lucky, many just > > think of it as just free hosting and nothing more. > > > > Gitlab being open-core, as Jeremy pointed out, is also a problem (anyone > > that knows about the beginning of OpenStack and Eucalyptus knows why > > open core is problematic). > > > > > I did not propose that in any part. Launchpad is FLOSS and that is my > proposal. > > > The general idea behind my mail was to emphasise that Storyboard has > > > great aspirations > > > and assumptions but is far from delivering its full potential so > > > should not be recommended without > > > giving background and other possible solutions. > > > > Launchpad is hardly installable, and is tightly connected to > > Canonical/Ubuntu. It is a very good thing that the OpenStack community > > has made efforts to get out of it. It is IMO counter-productive to push > > projects to either go back to launchpad, or not migrate to Storyboard. > > > > The only viable solution is to contribute and make Storyboard better, or > > switching to another existing free software. There are many out there > > that could have done the job. 
> > > > Cheers, > > > > Thomas Goirand (zigo) > > > > > > _______________________________________________ > > openstack-discuss mailing list > > openstack-discuss at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss Hi everyone, So, thomas, your message was rude and can hurt because Yocto didn’t suggested to use those tools, he was answering you that he feel the pain as everyone is suggesting those tool when you talk with people on IRC etc. Even if I do understand your point and know the importance of being autonomous and do not rely on non-FLOSS software, the thruth being all those discussions is the pain in the ass that it is to contribute to Openstack projects compared with other Open source software based on Github or Github like workflow. The opensource community and especially the Openstack one need to understand that people really get a limited amount of time and so if you want to attract more people your contribution process have to be streamlined and on par with what most of us developers do experience on everyday. The foundation made a first step toward that by migrating every project on gitea, and honestly, I’m still amazed that while migrating those projects it wasn’t decided to use the issues/projects feature of gitea. There even is a cicd zuul plugin for gitea. As a community we propose things, but if the community don’t use them, it’s because it’s not what they’re waiting for. We also need to step back from time to time and admit that one software need to sunset and be migrated elsewhere. We want a fully floss project to host it? Fine it’s perfectly valid argument, then just reuse what’s already there with gitea and redirect development effort of the abandonned software to the new platform in order to support the missing part if ever required! > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyppe at gmail.com Mon Sep 14 08:27:55 2020 From: tonyppe at gmail.com (Tony Pearce) Date: Mon, 14 Sep 2020 16:27:55 +0800 Subject: [Magnum][kolla-ansible][kayobe] Information gathering for 2 blocking issues In-Reply-To: References: Message-ID: Hi feilong, hope you are keeping well. Thank you for the info! For issue 1. Maybe this should be with the kayobe/kolla-ansible team. Thanks for the insight :) For the 2nd one, I was able to run the HOT template in your link. There's no issues at all running that multiple times concurrently while using the 0MB disk flavour. I tried four times with the last three executing one after the other so that they ran parallelly. All were successful and completed and did not complain about the 0MB disk issue. Does this conclude that the error and create-failed issue relates to Magnum or could you suggest other steps to test on my side? Best regards, Tony Pearce On Thu, 10 Sep 2020 at 16:01, feilong wrote: > Hi Tony, > > Sorry for the late response for your thread. > > For you HTTPS issue, we (Catalyst Cloud) are using Magnum with HTTPS and > it works. > > For the 2nd issue, I think we were misunderstanding the nodes disk > capacity. I was assuming you're talking about the k8s nodes, but seems > you're talking about the physical compute host. I still don't think it's a > Magnum issue because a k8s master/worker nodes are just normal Nova > instances and managed by Heat. 
So I would suggest you use a simple HOT to > test it, you can use this > https://gist.github.com/openstacker/26e31c9715d52cc502397b65d3cebab6 > > Most of the cloud providers or organizations who have adopted Magnum are > using Ceph as far as I know, just FYI. > > > On 10/09/20 4:35 pm, Tony Pearce wrote: > > Hi all, hope you are all keeping safe and well. I am looking for > information on the following two issues that I have which surrounds Magnum > project: > > 1. Magnum does not support Openstack API with HTTPS > 2. Magnum forces compute nodes to consume disk capacity for instance data > > My environment: Openstack Train deployed using Kayobe (Kolla-ansible). > > With regards to the HTTPS issue, Magnum stops working after enabling HTTPS > because the certificate / CA certificate is not trusted by Magnum. The > certificate which I am using is one that was purchased from GoDaddy and is > trusted in web browsers (and is valid), just not trusted by the Magnum > component. > > Regarding compute node disk consumption issue - I'm at a loss with regards > to this and so I'm looking for more information about why this is being > done and is there any way that I could avoid it? I have storage provided > by a Cinder integration and so the consumption of compute node disk for > instance data I need to avoid. > > Any information the community could provide to me with regards to the > above would be much appreciated. I would very much like to use the Magnum > project in this deployment for Kubernetes deployment within projects. > > Thanks in advance, > > Regards, > > Tony > > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > ------------------------------------------------------ > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > ------------------------------------------------------ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Mon Sep 14 08:36:44 2020 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 14 Sep 2020 09:36:44 +0100 Subject: Kolla-ansible ironic In-Reply-To: References: Message-ID: On Tue, 8 Sep 2020 at 19:52, Thomas Wakefield wrote: > > All- > > > We are new to using OpenStack and are testing out Kolla-ansible with hopes of using Ironic as a deployment tool. Our issue is we can’t use the openstack baremetal command, it’s not found after deployment. Our current test environment is built using Train on CentOS 7. And all other basic OpenStack functionality seems to be working with our Kolla install (nova, glance, horizon, etc). > > > > We followed these docs, https://docs.openstack.org/kolla-ansible/train/reference/bare-metal/ironic-guide.html , but when we get to running any “openstack baremetal” commands we don’t seem to have the baremetal commands available in openstack. 
> > > > Globals.yml lines that should be relavent: > > > > enable_horizon_ironic: "{{ enable_ironic | bool }}" > > enable_ironic: "yes" > > enable_ironic_ipxe: "yes" > > enable_ironic_neutron_agent: "{{ enable_neutron | bool and enable_ironic | bool }}" > > enable_ironic_pxe_uefi: "no" > > #enable_iscsid: "{{ (enable_cinder | bool and enable_cinder_backend_iscsi | bool) or enable_ironic | bool }}" > > ironic_dnsmasq_interface: "em1" > > # The following value must be set when enabling ironic, > > ironic_dnsmasq_dhcp_range: "192.168.2.230,192.168.2.239" > > ironic_dnsmasq_boot_file: "pxelinux.0" > > ironic_cleaning_network: "demo-net" > > > > > > Ironic is listed as an installed service, but you can see the baremetal commands are not found: > > root at orc-os5:~## openstack service list > > +----------------------------------+------------------+-------------------------+ > > | ID | Name | Type | > > +----------------------------------+------------------+-------------------------+ > > | 0e5119acbf384714ab11520fadce36bb | nova_legacy | compute_legacy | > > | 2ed83015047249f38b782901e03bcfc1 | ironic-inspector | baremetal-introspection | > > | 5d7aabf15bdc415387fac54fa1ca21df | ironic | baremetal | > > | 6d05cdce019347e9940389abed959ffb | neutron | network | > > | 7d9485969e504b2e90273af75e9b1713 | cinderv3 | volumev3 | > > | a11dc04e83ed4d9ba65474b9de947d1b | keystone | identity | > > | ad0c2db47b414b34b86a5f6a5aca597c | glance | image | > > | dcbbc90813714c989b82bece1c0d9d9f | nova | compute | > > | de0ee6b55486495296516e07d2e9e97c | heat | orchestration | > > | df605d671d88496d91530fbc01573589 | cinderv2 | volumev2 | > > | e211294ca78a418ea34d9c29d86b05f1 | placement | placement | > > | f62ba90bc0b94cb9b3d573605f800a1f | heat-cfn | cloudformation | > > +----------------------------------+------------------+-------------------------+ > > root at orc-os5:~## openstack baremetal > > openstack: 'baremetal' is not an openstack command. See 'openstack --help'. > > Did you mean one of these? > > credential create > > credential delete > > credential list > > credential set > > credential show > > > > > > Is there anything else that needs configured to activate ironic? > > > > Thanks in advance. > > Tom Hi Thomas, what are you planning to use the machines deployed by Ironic for? If they are going to be used as hypervisors or for storage, you might want to consider using Kayobe [1]. [1] https://docs.openstack.org/kayobe/latest/ > > From mark at stackhpc.com Mon Sep 14 08:44:25 2020 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 14 Sep 2020 09:44:25 +0100 Subject: [kolla-ansible] Ceph in Ussuri In-Reply-To: References: Message-ID: On Fri, 11 Sep 2020 at 11:52, Klemen Pogacnik wrote: > > I've done Ansible playbook to simplify Ceph integration with Openstack. It's based on cephadm-ansible project (https://github.com/jcmdln/cephadm-ansible) > Check: > https://gitlab.com/kemopq/it_addmodule-ceph > Any suggestions and/or help are appreciated! > Klemen Thanks for sharing Klemen. When I first saw cephadm I thought it was in need of some declarative interface to drive it, rather than issuing many cephadm commands. Since then they have added support for YAML service definitions. From feilong at catalyst.net.nz Mon Sep 14 09:20:14 2020 From: feilong at catalyst.net.nz (feilong) Date: Mon, 14 Sep 2020 21:20:14 +1200 Subject: [Magnum][kolla-ansible][kayobe] Information gathering for 2 blocking issues In-Reply-To: References: Message-ID: Hi Tony, Could you please let me know  your flavor details? 
I would like to test it in my devstack environment (based on LVM). Thanks. On 14/09/20 8:27 pm, Tony Pearce wrote: > Hi feilong, hope you are keeping well. Thank you for the info!   > > For issue 1. Maybe this should be with the kayobe/kolla-ansible team. > Thanks for the insight :)  > > For the 2nd one, I was able to run the HOT template in your link. > There's no issues at all running that multiple times concurrently > while using the 0MB disk flavour. I tried four times with the last > three executing one after the other so that they ran parallelly.  All > were successful and completed and did not complain about the 0MB disk > issue.  > > Does this conclude that the error and create-failed issue relates to > Magnum or could you suggest other steps to test on my side?  > > Best regards, > > Tony Pearce > > > > > On Thu, 10 Sep 2020 at 16:01, feilong > wrote: > > Hi Tony, > > Sorry for the late response for your thread. > > For you HTTPS issue, we (Catalyst Cloud) are using Magnum with > HTTPS and it works. > > For the 2nd issue, I think we were misunderstanding the nodes disk > capacity. I was assuming you're talking about the k8s nodes, but > seems you're talking about the physical compute host. I still > don't think it's a Magnum issue because a k8s master/worker nodes > are just normal Nova instances and managed by Heat. So I would > suggest you use a simple HOT to test it, you can use this > https://gist.github.com/openstacker/26e31c9715d52cc502397b65d3cebab6 > > Most of the cloud providers or organizations who have adopted > Magnum are using Ceph as far as I know, just FYI. > > > On 10/09/20 4:35 pm, Tony Pearce wrote: >> Hi all, hope you are all keeping safe and well. I am looking for >> information on the following two issues that I have which >> surrounds Magnum project: >> >> 1. Magnum does not support Openstack API with HTTPS >> 2. Magnum forces compute nodes to consume disk capacity for >> instance data >> >> My environment: Openstack Train deployed using Kayobe >> (Kolla-ansible).  >> >> With regards to the HTTPS issue, Magnum stops working after >> enabling HTTPS because the certificate / CA certificate is not >> trusted by Magnum. The certificate which I am using is one that >> was purchased from GoDaddy and is trusted in web browsers (and is >> valid), just not trusted by the Magnum component.  >> >> Regarding compute node disk consumption issue - I'm at a loss >> with regards to this and so I'm looking for more information >> about why this is being done and is there any way that I could >> avoid it?  I have storage provided by a Cinder integration and so >> the consumption of compute node disk for instance data I need to >> avoid.  >> >> Any information the community could provide to me with regards to >> the above would be much appreciated. I would very much like to >> use the Magnum project in this deployment for Kubernetes >> deployment within projects.  
>> >> Thanks in advance,  >> >> Regards, >> >> Tony > > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > ------------------------------------------------------ > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > ------------------------------------------------------ > -- Cheers & Best regards, Feilong Wang (王飞龙) ------------------------------------------------------ Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyppe at gmail.com Mon Sep 14 09:37:02 2020 From: tonyppe at gmail.com (Tony Pearce) Date: Mon, 14 Sep 2020 17:37:02 +0800 Subject: [Magnum][kolla-ansible][kayobe] Information gathering for 2 blocking issues In-Reply-To: References: Message-ID: Hi Feilong, sure. The flavour I used has 2 CPU and 2GB memory. All other values either unset or 0mb. I also used the same fedora 27 image that is being used for the kubernetes cluster. Thank you Tony On Mon, 14 Sep 2020, 17:20 feilong, wrote: > Hi Tony, > > Could you please let me know your flavor details? I would like to test it > in my devstack environment (based on LVM). Thanks. > > > On 14/09/20 8:27 pm, Tony Pearce wrote: > > Hi feilong, hope you are keeping well. Thank you for the info! > > For issue 1. Maybe this should be with the kayobe/kolla-ansible team. > Thanks for the insight :) > > For the 2nd one, I was able to run the HOT template in your link. There's > no issues at all running that multiple times concurrently while using the > 0MB disk flavour. I tried four times with the last three executing one > after the other so that they ran parallelly. All were successful and > completed and did not complain about the 0MB disk issue. > > Does this conclude that the error and create-failed issue relates to > Magnum or could you suggest other steps to test on my side? > > Best regards, > > Tony Pearce > > > > > On Thu, 10 Sep 2020 at 16:01, feilong wrote: > >> Hi Tony, >> >> Sorry for the late response for your thread. >> >> For you HTTPS issue, we (Catalyst Cloud) are using Magnum with HTTPS and >> it works. >> >> For the 2nd issue, I think we were misunderstanding the nodes disk >> capacity. I was assuming you're talking about the k8s nodes, but seems >> you're talking about the physical compute host. I still don't think it's a >> Magnum issue because a k8s master/worker nodes are just normal Nova >> instances and managed by Heat. So I would suggest you use a simple HOT to >> test it, you can use this >> https://gist.github.com/openstacker/26e31c9715d52cc502397b65d3cebab6 >> >> Most of the cloud providers or organizations who have adopted Magnum are >> using Ceph as far as I know, just FYI. >> >> >> On 10/09/20 4:35 pm, Tony Pearce wrote: >> >> Hi all, hope you are all keeping safe and well. I am looking for >> information on the following two issues that I have which surrounds Magnum >> project: >> >> 1. Magnum does not support Openstack API with HTTPS >> 2. Magnum forces compute nodes to consume disk capacity for instance data >> >> My environment: Openstack Train deployed using Kayobe (Kolla-ansible). >> >> With regards to the HTTPS issue, Magnum stops working after enabling >> HTTPS because the certificate / CA certificate is not trusted by Magnum. 
>> The certificate which I am using is one that was purchased from GoDaddy and >> is trusted in web browsers (and is valid), just not trusted by the Magnum >> component. >> >> Regarding compute node disk consumption issue - I'm at a loss with >> regards to this and so I'm looking for more information about why this is >> being done and is there any way that I could avoid it? I have storage >> provided by a Cinder integration and so the consumption of compute node >> disk for instance data I need to avoid. >> >> Any information the community could provide to me with regards to the >> above would be much appreciated. I would very much like to use the Magnum >> project in this deployment for Kubernetes deployment within projects. >> >> Thanks in advance, >> >> Regards, >> >> Tony >> >> -- >> Cheers & Best regards, >> Feilong Wang (王飞龙) >> ------------------------------------------------------ >> Senior Cloud Software Engineer >> Tel: +64-48032246 >> Email: flwang at catalyst.net.nz >> Catalyst IT Limited >> Level 6, Catalyst House, 150 Willis Street, Wellington >> ------------------------------------------------------ >> >> -- > Cheers & Best regards, > Feilong Wang (王飞龙) > ------------------------------------------------------ > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > ------------------------------------------------------ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From feilong at catalyst.net.nz Mon Sep 14 09:44:32 2020 From: feilong at catalyst.net.nz (feilong) Date: Mon, 14 Sep 2020 21:44:32 +1200 Subject: [Magnum][kolla-ansible][kayobe] Information gathering for 2 blocking issues In-Reply-To: References: Message-ID: Hi Tony, Does your Magnum support this config https://github.com/openstack/magnum/blob/master/magnum/conf/cinder.py#L47 can you try to change it from 0 to 10? 10 means the root disk volume size for the k8s node. By default the 0 means the node will be based on image instead of volume. On 14/09/20 9:37 pm, Tony Pearce wrote: > Hi Feilong, sure. The flavour I used has 2 CPU and 2GB memory. All > other values either unset or 0mb.  > I also used the same fedora 27 image that is being used for the > kubernetes cluster.  > > Thank you > Tony > > On Mon, 14 Sep 2020, 17:20 feilong, > wrote: > > Hi Tony, > > Could you please let me know  your flavor details? I would like to > test it in my devstack environment (based on LVM). Thanks. > > > On 14/09/20 8:27 pm, Tony Pearce wrote: >> Hi feilong, hope you are keeping well. Thank you for the info!   >> >> For issue 1. Maybe this should be with the kayobe/kolla-ansible >> team. Thanks for the insight :)  >> >> For the 2nd one, I was able to run the HOT template in your link. >> There's no issues at all running that multiple times concurrently >> while using the 0MB disk flavour. I tried four times with the >> last three executing one after the other so that they ran >> parallelly.  All were successful and completed and did not >> complain about the 0MB disk issue.  >> >> Does this conclude that the error and create-failed issue relates >> to Magnum or could you suggest other steps to test on my side?  >> >> Best regards, >> >> Tony Pearce >> >> >> >> >> On Thu, 10 Sep 2020 at 16:01, feilong > > wrote: >> >> Hi Tony, >> >> Sorry for the late response for your thread. >> >> For you HTTPS issue, we (Catalyst Cloud) are using Magnum >> with HTTPS and it works. 
>> >> For the 2nd issue, I think we were misunderstanding the nodes >> disk capacity. I was assuming you're talking about the k8s >> nodes, but seems you're talking about the physical compute >> host. I still don't think it's a Magnum issue because a k8s >> master/worker nodes are just normal Nova instances and >> managed by Heat. So I would suggest you use a simple HOT to >> test it, you can use this >> https://gist.github.com/openstacker/26e31c9715d52cc502397b65d3cebab6 >> >> Most of the cloud providers or organizations who have adopted >> Magnum are using Ceph as far as I know, just FYI. >> >> >> On 10/09/20 4:35 pm, Tony Pearce wrote: >>> Hi all, hope you are all keeping safe and well. I am looking >>> for information on the following two issues that I have >>> which surrounds Magnum project: >>> >>> 1. Magnum does not support Openstack API with HTTPS >>> 2. Magnum forces compute nodes to consume disk capacity for >>> instance data >>> >>> My environment: Openstack Train deployed using Kayobe >>> (Kolla-ansible).  >>> >>> With regards to the HTTPS issue, Magnum stops working after >>> enabling HTTPS because the certificate / CA certificate is >>> not trusted by Magnum. The certificate which I am using is >>> one that was purchased from GoDaddy and is trusted in web >>> browsers (and is valid), just not trusted by the Magnum >>> component.  >>> >>> Regarding compute node disk consumption issue - I'm at a >>> loss with regards to this and so I'm looking for more >>> information about why this is being done and is there any >>> way that I could avoid it?  I have storage provided by a >>> Cinder integration and so the consumption of compute node >>> disk for instance data I need to avoid.  >>> >>> Any information the community could provide to me with >>> regards to the above would be much appreciated. I would very >>> much like to use the Magnum project in this deployment for >>> Kubernetes deployment within projects.  >>> >>> Thanks in advance,  >>> >>> Regards, >>> >>> Tony >> >> -- >> Cheers & Best regards, >> Feilong Wang (王飞龙) >> ------------------------------------------------------ >> Senior Cloud Software Engineer >> Tel: +64-48032246 >> Email: flwang at catalyst.net.nz >> Catalyst IT Limited >> Level 6, Catalyst House, 150 Willis Street, Wellington >> ------------------------------------------------------ >> > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > ------------------------------------------------------ > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > ------------------------------------------------------ > -- Cheers & Best regards, Feilong Wang (王飞龙) ------------------------------------------------------ Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Sep 14 10:20:41 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 14 Sep 2020 12:20:41 +0200 Subject: [neutron] Bug deputy report - week of 7.09.2020 Message-ID: Hi, Last week I was bug deputy. 
Below is a summary of the bugs reported during that time:

*Critical*
* https://bugs.launchpad.net/neutron/+bug/1894857 - lower-constraints job fail on Focal - gate failure, fixed already
* https://bugs.launchpad.net/neutron/+bug/1894864 - ContextualVersionConflict with pecan 1.3.3 in networking-midonet - not assigned, related to https://bugs.launchpad.net/neutron/+bug/1895196

*High*
* https://bugs.launchpad.net/neutron/+bug/1894913 - haproxy permission issue - sounds like an important issue but I didn't have time to confirm it so far; I asked for more data about what has changed in their env recently and what could cause this issue
* https://bugs.launchpad.net/neutron/+bug/1894981 - [neutron-dynamic-routing] The self-service network can be bound to the bgp speaker - in progress
* https://bugs.launchpad.net/neutron/+bug/1895038 - Flow drop on agent restart with ovs firewall driver - sounds like an important issue but I didn't have time to confirm it so far; already fixed in newer versions
* https://bugs.launchpad.net/neutron/+bug/1895196 - pecan>=1.4.0 is greater than upper-constraints.txt - not assigned, and it seems we will need to revert our patch which bumped the pecan version in neutron, at least for the Victoria cycle

*Medium*
* https://bugs.launchpad.net/neutron/+bug/1894825 - placement allocation update accepts only integers from 1 - in progress
* https://bugs.launchpad.net/neutron/+bug/1895033 - When using ml2/OVN with other plugins with agents, non-OVN agents are broken for show/update/delete operations - in progress
* https://bugs.launchpad.net/neutron/+bug/1895108 - ovn migration: db sync step doesn't work - in progress
* https://bugs.launchpad.net/neutron/+bug/1894843 - [dvr_snat] Router update deletes rfp interface from qrouter even when VM port is present on this host - I didn't have time to confirm that; seems like something for the L3 subteam, not assigned yet

*Low*
* https://bugs.launchpad.net/neutron/+bug/1894799 - For existing ovs interface, the ovs_use_veth parameter don't take effect - in progress
* https://bugs.launchpad.net/neutron/+bug/1895401 - [L3][IPv6][DVR] missing address scope mark for IPv6 traffic - in progress

— Slawek Kaplonski Principal software engineer Red Hat
From smooney at redhat.com Mon Sep 14 10:47:59 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 14 Sep 2020 11:47:59 +0100 Subject: Is Storyboard really the future? In-Reply-To: References: <20200910154704.3erw242ynqldlq63@yuggoth.org> <32345cfd-86fd-b60e-ed3c-baf664aa4807@goirand.fr> Message-ID: On Mon, 2020-09-14 at 09:22 +0200, Thomas Goirand wrote: > On 9/14/20 8:59 AM, Radosław Piliszek wrote: > > On Sun, Sep 13, 2020 at 8:46 PM Thomas Goirand wrote: > > > > > > On 9/10/20 6:45 PM, Radosław Piliszek wrote: > > > > I feel you. I could not so far convince anyone to support me to work > > > > on it, mostly because > > > > Jira/GitHub/GitLab/Launchpad exists. > > > > Not to mention many small internal projects are happy with just Trello. :-) > > > > > > Did you just try to list all the non-free services you could in this > > > thread? Seriously, don't you care a little bit? You probably don't > > > realize it, but that's shocking, at least for me, and hopefully I'm not > > > the only one. > > > > I feel offended by the accusations. > > I *do* care about open source. > > > > Jeremy has already answered regarding GitLab and Launchpad. > > Let's not forget GitHub actually *is* the largest, diverse open source > > community, > > even though the service itself is not.
It hurts me as well so please don't just > > randomly attack people mentioning non-free software. > > It can support open source software as well. > > You're the one mentioning Jira, GitHub, Trello as possible solution to > solve the fact that you don't like Storyboard. This is IMO very far from > the spirit of free software. Sorry if you took it as a personal attack: > there's nothing personal here, just a strong opposition to using this > kind of services. For what it's worth, I read it as a personal attack, or at least a very aggressive stance. OpenStack has used GitHub as a mirror and marketing vector for a very long time. We still use Launchpad, which is open source, for many projects, even if others have mentioned it is also hard to run. Jira and Trello are solutions that many use; even Red Hat uses both, and we also use GitLab, GitHub and Bugzilla. The fact is that there often isn't one tool that works for everyone. That said, one open source tool that I have been wanting to try for a while, and which might be another alternative, is git-bug. https://github.com/MichaelMure/git-bug Unfortunately the web UI is basically non-existent, and it would force everyone to use a git workflow to submit bugs, which is not ideal given the user base and our Gerrit workflow. But one day it would be nice to have a terminal interface and git workflow for this (and everything else; life in the terminal is less scary than on the web :) ) > > The fact that many projects are using these non-free services to produce > free software is actually a problem, not a solution. That is a stance that easily offends and alienates contributors. A tool is something that helps you complete a task. We can use open source or free or paid tools, and I'm fine with a preference for using open source tools, but when it becomes religious and the only morally acceptable thing to do, I think that is problematic. Part of the reason I still work on OpenStack is its Apache 2 license and the fact that it has never taken an extremist "everything we use and touch must be open source" approach. We practice open development, and the four opens are an important part of OpenStack culture, but branding all proprietary software as evil is not part of that culture. We support integrations with Hyper-V, VMware and more proprietary storage and network backends than I can count. OpenStack is an example of open core done right, where everything is open by default and then you can, as a third-party vendor, replace existing components with your closed implementation, but all features are provided in the core and in the open source release. Anyway, I think this is probably going a little off topic. > Same for Github > being (probably) the largest repository of free software: that's a huge > problem, as huge as the number of projects hosted. Lucky, many just > think of it as just free hosting and nothing more. > > Gitlab being open-core, as Jeremy pointed out, is also a problem (anyone > that knows about the beginning of OpenStack and Eucalyptus knows why > open core is problematic). > > > I did not propose that in any part. Launchpad is FLOSS and that is my proposal. > > The general idea behind my mail was to emphasise that Storyboard has > > great aspirations > > and assumptions but is far from delivering its full potential so > > should not be recommended without > > giving background and other possible solutions. > > Launchpad is hardly installable, and is tightly connected to > Canonical/Ubuntu. It is a very good thing that the OpenStack community > has made efforts to get out of it.
It is IMO counter-productive to push > projects to either go back to launchpad, or not migrate to Storyboard. Well, as someone that previously worked at Intel and now works at Red Hat, I have no problem using Launchpad; in fact I strongly prefer it over Storyboard and Bugzilla, which is what we use for many downstream products. If I was forced to use Storyboard for the projects I contribute to, I would not do any bug triage and I would avoid interacting with it for my own work as much as possible. The only way to improve Storyboard into a useful tool for my use cases would be to rewrite it to provide a workflow similar to Launchpad and Jira. Maybe not exclusively, but it really needs to have a set of dashboards and a non-search-driven workflow like Launchpad. > > The only viable solution is to contribute and make Storyboard better, or > switching to another existing free software. There are many out there > that could have done the job. Well, the programming language it is written in provides an impediment to that; if it was Python based or used a maintained framework then contributing would be a lot simpler. I think the original intent of the email was to stop recommending that new projects adopt a tool that has a different paradigm of interaction than most people are used to. The story/task approach works in projects where features and bugfixes can land at any time in the cycle or are more interrupt driven; it is a challenging user experience for a project like nova that has a more traditional propose (blueprint/bug), design resolution, then implement code workflow, with fixed milestones and different cut-offs for specs vs code, etc. > > Cheers, > > Thomas Goirand (zigo) >
From zigo at debian.org Mon Sep 14 11:31:40 2020 From: zigo at debian.org (Thomas Goirand) Date: Mon, 14 Sep 2020 13:31:40 +0200 Subject: Is Storyboard really the future? In-Reply-To: References: Message-ID: <6ccb748c-0028-933c-25d5-a9e31c47f32c@debian.org> On 9/14/20 10:17 AM, Gaël THEROND wrote: > Hi everyone, > > So, thomas, your message was rude and can hurt because Yocto didn’t > suggested to use those tools, he was answering you that he feel the pain > as everyone is suggesting those tool when you talk with people on IRC etc. Sorry if it was perceived as rude. Though IMO Yocto *was* suggesting these tools, at least that's my perception in his message. > Even if I do understand your point and know the importance of being > autonomous and do not rely on non-FLOSS software, the thruth being all > those discussions is the pain in the ass that it is to contribute to > Openstack projects compared with other Open source software based on > Github or Github like workflow. If it's harder to contribute to OpenStack, IMO, it's not because of the tooling (ie: gerrit + git-review), but because the bar for patch quality is set much higher. Otherwise, I did find the git-review workflow much nicer than the one of Gitlab / Github. > The opensource community and especially the Openstack one need to > understand that people really get a limited amount of time and so if you > want to attract more people your contribution process have to be > streamlined and on par with what most of us developers do experience on > everyday. I very much agree that getting a patch accepted isn't easy. I gave up on some patches because core reviewers were asking for too much work that I cannot unfortunately provide (I understand why they do that though). Though this never was because of the infrastructure, which I find much nicer than any other.
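For anyone following along who has not tried it, the whole Gerrit loop being praised here is only a handful of commands. A minimal sketch (the project and branch names are just examples):

    pip install git-review
    git clone https://opendev.org/openstack/nova && cd nova
    git review -s                # one-time setup of the gerrit remote, uses your OpenDev account
    git checkout -b fix-my-bug   # hypothetical topic branch
    # edit, then commit; a Closes-Bug: footer lets the bot update the tracker on merge
    git commit -a
    git review                   # pushes the change to review.opendev.org
    # to address review comments, amend the same commit and push again
    git commit -a --amend
    git review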
> We want a fully floss project to host it? Fine it’s perfectly valid > argument, then just reuse what’s already there with gitea and redirect > development effort of the abandonned software to the new platform in > order to support the missing part if ever required! This is IMO a much nicer idea than suggesting to go back to Launchpad, or stop migrating to Storyboard. Cheers, Thomas Goirand (zigo) From smooney at redhat.com Mon Sep 14 11:35:18 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 14 Sep 2020 12:35:18 +0100 Subject: Is Storyboard really the future? In-Reply-To: References: Message-ID: On Mon, 2020-09-14 at 10:17 +0200, Gaël THEROND wrote: > > > > ---------- Forwarded message ---------- > > From: Thomas Goirand > > To: openstack-discuss > > Cc: > > Bcc: > > Date: Mon, 14 Sep 2020 09:22:08 +0200 > > Subject: Re: Is Storyboard really the future? > > On 9/14/20 8:59 AM, Radosław Piliszek wrote: > > > > > On Sun, Sep 13, 2020 at 8:46 PM Thomas Goirand > > > > wrote: > > > > > > > > > > On 9/10/20 6:45 PM, Radosław Piliszek wrote: > > > > > I feel you. I could not so far convince anyone to support me to work > > > > > on it, mostly because > > > > > Jira/GitHub/GitLab/Launchpad exists. > > > > > Not to mention many small internal projects are happy with just > > > > Trello. :-) > > > > > > > > > > Did you just try to list all the non-free services you could in this > > > > thread? Seriously, don't you care a little bit? You probably don't > > > > realize it, but that's shocking, at least for me, and hopefully I'm not > > > > the only one. > > > > > > I feel offended by the accusations. > > > I *do* care about open source. > > > > > > Jeremy has already answered regarding GitLab and Launchpad. > > > Let's not forget GitHub actually *is* the largest, diverse open source > > > community, > > > even though the service itself is not. It hurts me as well so please > > > > don't just > > > > > randomly attack people mentioning non-free software. > > > It can support open source software as well. > > > > > > > > You're the one mentioning Jira, GitHub, Trello as possible solution to > > > > solve the fact that you don't like Storyboard. This is IMO very far from > > > > the spirit of free software. Sorry if you took it as a personal attack: > > > > there's nothing personal here, just a strong opposition to using this > > > > kind of services. > > > > > > > > The fact that many projects are using these non-free services to produce > > > > free software is actually a problem, not a solution. Same for Github > > > > being (probably) the largest repository of free software: that's a huge > > > > problem, as huge as the number of projects hosted. Lucky, many just > > > > think of it as just free hosting and nothing more. > > > > > > > > Gitlab being open-core, as Jeremy pointed out, is also a problem (anyone > > > > that knows about the beginning of OpenStack and Eucalyptus knows why > > > > open core is problematic). > > > > > > > > > I did not propose that in any part. Launchpad is FLOSS and that is my > > > > proposal. > > > > > The general idea behind my mail was to emphasise that Storyboard has > > > great aspirations > > > and assumptions but is far from delivering its full potential so > > > should not be recommended without > > > giving background and other possible solutions. > > > > > > > > Launchpad is hardly installable, and is tightly connected to > > > > Canonical/Ubuntu. It is a very good thing that the OpenStack community > > > > has made efforts to get out of it. 
It is IMO counter-productive to push > > > > projects to either go back to launchpad, or not migrate to Storyboard. > > > > > > > > The only viable solution is to contribute and make Storyboard better, or > > > > switching to another existing free software. There are many out there > > > > that could have done the job. > > > > > > > > Cheers, > > > > > > > > Thomas Goirand (zigo) > > > > > > > > > > > > _______________________________________________ > > > > openstack-discuss mailing list > > > > openstack-discuss at lists.openstack.org > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss > > > Hi everyone, > > So, thomas, your message was rude and can hurt because Yocto didn’t > suggested to use those tools, he was answering you that he feel the pain as > everyone is suggesting those tool when you talk with people on IRC etc. > > Even if I do understand your point and know the importance of being > autonomous and do not rely on non-FLOSS software, the thruth being all > those discussions is the pain in the ass that it is to contribute to > Openstack projects compared with other Open source software based on Github > or Github like workflow. actully i would stongly disaggree i find working with the github workflow to be much less compelling for code review then a gerrit based workflow. i used gitlab for deveploemnt before is started workign on openstack have found gerrit based workflow to be simpler and eaiser to have async conversations with then github. primarlly since comment live with the version on which they are posted. also when you rebase your fork it udates the code visabel in the pull requests meaning that the comments that are there from previous version fo the patch nolonger make sense sicne the code has now changes so looking back on why changes were made becomes much much harder. > > The opensource community and especially the Openstack one need to > understand that people really get a limited amount of time and so if you > want to attract more people your contribution process have to be > streamlined and on par with what most of us developers do experience on > everyday. The foundation made a first step toward that by migrating every > project on gitea, and honestly, I’m still amazed that while migrating those > projects it wasn’t decided to use the issues/projects feature of gitea. > There even is a cicd zuul plugin for gitea. i do like githubs issue tracking and if gitea has a similar set of functionality i do think that would be a good alternitive to launchpad that i could certenly work with happily. storyborad did not really meet that need which is why i still prefer lanuchpad however for larger projects like nova gitea still performs quite pooly so it woudl depend on how responsive it actully is in production. the simple lables/tags, milestones and issue + integration with commit message comments is one of the things i love about gitlab that i missed when i first started working with lanuchpad although closes-bug has been supported with a bot to close lanuchpad issue since i started mroe or less so that lessened the pain. so +1 on github sytle issue tracking but i woudl be -1 on moveing to a pull request flow instead of gerrit. > > As a community we propose things, but if the community don’t use them, it’s > because it’s not what they’re waiting for. We also need to step back from > time to time and admit that one software need to sunset and be migrated > elsewhere. well thats the thihng story board does work for a subset of the community. 
lanuchpad works for another. gitea might also work and i would be interested to see what that would looklike as i think issue tracking was onething github/gitlab got right. > > We want a fully floss project to host it? both launchpad and story borad are opensocue by the way. the pushback on launchpad primarly comes form the fact that is not hosted by the openstack foundation and as a result you need an external ubuntu one login. that said you use teh same login for gerrit so removeing the use of launchpad will not remove the need for you account unless we also added a new singel signon provider hosted by the foundation. gerrit certenly support other openid backends but its not configured for them on opendev or at least the openstack one is not. > Fine it’s perfectly valid > argument, then just reuse what’s already there with gitea and redirect > development effort of the abandonned software to the new platform in order > to support the missing part if ever required! > > > From geguileo at redhat.com Mon Sep 14 11:35:58 2020 From: geguileo at redhat.com (Gorka Eguileor) Date: Mon, 14 Sep 2020 13:35:58 +0200 Subject: [cinder] propose Lucio Seki for cinder core In-Reply-To: <2f42f092-5037-5f9d-48d6-cc52097f2479@gmail.com> References: <2f42f092-5037-5f9d-48d6-cc52097f2479@gmail.com> Message-ID: <20200914113558.42fhrcylf5aelo6s@localhost> On 10/09, Brian Rosmaita wrote: > Lucio Seki (lseki on IRC) has been very active this cycle doing reviews, > answering questions in IRC, and participating in the Cinder weekly meetings > and at the midcycles. He's been particularly thorough and helpful in his > reviews of backend drivers, and has also been helpful in giving pointers to > new driver maintainers who are setting up third party CI for their drivers. > Having Lucio as a core reviewer will help improve the team's review > bandwidth without sacrificing review quality. > > In the absence of objections, I'll add Lucio to the core team just before > the next Cinder team meeting (Wednesday, 16 September at 1400 UTC in > #openstack-meeting-alt). Please communicate any concerns to me before that > time. > > cheers, > brian > +1 Lucio will be a great addition!! From smooney at redhat.com Mon Sep 14 11:43:29 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 14 Sep 2020 12:43:29 +0100 Subject: Is Storyboard really the future? In-Reply-To: <6ccb748c-0028-933c-25d5-a9e31c47f32c@debian.org> References: <6ccb748c-0028-933c-25d5-a9e31c47f32c@debian.org> Message-ID: <7545c3e35ef649df2ab991e3f07fc978f44312e4.camel@redhat.com> On Mon, 2020-09-14 at 13:31 +0200, Thomas Goirand wrote: > On 9/14/20 10:17 AM, Gaël THEROND wrote: > > Hi everyone, > > > > So, thomas, your message was rude and can hurt because Yocto didn’t > > suggested to use those tools, he was answering you that he feel the pain > > as everyone is suggesting those tool when you talk with people on IRC etc. > > Sorry if it was perceived as rude. Though IMO Yocto *was* suggesting > these tools, at least that's my perception in his message. > > > Even if I do understand your point and know the importance of being > > autonomous and do not rely on non-FLOSS software, the thruth being all > > those discussions is the pain in the ass that it is to contribute to > > Openstack projects compared with other Open source software based on > > Github or Github like workflow. > > If it's harder to contribute to OpenStack, IMO, it's not because of the > tooling (ie: gerrit + git-review), but because the bar for patch quality > is set much higher. 
Otherwise, I did find the git-review workflow much > nicer than the one of Gitlab / Github. +1 both to the code quality bar, especially in more mature projects, and to the UX of git-review. I have shown it to friends that use Gerrit but don't work on OpenStack and they were really happy with it. > > > The opensource community and especially the Openstack one need to > > understand that people really get a limited amount of time and so if you > > want to attract more people your contribution process have to be > > streamlined and on par with what most of us developers do experience on > > everyday. > > I very much agree that getting a patch accepted isn't easy. I gave up > about some patches because core reviewer were asking for too much work > that I cannot unfortunately provide (I understand why they do that > though). Though this never was because of the infrastructure, which I > find much nicer than any other. Yeah, the new contributor bar for some projects is kind of like learning emacs: it can be pretty much vertical at times, but I do know that we try to help new and old contributors leap over that hurdle too. I think we do a better job of that than we used to, but it can be intimidating. For the first year and a half of working on OpenStack I did not understand the importance of IRC and keeping an eye on the mailing list, even if I did not post. As a result I tried to land things purely via Gerrit and, unsurprisingly, that did not go well until I started talking to people on IRC/email so I could socialise my proposals and get feedback more directly. > > > We want a fully floss project to host it? Fine it’s perfectly valid > > argument, then just reuse what’s already there with gitea and redirect > > development effort of the abandonned software to the new platform in > > order to support the missing part if ever required! > > This is IMO a much nicer idea than suggesting to go back to Launchpad, > or stop migrating to Storyboard. Yeah, I do think looking at Gitea has a lot of merit. It's even something I would be happy to bring up with the nova team, who previously had decided to never move from Launchpad. I was also opposed to moving nova or os-vif from Launchpad to Storyboard after trying it for a bit, but this would be something I would be tempted to at least try. > > Cheers, > > Thomas Goirand (zigo) >
From gael.therond at bitswalk.com Mon Sep 14 12:34:27 2020 From: gael.therond at bitswalk.com (Gaël THEROND) Date: Mon, 14 Sep 2020 14:34:27 +0200 Subject: Is Storyboard really the future? In-Reply-To: References: Message-ID: Hi Sean, thanks for your answer, that's interesting! My point wasn't about whether or not software X implements better features than Y, but rather about what contributors expect when participating in projects. They want a simple and straightforward workflow with a minimum of overhead compared to what they already have/know. *For instance, here are the pain points that I can list from a newcomer perspective (whatever their skill set, senior or junior):* * Having to install git-review. * Having to create another new account. * Having to read an extensive amount of not really clear and seamless documentation (gerrit/zuul workflow). * Having to find the community communication channels (IRC/List). * Having to install an IRC client. * Having to find out which issue manager you need to use (Launchpad/Storyboard/Trello/whatever).
*A more streamlined workflow should be something like:* * Use gitea for issue tracking, whatever your project is, people are used to the github/gitlab issue style nowadays (fork/branch/merge), visually, it's easier to understand. * Use OpenID from an already existing provider (Openstack (gitea?)/Github/Gitlab/Google/whatever), it eases the authentication process. * Get a clear and straightforward documentation (One where I don't need to open ten different tabs). * Get a more modern communication platform (Mattermost is floss, other non-floss alternatives such as Discord/Slack exist) that let you get audio/video streams, rich document sharing, meeting rooms etc. As usual, it's just my two cents on how we could ease things such as driving new contributors, ease our contributing routine, ease communication etc. Honestly, my very main pain point is about issue management both in terms of workflow and UI/UX. Le lun. 14 sept. 2020 à 13:35, Sean Mooney a écrit : > On Mon, 2020-09-14 at 10:17 +0200, Gaël THEROND wrote: > > > > > > ---------- Forwarded message ---------- > > > From: Thomas Goirand > > > To: openstack-discuss > > > Cc: > > > Bcc: > > > Date: Mon, 14 Sep 2020 09:22:08 +0200 > > > Subject: Re: Is Storyboard really the future? > > > On 9/14/20 8:59 AM, Radosław Piliszek wrote: > > > > > > > On Sun, Sep 13, 2020 at 8:46 PM Thomas Goirand > > > > > > wrote: > > > > > > > > > > > > > On 9/10/20 6:45 PM, Radosław Piliszek wrote: > > > > > > I feel you. I could not so far convince anyone to support me to > work > > > > > > on it, mostly because > > > > > > Jira/GitHub/GitLab/Launchpad exists. > > > > > > Not to mention many small internal projects are happy with just > > > > > > Trello. :-) > > > > > > > > > > > > > Did you just try to list all the non-free services you could in > this > > > > > thread? Seriously, don't you care a little bit? You probably don't > > > > > realize it, but that's shocking, at least for me, and hopefully > I'm not > > > > > the only one. > > > > > > > > I feel offended by the accusations. > > > > I *do* care about open source. > > > > > > > > Jeremy has already answered regarding GitLab and Launchpad. > > > > Let's not forget GitHub actually *is* the largest, diverse open > source > > > > community, > > > > even though the service itself is not. It hurts me as well so please > > > > > > don't just > > > > > > > randomly attack people mentioning non-free software. > > > > It can support open source software as well. > > > > > > > > > > > > You're the one mentioning Jira, GitHub, Trello as possible solution to > > > > > > solve the fact that you don't like Storyboard. This is IMO very far > from > > > > > > the spirit of free software. Sorry if you took it as a personal attack: > > > > > > there's nothing personal here, just a strong opposition to using this > > > > > > kind of services. > > > > > > > > > > > > The fact that many projects are using these non-free services to > produce > > > > > > free software is actually a problem, not a solution. Same for Github > > > > > > being (probably) the largest repository of free software: that's a huge > > > > > > problem, as huge as the number of projects hosted. Lucky, many just > > > > > > think of it as just free hosting and nothing more. > > > > > > > > > > > > Gitlab being open-core, as Jeremy pointed out, is also a problem > (anyone > > > > > > that knows about the beginning of OpenStack and Eucalyptus knows why > > > > > > open core is problematic). > > > > > > > > > > > > > I did not propose that in any part. 
Launchpad is FLOSS and that is my > > > > > > proposal. > > > > > > > The general idea behind my mail was to emphasise that Storyboard has > > > > great aspirations > > > > and assumptions but is far from delivering its full potential so > > > > should not be recommended without > > > > giving background and other possible solutions. > > > > > > > > > > > > Launchpad is hardly installable, and is tightly connected to > > > > > > Canonical/Ubuntu. It is a very good thing that the OpenStack community > > > > > > has made efforts to get out of it. It is IMO counter-productive to push > > > > > > projects to either go back to launchpad, or not migrate to Storyboard. > > > > > > > > > > > > The only viable solution is to contribute and make Storyboard better, > or > > > > > > switching to another existing free software. There are many out there > > > > > > that could have done the job. > > > > > > > > > > > > Cheers, > > > > > > > > > > > > Thomas Goirand (zigo) > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > > > openstack-discuss mailing list > > > > > > openstack-discuss at lists.openstack.org > > > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss > > > > > > Hi everyone, > > > > So, thomas, your message was rude and can hurt because Yocto didn’t > > suggested to use those tools, he was answering you that he feel the pain > as > > everyone is suggesting those tool when you talk with people on IRC etc. > > > > Even if I do understand your point and know the importance of being > > autonomous and do not rely on non-FLOSS software, the thruth being all > > those discussions is the pain in the ass that it is to contribute to > > Openstack projects compared with other Open source software based on > Github > > or Github like workflow. > actully i would stongly disaggree i find working with the github workflow > to be > much less compelling for code review then a gerrit based workflow. > i used gitlab for deveploemnt before is started workign on openstack > have found gerrit based workflow to be simpler and eaiser to have async > conversations > with then github. primarlly since comment live with the version on which > they are posted. > also when you rebase your fork it udates the code visabel in the pull > requests meaning > that the comments that are there from previous version fo the patch > nolonger make sense > sicne the code has now changes so looking back on why changes were made > becomes much much > harder. > > > > The opensource community and especially the Openstack one need to > > understand that people really get a limited amount of time and so if you > > want to attract more people your contribution process have to be > > streamlined and on par with what most of us developers do experience on > > everyday. The foundation made a first step toward that by migrating every > > project on gitea, and honestly, I’m still amazed that while migrating > those > > projects it wasn’t decided to use the issues/projects feature of gitea. > > There even is a cicd zuul plugin for gitea. > i do like githubs issue tracking and if gitea has a similar set of > functionality > i do think that would be a good alternitive to launchpad that i could > certenly work with > happily. storyborad did not really meet that need which is why i still > prefer lanuchpad however > for larger projects like nova gitea still performs quite pooly so it woudl > depend on how responsive > it actully is in production. 
the simple lables/tags, milestones and issue > + integration with > commit message comments is one of the things i love about gitlab that i > missed when i first started > working with lanuchpad although closes-bug has been supported with a bot > to close lanuchpad issue > since i started mroe or less so that lessened the pain. > > so +1 on github sytle issue tracking but i woudl be -1 on moveing to a > pull request flow instead > of gerrit. > > > > As a community we propose things, but if the community don’t use them, > it’s > > because it’s not what they’re waiting for. We also need to step back from > > time to time and admit that one software need to sunset and be migrated > > elsewhere. > well thats the thihng story board does work for a subset of the community. > lanuchpad works for another. > gitea might also work and i would be interested to see what that would > looklike > as i think issue tracking was onething github/gitlab got right. > > > > We want a fully floss project to host it? > both launchpad and story borad are opensocue by the way. > the pushback on launchpad primarly comes form the fact that is not hosted > by the openstack foundation > and as a result you need an external ubuntu one login. that said you use > teh same login for gerrit > so removeing the use of launchpad will not remove the need for you > account unless we also added > a new singel signon provider hosted by the foundation. gerrit certenly > support other openid backends > but its not configured for them on opendev or at least the openstack one > is not. > > > Fine it’s perfectly valid > > argument, then just reuse what’s already there with gitea and redirect > > development effort of the abandonned software to the new platform in > order > > to support the missing part if ever required! > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Sep 14 13:14:56 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 14 Sep 2020 14:14:56 +0100 Subject: Is Storyboard really the future? In-Reply-To: <7545c3e35ef649df2ab991e3f07fc978f44312e4.camel@redhat.com> References: <6ccb748c-0028-933c-25d5-a9e31c47f32c@debian.org> <7545c3e35ef649df2ab991e3f07fc978f44312e4.camel@redhat.com> Message-ID: On Mon, 2020-09-14 at 12:43 +0100, Sean Mooney wrote: > On Mon, 2020-09-14 at 13:31 +0200, Thomas Goirand wrote: > > On 9/14/20 10:17 AM, Gaël THEROND wrote: > > > Hi everyone, > > > > > > So, thomas, your message was rude and can hurt because Yocto didn’t > > > suggested to use those tools, he was answering you that he feel the pain > > > as everyone is suggesting those tool when you talk with people on IRC etc. > > > > Sorry if it was perceived as rude. Though IMO Yocto *was* suggesting > > these tools, at least that's my perception in his message. > > > > > Even if I do understand your point and know the importance of being > > > autonomous and do not rely on non-FLOSS software, the thruth being all > > > those discussions is the pain in the ass that it is to contribute to > > > Openstack projects compared with other Open source software based on > > > Github or Github like workflow. > > > > If it's harder to contribute to OpenStack, IMO, it's not because of the > > tooling (ie: gerrit + git-review), but because the bar for patch quality > > is set much higher. Otherwise, I did find the git-review workflow much > > nicer than the one of Gitlab / Github. > > +1 both to the code quality bar esspically in more mature project and the ux > of git review. 
i have shown it to friend that use gerrit but dont work on openstack > and they were really happy with it. > > > > > The opensource community and especially the Openstack one need to > > > understand that people really get a limited amount of time and so if you > > > want to attract more people your contribution process have to be > > > streamlined and on par with what most of us developers do experience on > > > everyday. > > > > I very much agree that getting a patch accepted isn't easy. I gave up > > about some patches because core reviewer were asking for too much work > > that I cannot unfortunately provide (I understand why they do that > > though). Though this never was because of the infrastructure, which I > > find much nicer than any other. > > ya the new contributor bar for some porject is kindo like learning emacs > it can be pretty much vertical at times but i do know that we try to help > new and old contributors leap over that hurdel too. i think we do a better > job of that then we used too but it can be intimidating. for the first year > year and a hafl of working on openstack i did not understand the importance > of irc and keeping an eye on the mailing list even if i did not post. > > as a result i tried to land thing purely via gerrit and unsruprissingly > that did not go well until i started talking to people on irc/email so > i could socialise my proposals and get feedback more directly. > > > > > We want a fully floss project to host it? Fine it’s perfectly valid > > > argument, then just reuse what’s already there with gitea and redirect > > > development effort of the abandonned software to the new platform in > > > order to support the missing part if ever required! > > > > This is IMO a much nicer idea than suggesting to go back to Launchpad, > > or stop migrating to Storyboard. > > ya i do think looking at gitia has a lot of merit. > its even somthing i would be happy to bring up with the nova team who > previous had decided to never move from lanuchpad. i was also apposed > to moveing nova or os-vif from lanuchpad to storyborad after trying to use > it for a bit but this would be something i would be tempted to at least > try. hum one gap that could be a blocker for gitea is the lack fo confidential issue support https://docs.gitea.io/en-us/comparison/#issue-tracker unless https://github.com/go-gitea/gitea/issues/3217 is adressed we would not be able to use it for security bugs so that i think would be a blocker. it looks like its in flight https://github.com/go-gitea/gitea/pull/11099 https://github.com/php-tuf/php-tuf/issues/25 https://github.com/isaacs/github/issues/37 but that would be a requirement for any issue tracker we use so we can track security bugs privetly. > > > > Cheers, > > > > Thomas Goirand (zigo) > > > > From rfolco at redhat.com Mon Sep 14 13:14:55 2020 From: rfolco at redhat.com (Rafael Folco) Date: Mon, 14 Sep 2020 10:14:55 -0300 Subject: [tripleo] TripleO CI Summary: Unified Sprint 32 Message-ID: Greetings, The TripleO CI team has just completed **Unified Sprint 32** (Aug 21 thru Sep 10). The following is a summary of completed work during this sprint cycle: - Created new container build w/ ubi-8 and overcloud image jobs for rhos-16.2. - Created new rhos-16.2 integration and component pipelines with standalone job, OVB jobs, and standalone scenario jobs. - Added more jobs to the component and integration pipelines, like scenario12. 
- Changed promoter tests to cover TCIB/Kolla builds and container naming differences across branches (openstack-* vs distro-binary-*). - Continued the necessary changes to switch to the new configuration engine in promoter code. - Continued making improvements to the Tempest scenario manager. - Started design of upstream jobs to run w/ a parent that builds all packages and containers and children jobs that consume these artifacts. This will avoid the use of an upstream container registry. - Merged upstream parent/child jobs for centos-8 jobs running against openstack/tripleo-ci. https://review.opendev.org/#/c/747591/ - We are limiting where this new type of upstream job design is running atm to ensure quality and consistency. - More information can be found https://hackmd.io/ermQSlQ-Q-mDtZkNN2oihQ - Initial design for the dependency pipeline design to early detect breakages in the OS. - https://hackmd.io/I_CFKPXHTza-5i2kFDIs5Q#Dependency-Pipeline - Initial design for elastic-recheck service. - https://hackmd.io/HQ5hyGAOSuG44Le2x6YzUw - Ruck/Rover recorded notes [1]. The planned work for the next sprint extends the work started in the previous sprint and focuses on the following: - Continue adding jobs to RHOS 16.2 pipelines. - Apply new promoter configuration engine changes to consolidate all promoters into the same code. - Start a PoC implementation of the new dependency pipeline. - Start elastic-recheck containerization work. - Create upstream parent jobs to build containers for multiple jobs. - Continue the design and socialization of the openstack-tempest-skiplist project. The Ruck and Rover for this sprint are Bhagyashri Shewale (bhagyashris) and Rafael Folco (rfolco). Please direct questions or queries to them regarding CI status or issues in #tripleo, ideally to whomever has the ‘|ruck’ suffix on their nick. Ruck/rover notes to be tracked in hackmd [2]. Thanks, rfolco [1] https://hackmd.io/FUalpr55TJuy28QLp2tLng [2] https://hackmd.io/7Q0YO5JKS0agcf9qwoD4IQ -- Folco -------------- next part -------------- An HTML attachment was scrubbed... URL: From eharney at redhat.com Mon Sep 14 13:18:09 2020 From: eharney at redhat.com (Eric Harney) Date: Mon, 14 Sep 2020 09:18:09 -0400 Subject: [cinder] propose Lucio Seki for cinder core In-Reply-To: <2f42f092-5037-5f9d-48d6-cc52097f2479@gmail.com> References: <2f42f092-5037-5f9d-48d6-cc52097f2479@gmail.com> Message-ID: <736ee286-4065-8fcc-0a79-3f80a2c28ea4@redhat.com> On 9/10/20 9:51 PM, Brian Rosmaita wrote: > Lucio Seki (lseki on IRC) has been very active this cycle doing reviews, > answering questions in IRC, and participating in the Cinder weekly > meetings and at the midcycles.  He's been particularly thorough and > helpful in his reviews of backend drivers, and has also been helpful in > giving pointers to new driver maintainers who are setting up third party > CI for their drivers.  Having Lucio as a core reviewer will help improve > the team's review bandwidth without sacrificing review quality. > > In the absence of objections, I'll add Lucio to the core team just > before the next Cinder team meeting (Wednesday, 16 September at 1400 UTC > in #openstack-meeting-alt).  Please communicate any concerns to me > before that time. > > cheers, > brian > +1 from me, thanks for the contributions so far, Lucio! 
From rosmaita.fossdev at gmail.com Mon Sep 14 14:25:07 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 14 Sep 2020 10:25:07 -0400 Subject: [cinder] need comments TODAY about DB handling for default_types Message-ID: There's a discussion going on about https://review.opendev.org/#/c/737707/23 , namely about how to handle the new default_types table. My view is that since it simply records a relation between a project and a volume_type, it's not the kind of thing that we need to keep a history of, and entries could be hard-deleted. The current patch keeps a history by introducing an additional is_active column. The additional complexity is worth it if we want to keep a history of this relation, but is not if we don't. Please leave your thoughts on the patch as soon as possible (i.e., in the next few hours) so Rajat can complete this feature. thanks, brian From smooney at redhat.com Mon Sep 14 14:30:17 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 14 Sep 2020 15:30:17 +0100 Subject: [cinder] need comments TODAY about DB handling for default_types In-Reply-To: References: Message-ID: On Mon, 2020-09-14 at 10:25 -0400, Brian Rosmaita wrote: > There's a discussion going on about > https://review.opendev.org/#/c/737707/23 , namely about how to handle > the new default_types table. My view is that since it simply records a > relation between a project and a volume_type, it's not the kind of thing > that we need to keep a history of, and entries could be hard-deleted. for what its worth soft-delete as a feature has caused many operational issue in nova over time so we have offten specualted that we might someday remove it. we have not added soft delete capablieies to new db tables in recent times so its proably a good idea to avoid defaulting to adding soft-delete support for new tables in cinder too. im not sure if you have similary had issue with soft delete in the past but i largely see the feature as tech debth in the code base that really does not fit with the cloud model. > > The current patch keeps a history by introducing an additional is_active > column. The additional complexity is worth it if we want to keep a > history of this relation, but is not if we don't. > > Please leave your thoughts on the patch as soon as possible (i.e., in > the next few hours) so Rajat can complete this feature. > > > thanks, > brian > From fungi at yuggoth.org Mon Sep 14 14:44:59 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 14 Sep 2020 14:44:59 +0000 Subject: Is Storyboard really the future? In-Reply-To: References: Message-ID: <20200914144459.tz6hpst3u44ceand@yuggoth.org> On 2020-09-14 10:17:32 +0200 (+0200), Gaël THEROND wrote: [...] > The foundation made a first step toward that by migrating every > project on gitea, and honestly, I’m still amazed that while > migrating those projects it wasn’t decided to use the > issues/projects feature of gitea. Credit where credit is due, this was the work of the OpenStack Infrastructure Team (and later OpenDev) sysadmins and contributors, not anything driven by or even recommended by the OSF. If anything, the bulk of the work there was contributed by Red Hat employees. > There even is a cicd zuul plugin for gitea. [...] Neat! Where did you find that? I don't think the Zuul contributors are aware it even exists (at least I hadn't heard about it until now). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From juliaashleykreger at gmail.com Mon Sep 14 14:49:16 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 14 Sep 2020 07:49:16 -0700 Subject: REMINDER: 2020 Virtual Summit: Forum Submissions Now Accepted In-Reply-To: <20200914074606.rrjdkf4bz5twsgzm@skaplons-mac> References: <20200914074606.rrjdkf4bz5twsgzm@skaplons-mac> Message-ID: Curious about this as well since Ironic barely had meeting quorum last week due to the holiday in the states and this is on our meeting agenda for today. On Mon, Sep 14, 2020 at 12:48 AM Slawek Kaplonski wrote: > > Hi, > > Is deadline for proposing forum topics already reached? I'm trying to propose > something now and on https://cfp.openstack.org/app/presentations I see only info > that "Submission is closed". > > On Wed, Sep 09, 2020 at 05:57:00PM -0500, Jimmy McArthur wrote: > > Hello Everyone! > > > > We are now accepting Forum [1] submissions for the 2020 Virtual Open Infrastructure Summit [2]. Please submit your ideas through the Summit CFP tool [3] through September 14th. Don't forget to put your brainstorming etherpad up on the Virtual Forum page [4]. > > > > This is not a classic conference track with speakers and presentations. OSF community members (participants in development teams, operators, working groups, SIGs, and other interested individuals) discuss the topics they want to cover and get alignment on and we welcome your participation. The Forum is your opportunity to help shape the development of future project releases. More information about the Forum [1]. > > > > The timeline for submissions is as follows: > > > > Aug 31st | Formal topic submission tool opens: https://cfp.openstack.org. > > Sep 14th | Deadline for proposing Forum topics. Scheduling committee meeting to make draft agenda. > > Sep 21st | Draft Forum schedule published. Crowd sourced session conflict detection. Forum promotion begins. > > Sept 28th | Forum schedule final > > Oct 19th | Forum begins! > > > > If you have questions or concerns, please reach out to speakersupport at openstack.org (mailto:speakersupport at openstack.org). > > > > Cheers, > > Jimmy > > > > [1] https://wiki.openstack.org/wiki/Forum > > [2] https://www.openstack.org/summit/2020/ > > [3] https://cfp.openstack.org > > [4]https://wiki.openstack.org/wiki/Forum/Virtual2020 > > -- > Slawek Kaplonski > Senior software engineer > Red Hat > > From fungi at yuggoth.org Mon Sep 14 14:58:31 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 14 Sep 2020 14:58:31 +0000 Subject: Is Storyboard really the future? In-Reply-To: References: Message-ID: <20200914145830.rknjfesuuit5ij6n@yuggoth.org> On 2020-09-14 12:35:18 +0100 (+0100), Sean Mooney wrote: [...] > for larger projects like nova gitea still performs quite > pooly [...] Is this still the case today? Browsing https://opendev.org/openstack/nova has been rather snappy for me since the performance fixes went in a while back. > gitea might also work and i would be interested to see what that > would looklike as i think issue tracking was onething > github/gitlab got right. [...] At the moment, Gitea is purely a read-only code browsing and Git server interface in the OpenDev Collaboratory. Any of its more interactive features have been disabled so that we can load-balance requests across multiple (currently eight) Gitea servers to handle the volume of code browsing and Git fetches we see at peak. 
These multiple Gitea services can't share a common database backend, and until that happens we can't really consider trying any of its features which require accounts/authentication or storing stateful data (issues, wiki, et cetera). > the pushback on launchpad primarly comes form the fact that is not > hosted by the openstack foundation and as a result you need an > external ubuntu one login. that said you use teh same login for > gerrit so removeing the use of launchpad will not remove the need > for you account unless we also added a new singel signon provider > hosted by the foundation. gerrit certenly support other openid > backends but its not configured for them on opendev or at least > the openstack one is not. [...] The plan is and has always been to put together a central authentication broker to act as an SSO for all of the services which make up the OpenDev Collaboratory, for a more consistent and flexible user experience. It wouldn't be managed by the OSF, it would just be part of the services we're managing in OpenDev: https://docs.opendev.org/opendev/infra-specs/latest/specs/central-auth.html If anyone's interested in helping us execute that plan, please let us know. The more, the merrier! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From babu89jucse at gmail.com Mon Sep 14 05:38:30 2020 From: babu89jucse at gmail.com (Subhajit Chatterjee) Date: Mon, 14 Sep 2020 11:08:30 +0530 Subject: "This system does not support SSSE3" while creating an instance in openstack Message-ID: While creating a VM instance in openstack (created using devstack), , I have got an issue “this system does not support SSSE3”. The cpu_mode in both /etc/nova/nova.conf and nova-cpu.conf was "none" so I tried changing it to use "host-model" or "host-passthrough". It didn't work on both the cases. I still get the same SSSE3 not supported error. I did grep in "/proc/cpuinfo", and found the ssse3 entry. Then I tried changing LIBVIRT_CPU_MODE to host-passthrough of "devstack/lib/nova" file and ran unstack.sh and stack.sh. The issue still persists. Please suggest how to resolve this issue? -- Name:- Subhajit Chatterjee DEPT.:- Computer Science and Engineering IIT Delhi Email:- babu89jucse at gmail.com Phone number:- +91 9958555224 -------------- next part -------------- An HTML attachment was scrubbed... 
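For the record, a minimal sketch of how the libvirt CPU mode is normally overridden in a devstack environment; the variable and unit names assume a recent devstack where the compute service reads nova-cpu.conf, so adjust to your setup:

    # local.conf
    [[post-config|$NOVA_CPU_CONF]]
    [libvirt]
    cpu_mode = host-passthrough

    # or edit /etc/nova/nova-cpu.conf directly, then restart the compute service:
    sudo systemctl restart devstack@n-cpu

    # the change only affects newly launched (or hard-rebooted) instances;
    # check what the guest actually received:
    sudo virsh dumpxml INSTANCE_NAME | grep -A2 '<cpu'

The "does not support SSSE3" message usually comes from software inside the guest seeing the default qemu64 CPU model, so make sure the instance is recreated after nova-compute has picked up the new cpu_mode.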
URL: From xin.zeng at intel.com Mon Sep 14 13:48:43 2020 From: xin.zeng at intel.com (Zeng, Xin) Date: Mon, 14 Sep 2020 13:48:43 +0000 Subject: device compatibility interface for live migration with assigned devices In-Reply-To: <20200911105155.184e32a0@w520.home> References: <20200825163925.1c19b0f0.cohuck@redhat.com> <20200826064117.GA22243@joy-OptiPlex-7040> <20200828154741.30cfc1a3.cohuck@redhat.com> <8f5345be73ebf4f8f7f51d6cdc9c2a0d8e0aa45e.camel@redhat.com> <20200831044344.GB13784@joy-OptiPlex-7040> <20200908164130.2fe0d106.cohuck@redhat.com> <20200909021308.GA1277@joy-OptiPlex-7040> <20200910143822.2071eca4.cohuck@redhat.com> <7cebcb6c8d1a1452b43e8358ee6ee18a150a0238.camel@redhat.com> <20200910120244.71e7b630@w520.home> <20200911005559.GA3932@joy-OptiPlex-7040> <20200911105155.184e32a0@w520.home> Message-ID: On Saturday, September 12, 2020 12:52 AM Alex Williamson wrote: > To: Zhao, Yan Y > Cc: Sean Mooney ; Cornelia Huck > ; Daniel P.Berrangé ; > kvm at vger.kernel.org; libvir-list at redhat.com; Jason Wang > ; qemu-devel at nongnu.org; > kwankhede at nvidia.com; eauger at redhat.com; Wang, Xin-ran ran.wang at intel.com>; corbet at lwn.net; openstack- > discuss at lists.openstack.org; Feng, Shaohe ; Tian, > Kevin ; Parav Pandit ; Ding, > Jian-feng ; dgilbert at redhat.com; > zhenyuw at linux.intel.com; Xu, Hejie ; > bao.yumeng at zte.com.cn; intel-gvt-dev at lists.freedesktop.org; > eskultet at redhat.com; Jiri Pirko ; dinechin at redhat.com; > devel at ovirt.org > Subject: Re: device compatibility interface for live migration with assigned > devices > > On Fri, 11 Sep 2020 08:56:00 +0800 > Yan Zhao wrote: > > > On Thu, Sep 10, 2020 at 12:02:44PM -0600, Alex Williamson wrote: > > > On Thu, 10 Sep 2020 13:50:11 +0100 > > > Sean Mooney wrote: > > > > > > > On Thu, 2020-09-10 at 14:38 +0200, Cornelia Huck wrote: > > > > > On Wed, 9 Sep 2020 10:13:09 +0800 > > > > > Yan Zhao wrote: > > > > > > > > > > > > > still, I'd like to put it more explicitly to make ensure it's not > missed: > > > > > > > > the reason we want to specify compatible_type as a trait and > check > > > > > > > > whether target compatible_type is the superset of source > > > > > > > > compatible_type is for the consideration of backward > compatibility. > > > > > > > > e.g. > > > > > > > > an old generation device may have a mdev type xxx-v4-yyy, > while a newer > > > > > > > > generation device may be of mdev type xxx-v5-yyy. > > > > > > > > with the compatible_type traits, the old generation device is still > > > > > > > > able to be regarded as compatible to newer generation device > even their > > > > > > > > mdev types are not equal. > > > > > > > > > > > > > > If you want to support migration from v4 to v5, can't the > (presumably > > > > > > > newer) driver that supports v5 simply register the v4 type as well, > so > > > > > > > that the mdev can be created as v4? (Just like QEMU versioned > machine > > > > > > > types work.) > > > > > > > > > > > > yes, it should work in some conditions. > > > > > > but it may not be that good in some cases when v5 and v4 in the > name string > > > > > > of mdev type identify hardware generation (e.g. v4 for gen8, and v5 > for > > > > > > gen9) > > > > > > > > > > > > e.g. > > > > > > (1). when src mdev type is v4 and target mdev type is v5 as > > > > > > software does not support it initially, and v4 and v5 identify > hardware > > > > > > differences. > > > > > > > > > > My first hunch here is: Don't introduce types that may be compatible > > > > > later. 
Either make them compatible, or make them distinct by design, > > > > > and possibly add a different, compatible type later. > > > > > > > > > > > then after software upgrade, v5 is now compatible to v4, should the > > > > > > software now downgrade mdev type from v5 to v4? > > > > > > not sure if moving hardware generation info into a separate > attribute > > > > > > from mdev type name is better. e.g. remove v4, v5 in mdev type, > while use > > > > > > compatible_pci_ids to identify compatibility. > > > > > > > > > > If the generations are compatible, don't mention it in the mdev type. > > > > > If they aren't, use distinct types, so that management software > doesn't > > > > > have to guess. At least that would be my naive approach here. > > > > yep that is what i would prefer to see too. > > > > > > > > > > > > > > > > > (2) name string of mdev type is composed by "driver_name + > type_name". > > > > > > in some devices, e.g. qat, different generations of devices are > binding to > > > > > > drivers of different names, e.g. "qat-v4", "qat-v5". > > > > > > then though type_name is equal, mdev type is not equal. e.g. > > > > > > "qat-v4-type1", "qat-v5-type1". > > > > > > > > > > I guess that shows a shortcoming of that "driver_name + type_name" > > > > > approach? Or maybe I'm just confused. > > > > yes i really dont like haveing the version in the mdev-type name > > > > i would stongly perfger just qat-type-1 wehere qat is just there as a way > of namespacing. > > > > although symmetric-cryto, asymmetric-cryto and compression woudl > be a better name then type-1, type-2, type-3 if > > > > that is what they would end up mapping too. e.g. qat-compression or > qat-aes is a much better name then type-1 > > > > higher layers of software are unlikely to parse the mdev names but as a > human looking at them its much eaiser to > > > > understand if the names are meaningful. the qat prefix i think is > important however to make sure that your mdev-types > > > > dont colide with other vendeors mdev types. so i woudl encurage all > vendors to prefix there mdev types with etiher the > > > > device name or the vendor. > > > > > > +1 to all this, the mdev type is meant to indicate a software > > > compatible interface, if different hardware versions can be software > > > compatible, then don't make the job of finding a compatible device > > > harder. The full type is a combination of the vendor driver name plus > > > the vendor provided type name specifically in order to provide a type > > > namespace per vendor driver. That's done at the mdev core level. > > > Thanks, > > > > hi Alex, > > got it. so do you suggest that vendors use consistent driver name over > > generations of devices? > > for qat, they create different modules for each generation. This > > practice is not good if they want to support migration between devices > > of different generations, right? > > > > and can I understand that we don't want support of migration between > > different mdev types even in future ? > > You need to balance your requirements here. If you're creating > different drivers per generation, that suggests different device APIs, > which is a legitimate use case for different mdev types. However if > you're expecting migration compatibility, that must be seamless to the > guest, therefore the device API must be identical. That suggests that > migration between different types doesn't make much sense. 
If a new > generation device wants to expose a new mdev type with new features or > device API, yet also support migration with an older mdev type, why > wouldn't it simply expose both the old and the new type? I think all of these make sense, and I am assuming it's also reasonable and common that each generation of device has a separate device driver module. On the other hand, please be aware that, the mdev type is consisted of the driver name of the mdev's parent device and the name of a mdev type which the device driver specifies. If a new generation device driver wants to expose an old mdev type, it has to register the same driver name as the old one so that the mdev type could be completely same. This doesn't make sense as a) driver name usually is unique for a device driver module. b) If a system has both these two generation devices, once one generation device driver is loaded, the other is not allowed to be loaded due to the same driver name. So to allow a new generation device to simply expose the old mdev type for compatibility like you proposed, is it possible to create the mdev type by another approach, e.g. device driver creates its own namespace for the mdev type instead of mdev's parent device driver name being used currently? Thanks, Xin > It seems much more supportable to simply instantiate an instance of the older type > than to create an instance of the new type, which by the contents of > the migration stream is configured to behave as the older type. The > latter sounds very difficult to test. > > A challenge when we think about migration between different types, > particularly across different vendor drivers, is that the migration > stream is opaque, it's device and vendor specific. Therefore it's not > only difficult for userspace to understand the compatibility matrix, but > also to actually support it in software, maintaining version and bug > compatibility across different drivers. It's clearly much, much easier > when the same code base (and thus the same mdev type) is producing and > consuming the migration data. 
Thanks, > > Alex From alex.williamson at redhat.com Mon Sep 14 14:44:49 2020 From: alex.williamson at redhat.com (Alex Williamson) Date: Mon, 14 Sep 2020 08:44:49 -0600 Subject: device compatibility interface for live migration with assigned devices In-Reply-To: References: <20200825163925.1c19b0f0.cohuck@redhat.com> <20200826064117.GA22243@joy-OptiPlex-7040> <20200828154741.30cfc1a3.cohuck@redhat.com> <8f5345be73ebf4f8f7f51d6cdc9c2a0d8e0aa45e.camel@redhat.com> <20200831044344.GB13784@joy-OptiPlex-7040> <20200908164130.2fe0d106.cohuck@redhat.com> <20200909021308.GA1277@joy-OptiPlex-7040> <20200910143822.2071eca4.cohuck@redhat.com> <7cebcb6c8d1a1452b43e8358ee6ee18a150a0238.camel@redhat.com> <20200910120244.71e7b630@w520.home> <20200911005559.GA3932@joy-OptiPlex-7040> <20200911105155.184e32a0@w520.home> Message-ID: <20200914084449.0182e8a9@x1.home> On Mon, 14 Sep 2020 13:48:43 +0000 "Zeng, Xin" wrote: > On Saturday, September 12, 2020 12:52 AM > Alex Williamson wrote: > > To: Zhao, Yan Y > > Cc: Sean Mooney ; Cornelia Huck > > ; Daniel P.Berrangé ; > > kvm at vger.kernel.org; libvir-list at redhat.com; Jason Wang > > ; qemu-devel at nongnu.org; > > kwankhede at nvidia.com; eauger at redhat.com; Wang, Xin-ran > ran.wang at intel.com>; corbet at lwn.net; openstack- > > discuss at lists.openstack.org; Feng, Shaohe ; Tian, > > Kevin ; Parav Pandit ; Ding, > > Jian-feng ; dgilbert at redhat.com; > > zhenyuw at linux.intel.com; Xu, Hejie ; > > bao.yumeng at zte.com.cn; intel-gvt-dev at lists.freedesktop.org; > > eskultet at redhat.com; Jiri Pirko ; dinechin at redhat.com; > > devel at ovirt.org > > Subject: Re: device compatibility interface for live migration with assigned > > devices > > > > On Fri, 11 Sep 2020 08:56:00 +0800 > > Yan Zhao wrote: > > > > > On Thu, Sep 10, 2020 at 12:02:44PM -0600, Alex Williamson wrote: > > > > On Thu, 10 Sep 2020 13:50:11 +0100 > > > > Sean Mooney wrote: > > > > > > > > > On Thu, 2020-09-10 at 14:38 +0200, Cornelia Huck wrote: > > > > > > On Wed, 9 Sep 2020 10:13:09 +0800 > > > > > > Yan Zhao wrote: > > > > > > > > > > > > > > > still, I'd like to put it more explicitly to make ensure it's not > > missed: > > > > > > > > > the reason we want to specify compatible_type as a trait and > > check > > > > > > > > > whether target compatible_type is the superset of source > > > > > > > > > compatible_type is for the consideration of backward > > compatibility. > > > > > > > > > e.g. > > > > > > > > > an old generation device may have a mdev type xxx-v4-yyy, > > while a newer > > > > > > > > > generation device may be of mdev type xxx-v5-yyy. > > > > > > > > > with the compatible_type traits, the old generation device is still > > > > > > > > > able to be regarded as compatible to newer generation device > > even their > > > > > > > > > mdev types are not equal. > > > > > > > > > > > > > > > > If you want to support migration from v4 to v5, can't the > > (presumably > > > > > > > > newer) driver that supports v5 simply register the v4 type as well, > > so > > > > > > > > that the mdev can be created as v4? (Just like QEMU versioned > > machine > > > > > > > > types work.) > > > > > > > > > > > > > > yes, it should work in some conditions. > > > > > > > but it may not be that good in some cases when v5 and v4 in the > > name string > > > > > > > of mdev type identify hardware generation (e.g. v4 for gen8, and v5 > > for > > > > > > > gen9) > > > > > > > > > > > > > > e.g. > > > > > > > (1). 
when src mdev type is v4 and target mdev type is v5 as > > > > > > > software does not support it initially, and v4 and v5 identify > > hardware > > > > > > > differences. > > > > > > > > > > > > My first hunch here is: Don't introduce types that may be compatible > > > > > > later. Either make them compatible, or make them distinct by design, > > > > > > and possibly add a different, compatible type later. > > > > > > > > > > > > > then after software upgrade, v5 is now compatible to v4, should the > > > > > > > software now downgrade mdev type from v5 to v4? > > > > > > > not sure if moving hardware generation info into a separate > > attribute > > > > > > > from mdev type name is better. e.g. remove v4, v5 in mdev type, > > while use > > > > > > > compatible_pci_ids to identify compatibility. > > > > > > > > > > > > If the generations are compatible, don't mention it in the mdev type. > > > > > > If they aren't, use distinct types, so that management software > > doesn't > > > > > > have to guess. At least that would be my naive approach here. > > > > > yep that is what i would prefer to see too. > > > > > > > > > > > > > > > > > > > > (2) name string of mdev type is composed by "driver_name + > > type_name". > > > > > > > in some devices, e.g. qat, different generations of devices are > > binding to > > > > > > > drivers of different names, e.g. "qat-v4", "qat-v5". > > > > > > > then though type_name is equal, mdev type is not equal. e.g. > > > > > > > "qat-v4-type1", "qat-v5-type1". > > > > > > > > > > > > I guess that shows a shortcoming of that "driver_name + type_name" > > > > > > approach? Or maybe I'm just confused. > > > > > yes i really dont like haveing the version in the mdev-type name > > > > > i would stongly perfger just qat-type-1 wehere qat is just there as a way > > of namespacing. > > > > > although symmetric-cryto, asymmetric-cryto and compression woudl > > be a better name then type-1, type-2, type-3 if > > > > > that is what they would end up mapping too. e.g. qat-compression or > > qat-aes is a much better name then type-1 > > > > > higher layers of software are unlikely to parse the mdev names but as a > > human looking at them its much eaiser to > > > > > understand if the names are meaningful. the qat prefix i think is > > important however to make sure that your mdev-types > > > > > dont colide with other vendeors mdev types. so i woudl encurage all > > vendors to prefix there mdev types with etiher the > > > > > device name or the vendor. > > > > > > > > +1 to all this, the mdev type is meant to indicate a software > > > > compatible interface, if different hardware versions can be software > > > > compatible, then don't make the job of finding a compatible device > > > > harder. The full type is a combination of the vendor driver name plus > > > > the vendor provided type name specifically in order to provide a type > > > > namespace per vendor driver. That's done at the mdev core level. > > > > Thanks, > > > > > > hi Alex, > > > got it. so do you suggest that vendors use consistent driver name over > > > generations of devices? > > > for qat, they create different modules for each generation. This > > > practice is not good if they want to support migration between devices > > > of different generations, right? > > > > > > and can I understand that we don't want support of migration between > > > different mdev types even in future ? > > > > You need to balance your requirements here. 
If you're creating > > different drivers per generation, that suggests different device APIs, > > which is a legitimate use case for different mdev types. However if > > you're expecting migration compatibility, that must be seamless to the > > guest, therefore the device API must be identical. That suggests that > > migration between different types doesn't make much sense. If a new > > generation device wants to expose a new mdev type with new features or > > device API, yet also support migration with an older mdev type, why > > wouldn't it simply expose both the old and the new type? > > I think all of these make sense, and I am assuming it's also reasonable and > common that each generation of device has a separate device driver module. > On the other hand, please be aware that, the mdev type is consisted of the > driver name of the mdev's parent device and the name of a mdev type which > the device driver specifies. > If a new generation device driver wants to expose an old mdev type, it has to > register the same driver name as the old one so that the mdev type could > be completely same. This doesn't make sense as a) driver name usually is > unique for a device driver module. b) If a system has both these two > generation devices, once one generation device driver is loaded, the other > is not allowed to be loaded due to the same driver name. > So to allow a new generation device to simply expose the old mdev type for > compatibility like you proposed, is it possible to create the mdev type by > another approach, e.g. device driver creates its own namespace for the > mdev type instead of mdev's parent device driver name being used currently? TBH, I don't think that it's reasonable or common that different drivers are used for each generation of hardware. Drivers typically evolve to support new generations of hardware, often sharing significant code between generations. When we deal with mdev migration, we have an opaque data stream managed by the driver, our default assumption is therefore that the driver plays a significant role in the composition of that data stream. I'm not ruling out that we should support some form of compatibility between types, but in the described scenario it seems the development model of the vendor drivers is not conducive to the most obvious form of compatibility checking. Thanks, Alex From jimmy at openstack.org Mon Sep 14 15:11:38 2020 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 14 Sep 2020 10:11:38 -0500 Subject: REMINDER: 2020 Virtual Summit: Forum Submissions Now Accepted In-Reply-To: References: Message-ID: <9F45540A-C761-4E4E-B5EF-94DABCCBC883@getmailspring.com> Sorry - I thought I responded to the whole list, but I just responded to Lars :| I've opened this back up again. It shut off at "midnight" i/o 23:59. Should be available for submissions. I'll set it to stay open an extra 8 hours as well. So people should have through 8am, 9/15 (Pacific). Cheers, Jimmy On Sep 14 2020, at 9:49 am, Julia Kreger wrote: > Curious about this as well since Ironic barely had meeting quorum last > week due to the holiday in the states and this is on our meeting > agenda for today. > > On Mon, Sep 14, 2020 at 12:48 AM Slawek Kaplonski wrote: > > > > Hi, > > > > Is deadline for proposing forum topics already reached? I'm trying to propose > > something now and on https://cfp.openstack.org/app/presentations I see only info > > that "Submission is closed". > > > > On Wed, Sep 09, 2020 at 05:57:00PM -0500, Jimmy McArthur wrote: > > > Hello Everyone! 
> > > > > > We are now accepting Forum [1] submissions for the 2020 Virtual Open Infrastructure Summit [2]. Please submit your ideas through the Summit CFP tool [3] through September 14th. Don't forget to put your brainstorming etherpad up on the Virtual Forum page [4]. > > > > > > This is not a classic conference track with speakers and presentations. OSF community members (participants in development teams, operators, working groups, SIGs, and other interested individuals) discuss the topics they want to cover and get alignment on and we welcome your participation. The Forum is your opportunity to help shape the development of future project releases. More information about the Forum [1]. > > > > > > The timeline for submissions is as follows: > > > > > > Aug 31st | Formal topic submission tool opens: https://cfp.openstack.org. > > > Sep 14th | Deadline for proposing Forum topics. Scheduling committee meeting to make draft agenda. > > > Sep 21st | Draft Forum schedule published. Crowd sourced session conflict detection. Forum promotion begins. > > > Sept 28th | Forum schedule final > > > Oct 19th | Forum begins! > > > > > > If you have questions or concerns, please reach out to speakersupport at openstack.org (mailto:speakersupport at openstack.org). > > > > > > Cheers, > > > Jimmy > > > > > > [1] https://wiki.openstack.org/wiki/Forum > > > [2] https://www.openstack.org/summit/2020/ > > > [3] https://cfp.openstack.org > > > [4]https://wiki.openstack.org/wiki/Forum/Virtual2020 > > > > -- > > Slawek Kaplonski > > Senior software engineer > > Red Hat > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon Sep 14 15:27:21 2020 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 14 Sep 2020 17:27:21 +0200 Subject: memchached connections In-Reply-To: References: Message-ID: Hello, python-memcached badly handles connections during a flush on reconnect and so connections can grow up exponentially [1]. I don't know if it is the same issue that you faced but it could be a track to follow. On oslo.cache a fix has been submitted but it is not yet merged [2]. [1] https://bugs.launchpad.net/oslo.cache/+bug/1888394 [2] https://review.opendev.org/#/c/742193/ Le ven. 11 sept. 2020 à 23:29, Tony Liu a écrit : > Hi, > > Is there any guidance or experiences to estimate the number > of memcached connections? > > Here is memcached connection on one of the 3 controllers. > Connection number is the total established connections to > all 3 memcached nodes. > > Node 1: > 10 Keystone workers have 62 connections. > 11 Nova API workers have 37 connections. > 6 Neutron server works have 4304 connections. > 1 memcached has 4973 connections. > > Node 2: > 10 Keystone workers have 62 connections. > 11 Nova API workers have 30 connections. > 6 Neutron server works have 3703 connections. > 1 memcached has 4973 connections. > > Node 3: > 10 Keystone workers have 54 connections. > 11 Nova API workers have 15 connections. > 6 Neutron server works have 6541 connections. > 1 memcached has 4973 connections. > > Before I increase the connection limit for memcached, I'd > like to understand if all the above is expected? > > How Neutron server and memcached take so many connections? > > Any elaboration is appreciated. > > BTW, the problem leading me here is memcached connection timeout, > which results all services depending on memcached stop working > properly. > > > Thanks! 
> Tony > > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyliu0592 at hotmail.com Mon Sep 14 16:09:47 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Mon, 14 Sep 2020 16:09:47 +0000 Subject: memchached connections In-Reply-To: References: Message-ID: Radosław pointed another bug https://bugs.launchpad.net/keystonemiddleware/+bug/1883659 referring to the same fix https://review.opendev.org/#/c/742193/ Regarding to the fix, The comment says "This flag is off by default for backwards compatibility.". But I see this flag is on by default in current code. That's how it causes issues. This fix changes the default value from on to off. It does break backwards compatibility. To keep Keystone working as the old way, along with this fix, this flag has to be explicitly set to true in keystone.conf. For neutron-server and nova-api, it's good to leave this flag off by default. Am I correct? Thanks! Tony > -----Original Message----- > From: Herve Beraud > Sent: Monday, September 14, 2020 8:27 AM > To: Tony Liu > Cc: openstack-discuss > Subject: Re: memchached connections > > Hello, > > python-memcached badly handles connections during a flush on reconnect > and so connections can grow up exponentially [1]. > > > I don't know if it is the same issue that you faced but it could be a > track to follow. > > On oslo.cache a fix has been submitted but it is not yet merged [2]. > > > [1] https://bugs.launchpad.net/oslo.cache/+bug/1888394 > [2] https://review.opendev.org/#/c/742193/ > > Le ven. 11 sept. 2020 à 23:29, Tony Liu > a écrit : > > > Hi, > > Is there any guidance or experiences to estimate the number > of memcached connections? > > Here is memcached connection on one of the 3 controllers. > Connection number is the total established connections to > all 3 memcached nodes. > > Node 1: > 10 Keystone workers have 62 connections. > 11 Nova API workers have 37 connections. > 6 Neutron server works have 4304 connections. > 1 memcached has 4973 connections. > > Node 2: > 10 Keystone workers have 62 connections. > 11 Nova API workers have 30 connections. > 6 Neutron server works have 3703 connections. > 1 memcached has 4973 connections. > > Node 3: > 10 Keystone workers have 54 connections. > 11 Nova API workers have 15 connections. > 6 Neutron server works have 6541 connections. > 1 memcached has 4973 connections. > > Before I increase the connection limit for memcached, I'd > like to understand if all the above is expected? > > How Neutron server and memcached take so many connections? > > Any elaboration is appreciated. 
> > BTW, the problem leading me here is memcached connection timeout, > which results all services depending on memcached stop working > properly. > > > Thanks! > Tony > > > > > > > -- > > Hervé Beraud > Senior Software Engineer > > Red Hat - Openstack Oslo > irc: hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > From openstack at nemebean.com Mon Sep 14 16:14:14 2020 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 14 Sep 2020 11:14:14 -0500 Subject: [oslo] Proposing Lance Bragstad as oslo.cache core In-Reply-To: References: Message-ID: This is now done. Welcome to the oslo.cache team, Lance! On 8/13/20 10:06 AM, Moises Guimaraes de Medeiros wrote: > Hello everybody, > > It is my pleasure to propose Lance Bragstad (lbragstad) as a new member > of the oslo.core core team. > > Lance has been a big contributor to the project and is known as a > walking version of the Keystone documentation, which happens to be one > of the biggest consumers of oslo.cache. > > Obviously we think he'd make a good addition to the core team. If there > are no objections, I'll make that happen in a week. > > Thanks. > > -- > > Moisés Guimarães > > Software Engineer > > Red Hat > > > From openstack at nemebean.com Mon Sep 14 16:19:56 2020 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 14 Sep 2020 11:19:56 -0500 Subject: [oslo] PTG Planning Message-ID: <04041f82-b9ad-bb34-15a2-59a0f4f5a21c@nemebean.com> It's that time again. I've created an etherpad[0] for Oslo PTG planning. If you have anything to discuss at the PTG, please add it to the list. I've already requested a couple of hours of PTG time, but we can adjust that if we have more or less to talk about than normal. I don't have time/place details there yet, but it will be the same as last time: two hours starting at the regular meeting time. Thanks. -Ben 0: https://etherpad.opendev.org/p/oslo-wallaby-topics From fungi at yuggoth.org Mon Sep 14 16:39:49 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 14 Sep 2020 16:39:49 +0000 Subject: Is Storyboard really the future? In-Reply-To: References: <20200910154704.3erw242ynqldlq63@yuggoth.org> <32345cfd-86fd-b60e-ed3c-baf664aa4807@goirand.fr> Message-ID: <20200914163949.vio3c252vw2ghsrx@yuggoth.org> On 2020-09-14 11:47:59 +0100 (+0100), Sean Mooney wrote: [...] > it would be nice to have a terminal interface and git workflow for > this ( and everything, life in the terminal is less scary then on > the web :) ) [...] https://pypi.org/project/boartty/ Also, I agree. Using Git as a database (maybe like Gerrit does with its NoteDB) and base data exchange protocol is an intriguing idea, though makes me wonder how private stories would be handled if the idea is to be able to clone/fetch the stories and tasks. 
> well the programing lanugages that it is written in provides an > impediment to that if it was python based StoryBoard (the actual API service) is written entirely in Python. > or used a maintained framework then contibuting would be a lot > simpler. [...] The Web framework used for the StoryBoard Web Client is indeed showing its age. It was originally chosen to coincide with a proposed Horizon revamp years ago, but hasn't been updated (we'd love some help there if people familiar with AngularJS are interested in pitching in on it). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From hberaud at redhat.com Mon Sep 14 16:46:13 2020 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 14 Sep 2020 18:46:13 +0200 Subject: memchached connections In-Reply-To: References: Message-ID: Le lun. 14 sept. 2020 à 18:09, Tony Liu a écrit : > Radosław pointed another bug > https://bugs.launchpad.net/keystonemiddleware/+bug/1883659 > referring to the same fix > https://review.opendev.org/#/c/742193/ > > Regarding to the fix, The comment says "This flag is off by > default for backwards compatibility.". But I see this flag is > on by default in current code. That's how it causes issues. > This fix changes the default value from on to off. It does break > backwards compatibility. To keep Keystone working as the old way, > along with this fix, this flag has to be explicitly set to true > in keystone.conf. For neutron-server and nova-api, it's good to > leave this flag off by default. Am I correct? > > Long short story as far as I correctly remember this topic. Currently flush on reconnect is not an option and it is always triggered (in the corresponding scenario). If we decide to introduce this new option `memcache_pool_flush_on_reconnect` we need to set this option to `True` as the default value to keep the backward compat. If this option is set to `true` then flush on reconnect will be triggered all the time in the corresponding scenario. Use `True` as default value was my first choice for these changes, and I think we need to give prior to backward compat for the first time and in a second time start by deprecating this behavior and turn this option to `False` as the default value if it helps to fix things. Finally after some discussions `False` have been retained as default value (c.f comments on https://review.opendev.org/#/c/742193/) which mean that flush on reconnect will not be executed and in this case I think we can say that backward compat is broken as this is not the current behavior. AFAIK `flush_on_reconnect` have been added for Keystone and I think only Keystone really needs that but other people could confirm that. If we decide to continue with `False` as the default value then neutron-server and nova-api could leave this default value as I don't think we need that (c.f my previous line). Finally, it could be worth to deep dive in the python-memcached side which is where the root cause is (the exponential connections) and to see how to address that. Hope that helps you. > Thanks! > Tony > > -----Original Message----- > > From: Herve Beraud > > Sent: Monday, September 14, 2020 8:27 AM > > To: Tony Liu > > Cc: openstack-discuss > > Subject: Re: memchached connections > > > > Hello, > > > > python-memcached badly handles connections during a flush on reconnect > > and so connections can grow up exponentially [1]. 
> > > > > > I don't know if it is the same issue that you faced but it could be a > > track to follow. > > > > On oslo.cache a fix has been submitted but it is not yet merged [2]. > > > > > > [1] https://bugs.launchpad.net/oslo.cache/+bug/1888394 > > [2] https://review.opendev.org/#/c/742193/ > > > > Le ven. 11 sept. 2020 à 23:29, Tony Liu > > a écrit : > > > > > > Hi, > > > > Is there any guidance or experiences to estimate the number > > of memcached connections? > > > > Here is memcached connection on one of the 3 controllers. > > Connection number is the total established connections to > > all 3 memcached nodes. > > > > Node 1: > > 10 Keystone workers have 62 connections. > > 11 Nova API workers have 37 connections. > > 6 Neutron server works have 4304 connections. > > 1 memcached has 4973 connections. > > > > Node 2: > > 10 Keystone workers have 62 connections. > > 11 Nova API workers have 30 connections. > > 6 Neutron server works have 3703 connections. > > 1 memcached has 4973 connections. > > > > Node 3: > > 10 Keystone workers have 54 connections. > > 11 Nova API workers have 15 connections. > > 6 Neutron server works have 6541 connections. > > 1 memcached has 4973 connections. > > > > Before I increase the connection limit for memcached, I'd > > like to understand if all the above is expected? > > > > How Neutron server and memcached take so many connections? > > > > Any elaboration is appreciated. > > > > BTW, the problem leading me here is memcached connection timeout, > > which results all services depending on memcached stop working > > properly. > > > > > > Thanks! > > Tony > > > > > > > > > > > > > > -- > > > > Hervé Beraud > > Senior Software Engineer > > > > Red Hat - Openstack Oslo > > irc: hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML 
attachment was scrubbed... URL: From tonyliu0592 at hotmail.com Mon Sep 14 17:17:33 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Mon, 14 Sep 2020 17:17:33 +0000 Subject: memchached connections In-Reply-To: References: Message-ID: Thanks for clarifications! I am fine with the fix. My point is that, to keep Keystone work as the way it used to be, with this fix, flush_on_reconnect has to be explicitly set to true in keystone.conf. This needs to be taken care of by TripleO, Kolla Ansible, Juju, etc. Tony > -----Original Message----- > From: Herve Beraud > Sent: Monday, September 14, 2020 9:46 AM > To: Tony Liu > Cc: openstack-discuss > Subject: Re: memchached connections > > > > Le lun. 14 sept. 2020 à 18:09, Tony Liu > a écrit : > > > Radosław pointed another bug > https://bugs.launchpad.net/keystonemiddleware/+bug/1883659 > referring to the same fix > https://review.opendev.org/#/c/742193/ > > Regarding to the fix, The comment says "This flag is off by > default for backwards compatibility.". But I see this flag is > on by default in current code. That's how it causes issues. > This fix changes the default value from on to off. It does break > backwards compatibility. To keep Keystone working as the old way, > along with this fix, this flag has to be explicitly set to true > in keystone.conf. For neutron-server and nova-api, it's good to > leave this flag off by default. Am I correct? > > > > > Long short story as far as I correctly remember this topic. > > Currently flush on reconnect is not an option and it is always triggered > (in the corresponding scenario). > > If we decide to introduce this new option > `memcache_pool_flush_on_reconnect` we need to set this option to `True` > as the default value to keep the backward compat. > > If this option is set to `true` then flush on reconnect will be > triggered all the time in the corresponding scenario. > > Use `True` as default value was my first choice for these changes, and I > think we need to give prior to backward compat for the first time and in > a second time start by deprecating this behavior and turn this option to > `False` as the default value if it helps to fix things. > > Finally after some discussions `False` have been retained as default > value (c.f comments on https://review.opendev.org/#/c/742193/) which > mean that flush on reconnect will not be executed and in this case I > think we can say that backward compat is broken as this is not the > current behavior. > > AFAIK `flush_on_reconnect` have been added for Keystone and I think only > Keystone really needs that but other people could confirm that. > > If we decide to continue with `False` as the default value then neutron- > server and nova-api could leave this default value as I don't think we > need that (c.f my previous line). > > > Finally, it could be worth to deep dive in the python-memcached side > which is where the root cause is (the exponential connections) and to > see how to address that. > > Hope that helps you. > > > > Thanks! > Tony > > -----Original Message----- > > From: Herve Beraud > > > Sent: Monday, September 14, 2020 8:27 AM > > To: Tony Liu > > > Cc: openstack-discuss > > > Subject: Re: memchached connections > > > > Hello, > > > > python-memcached badly handles connections during a flush on > reconnect > > and so connections can grow up exponentially [1]. > > > > > > I don't know if it is the same issue that you faced but it could > be a > > track to follow. 
> > > > On oslo.cache a fix has been submitted but it is not yet merged > [2]. > > > > > > [1] https://bugs.launchpad.net/oslo.cache/+bug/1888394 > > [2] https://review.opendev.org/#/c/742193/ > > > > Le ven. 11 sept. 2020 à 23:29, Tony Liu > > > > a écrit : > > > > > > Hi, > > > > Is there any guidance or experiences to estimate the number > > of memcached connections? > > > > Here is memcached connection on one of the 3 controllers. > > Connection number is the total established connections to > > all 3 memcached nodes. > > > > Node 1: > > 10 Keystone workers have 62 connections. > > 11 Nova API workers have 37 connections. > > 6 Neutron server works have 4304 connections. > > 1 memcached has 4973 connections. > > > > Node 2: > > 10 Keystone workers have 62 connections. > > 11 Nova API workers have 30 connections. > > 6 Neutron server works have 3703 connections. > > 1 memcached has 4973 connections. > > > > Node 3: > > 10 Keystone workers have 54 connections. > > 11 Nova API workers have 15 connections. > > 6 Neutron server works have 6541 connections. > > 1 memcached has 4973 connections. > > > > Before I increase the connection limit for memcached, I'd > > like to understand if all the above is expected? > > > > How Neutron server and memcached take so many connections? > > > > Any elaboration is appreciated. > > > > BTW, the problem leading me here is memcached connection > timeout, > > which results all services depending on memcached stop > working > > properly. > > > > > > Thanks! > > Tony > > > > > > > > > > > > > > -- > > > > Hervé Beraud > > Senior Software Engineer > > > > Red Hat - Openstack Oslo > > irc: hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > > > > > > > -- > > Hervé Beraud > Senior Software Engineer > > Red Hat - Openstack Oslo > irc: hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > From kennelson11 at gmail.com Mon Sep 14 17:21:14 2020 From: kennelson11 at gmail.com (Kendall 
Nelson) Date: Mon, 14 Sep 2020 10:21:14 -0700 Subject: [release] [heat] [karbor] [patrole] [requirements] [swift] [tempest] Cycle with Intermediary Deliverables Needing Releases? Message-ID: Hello! Quick reminder that for deliverables following the cycle-with-intermediary model, the release team will use the latest victoria release available on release week. The following deliverables have done a victoria release, but it was not refreshed in the last two months: - heat-agents - karbor-dashboard - karbor - patrole - requirements - swift - tempest You should consider making a new one very soon, so that we don't use an outdated version for the final release. -Kendall Nelson (diablo_rojo) & the Release Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon Sep 14 17:43:29 2020 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 14 Sep 2020 19:43:29 +0200 Subject: memchached connections In-Reply-To: References: Message-ID: Feel free to leave comments on the review. Le lun. 14 sept. 2020 à 19:17, Tony Liu a écrit : > Thanks for clarifications! > I am fine with the fix. My point is that, to keep Keystone work > as the way it used to be, with this fix, flush_on_reconnect has > to be explicitly set to true in keystone.conf. This needs to be > taken care of by TripleO, Kolla Ansible, Juju, etc. > > Tony > > -----Original Message----- > > From: Herve Beraud > > Sent: Monday, September 14, 2020 9:46 AM > > To: Tony Liu > > Cc: openstack-discuss > > Subject: Re: memchached connections > > > > > > > > Le lun. 14 sept. 2020 à 18:09, Tony Liu > > a écrit : > > > > > > Radosław pointed another bug > > https://bugs.launchpad.net/keystonemiddleware/+bug/1883659 > > referring to the same fix > > https://review.opendev.org/#/c/742193/ > > > > Regarding to the fix, The comment says "This flag is off by > > default for backwards compatibility.". But I see this flag is > > on by default in current code. That's how it causes issues. > > This fix changes the default value from on to off. It does break > > backwards compatibility. To keep Keystone working as the old way, > > along with this fix, this flag has to be explicitly set to true > > in keystone.conf. For neutron-server and nova-api, it's good to > > leave this flag off by default. Am I correct? > > > > > > > > > > Long short story as far as I correctly remember this topic. > > > > Currently flush on reconnect is not an option and it is always triggered > > (in the corresponding scenario). > > > > If we decide to introduce this new option > > `memcache_pool_flush_on_reconnect` we need to set this option to `True` > > as the default value to keep the backward compat. > > > > If this option is set to `true` then flush on reconnect will be > > triggered all the time in the corresponding scenario. > > > > Use `True` as default value was my first choice for these changes, and I > > think we need to give prior to backward compat for the first time and in > > a second time start by deprecating this behavior and turn this option to > > `False` as the default value if it helps to fix things. > > > > Finally after some discussions `False` have been retained as default > > value (c.f comments on https://review.opendev.org/#/c/742193/) which > > mean that flush on reconnect will not be executed and in this case I > > think we can say that backward compat is broken as this is not the > > current behavior. 
> > > > AFAIK `flush_on_reconnect` have been added for Keystone and I think only > > Keystone really needs that but other people could confirm that. > > > > If we decide to continue with `False` as the default value then neutron- > > server and nova-api could leave this default value as I don't think we > > need that (c.f my previous line). > > > > > > Finally, it could be worth to deep dive in the python-memcached side > > which is where the root cause is (the exponential connections) and to > > see how to address that. > > > > Hope that helps you. > > > > > > > > Thanks! > > Tony > > > -----Original Message----- > > > From: Herve Beraud > > > > > Sent: Monday, September 14, 2020 8:27 AM > > > To: Tony Liu > > > > > Cc: openstack-discuss > > > > > Subject: Re: memchached connections > > > > > > Hello, > > > > > > python-memcached badly handles connections during a flush on > > reconnect > > > and so connections can grow up exponentially [1]. > > > > > > > > > I don't know if it is the same issue that you faced but it could > > be a > > > track to follow. > > > > > > On oslo.cache a fix has been submitted but it is not yet merged > > [2]. > > > > > > > > > [1] https://bugs.launchpad.net/oslo.cache/+bug/1888394 > > > [2] https://review.opendev.org/#/c/742193/ > > > > > > Le ven. 11 sept. 2020 à 23:29, Tony Liu > > > > > > > a écrit : > > > > > > > > > Hi, > > > > > > Is there any guidance or experiences to estimate the number > > > of memcached connections? > > > > > > Here is memcached connection on one of the 3 controllers. > > > Connection number is the total established connections to > > > all 3 memcached nodes. > > > > > > Node 1: > > > 10 Keystone workers have 62 connections. > > > 11 Nova API workers have 37 connections. > > > 6 Neutron server works have 4304 connections. > > > 1 memcached has 4973 connections. > > > > > > Node 2: > > > 10 Keystone workers have 62 connections. > > > 11 Nova API workers have 30 connections. > > > 6 Neutron server works have 3703 connections. > > > 1 memcached has 4973 connections. > > > > > > Node 3: > > > 10 Keystone workers have 54 connections. > > > 11 Nova API workers have 15 connections. > > > 6 Neutron server works have 6541 connections. > > > 1 memcached has 4973 connections. > > > > > > Before I increase the connection limit for memcached, I'd > > > like to understand if all the above is expected? > > > > > > How Neutron server and memcached take so many connections? > > > > > > Any elaboration is appreciated. > > > > > > BTW, the problem leading me here is memcached connection > > timeout, > > > which results all services depending on memcached stop > > working > > > properly. > > > > > > > > > Thanks! 
> > > Tony > > > > > > > > > > > > > > > > > > > > > -- > > > > > > Hervé Beraud > > > Senior Software Engineer > > > > > > Red Hat - Openstack Oslo > > > irc: hberaud > > > -----BEGIN PGP SIGNATURE----- > > > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > > v6rDpkeNksZ9fFSyoY2o > > > =ECSj > > > -----END PGP SIGNATURE----- > > > > > > > > > > > > > > > -- > > > > Hervé Beraud > > Senior Software Engineer > > > > Red Hat - Openstack Oslo > > irc: hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Mon Sep 14 18:00:10 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 14 Sep 2020 13:00:10 -0500 Subject: REMINDER: 2020 Virtual Summit: Forum Submissions Now Accepted In-Reply-To: <9F45540A-C761-4E4E-B5EF-94DABCCBC883@getmailspring.com> References: <9F45540A-C761-4E4E-B5EF-94DABCCBC883@getmailspring.com> Message-ID: <1748dc6324e.b31d18a914020.7562912794318102829@ghanshyammann.com> I cannot select the Tags (not edit field or no tags are shown to select) while submitting a forum session. Not sure if it is just me? -gmann ---- On Mon, 14 Sep 2020 10:11:38 -0500 Jimmy McArthur wrote ---- > Sorry - I thought I responded to the whole list, but I just responded to Lars :| > I've opened this back up again. It shut off at "midnight" i/o 23:59. Should be available for submissions. I'll set it to stay open an extra 8 hours as well. So people should have through 8am, 9/15 (Pacific). > Cheers,Jimmy > On Sep 14 2020, at 9:49 am, Julia Kreger wrote:Curious about this as well since Ironic barely had meeting quorum lastweek due to the holiday in the states and this is on our meetingagenda for today. > On Mon, Sep 14, 2020 at 12:48 AM Slawek Kaplonski wrote:>> Hi,>> Is deadline for proposing forum topics already reached? I'm trying to propose> something now and on https://cfp.openstack.org/app/presentations I see only info> that "Submission is closed".>> On Wed, Sep 09, 2020 at 05:57:00PM -0500, Jimmy McArthur wrote:> > Hello Everyone!> >> > We are now accepting Forum [1] submissions for the 2020 Virtual Open Infrastructure Summit [2]. Please submit your ideas through the Summit CFP tool [3] through September 14th. Don't forget to put your brainstorming etherpad up on the Virtual Forum page [4].> >> > This is not a classic conference track with speakers and presentations. OSF community members (participants in development teams, operators, working groups, SIGs, and other interested individuals) discuss the topics they want to cover and get alignment on and we welcome your participation. The Forum is your opportunity to help shape the development of future project releases. More information about the Forum [1].> >> > The timeline for submissions is as follows:> >> > Aug 31st | Formal topic submission tool opens: https://cfp.openstack.org.> > Sep 14th | Deadline for proposing Forum topics. Scheduling committee meeting to make draft agenda.> > Sep 21st | Draft Forum schedule published. Crowd sourced session conflict detection. Forum promotion begins.> > Sept 28th | Forum schedule final> > Oct 19th | Forum begins!> >> > If you have questions or concerns, please reach out to speakersupport at openstack.org (mailto:speakersupport at openstack.org).> >> > Cheers,> > Jimmy> >> > [1] https://wiki.openstack.org/wiki/Forum> > [2] https://www.openstack.org/summit/2020/> > [3] https://cfp.openstack.org> > [4]https://wiki.openstack.org/wiki/Forum/Virtual2020>> --> Slawek Kaplonski> Senior software engineer> Red Hat>> From jimmy at openstack.org Mon Sep 14 18:16:13 2020 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 14 Sep 2020 13:16:13 -0500 Subject: REMINDER: 2020 Virtual Summit: Forum Submissions Now Accepted In-Reply-To: <1748dc6324e.b31d18a914020.7562912794318102829@ghanshyammann.com> References: <1748dc6324e.b31d18a914020.7562912794318102829@ghanshyammann.com> Message-ID: <7D83E3D2-A06B-47EE-967D-2E7374677175@getmailspring.com> Hi Gmann, Don't worry about tags for Forum submissions. We'll tag them after the fact, when sessions are selected. 
Cheers, Jimmy On Sep 14 2020, at 1:00 pm, Ghanshyam Mann wrote: > I cannot select the Tags (not edit field or no tags are shown to select) while submitting a forum session. Not sure if it is just me? > > -gmann > > ---- On Mon, 14 Sep 2020 10:11:38 -0500 Jimmy McArthur wrote ---- > > Sorry - I thought I responded to the whole list, but I just responded to Lars :| > > I've opened this back up again. It shut off at "midnight" i/o 23:59. Should be available for submissions. I'll set it to stay open an extra 8 hours as well. So people should have through 8am, 9/15 (Pacific). > > Cheers,Jimmy > > On Sep 14 2020, at 9:49 am, Julia Kreger wrote:Curious about this as well since Ironic barely had meeting quorum lastweek due to the holiday in the states and this is on our meetingagenda for today. > > On Mon, Sep 14, 2020 at 12:48 AM Slawek Kaplonski wrote:>> Hi,>> Is deadline for proposing forum topics already reached? I'm trying to propose> something now and on https://cfp.openstack.org/app/presentations I see only info> that "Submission is closed".>> On Wed, Sep 09, 2020 at 05:57:00PM -0500, Jimmy McArthur wrote:> > Hello Everyone!> >> > We are now accepting Forum [1] submissions for the 2020 Virtual Open Infrastructure Summit [2]. Please submit your ideas through the Summit CFP tool [3] through September 14th. Don't forget to put your brainstorming etherpad up on the Virtual Forum page [4].> >> > This is not a classic conference track with speakers and presentations. OSF community members (participants in development teams, operators, working groups, SIGs, and other interested individuals) discuss the topics they want to cover and get alignment on and we welcome your participation. The Forum is your opportunity to help shape the development of future project releases. More information about the Forum [1].> >> > The timeline for submissions is as follows:> >> > Aug 31st | Formal topic submission tool opens: https://cfp.openstack.org.> > Sep 14th | Deadline for proposing Forum topics. Scheduling committee meeting to make draft agenda.> > Sep 21st | Draft Forum schedule published. Crowd sourced session conflict detection. Forum promotion begins.> > Sept 28th | Forum schedule final> > Oct 19th | Forum begins!> >> > If you have questions or concerns, please reach out to speakersupport at openstack.org (mailto:speakersupport at openstack.org).> >> > Cheers,> > Jimmy> >> > [1] https://wiki.openstack.org/wiki/Forum> > [2] https://www.openstack.org/summit/2020/> > [3] https://cfp.openstack.org> > [4]https://wiki.openstack.org/wiki/Forum/Virtual2020>> --> Slawek Kaplonski> Senior software engineer> Red Hat>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Mon Sep 14 21:11:17 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 14 Sep 2020 14:11:17 -0700 Subject: [TC] Seats Available in Upcoming Elections Message-ID: Hello :) Wanted to make you all aware of the conversation that happened in the TC channel today[1] which was a reminder of a previous conversation about reducing the size of the TC. It was concluded by the previous TC that we would gradually reduce from 13 seats to 9 over the course of two elections (dropping two seats each election). From that point on, we would continue with the normal cadence of cycling about half the seats each election ( 4 seats and 5 seats)[2]. So, this coming election 6 members will be up for reelection with 4 seats available. Hope this all makes sense! 
-Kendall Nelson (diablo_rojo) - Both a TC member up for re-election and a former election official :) [1] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/latest.log.html#t2020-09-14T18:18:57 [2] https://review.opendev.org/#/c/681266/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Mon Sep 14 22:08:27 2020 From: donny at fortnebula.com (Donny Davis) Date: Mon, 14 Sep 2020 18:08:27 -0400 Subject: WIndows 10 instance hostname not updating In-Reply-To: <1748045d1f2.b9ebd2757273.2608405934809994219@zohocorp.com> References: <1748045d1f2.b9ebd2757273.2608405934809994219@zohocorp.com> Message-ID: On Fri, Sep 11, 2020 at 11:09 PM its-openstack at zohocorp.com < its-openstack at zohocorp.com> wrote: > Dear openstack, > > I have installed openstack train branch, I am facing issue with windows > image. all windows 10 instance dosen't get its hostname updated from the > metadata, > but able to get the metadata(hostname) from inside the instance using > powershell. > ``` $ Invoke-WebRequest http://169.254.169.254/latest/meta-data/hostname > -UseBasicParsing ``` > > windows2016 instance no issue. > > using the stable cloudbase-init package for preparation of windows 10 > (tried bot in v1909 and v2004). windows2016 server dosen't have this > issue. if you would so kindly help us with this issue > > Regards > sysadmin > > > > It seems to me that your meta-data service is working properly then. Could this issue be related? https://github.com/cloudbase/cloudbase-init/issues/42 -- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. Duty First" -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Mon Sep 14 22:15:19 2020 From: donny at fortnebula.com (Donny Davis) Date: Mon, 14 Sep 2020 18:15:19 -0400 Subject: Ussuri CentOS 8 add mptsas driver to introspection initramfs In-Reply-To: <55c5b908-3d0e-4d92-8f8f-95443fbefb9f@me.com> References: <55c5b908-3d0e-4d92-8f8f-95443fbefb9f@me.com> Message-ID: On Fri, Sep 11, 2020 at 3:25 PM Oliver Weinmann wrote: > Hi, > > I already asked this question on serverfault. But I guess here is a better > place. > > I have a very ancient hardware with a MPTSAS controller. I use this for > TripleO deployment testing. With the release of Ussuri which is running > CentOS8, I can no longer provision my overcloud nodes as the MPTSAS driver > has been removed in CentOS8: > > > https://www.reddit.com/r/CentOS/comments/d93unk/centos8_and_removal_mpt2sas_dell_sas_drivers/ > > I managed to include the driver provided from ELrepo in the introspection > image but It is not loaded automatically: > > All commands are run as user "stack". > > Extract the introspection image: > > cd ~ > mkdir imagesnew > cd imagesnew > tar xvf ../ironic-python-agent.tar > mkdir ~/ipa-tmp > cd ~/ipa-tmp > /usr/lib/dracut/skipcpio ~/imagesnew/ironic-python-agent.initramfs | zcat > | cpio -ivd | pax -r > > Extract the contents of the mptsas driver rpm: > > rpm2cpio ~/kmod-mptsas-3.04.20-3.el8_2.elrepo.x86_64.rpm | pax -r > > Put the kernel module in the right places. To figure out where the module > has to reside I installed the rpm on a already deployed node and used find > to locate it. 
> > xz -c ./usr/lib/modules/4.18.0-193.el8.x86_64/extra/mptsas/mptsas.ko > > ./usr/lib/modules/4.18.0-193.6.3.el8_2.x86_64/kernel/drivers/message/fusion/mptsas.ko.xz > mkdir ./usr/lib/modules/4.18.0-193.6.3.el8_2.x86_64/weak-updates/mptsas > sudo ln -sf /lib/modules/4.18.0-193.el8.x86_64/extra/mptsas/mptsas.ko > lib/modules/4.18.0-193.6.3.el8_2.x86_64/weak-updates/mptsas.ko > sudo chown root . -R > find . 2>/dev/null | sudo cpio --quiet -c -o | gzip -8 > > ~/images/ironic-python-agent.initramfs > > Upload the new image > > cd ~/images > openstack overcloud image upload --update-existing --image-path > /home/stack/images/ > > Now when I start the introspection and ssh into the host I see no disks: > > [root at localhost ~]# fdisk -l > [root at localhost ~]# lsmod | grep mptsas > > Once i manually load the driver, I can see the disks: > > > [root at localhost ~]# modprobe mptsas > [root at localhost ~]# lsmod | grep mptsas > mptsas 69632 0 > mptscsih 45056 1 mptsas > mptbase 98304 2 mptsas,mptscsih > scsi_transport_sas 45056 1 mptsas > [root at localhost ~]# fdisk -l > Disk /dev/sda: 67.1 GiB, 71999422464 bytes, 140623872 sectors > Units: sectors of 1 * 512 = 512 bytes > Sector size (logical/physical): 512 bytes / 512 bytes > I/O size (minimum/optimal): 512 bytes / 512 bytes > > But how can I make it so that it will automatically load on boot? > > Best Regards, > > Oliver > I guess you could try using modules-load to load the module at boot. > sudo ln -sf /lib/modules/4.18.0-193.el8.x86_64/extra/mptsas/mptsas.ko lib/modules/4.18.0-193.6.3.el8_2.x86_64/weak-updates/mptsas.ko echo "mptsas" > ./etc/modules-load.d/mptsas.conf > sudo chown root . -R Also I would have a look see at these docs to build an image using ipa builder https://docs.openstack.org/ironic-python-agent-builder/latest/ -- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. Duty First" -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Mon Sep 14 22:17:29 2020 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 14 Sep 2020 17:17:29 -0500 Subject: memchached connections In-Reply-To: References: Message-ID: On 9/14/20 12:17 PM, Tony Liu wrote: > Thanks for clarifications! > I am fine with the fix. My point is that, to keep Keystone work > as the way it used to be, with this fix, flush_on_reconnect has > to be explicitly set to true in keystone.conf. This needs to be > taken care of by TripleO, Kolla Ansible, Juju, etc. This issue is why I've -1'd the patch. We need to be able to enable the behavior by default for Keystone, even if we don't for other projects. On the review I linked to an example of how we could do that. > > Tony >> -----Original Message----- >> From: Herve Beraud >> Sent: Monday, September 14, 2020 9:46 AM >> To: Tony Liu >> Cc: openstack-discuss >> Subject: Re: memchached connections >> >> >> >> Le lun. 14 sept. 2020 à 18:09, Tony Liu > > a écrit : >> >> >> Radosław pointed another bug >> https://bugs.launchpad.net/keystonemiddleware/+bug/1883659 >> referring to the same fix >> https://review.opendev.org/#/c/742193/ >> >> Regarding to the fix, The comment says "This flag is off by >> default for backwards compatibility.". But I see this flag is >> on by default in current code. That's how it causes issues. >> This fix changes the default value from on to off. It does break >> backwards compatibility. To keep Keystone working as the old way, >> along with this fix, this flag has to be explicitly set to true >> in keystone.conf. 
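For illustration, the override being described here would be a small keystone.conf change along these lines, assuming the option name `memcache_pool_flush_on_reconnect` proposed in the oslo.cache review linked above and the usual [cache] section; that option only exists once the patch merges, so treat the name as tentative:

[cache]
backend = oslo_cache.memcache_pool
memcache_pool_flush_on_reconnect = true

Deployment tooling (TripleO, Kolla Ansible, Juju, ...) would then only need to template this out for Keystone, while leaving the default in place for other services.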
For neutron-server and nova-api, it's good to >> leave this flag off by default. Am I correct? >> >> >> >> >> Long short story as far as I correctly remember this topic. >> >> Currently flush on reconnect is not an option and it is always triggered >> (in the corresponding scenario). >> >> If we decide to introduce this new option >> `memcache_pool_flush_on_reconnect` we need to set this option to `True` >> as the default value to keep the backward compat. >> >> If this option is set to `true` then flush on reconnect will be >> triggered all the time in the corresponding scenario. >> >> Use `True` as default value was my first choice for these changes, and I >> think we need to give prior to backward compat for the first time and in >> a second time start by deprecating this behavior and turn this option to >> `False` as the default value if it helps to fix things. >> >> Finally after some discussions `False` have been retained as default >> value (c.f comments on https://review.opendev.org/#/c/742193/) which >> mean that flush on reconnect will not be executed and in this case I >> think we can say that backward compat is broken as this is not the >> current behavior. >> >> AFAIK `flush_on_reconnect` have been added for Keystone and I think only >> Keystone really needs that but other people could confirm that. >> >> If we decide to continue with `False` as the default value then neutron- >> server and nova-api could leave this default value as I don't think we >> need that (c.f my previous line). >> >> >> Finally, it could be worth to deep dive in the python-memcached side >> which is where the root cause is (the exponential connections) and to >> see how to address that. >> >> Hope that helps you. >> >> >> >> Thanks! >> Tony >> > -----Original Message----- >> > From: Herve Beraud > > >> > Sent: Monday, September 14, 2020 8:27 AM >> > To: Tony Liu > > >> > Cc: openstack-discuss > > >> > Subject: Re: memchached connections >> > >> > Hello, >> > >> > python-memcached badly handles connections during a flush on >> reconnect >> > and so connections can grow up exponentially [1]. >> > >> > >> > I don't know if it is the same issue that you faced but it could >> be a >> > track to follow. >> > >> > On oslo.cache a fix has been submitted but it is not yet merged >> [2]. >> > >> > >> > [1] https://bugs.launchpad.net/oslo.cache/+bug/1888394 >> > [2] https://review.opendev.org/#/c/742193/ >> > >> > Le ven. 11 sept. 2020 à 23:29, Tony Liu > >> > > > > a écrit : >> > >> > >> > Hi, >> > >> > Is there any guidance or experiences to estimate the number >> > of memcached connections? >> > >> > Here is memcached connection on one of the 3 controllers. >> > Connection number is the total established connections to >> > all 3 memcached nodes. >> > >> > Node 1: >> > 10 Keystone workers have 62 connections. >> > 11 Nova API workers have 37 connections. >> > 6 Neutron server works have 4304 connections. >> > 1 memcached has 4973 connections. >> > >> > Node 2: >> > 10 Keystone workers have 62 connections. >> > 11 Nova API workers have 30 connections. >> > 6 Neutron server works have 3703 connections. >> > 1 memcached has 4973 connections. >> > >> > Node 3: >> > 10 Keystone workers have 54 connections. >> > 11 Nova API workers have 15 connections. >> > 6 Neutron server works have 6541 connections. >> > 1 memcached has 4973 connections. >> > >> > Before I increase the connection limit for memcached, I'd >> > like to understand if all the above is expected? 
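As a rough way to sanity-check numbers like these, assuming memcached listens on the default port 11211 and nc/ss are available on the controllers, you can compare what memcached itself reports with the sockets each service holds open, e.g.:

printf 'stats\nquit\n' | nc 127.0.0.1 11211 | grep -E 'curr_connections|connection_structures'
sudo ss -tnp 'dport = :11211' | grep -c neutron-server

The first line shows the connection count from memcached's side, the second counts established client sockets owned by neutron-server processes on that node.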
>> > >> > How Neutron server and memcached take so many connections? >> > >> > Any elaboration is appreciated. >> > >> > BTW, the problem leading me here is memcached connection >> timeout, >> > which results all services depending on memcached stop >> working >> > properly. >> > >> > >> > Thanks! >> > Tony >> > >> > >> > >> > >> > >> > >> > -- >> > >> > Hervé Beraud >> > Senior Software Engineer >> > >> > Red Hat - Openstack Oslo >> > irc: hberaud >> > -----BEGIN PGP SIGNATURE----- >> > >> > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> > v6rDpkeNksZ9fFSyoY2o >> > =ECSj >> > -----END PGP SIGNATURE----- >> > >> >> >> >> >> >> -- >> >> Hervé Beraud >> Senior Software Engineer >> >> Red Hat - Openstack Oslo >> irc: hberaud >> -----BEGIN PGP SIGNATURE----- >> >> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> v6rDpkeNksZ9fFSyoY2o >> =ECSj >> -----END PGP SIGNATURE----- >> > From samueldmq at gmail.com Mon Sep 14 23:33:42 2020 From: samueldmq at gmail.com (Samuel de Medeiros Queiroz) Date: Mon, 14 Sep 2020 20:33:42 -0300 Subject: [Outreachy] Call for mentors, projects by Sep 29 Message-ID: Hey Stackers, *TL;DR OpenStack is participating in the Outreachy internship program once again!* *Please submit projects as soon as possible, with final deadline being Sept. 29, 2020 at 4 pm UTC: https://www.outreachy.org/communities/cfp/openstack/ * Outreachy's goal is to support people from groups underrepresented in the technology industry. Interns will work remotely with mentors from our community. We are seeking mentors to propose projects that Outreachy interns can work on during their internship. If you want help crafting your project proposal, please contact me < samueldmq at gmail.com> or Mahati Chamarthy . Mentors should read the mentor FAQ: https://www.outreachy.org/mentor/mentor-faq Full details about the Outreachy program and the internship timeline can be found on the Call for Participation page on the Outreachy website: https://www.outreachy.org/communities/cfp/ Thank you, Samuel Queiroz -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gouthampravi at gmail.com Tue Sep 15 04:50:34 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Mon, 14 Sep 2020 21:50:34 -0700 Subject: [manila][ptg] Wallaby PTG Planning Message-ID: Hello Zorillas and Interested Stackers, As you're aware, the virtual PTG for the Wallaby release cycle is between October 26-30, 2020. If you haven't registered yet, you must do so as soon as possible! [1]. We've signed up for some slots on the PTG timeslots ethercalc [2]. The PTG Planning etherpad [3] is now live. Please go ahead and add your name/irc nick and propose any topics. You may propose topics even if you wouldn't like to moderate the discussion. Thanks, and hope to see you all there! Goutham [1] https://www.eventbrite.com/e/project-teams-gathering-october-2020-tickets-116136313841 [2] https://ethercalc.openstack.org/7xp2pcbh1ncb [3] https://etherpad.opendev.org/p/wallaby-ptg-manila-planning -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyppe at gmail.com Tue Sep 15 08:18:20 2020 From: tonyppe at gmail.com (Tony Pearce) Date: Tue, 15 Sep 2020 16:18:20 +0800 Subject: [Magnum][kolla-ansible][kayobe] Information gathering for 2 blocking issues In-Reply-To: References: Message-ID: Hi Feilong, I hope you are keeping well. Thank you for sticking with me on this issue to try and help me here. I really appreciate it! I tried creating a new flavour like you suggested and using 10GB for root volume [1]. The cluster does start to be created (no error about 0mb disk) but while being created, I can check the compute node and see that the instance disk is being provisioned on the compute node [2]. I assume that this is the 10GB root volume that is specified in the flavour. When I list the volumes which have been created, I do not see the 10GB disk allocated on the compute node, but I do see the iSCSI network volume that has been created and attached to the instance (eg master) [3]. This is 15GB volume and this 15GB is coming from the kubernetes cluster template, under "Docker Volume Size (GB)" in the "node spec" section. There is very little data written to this volume at the time of master instance booted. Eventually, kube cluster failed to create with error "Status Create_Failed: Resource CREATE failed: Error: resources.kube_minions.resources[0].resources.node_config_deployment: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1". I'll try and find the root cause of this later. What are your thoughts on this outcome? Is it possible to avoid consuming compute node disk? I require it because it cannot scale. [1] http://paste.openstack.org/show/797862/ [2] http://paste.openstack.org/show/797865/ [3] http://paste.openstack.org/show/797863/ Kind regards, Tony Tony Pearce On Mon, 14 Sep 2020 at 17:44, feilong wrote: > Hi Tony, > > Does your Magnum support this config > https://github.com/openstack/magnum/blob/master/magnum/conf/cinder.py#L47 > can you try to change it from 0 to 10? 10 means the root disk volume size > for the k8s node. By default the 0 means the node will be based on image > instead of volume. > > > On 14/09/20 9:37 pm, Tony Pearce wrote: > > Hi Feilong, sure. The flavour I used has 2 CPU and 2GB memory. All other > values either unset or 0mb. > I also used the same fedora 27 image that is being used for the kubernetes > cluster. > > Thank you > Tony > > On Mon, 14 Sep 2020, 17:20 feilong, wrote: > >> Hi Tony, >> >> Could you please let me know your flavor details? 
I would like to test >> it in my devstack environment (based on LVM). Thanks. >> >> >> On 14/09/20 8:27 pm, Tony Pearce wrote: >> >> Hi feilong, hope you are keeping well. Thank you for the info! >> >> For issue 1. Maybe this should be with the kayobe/kolla-ansible team. >> Thanks for the insight :) >> >> For the 2nd one, I was able to run the HOT template in your link. There's >> no issues at all running that multiple times concurrently while using the >> 0MB disk flavour. I tried four times with the last three executing one >> after the other so that they ran parallelly. All were successful and >> completed and did not complain about the 0MB disk issue. >> >> Does this conclude that the error and create-failed issue relates to >> Magnum or could you suggest other steps to test on my side? >> >> Best regards, >> >> Tony Pearce >> >> >> >> >> On Thu, 10 Sep 2020 at 16:01, feilong wrote: >> >>> Hi Tony, >>> >>> Sorry for the late response for your thread. >>> >>> For you HTTPS issue, we (Catalyst Cloud) are using Magnum with HTTPS and >>> it works. >>> >>> For the 2nd issue, I think we were misunderstanding the nodes disk >>> capacity. I was assuming you're talking about the k8s nodes, but seems >>> you're talking about the physical compute host. I still don't think it's a >>> Magnum issue because a k8s master/worker nodes are just normal Nova >>> instances and managed by Heat. So I would suggest you use a simple HOT to >>> test it, you can use this >>> https://gist.github.com/openstacker/26e31c9715d52cc502397b65d3cebab6 >>> >>> Most of the cloud providers or organizations who have adopted Magnum are >>> using Ceph as far as I know, just FYI. >>> >>> >>> On 10/09/20 4:35 pm, Tony Pearce wrote: >>> >>> Hi all, hope you are all keeping safe and well. I am looking for >>> information on the following two issues that I have which surrounds Magnum >>> project: >>> >>> 1. Magnum does not support Openstack API with HTTPS >>> 2. Magnum forces compute nodes to consume disk capacity for instance data >>> >>> My environment: Openstack Train deployed using Kayobe (Kolla-ansible). >>> >>> With regards to the HTTPS issue, Magnum stops working after enabling >>> HTTPS because the certificate / CA certificate is not trusted by Magnum. >>> The certificate which I am using is one that was purchased from GoDaddy and >>> is trusted in web browsers (and is valid), just not trusted by the Magnum >>> component. >>> >>> Regarding compute node disk consumption issue - I'm at a loss with >>> regards to this and so I'm looking for more information about why this is >>> being done and is there any way that I could avoid it? I have storage >>> provided by a Cinder integration and so the consumption of compute node >>> disk for instance data I need to avoid. >>> >>> Any information the community could provide to me with regards to the >>> above would be much appreciated. I would very much like to use the Magnum >>> project in this deployment for Kubernetes deployment within projects. 
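For reference, the knob feilong points at above lives in magnum.conf; a minimal sketch, assuming the option names from magnum/conf/cinder.py and a Cinder volume type that actually exists in the cloud (the values shown are only examples):

[cinder]
default_boot_volume_size = 10
default_boot_volume_type = your-volume-type

With a non-zero default_boot_volume_size, Magnum asks Heat/Nova to boot the cluster nodes from Cinder volumes instead of image-backed local disk on the compute host.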
>>> >>> Thanks in advance, >>> >>> Regards, >>> >>> Tony >>> >>> -- >>> Cheers & Best regards, >>> Feilong Wang (王飞龙) >>> ------------------------------------------------------ >>> Senior Cloud Software Engineer >>> Tel: +64-48032246 >>> Email: flwang at catalyst.net.nz >>> Catalyst IT Limited >>> Level 6, Catalyst House, 150 Willis Street, Wellington >>> ------------------------------------------------------ >>> >>> -- >> Cheers & Best regards, >> Feilong Wang (王飞龙) >> ------------------------------------------------------ >> Senior Cloud Software Engineer >> Tel: +64-48032246 >> Email: flwang at catalyst.net.nz >> Catalyst IT Limited >> Level 6, Catalyst House, 150 Willis Street, Wellington >> ------------------------------------------------------ >> >> -- > Cheers & Best regards, > Feilong Wang (王飞龙) > ------------------------------------------------------ > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > ------------------------------------------------------ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.morin at gmail.com Tue Sep 15 08:59:58 2020 From: arnaud.morin at gmail.com (Arnaud Morin) Date: Tue, 15 Sep 2020 08:59:58 +0000 Subject: [ops] picking up some activity In-Reply-To: References: Message-ID: <20200915085958.GL8890@sync> Hey Chris, I'd like to join. To be sure, 10AM EST/DST means 2PM UTC, right? Cheers, -- Arnaud Morin On 04.09.20 - 13:48, Chris Morgan wrote: > Greetings! > > The OpenStack Operators ("ops") meetups team will attempt to have an IRC > meeting at the normal time and place (#openstack-operators on freenode at > 10am EST/DST) on *Sept 15th*( following a period of complete inactivity for > obvious reasons. > > If you're an official member of the team or even just interested in what we > do, please feel free to join us. Whilst we can't yet contemplate resuming > in-person meetups during this global pandemic, we can resume attempting to > build the openstack operators community, share knowledge and perhaps even > do some more virtual get-togethers. > > See you then > > Chris > on behalf of the openstack ops meetups team > > -- > Chris Morgan From amy at demarco.com Tue Sep 15 11:52:34 2020 From: amy at demarco.com (Amy Marrich) Date: Tue, 15 Sep 2020 06:52:34 -0500 Subject: [ops] picking up some activity In-Reply-To: <20200915085958.GL8890@sync> References: <20200915085958.GL8890@sync> Message-ID: I think it’s 14:00 UTC if that helps Amy > On Sep 15, 2020, at 4:03 AM, Arnaud Morin wrote: > > Hey Chris, > > I'd like to join. > To be sure, 10AM EST/DST means 2PM UTC, right? > > Cheers, > > -- > Arnaud Morin > >> On 04.09.20 - 13:48, Chris Morgan wrote: >> Greetings! >> >> The OpenStack Operators ("ops") meetups team will attempt to have an IRC >> meeting at the normal time and place (#openstack-operators on freenode at >> 10am EST/DST) on *Sept 15th*( following a period of complete inactivity for >> obvious reasons. >> >> If you're an official member of the team or even just interested in what we >> do, please feel free to join us. Whilst we can't yet contemplate resuming >> in-person meetups during this global pandemic, we can resume attempting to >> build the openstack operators community, share knowledge and perhaps even >> do some more virtual get-togethers. 
>> >> See you then >> >> Chris >> on behalf of the openstack ops meetups team >> >> -- >> Chris Morgan > From dmendiza at redhat.com Tue Sep 15 12:46:32 2020 From: dmendiza at redhat.com (Douglas Mendizabal) Date: Tue, 15 Sep 2020 07:46:32 -0500 Subject: [nova][barbican][qa] barbican-tempest-plugin change breaking bfv [ceph] In-Reply-To: References: Message-ID: <7d338f5d-13d2-39c4-4aa1-7411c5733560@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 8/26/20 2:44 PM, Mohammed Naser wrote: > Hi everyone, > > We just had our gating break due to a change merging inside > barbican-tempest-plugin which is the following: > > https://review.opendev.org/#/c/515210/ > > It is resulting in an exception in our CI: [snip] Hi Mohammed, Unfortunately I still haven't had a chance to chase down this CI failure. I did open a new Bug story for barbican-tempest-plugin so this issue doesn't get lost in the mailing list: https://storyboard.openstack.org/#!/story/2008146 Thanks, Douglas Mendizábal -----BEGIN PGP SIGNATURE----- iQIzBAEBCAAdFiEEwcapj5oGTj2zd3XogB6WFOq/OrcFAl9gt6QACgkQgB6WFOq/ OrfjOhAAhw3DGKvBdRA4Lei0zhTIM4bdeR/7kcqokNNv/iyGCBo9vmnx0NuUwmkO gvx3S+G9amscyuND9stmEqRDw8BC9mxPutS99G7VPJ/VIFpEqeAiEYELFfIBLKfO QzEJOjYzwDgoW5FODDffnhXPuAXXnPPzPpCA2Ey7d4DXo5X4kL8lkXIwEEibrrz4 o1843h36pxVKjvWCsEra1rKX9txvzDZSxT9XJbU44J5Umi8MHPi2rQE0qNeoUNac XfZE0cpsBsM+FUW0w2fD8L7GJKgF02N5wwXaglnbg/XgEPwfg8Cg0gcCD7q6xPgn yWNKHJkJVdlpXFM1Yqa+TKlMSUaw/4iAWR52WmKbJ09ro2HnGxiOWaU2OJFIea/g raDIj31qhhaEue9qf71dheRmwHUWcw3SF6wfyFkQgEWm1vW6F23/m4Cv9f3uKjJ1 zXD839vf7UtA28N3KfrkOcMEV05/QmMSxNuSPYqgFVLYW+oGt5xWjI9eKhhS4d5g AWFxEvUB2atuljGeFJKq8iIgeNiQzRgg5mb6hB8/YrFRTDWB8JpSJsftitIDvkUe hrUntGvP2WMzfooDJhgYtR5Fnbl0Fhg5SGM3ZkHvBZnhyX+IIvlLYjnAmcCqZ3Pb UEYXzB8F9dTJ0Ekp0w4VoWBUBpjBCtjRhccHBrngvp8htRxt6ug= =SOQs -----END PGP SIGNATURE----- From ionut at fleio.com Tue Sep 15 12:53:27 2020 From: ionut at fleio.com (Ionut Biru) Date: Tue, 15 Sep 2020 15:53:27 +0300 Subject: [Magnum][kolla-ansible][kayobe] Information gathering for 2 blocking issues In-Reply-To: References: Message-ID: Hi, To boot minions or master from volume, I use the following labels: boot_volume_size = 20 boot_volume_type = ssd availability_zone = nova volume type and zone might differ on your setup. On Tue, Sep 15, 2020 at 11:23 AM Tony Pearce wrote: > Hi Feilong, I hope you are keeping well. > > Thank you for sticking with me on this issue to try and help me here. I > really appreciate it! > > I tried creating a new flavour like you suggested and using 10GB for root > volume [1]. The cluster does start to be created (no error about 0mb disk) > but while being created, I can check the compute node and see that the > instance disk is being provisioned on the compute node [2]. I assume that > this is the 10GB root volume that is specified in the flavour. > > When I list the volumes which have been created, I do not see the 10GB > disk allocated on the compute node, but I do see the iSCSI network volume > that has been created and attached to the instance (eg master) [3]. This is > 15GB volume and this 15GB is coming from the kubernetes cluster template, > under "Docker Volume Size (GB)" in the "node spec" section. There is very > little data written to this volume at the time of master instance booted. 
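In case it helps, labels like the ones listed earlier in this reply are normally passed when the cluster template is created; a sketch with the coe CLI, keeping the label names as given above and treating the template name, image, network and flavors as placeholders:

openstack coe cluster template create k8s-bfv \
  --coe kubernetes \
  --image fedora-coreos-latest \
  --external-network public \
  --master-flavor m1.medium --flavor m1.medium \
  --labels boot_volume_size=20,boot_volume_type=ssd,availability_zone=nova

With boot_volume_size/boot_volume_type set, the master and minion root disks are created as Cinder volumes rather than on the hypervisor's local disk.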
> > Eventually, kube cluster failed to create with error "Status > Create_Failed: Resource CREATE failed: Error: > resources.kube_minions.resources[0].resources.node_config_deployment: > Deployment to server failed: deploy_status_code: Deployment exited with > non-zero status code: 1". I'll try and find the root cause of this later. > > What are your thoughts on this outcome? Is it possible to avoid consuming > compute node disk? I require it because it cannot scale. > > [1] http://paste.openstack.org/show/797862/ > [2] http://paste.openstack.org/show/797865/ > [3] http://paste.openstack.org/show/797863/ > > Kind regards, > Tony > > Tony Pearce > > > > On Mon, 14 Sep 2020 at 17:44, feilong wrote: > >> Hi Tony, >> >> Does your Magnum support this config >> https://github.com/openstack/magnum/blob/master/magnum/conf/cinder.py#L47 >> can you try to change it from 0 to 10? 10 means the root disk volume size >> for the k8s node. By default the 0 means the node will be based on image >> instead of volume. >> >> >> On 14/09/20 9:37 pm, Tony Pearce wrote: >> >> Hi Feilong, sure. The flavour I used has 2 CPU and 2GB memory. All other >> values either unset or 0mb. >> I also used the same fedora 27 image that is being used for the >> kubernetes cluster. >> >> Thank you >> Tony >> >> On Mon, 14 Sep 2020, 17:20 feilong, wrote: >> >>> Hi Tony, >>> >>> Could you please let me know your flavor details? I would like to test >>> it in my devstack environment (based on LVM). Thanks. >>> >>> >>> On 14/09/20 8:27 pm, Tony Pearce wrote: >>> >>> Hi feilong, hope you are keeping well. Thank you for the info! >>> >>> For issue 1. Maybe this should be with the kayobe/kolla-ansible team. >>> Thanks for the insight :) >>> >>> For the 2nd one, I was able to run the HOT template in your link. >>> There's no issues at all running that multiple times concurrently while >>> using the 0MB disk flavour. I tried four times with the last three >>> executing one after the other so that they ran parallelly. All were >>> successful and completed and did not complain about the 0MB disk issue. >>> >>> Does this conclude that the error and create-failed issue relates to >>> Magnum or could you suggest other steps to test on my side? >>> >>> Best regards, >>> >>> Tony Pearce >>> >>> >>> >>> >>> On Thu, 10 Sep 2020 at 16:01, feilong wrote: >>> >>>> Hi Tony, >>>> >>>> Sorry for the late response for your thread. >>>> >>>> For you HTTPS issue, we (Catalyst Cloud) are using Magnum with HTTPS >>>> and it works. >>>> >>>> For the 2nd issue, I think we were misunderstanding the nodes disk >>>> capacity. I was assuming you're talking about the k8s nodes, but seems >>>> you're talking about the physical compute host. I still don't think it's a >>>> Magnum issue because a k8s master/worker nodes are just normal Nova >>>> instances and managed by Heat. So I would suggest you use a simple HOT to >>>> test it, you can use this >>>> https://gist.github.com/openstacker/26e31c9715d52cc502397b65d3cebab6 >>>> >>>> Most of the cloud providers or organizations who have adopted Magnum >>>> are using Ceph as far as I know, just FYI. >>>> >>>> >>>> On 10/09/20 4:35 pm, Tony Pearce wrote: >>>> >>>> Hi all, hope you are all keeping safe and well. I am looking for >>>> information on the following two issues that I have which surrounds Magnum >>>> project: >>>> >>>> 1. Magnum does not support Openstack API with HTTPS >>>> 2. 
Magnum forces compute nodes to consume disk capacity for instance >>>> data >>>> >>>> My environment: Openstack Train deployed using Kayobe (Kolla-ansible). >>>> >>>> With regards to the HTTPS issue, Magnum stops working after enabling >>>> HTTPS because the certificate / CA certificate is not trusted by Magnum. >>>> The certificate which I am using is one that was purchased from GoDaddy and >>>> is trusted in web browsers (and is valid), just not trusted by the Magnum >>>> component. >>>> >>>> Regarding compute node disk consumption issue - I'm at a loss with >>>> regards to this and so I'm looking for more information about why this is >>>> being done and is there any way that I could avoid it? I have storage >>>> provided by a Cinder integration and so the consumption of compute node >>>> disk for instance data I need to avoid. >>>> >>>> Any information the community could provide to me with regards to the >>>> above would be much appreciated. I would very much like to use the Magnum >>>> project in this deployment for Kubernetes deployment within projects. >>>> >>>> Thanks in advance, >>>> >>>> Regards, >>>> >>>> Tony >>>> >>>> -- >>>> Cheers & Best regards, >>>> Feilong Wang (王飞龙) >>>> ------------------------------------------------------ >>>> Senior Cloud Software Engineer >>>> Tel: +64-48032246 >>>> Email: flwang at catalyst.net.nz >>>> Catalyst IT Limited >>>> Level 6, Catalyst House, 150 Willis Street, Wellington >>>> ------------------------------------------------------ >>>> >>>> -- >>> Cheers & Best regards, >>> Feilong Wang (王飞龙) >>> ------------------------------------------------------ >>> Senior Cloud Software Engineer >>> Tel: +64-48032246 >>> Email: flwang at catalyst.net.nz >>> Catalyst IT Limited >>> Level 6, Catalyst House, 150 Willis Street, Wellington >>> ------------------------------------------------------ >>> >>> -- >> Cheers & Best regards, >> Feilong Wang (王飞龙) >> ------------------------------------------------------ >> Senior Cloud Software Engineer >> Tel: +64-48032246 >> Email: flwang at catalyst.net.nz >> Catalyst IT Limited >> Level 6, Catalyst House, 150 Willis Street, Wellington >> ------------------------------------------------------ >> >> -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyppe at gmail.com Tue Sep 15 12:58:51 2020 From: tonyppe at gmail.com (Tony Pearce) Date: Tue, 15 Sep 2020 20:58:51 +0800 Subject: [Magnum][kolla-ansible][kayobe] Information gathering for 2 blocking issues In-Reply-To: References: Message-ID: Hi Ionut, thank you for your reply. Do you know if this configuration prevents consuming of local disk on the compute node for instance storage eg OS or swap etc? Kind regards On Tue, 15 Sep 2020, 20:53 Ionut Biru, wrote: > Hi, > > To boot minions or master from volume, I use the following labels: > > boot_volume_size = 20 > boot_volume_type = ssd > availability_zone = nova > > volume type and zone might differ on your setup. > > > > On Tue, Sep 15, 2020 at 11:23 AM Tony Pearce wrote: > >> Hi Feilong, I hope you are keeping well. >> >> Thank you for sticking with me on this issue to try and help me here. I >> really appreciate it! >> >> I tried creating a new flavour like you suggested and using 10GB for root >> volume [1]. The cluster does start to be created (no error about 0mb disk) >> but while being created, I can check the compute node and see that the >> instance disk is being provisioned on the compute node [2]. 
I assume that >> this is the 10GB root volume that is specified in the flavour. >> >> When I list the volumes which have been created, I do not see the 10GB >> disk allocated on the compute node, but I do see the iSCSI network volume >> that has been created and attached to the instance (eg master) [3]. This is >> 15GB volume and this 15GB is coming from the kubernetes cluster template, >> under "Docker Volume Size (GB)" in the "node spec" section. There is very >> little data written to this volume at the time of master instance booted. >> >> Eventually, kube cluster failed to create with error "Status >> Create_Failed: Resource CREATE failed: Error: >> resources.kube_minions.resources[0].resources.node_config_deployment: >> Deployment to server failed: deploy_status_code: Deployment exited with >> non-zero status code: 1". I'll try and find the root cause of this later. >> >> What are your thoughts on this outcome? Is it possible to avoid consuming >> compute node disk? I require it because it cannot scale. >> >> [1] http://paste.openstack.org/show/797862/ >> [2] http://paste.openstack.org/show/797865/ >> [3] http://paste.openstack.org/show/797863/ >> >> Kind regards, >> Tony >> >> Tony Pearce >> >> >> >> On Mon, 14 Sep 2020 at 17:44, feilong wrote: >> >>> Hi Tony, >>> >>> Does your Magnum support this config >>> https://github.com/openstack/magnum/blob/master/magnum/conf/cinder.py#L47 >>> can you try to change it from 0 to 10? 10 means the root disk volume size >>> for the k8s node. By default the 0 means the node will be based on image >>> instead of volume. >>> >>> >>> On 14/09/20 9:37 pm, Tony Pearce wrote: >>> >>> Hi Feilong, sure. The flavour I used has 2 CPU and 2GB memory. All other >>> values either unset or 0mb. >>> I also used the same fedora 27 image that is being used for the >>> kubernetes cluster. >>> >>> Thank you >>> Tony >>> >>> On Mon, 14 Sep 2020, 17:20 feilong, wrote: >>> >>>> Hi Tony, >>>> >>>> Could you please let me know your flavor details? I would like to test >>>> it in my devstack environment (based on LVM). Thanks. >>>> >>>> >>>> On 14/09/20 8:27 pm, Tony Pearce wrote: >>>> >>>> Hi feilong, hope you are keeping well. Thank you for the info! >>>> >>>> For issue 1. Maybe this should be with the kayobe/kolla-ansible team. >>>> Thanks for the insight :) >>>> >>>> For the 2nd one, I was able to run the HOT template in your link. >>>> There's no issues at all running that multiple times concurrently while >>>> using the 0MB disk flavour. I tried four times with the last three >>>> executing one after the other so that they ran parallelly. All were >>>> successful and completed and did not complain about the 0MB disk issue. >>>> >>>> Does this conclude that the error and create-failed issue relates to >>>> Magnum or could you suggest other steps to test on my side? >>>> >>>> Best regards, >>>> >>>> Tony Pearce >>>> >>>> >>>> >>>> >>>> On Thu, 10 Sep 2020 at 16:01, feilong wrote: >>>> >>>>> Hi Tony, >>>>> >>>>> Sorry for the late response for your thread. >>>>> >>>>> For you HTTPS issue, we (Catalyst Cloud) are using Magnum with HTTPS >>>>> and it works. >>>>> >>>>> For the 2nd issue, I think we were misunderstanding the nodes disk >>>>> capacity. I was assuming you're talking about the k8s nodes, but seems >>>>> you're talking about the physical compute host. I still don't think it's a >>>>> Magnum issue because a k8s master/worker nodes are just normal Nova >>>>> instances and managed by Heat. 
So I would suggest you use a simple HOT to >>>>> test it, you can use this >>>>> https://gist.github.com/openstacker/26e31c9715d52cc502397b65d3cebab6 >>>>> >>>>> Most of the cloud providers or organizations who have adopted Magnum >>>>> are using Ceph as far as I know, just FYI. >>>>> >>>>> >>>>> On 10/09/20 4:35 pm, Tony Pearce wrote: >>>>> >>>>> Hi all, hope you are all keeping safe and well. I am looking for >>>>> information on the following two issues that I have which surrounds Magnum >>>>> project: >>>>> >>>>> 1. Magnum does not support Openstack API with HTTPS >>>>> 2. Magnum forces compute nodes to consume disk capacity for instance >>>>> data >>>>> >>>>> My environment: Openstack Train deployed using Kayobe (Kolla-ansible). >>>>> >>>>> With regards to the HTTPS issue, Magnum stops working after enabling >>>>> HTTPS because the certificate / CA certificate is not trusted by Magnum. >>>>> The certificate which I am using is one that was purchased from GoDaddy and >>>>> is trusted in web browsers (and is valid), just not trusted by the Magnum >>>>> component. >>>>> >>>>> Regarding compute node disk consumption issue - I'm at a loss with >>>>> regards to this and so I'm looking for more information about why this is >>>>> being done and is there any way that I could avoid it? I have storage >>>>> provided by a Cinder integration and so the consumption of compute node >>>>> disk for instance data I need to avoid. >>>>> >>>>> Any information the community could provide to me with regards to the >>>>> above would be much appreciated. I would very much like to use the Magnum >>>>> project in this deployment for Kubernetes deployment within projects. >>>>> >>>>> Thanks in advance, >>>>> >>>>> Regards, >>>>> >>>>> Tony >>>>> >>>>> -- >>>>> Cheers & Best regards, >>>>> Feilong Wang (王飞龙) >>>>> ------------------------------------------------------ >>>>> Senior Cloud Software Engineer >>>>> Tel: +64-48032246 >>>>> Email: flwang at catalyst.net.nz >>>>> Catalyst IT Limited >>>>> Level 6, Catalyst House, 150 Willis Street, Wellington >>>>> ------------------------------------------------------ >>>>> >>>>> -- >>>> Cheers & Best regards, >>>> Feilong Wang (王飞龙) >>>> ------------------------------------------------------ >>>> Senior Cloud Software Engineer >>>> Tel: +64-48032246 >>>> Email: flwang at catalyst.net.nz >>>> Catalyst IT Limited >>>> Level 6, Catalyst House, 150 Willis Street, Wellington >>>> ------------------------------------------------------ >>>> >>>> -- >>> Cheers & Best regards, >>> Feilong Wang (王飞龙) >>> ------------------------------------------------------ >>> Senior Cloud Software Engineer >>> Tel: +64-48032246 >>> Email: flwang at catalyst.net.nz >>> Catalyst IT Limited >>> Level 6, Catalyst House, 150 Willis Street, Wellington >>> ------------------------------------------------------ >>> >>> > > -- > Ionut Biru - https://fleio.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Tue Sep 15 13:06:30 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 15 Sep 2020 09:06:30 -0400 Subject: [tc] weekly update Message-ID: Hi everyone, Here's an update for what happened in the OpenStack TC this week. You can get more information by checking for changes in openstack/governance repository. We've also included a few references to some important mailing list threads that you should check out. 
# Patches ## Open Reviews - Remove tc:approved-release tag https://review.opendev.org/749363 - Retire the devstack-plugin-zmq project https://review.opendev.org/748731 - Add openstack/osops to Ops Docs and Tooling SIG https://review.opendev.org/749835 - Retire devstack-plugin-pika project https://review.opendev.org/748730 - Reinstate weekly meetings https://review.opendev.org/#/c/749279/ - Resolution to define distributed leadership for projects https://review.opendev.org/744995 - Add assert:supports-standalone https://review.opendev.org/722399 ## Project Updates - Add openstack-helm-deployments to openstack-helm https://review.opendev.org/748302 - Add openstack-ansible/os_senlin role https://review.opendev.org/748677 - kolla-cli: deprecation - Mark as deprecated https://review.opendev.org/749694 ## General Changes - Create starter-kit:kubernetes-in-virt tag https://review.opendev.org/736369 # Other Reminders - PTG Brainstorming: https://etherpad.opendev.org/p/tc-wallaby-ptg Thanks for reading! Mohammed & Kendall -- Mohammed Naser VEXXHOST, Inc. From xin.zeng at intel.com Tue Sep 15 07:46:56 2020 From: xin.zeng at intel.com (Zeng, Xin) Date: Tue, 15 Sep 2020 07:46:56 +0000 Subject: device compatibility interface for live migration with assigned devices In-Reply-To: <20200914084449.0182e8a9@x1.home> References: <20200825163925.1c19b0f0.cohuck@redhat.com> <20200826064117.GA22243@joy-OptiPlex-7040> <20200828154741.30cfc1a3.cohuck@redhat.com> <8f5345be73ebf4f8f7f51d6cdc9c2a0d8e0aa45e.camel@redhat.com> <20200831044344.GB13784@joy-OptiPlex-7040> <20200908164130.2fe0d106.cohuck@redhat.com> <20200909021308.GA1277@joy-OptiPlex-7040> <20200910143822.2071eca4.cohuck@redhat.com> <7cebcb6c8d1a1452b43e8358ee6ee18a150a0238.camel@redhat.com> <20200910120244.71e7b630@w520.home> <20200911005559.GA3932@joy-OptiPlex-7040> <20200911105155.184e32a0@w520.home> <20200914084449.0182e8a9@x1.home> Message-ID: On Monday, September 14, 2020 10:45 PM Alex Williamson wrote: > To: Zeng, Xin > Cc: Zhao, Yan Y ; Sean Mooney > ; Cornelia Huck ; Daniel > P.Berrangé ; kvm at vger.kernel.org; libvir- > list at redhat.com; Jason Wang ; qemu- > devel at nongnu.org; kwankhede at nvidia.com; eauger at redhat.com; Wang, > Xin-ran ; corbet at lwn.net; openstack- > discuss at lists.openstack.org; Feng, Shaohe ; Tian, > Kevin ; Parav Pandit ; Ding, > Jian-feng ; dgilbert at redhat.com; > zhenyuw at linux.intel.com; Xu, Hejie ; > bao.yumeng at zte.com.cn; intel-gvt-dev at lists.freedesktop.org; > eskultet at redhat.com; Jiri Pirko ; dinechin at redhat.com; > devel at ovirt.org > Subject: Re: device compatibility interface for live migration with assigned > devices > > On Mon, 14 Sep 2020 13:48:43 +0000 > "Zeng, Xin" wrote: > > > On Saturday, September 12, 2020 12:52 AM > > Alex Williamson wrote: > > > To: Zhao, Yan Y > > > Cc: Sean Mooney ; Cornelia Huck > > > ; Daniel P.Berrangé ; > > > kvm at vger.kernel.org; libvir-list at redhat.com; Jason Wang > > > ; qemu-devel at nongnu.org; > > > kwankhede at nvidia.com; eauger at redhat.com; Wang, Xin-ran > > ran.wang at intel.com>; corbet at lwn.net; openstack- > > > discuss at lists.openstack.org; Feng, Shaohe ; > Tian, > > > Kevin ; Parav Pandit ; > Ding, > > > Jian-feng ; dgilbert at redhat.com; > > > zhenyuw at linux.intel.com; Xu, Hejie ; > > > bao.yumeng at zte.com.cn; intel-gvt-dev at lists.freedesktop.org; > > > eskultet at redhat.com; Jiri Pirko ; > dinechin at redhat.com; > > > devel at ovirt.org > > > Subject: Re: device compatibility interface for live migration with assigned > > 
> devices > > > > > > On Fri, 11 Sep 2020 08:56:00 +0800 > > > Yan Zhao wrote: > > > > > > > On Thu, Sep 10, 2020 at 12:02:44PM -0600, Alex Williamson wrote: > > > > > On Thu, 10 Sep 2020 13:50:11 +0100 > > > > > Sean Mooney wrote: > > > > > > > > > > > On Thu, 2020-09-10 at 14:38 +0200, Cornelia Huck wrote: > > > > > > > On Wed, 9 Sep 2020 10:13:09 +0800 > > > > > > > Yan Zhao wrote: > > > > > > > > > > > > > > > > > still, I'd like to put it more explicitly to make ensure it's not > > > missed: > > > > > > > > > > the reason we want to specify compatible_type as a trait and > > > check > > > > > > > > > > whether target compatible_type is the superset of source > > > > > > > > > > compatible_type is for the consideration of backward > > > compatibility. > > > > > > > > > > e.g. > > > > > > > > > > an old generation device may have a mdev type xxx-v4-yyy, > > > while a newer > > > > > > > > > > generation device may be of mdev type xxx-v5-yyy. > > > > > > > > > > with the compatible_type traits, the old generation device is > still > > > > > > > > > > able to be regarded as compatible to newer generation > device > > > even their > > > > > > > > > > mdev types are not equal. > > > > > > > > > > > > > > > > > > If you want to support migration from v4 to v5, can't the > > > (presumably > > > > > > > > > newer) driver that supports v5 simply register the v4 type as > well, > > > so > > > > > > > > > that the mdev can be created as v4? (Just like QEMU > versioned > > > machine > > > > > > > > > types work.) > > > > > > > > > > > > > > > > yes, it should work in some conditions. > > > > > > > > but it may not be that good in some cases when v5 and v4 in the > > > name string > > > > > > > > of mdev type identify hardware generation (e.g. v4 for gen8, > and v5 > > > for > > > > > > > > gen9) > > > > > > > > > > > > > > > > e.g. > > > > > > > > (1). when src mdev type is v4 and target mdev type is v5 as > > > > > > > > software does not support it initially, and v4 and v5 identify > > > hardware > > > > > > > > differences. > > > > > > > > > > > > > > My first hunch here is: Don't introduce types that may be > compatible > > > > > > > later. Either make them compatible, or make them distinct by > design, > > > > > > > and possibly add a different, compatible type later. > > > > > > > > > > > > > > > then after software upgrade, v5 is now compatible to v4, should > the > > > > > > > > software now downgrade mdev type from v5 to v4? > > > > > > > > not sure if moving hardware generation info into a separate > > > attribute > > > > > > > > from mdev type name is better. e.g. remove v4, v5 in mdev type, > > > while use > > > > > > > > compatible_pci_ids to identify compatibility. > > > > > > > > > > > > > > If the generations are compatible, don't mention it in the mdev > type. > > > > > > > If they aren't, use distinct types, so that management software > > > doesn't > > > > > > > have to guess. At least that would be my naive approach here. > > > > > > yep that is what i would prefer to see too. > > > > > > > > > > > > > > > > > > > > > > > (2) name string of mdev type is composed by "driver_name + > > > type_name". > > > > > > > > in some devices, e.g. qat, different generations of devices are > > > binding to > > > > > > > > drivers of different names, e.g. "qat-v4", "qat-v5". > > > > > > > > then though type_name is equal, mdev type is not equal. e.g. > > > > > > > > "qat-v4-type1", "qat-v5-type1". 
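For concreteness, that composed "driver_name-type_name" string is exactly what shows up as the type id under the parent device in sysfs; a sketch using the i915/GVT-g case (the PCI address, type names and UUID write are illustrative and differ per vendor driver):

$ ls /sys/class/mdev_bus/0000:00:02.0/mdev_supported_types/
i915-GVTg_V5_4  i915-GVTg_V5_8
$ echo $(uuidgen) | sudo tee /sys/class/mdev_bus/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create

So two parent drivers registered under different names, e.g. "qat-v4" and "qat-v5", can never present an identical type id even when the type_name part matches.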
> > > > > > > > > > > > > > I guess that shows a shortcoming of that "driver_name + > type_name" > > > > > > > approach? Or maybe I'm just confused. > > > > > > yes i really dont like haveing the version in the mdev-type name > > > > > > i would stongly perfger just qat-type-1 wehere qat is just there as a > way > > > of namespacing. > > > > > > although symmetric-cryto, asymmetric-cryto and compression > woudl > > > be a better name then type-1, type-2, type-3 if > > > > > > that is what they would end up mapping too. e.g. qat-compression > or > > > qat-aes is a much better name then type-1 > > > > > > higher layers of software are unlikely to parse the mdev names but > as a > > > human looking at them its much eaiser to > > > > > > understand if the names are meaningful. the qat prefix i think is > > > important however to make sure that your mdev-types > > > > > > dont colide with other vendeors mdev types. so i woudl encurage all > > > vendors to prefix there mdev types with etiher the > > > > > > device name or the vendor. > > > > > > > > > > +1 to all this, the mdev type is meant to indicate a software > > > > > compatible interface, if different hardware versions can be software > > > > > compatible, then don't make the job of finding a compatible device > > > > > harder. The full type is a combination of the vendor driver name plus > > > > > the vendor provided type name specifically in order to provide a type > > > > > namespace per vendor driver. That's done at the mdev core level. > > > > > Thanks, > > > > > > > > hi Alex, > > > > got it. so do you suggest that vendors use consistent driver name over > > > > generations of devices? > > > > for qat, they create different modules for each generation. This > > > > practice is not good if they want to support migration between devices > > > > of different generations, right? > > > > > > > > and can I understand that we don't want support of migration between > > > > different mdev types even in future ? > > > > > > You need to balance your requirements here. If you're creating > > > different drivers per generation, that suggests different device APIs, > > > which is a legitimate use case for different mdev types. However if > > > you're expecting migration compatibility, that must be seamless to the > > > guest, therefore the device API must be identical. That suggests that > > > migration between different types doesn't make much sense. If a new > > > generation device wants to expose a new mdev type with new features > or > > > device API, yet also support migration with an older mdev type, why > > > wouldn't it simply expose both the old and the new type? > > > > I think all of these make sense, and I am assuming it's also reasonable and > > common that each generation of device has a separate device driver > module. > > On the other hand, please be aware that, the mdev type is consisted of the > > driver name of the mdev's parent device and the name of a mdev type > which > > the device driver specifies. > > If a new generation device driver wants to expose an old mdev type, it has > to > > register the same driver name as the old one so that the mdev type could > > be completely same. This doesn't make sense as a) driver name usually is > > unique for a device driver module. b) If a system has both these two > > generation devices, once one generation device driver is loaded, the other > > is not allowed to be loaded due to the same driver name. 
> > So to allow a new generation device to simply expose the old mdev type > for > > compatibility like you proposed, is it possible to create the mdev type by > > another approach, e.g. device driver creates its own namespace for the > > mdev type instead of mdev's parent device driver name being used > currently? > > TBH, I don't think that it's reasonable or common that different > drivers are used for each generation of hardware. Drivers typically > evolve to support new generations of hardware, often sharing > significant code between generations. > When we deal with mdev > migration, we have an opaque data stream managed by the driver, our > default assumption is therefore that the driver plays a significant > role in the composition of that data stream. I'm not ruling out that > we should support some form of compatibility between types, but in the > described scenario it seems the development model of the vendor drivers > is not conducive to the most obvious form of compatibility checking. > Thanks, Current in-tree QAT driver does the same thing as you said, i.e. Drivers evolve to support new generations of hardware, sharing significant code between generations. We have a kernel module which contains those significant common code and a couple of device specific modules which contain device specific code. |<--------------- qat_c62x.ko |<--------------- qat_c62xvf.ko Intel_qat.ko --|<--------------- qat_dh895xcc.ko |<--------------- qat_dh895xccvf.ko |<--------------- qat_c3xxx.ko |<--------------- qat_c3xxxvf.ko The benefit is we only need load the device driver modules for those devices existing in the system, and leave those non-related code for non-existing devices. Besides QAT, there are still other drivers who are using this model, e.g. Intel NIC driver. For QAT, we will have new generations of QAT devices in future which could expose compatible mdev with current one, but because of the naming convention of the mdev type, they are not able to do this. I am not proposing the mdev migration between different types, but looking for how can we allow multiple device drivers from the same vendor to expose the same mdev type. It would be great if you think it's worth supporting it. Thanks, Xin > > Alex From rajput4u4ver at gmail.com Tue Sep 15 08:02:14 2020 From: rajput4u4ver at gmail.com (Rahul Kumar) Date: Tue, 15 Sep 2020 13:32:14 +0530 Subject: Need help for stack update feature for scaling out use case or add new resources Message-ID: Hi Team, I have a stack template with 3 nodes , each node having a volume attached ! Now i updated my template with additional node and additional volume for that node ! This results in user_data update of a node in template ! And when i perform stack update feature , it gives me error: Invalid volume: Volume 01e40c6e-4467-42fe-ba9d-ce7012db8978 status must be available or downloading to reserve, but the current status is in-use. This shows for the node (where user_data is changed) and yes it is currently in use ! Then how one can update the stack using stack update feature of openstack with volumes ? Rahul -------------- next part -------------- An HTML attachment was scrubbed... URL: From CAPSEY at augusta.edu Tue Sep 15 15:39:40 2020 From: CAPSEY at augusta.edu (Apsey, Christopher) Date: Tue, 15 Sep 2020 15:39:40 +0000 Subject: [horizon][dev] Horizon.next Message-ID: Horizon team, I recently picked up a few talented dev interns, and one of them is very interested in webdev and quite skilled. 
He will have up to 25 hours/week to devote to development tasks of my choosing over the 8 months or so, and I'd like to offer him up as someone who can spend all of his time working with the horizon team on horizon.next (npm-based). We rely on horizon heavily internally, and a better experience for the end-user is pretty high on our list (particularly performance-wise). He has no experience developing in the OpenStack ecosystem, but is otherwise ready to 'get to work' after process/code familiarization. Can anyone on the horizon team offer up some cycles to help get him spun up/integrated? Or is the best bet just pointing him at the IRC channel? Thanks Chris Apsey GEORGIA CYBER CENTER -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyliu0592 at hotmail.com Tue Sep 15 15:53:49 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Tue, 15 Sep 2020 15:53:49 +0000 Subject: [Neutron] memchached connections In-Reply-To: References: Message-ID: I wonder why I see this problem only with neutron-server, but not nova-api? Thanks! Tony > -----Original Message----- > From: Radosław Piliszek > Sent: Saturday, September 12, 2020 2:41 AM > To: Tony Liu > Cc: openstack-discuss > Subject: Re: [Neutron] memchached connections > > I believe you are hitting [1]. > > We have recently worked around that in Kolla-Ansible [2]. > > We'd like to get clarifications on whether the workaround is the right > choice but it seems to work. :-) > > [1] https://bugs.launchpad.net/keystonemiddleware/+bug/1883659 > [2] https://review.opendev.org/746966 > > -yoctozepto > > On Sat, Sep 12, 2020 at 4:34 AM Tony Liu wrote: > > > > I restarted neutron-server on all 3 nodes. Those memcached connections > > from neutron-server are gone. Everything is back to normal. It seems > > like that memcached connections are not closed properly in > > neutron-server. The connections pile up over time. Is there any know > > issue related? Could any Neutron experts comment here? > > > > Thanks! > > Tony > > > -----Original Message----- > > > From: Tony Liu > > > Sent: Friday, September 11, 2020 2:26 PM > > > To: openstack-discuss > > > Subject: memchached connections > > > > > > Hi, > > > > > > Is there any guidance or experiences to estimate the number of > > > memcached connections? > > > > > > Here is memcached connection on one of the 3 controllers. > > > Connection number is the total established connections to all 3 > > > memcached nodes. > > > > > > Node 1: > > > 10 Keystone workers have 62 connections. > > > 11 Nova API workers have 37 connections. > > > 6 Neutron server works have 4304 connections. > > > 1 memcached has 4973 connections. > > > > > > Node 2: > > > 10 Keystone workers have 62 connections. > > > 11 Nova API workers have 30 connections. > > > 6 Neutron server works have 3703 connections. > > > 1 memcached has 4973 connections. > > > > > > Node 3: > > > 10 Keystone workers have 54 connections. > > > 11 Nova API workers have 15 connections. > > > 6 Neutron server works have 6541 connections. > > > 1 memcached has 4973 connections. > > > > > > Before I increase the connection limit for memcached, I'd like to > > > understand if all the above is expected? > > > > > > How Neutron server and memcached take so many connections? > > > > > > Any elaboration is appreciated. > > > > > > BTW, the problem leading me here is memcached connection timeout, > > > which results all services depending on memcached stop working > properly. > > > > > > > > > Thanks! 
> > > Tony > > > > From katonalala at gmail.com Tue Sep 15 15:57:43 2020 From: katonalala at gmail.com (Lajos Katona) Date: Tue, 15 Sep 2020 17:57:43 +0200 Subject: [Neutron][FFE] request for QoS policy update for bound ports feature Message-ID: Hi, I would like to ask for FFE for the RFE "allow replacing the QoS policy of bound port", [1]. This feature adds the extra step to port update operation to change the allocation in Placement to the min_kbps values of the new QoS policy, if the port has a QoS policy with minimum_bandwidth rule and is bound and used by a server. In neutron there's one open patch: https://review.opendev.org/747774 There's an open bug report for the neutron-lib side: https://bugs.launchpad.net/neutron/+bug/1894825 (placement story: https://storyboard.openstack.org/#!/story/2008111 ) and a fix for that: https://review.opendev.org/750349 [1] https://bugs.launchpad.net/neutron/+bug/1882804 Thanks and Regards, Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Tue Sep 15 16:03:09 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 15 Sep 2020 11:03:09 -0500 Subject: [Neutron][FFE][requirements] request for QoS policy update for bound ports feature In-Reply-To: References: Message-ID: <0e0641c7-75d3-f4c7-1334-aa6710e369c5@gmx.com> > I would like to ask for FFE for the RFE "allow replacing the QoS > policy of bound port", [1]. > This feature adds the extra step to port update operation to change > the allocation in Placement to the min_kbps values of the new QoS > policy, if the port has a QoS policy with minimum_bandwidth rule and > is bound and used by a server. > > In neutron there's one open patch: > https://review.opendev.org/747774 > > There's an open bug report for the neutron-lib side: > https://bugs.launchpad.net/neutron/+bug/1894825 (placement story: > https://storyboard.openstack.org/#!/story/2008111 )  and a fix for that: > https://review.opendev.org/750349 > > [1] https://bugs.launchpad.net/neutron/+bug/1882804 > Since this requires an update to neutron-lib, adding [requirements] to the subject. Non-client library freeze was two weeks ago now, so it's a bit late. The fix looks fairly minor, but I don't know that code. Can you comment on the potential risks of this change? We should be stabilizing as much as possible at this point as we approach the final victoria release date. Sean From smooney at redhat.com Tue Sep 15 16:39:43 2020 From: smooney at redhat.com (Sean Mooney) Date: Tue, 15 Sep 2020 17:39:43 +0100 Subject: [Neutron][FFE][requirements] request for QoS policy update for bound ports feature In-Reply-To: <0e0641c7-75d3-f4c7-1334-aa6710e369c5@gmx.com> References: <0e0641c7-75d3-f4c7-1334-aa6710e369c5@gmx.com> Message-ID: On Tue, 2020-09-15 at 11:03 -0500, Sean McGinnis wrote: > > I would like to ask for FFE for the RFE "allow replacing the QoS > > policy of bound port", [1]. > > This feature adds the extra step to port update operation to change > > the allocation in Placement to the min_kbps values of the new QoS > > policy, if the port has a QoS policy with minimum_bandwidth rule and > > is bound and used by a server. 
> >
> > In neutron there's one open patch:
> > https://review.opendev.org/747774
> >
> > There's an open bug report for the neutron-lib side:
> > https://bugs.launchpad.net/neutron/+bug/1894825 (placement story:
> > https://storyboard.openstack.org/#!/story/2008111 ) and a fix for that:
> > https://review.opendev.org/750349
> >
> > [1] https://bugs.launchpad.net/neutron/+bug/1882804
> >
> Since this requires an update to neutron-lib, adding [requirements] to
> the subject. Non-client library freeze was two weeks ago now, so it's a
> bit late.
So this is a new feature, right? This is not a bug fix, so it also needs a Neutron feature freeze exception.
I have not reviewed the patch yet, but didn't we agree to not allow modifying existing rules in place? So I assume the replacement this enables is changing from one QoS rule set to another.
Looking at the Neutron patch, this seems incomplete and only allows modifying the Placement allocation in a limited edge case, mainly when the port was originally booted with a QoS policy. As written, I don't think https://review.opendev.org/#/c/747774/18 should be merged. I'm reviewing it now.
>
> The fix looks fairly minor, but I don't know that code. Can you comment
> on the potential risks of this change? We should be stabilizing as much
> as possible at this point as we approach the final victoria release date.
>
> Sean
>
From tbishop at liquidweb.com Tue Sep 15 19:12:16 2020
From: tbishop at liquidweb.com (Tyler Bishop)
Date: Tue, 15 Sep 2020 15:12:16 -0400
Subject: Should ports created by ironic have PXE parameters after deployment?
In-Reply-To: References: Message-ID: <1bd16422-05f0-42bf-b73d-f08b5bb9d5ec@Spark>
Hi,
My issue is that I have a neutron network (not the discovery or cleaning network) that is adding the PXE entries for the ironic PXE server, and my baremetal hosts are rebooting into discovery upon successful deployment.
I am curious how the driver implementation works for adding the PXE options to the neutron-dhcp-agent configuration, and whether that is done to help non-flat networks where no SDN is being used. I have several environments using Kolla-Ansible and this one seems to be the only one behaving like this. My neutron-dhcp-agent dnsmasq opts file looks like this after a host is deployed.
dhcp/7d0b7e78-6506-4f4a-b524-d5c03e4ca4a8/opts
cat /var/lib/neutron/dhcp/ffdf5f9b-b4ad-4a53-b154-69eb3b4a81c5/opts
tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,option:dns-server,10.60.3.240,10.60.10.240,10.60.1.240
tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,option:classless-static-route,169.254.169.254/32,10.60.66.2,0.0.0.0/0,10.60.66.1
tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,249,169.254.169.254/32,10.60.66.2,0.0.0.0/0,10.60.66.1
tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,option:router,10.60.66.1
tag:port-08908db1-360b-4973-87c7-15049a484ac6,150,10.60.66.11
tag:port-08908db1-360b-4973-87c7-15049a484ac6,210,/tftpboot/
tag:port-08908db1-360b-4973-87c7-15049a484ac6,66,10.60.66.11
tag:port-08908db1-360b-4973-87c7-15049a484ac6,67,pxelinux.0
tag:port-08908db1-360b-4973-87c7-15049a484ac6,option:server-ip-address,10.60.66.11
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From whayutin at redhat.com Tue Sep 15 20:54:14 2020
From: whayutin at redhat.com (Wesley Hayutin)
Date: Tue, 15 Sep 2020 14:54:14 -0600
Subject: [tripleo][ci] Red Tuesday
Message-ID:
Greetings,
Well, if you were working today you know there were several issues upstream.
pip issues: https://bugs.launchpad.net/tripleo/+bug/1449136 containers-multinode: https://bugs.launchpad.net/tripleo/+bug/1895290 https://bugs.launchpad.net/tripleo/+bug/1895288 validations: https://launchpad.net/bugs/1895507 Everything I know of atm is either merged on or in the gate. https://review.opendev.org/#/c/751828/ https://review.opendev.org/#/c/751653/ There is a lot going on, please report bugs to launchpad and ping nick-ruck or nick-rover in #tripleo if you see something else. Thanks to everyone helping to push things along. 0/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Sep 15 23:44:08 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 15 Sep 2020 23:44:08 +0000 Subject: [all][elections][ptl][tc] Combined PTL/TC Election Season Message-ID: <20200915234407.wddb6solkwzizxj3@yuggoth.org> Election details: https://governance.openstack.org/election/ The nomination period officially begins Sep 22, 2020 23:45 UTC. Please read the stipulations and timelines for candidates and electorate contained in this governance documentation. Be aware that the Wallaby cycle elections are a few weeks behind schedule[1]. We're still making sure the new TC is confirmed and any PTL runoff polls are closed and resolved before Victoria release day. Due to circumstances of timing, PTL and TC elections for the coming cycle will run concurrently; deadlines for their nomination and voting activities are synchronized but will still use separate ballots. Please note, if only one candidate is nominated as PTL for a project team during the PTL nomination period, that candidate will win by acclaim, and there will be no poll. There will only be a poll if there is more than one candidate stepping forward for a project team's PTL position. If teams do not produce any PTL candidate during the nomination period, or are interested in considering alternatives prior to nominations, the TC may consider requests to switch to the new distributed leadership model they recently documented[2]. In keeping with the established plan[3] to gradually reduce the size of the Technical Committee, we will only fill four (4) seats in this coming election. There will be further announcements posted to the mailing list as action is required from the electorate or candidates. This email is for information purposes only. If you have any questions which you feel affect others please reply to this email thread. If you have any questions that you which to discuss in private please email any of the election officials[4] so that we may address your concerns. [1] https://governance.openstack.org/tc/reference/election-exceptions.html [2] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html [3] https://governance.openstack.org/tc/reference/charter.html#number-of-seats-to-elect [4] https://governance.openstack.org/election/#election-officials -- Jeremy Stanley on behalf of the OpenStack Technical Election Officials -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From tkajinam at redhat.com Tue Sep 15 23:47:48 2020 From: tkajinam at redhat.com (Takashi Kajinami) Date: Wed, 16 Sep 2020 08:47:48 +0900 Subject: [storlets] Wallaby PTG planning Message-ID: Hello, After discussion in IRC I booked a slot for Storlets project in the Wallaby vPTG. 
I created a planning etherpad[1] so please put your name/nick if you are interested to join , and also put any topics you want to discuss there. [1] https://etherpad.opendev.org/p/storlets-ptg-wallaby Currently we have a slot from 13:00 UTC booked, but if we don't see attendance from EMEA/NA, we might reschedule it to "earlier" slots since current active cores are based in APAC. Please let me know if you have any questions. Thank you Takashi -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Wed Sep 16 07:30:31 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 16 Sep 2020 09:30:31 +0200 Subject: [qa] Wallaby PTG planning In-Reply-To: References: Message-ID: Hey, Masayuki et al, I just noticed QA got in the same time slots as Kolla this time again. Could we move at least one session not to conflict? -yoctozepto On Tue, Aug 25, 2020 at 1:45 PM Radosław Piliszek wrote: > > Thanks, Masayuki. > I added myself. > > I hope we can get it non-colliding with Kolla meetings this time. > I'll try to do a better job at early collision detection. :-) > > -yoctozepto > > On Tue, Aug 25, 2020 at 1:16 PM Masayuki Igawa wrote: > > > > Hi, > > > > We need to start thinking about the next cycle already. > > As you probably know, next virtual PTG will be held in October 26-30[0]. > > > > I prepared an etherpad[1] to discuss and track our topics. So, please add > > your name if you are going to attend the PTG session. And also, please add > > your proposals of the topics which you want to discuss during the PTG. > > > > I also made a doodle[2] with possible time slots. Please put your best days and hours > > so that we can try to schedule and book our sessions in the time slots. > > > > [0] https://www.openstack.org/ptg/ > > [1] https://etherpad.opendev.org/p/qa-wallaby-ptg > > [2] https://doodle.com/poll/qqd7ayz3i4ubnsbb > > > > Best Regards, > > -- Masayuki Igawa > > Key fingerprint = C27C 2F00 3A2A 999A 903A 753D 290F 53ED C899 BF89 > > From mark at stackhpc.com Wed Sep 16 07:53:27 2020 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 16 Sep 2020 08:53:27 +0100 Subject: Should ports created by ironic have PXE parameters after deployment? In-Reply-To: <1bd16422-05f0-42bf-b73d-f08b5bb9d5ec@Spark> References: <1bd16422-05f0-42bf-b73d-f08b5bb9d5ec@Spark> Message-ID: On Tue, 15 Sep 2020 at 20:13, Tyler Bishop wrote: > > Hi, > > My issue is i have a neutron network (not discovery or cleaning) that is adding the PXE entries for the ironic pxe server and my baremetal host are rebooting into discovery upon successful deployment. > > I am curious how the driver implementation works for adding the PXE options to neutron-dhcp-agent configuration and if that is being done to help non flat networks where no SDN is being used? I have several environments using Kolla-Ansible and this one seems to be the only behaving like this. My neutron-dhcp-agent dnsmasq opt file looks like this after a host is deployed. 
> > dhcp/7d0b7e78-6506-4f4a-b524-d5c03e4ca4a8/opts cat /var/lib/neutron/dhcp/ffdf5f9b-b4ad-4a53-b154-69eb3b4a81c5/opts > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,option:dns-server,10.60.3.240,10.60.10.240,10.60.1.240 > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,option:classless-static-route,169.254.169.254/32,10.60.66.2,0.0.0.0/0,10.60.66.1 > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,249,169.254.169.254/32,10.60.66.2,0.0.0.0/0,10.60.66.1 > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,option:router,10.60.66.1 > tag:port-08908db1-360b-4973-87c7-15049a484ac6,150,10.60.66.11 > tag:port-08908db1-360b-4973-87c7-15049a484ac6,210,/tftpboot/ > tag:port-08908db1-360b-4973-87c7-15049a484ac6,66,10.60.66.11 > tag:port-08908db1-360b-4973-87c7-15049a484ac6,67,pxelinux.0 > tag:port-08908db1-360b-4973-87c7-15049a484ac6,option:server-ip-address,10.60.66.11 Hi Tyler, Ironic adds DHCP options to the neutron port on the provisioning network. Specifically, the boot interface in ironic is responsible for adding DHCP options. See the PXEBaseMixin class. From katonalala at gmail.com Wed Sep 16 08:15:00 2020 From: katonalala at gmail.com (Lajos Katona) Date: Wed, 16 Sep 2020 10:15:00 +0200 Subject: [Neutron] vxlan to vlan bridge In-Reply-To: References: Message-ID: Hi, Yeah networking-l2gw can do that, but not sure if it can cover your use case. https://opendev.org/x/networking-l2gw Regards Lajos Fabian Zimmermann ezt írta (időpont: 2020. szept. 12., Szo, 15:39): > Hi, > > something like networking-l2gw? > > Fabian > > Ignazio Cassano schrieb am Sa., 12. Sept. > 2020, 09:47: > >> Hello Stackers, is it possibile to create a vxlan to vlan bridge in >> openstack like vmware nsx does? >> Thanks >> Ignazio >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From emiller at genesishosting.com Wed Sep 16 08:27:39 2020 From: emiller at genesishosting.com (Eric K. Miller) Date: Wed, 16 Sep 2020 03:27:39 -0500 Subject: [osc] load command in the CLI is no longer? Message-ID: <046E9C0290DD9149B106B72FC9156BEA0481472D@gmsxchsvr01.thecreation.com> Hi, We use the "load" command in various OpenStack client CLIs to load a list of commands to run in a single session (one example is when loading a large number of Gnocchi metrics manually). However, when testing with a newer version of the OpenStack Client, it looks like it doesn't exist anymore. OSC version 4.0.0 has the load command, version 5.2.1 does not. Did something happen that justified removing this? Thanks! Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From sshnaidm at redhat.com Wed Sep 16 10:00:31 2020 From: sshnaidm at redhat.com (Sagi Shnaidman) Date: Wed, 16 Sep 2020 13:00:31 +0300 Subject: [ansible-sig][openstack-ansible-modules][PTG] Wallaby PTG for Openstack Ansible collections Message-ID: Hi, all We have a scheduled session for Openstack Ansible collection project in PTG: Wednesday 16.00 - 17.00 UTC The Etherpad for it is in https://etherpad.opendev.org/p/os-ansible-wallaby-ptg Please add your name if you plan to attend, questions and topics that you want to address in PTG. If it will be too much for one session, I'll schedule more. Thanks -- Best regards Sagi Shnaidman -------------- next part -------------- An HTML attachment was scrubbed... 
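Coming back to the ironic PXE question earlier in this digest: those PXE parameters land on the neutron port as extra_dhcp_opts, so a quick way to check whether a given port is still carrying them after deployment is (the port UUID is a placeholder):

openstack port show <port-uuid> -c extra_dhcp_opts -f yaml

If the options are still set on the tenant port once the deploy has finished, comparing that output between the misbehaving environment and the ones that work is a reasonable next step.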
URL: From mtint.stfc at gmail.com Wed Sep 16 10:53:59 2020 From: mtint.stfc at gmail.com (Michael STFC) Date: Wed, 16 Sep 2020 03:53:59 -0700 Subject: core os password reset In-Reply-To: References: <1bd16422-05f0-42bf-b73d-f08b5bb9d5ec@Spark> Message-ID: Hi New to openstack and wanting to know how to get boot core os and reset user core password. Please advise. Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Wed Sep 16 12:18:28 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 16 Sep 2020 06:18:28 -0600 Subject: [tripleo] stepping down as PTL Message-ID: Greetings, Thank you for the opportunity to be the TripleO PTL. This has been a great learning opportunity to work with a pure upstream community, other projects and the OpenStack leadership. Thank you to the TripleO team for your help and dedication in adding features, fixing bugs, and responding to whatever has come our way upstream. Lastly.. Thank you to Alex and Emilien for all the assistance throughout!! Managing the work required here with Covid-19, home schooling is a little much for me at this time, I would like to encourage others to volunteer for the opportunity in Wallaby. 0/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Wed Sep 16 12:21:40 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 16 Sep 2020 06:21:40 -0600 Subject: [tripleo] Call for Topics Message-ID: Greetings, You folks know the drill. Please take a moment to add topics for discussion for the Wallaby PTG. https://etherpad.opendev.org/p/tripleo-wallaby-topics Thanks all! -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Wed Sep 16 12:27:10 2020 From: katonalala at gmail.com (Lajos Katona) Date: Wed, 16 Sep 2020 14:27:10 +0200 Subject: [Neutron][FFE][requirements] request for QoS policy update for bound ports feature In-Reply-To: References: <0e0641c7-75d3-f4c7-1334-aa6710e369c5@gmx.com> Message-ID: Hi, I think I addressed your comments in the patch. Regards Lajos Katona (lajoskatona) Sean Mooney ezt írta (időpont: 2020. szept. 15., K, 18:41): > On Tue, 2020-09-15 at 11:03 -0500, Sean McGinnis wrote: > > > I would like to ask for FFE for the RFE "allow replacing the QoS > > > policy of bound port", [1]. > > > This feature adds the extra step to port update operation to change > > > the allocation in Placement to the min_kbps values of the new QoS > > > policy, if the port has a QoS policy with minimum_bandwidth rule and > > > is bound and used by a server. > > > > > > In neutron there's one open patch: > > > https://review.opendev.org/747774 > > > > > > There's an open bug report for the neutron-lib side: > > > https://bugs.launchpad.net/neutron/+bug/1894825 (placement story: > > > https://storyboard.openstack.org/#!/story/2008111 ) and a fix for > that: > > > https://review.opendev.org/750349 > > > > > > [1] https://bugs.launchpad.net/neutron/+bug/1882804 > > > > > > > Since this requires an update to neutron-lib, adding [requirements] to > > the subject. Non-client library freeze was two weeks ago now, so it's a > > bit late. > so this is a new feature right. > this is not a bug fix so this also need a neutron feature freeze exception. > > i have not reviewd the patch yet but didnt we agree to now allow modifyign > existign rules in place > os i assume the replacemnt this enables is changign form one qos rule set > to another. 
> > looking at the neutorn patch this seams incomplte and only allows > modifying the placment allocation i > a limited edgecase, mainly when teh prot was orginally booted with a qos > policy. > as written i dont think https://review.opendev.org/#/c/747774/18 should > be merged. > im reviewing it now. > > > > > > The fix looks fairly minor, but I don't know that code. Can you comment > > on the potential risks of this change? We should be stabilizing as much > > as possible at this point as we approach the final victoria release date. > > > > Sean > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Wed Sep 16 12:31:32 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 16 Sep 2020 06:31:32 -0600 Subject: [tripleo] distributed-project-leadership RFC Message-ID: Greetings, The TripleO project would like to hear your thoughts with regards to breaking down the typical PTL role into smaller roles. Please read through this document [1] and share your thoughts in this thread. If you are interested in growing your leadership skills and being responsible for any of the roles listed there please also indicate that in your response. Thanks all! [1] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From florian at datalounges.com Wed Sep 16 12:31:10 2020 From: florian at datalounges.com (Florian Rommel) Date: Wed, 16 Sep 2020 15:31:10 +0300 Subject: core os password reset In-Reply-To: References: <1bd16422-05f0-42bf-b73d-f08b5bb9d5ec@Spark> Message-ID: Hi Michael. So, if I remember coreOS correctly, its the same as all of the cloud based images. It uses SSH keys to authenticate. If you have a an SSH public key in there where you do no longer have the private key for, you can “easily” reset it by 2 ways. 1. If its volume based instance, delete the instance but not the volume. Create the instance again by adding your own ssh key into the boot process. This will ADD the ssh key, but not overwrite the existing one in the authorized_key file 2. If it is normal ephermal disk based instance, make a snapshot and create a new instance from the snapshot, adding your own ssh key into it. Either or, if they are ssh key authenticated (which they should be), there isn’t really an EASY way unless you want to have the volume directly. Best regards, //Florian > On 16. Sep 2020, at 13.53, Michael STFC wrote: > > Hi > > New to openstack and wanting to know how to get boot core os and reset user core password. > > Please advise. > > Michael From katonalala at gmail.com Wed Sep 16 12:39:27 2020 From: katonalala at gmail.com (Lajos Katona) Date: Wed, 16 Sep 2020 14:39:27 +0200 Subject: [Neutron][FFE][requirements] request for QoS policy update for bound ports feature In-Reply-To: <0e0641c7-75d3-f4c7-1334-aa6710e369c5@gmx.com> References: <0e0641c7-75d3-f4c7-1334-aa6710e369c5@gmx.com> Message-ID: Hi The neutron-lib patch (https://review.opendev.org/750349 ) is a bug fix (see [1]) which as do not touch db or API can be backported later in the worst case. The fix itself doesn't affect other Neutron features, so no harm. Thanks for your help. Regards Lajos Katona (lajoskatona) [1] https://launchpad.net/bugs/1894825 Sean McGinnis ezt írta (időpont: 2020. szept. 15., K, 18:05): > > I would like to ask for FFE for the RFE "allow replacing the QoS > > policy of bound port", [1]. 
> > This feature adds the extra step to port update operation to change > > the allocation in Placement to the min_kbps values of the new QoS > > policy, if the port has a QoS policy with minimum_bandwidth rule and > > is bound and used by a server. > > > > In neutron there's one open patch: > > https://review.opendev.org/747774 > > > > There's an open bug report for the neutron-lib side: > > https://bugs.launchpad.net/neutron/+bug/1894825 (placement story: > > https://storyboard.openstack.org/#!/story/2008111 ) and a fix for that: > > https://review.opendev.org/750349 > > > > [1] https://bugs.launchpad.net/neutron/+bug/1882804 > > > Since this requires an update to neutron-lib, adding [requirements] to > the subject. Non-client library freeze was two weeks ago now, so it's a > bit late. > > The fix looks fairly minor, but I don't know that code. Can you comment > on the potential risks of this change? We should be stabilizing as much > as possible at this point as we approach the final victoria release date. > > Sean > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Wed Sep 16 12:42:38 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 16 Sep 2020 14:42:38 +0200 Subject: [openstack][cinder] volume revert to snapshot on stein Message-ID: Hello Stackers, does cinder revert to snapshot works on stein ? The command cinder revert-to-snapshot returns: error: argument : invalid choice: u'revert-to-snapshot' *I also tried using cinder v3 api* post /v3/{project_id}/volumes/{volume_id} /action but it returns : {"itemNotFound": {"message": "The resource could not be found.", "code": 404}} Please, anyone could help me ? Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucioseki at gmail.com Wed Sep 16 13:05:03 2020 From: lucioseki at gmail.com (Lucio Seki) Date: Wed, 16 Sep 2020 10:05:03 -0300 Subject: [openstack][cinder] volume revert to snapshot on stein In-Reply-To: References: Message-ID: Hi Ignazio, The feature was introduced in Pike release [0], so it should work. Please specify the Volume API version >= 3.40 either by: a) passing `--os-volume-api-version` parameter to the command line: cinder --os-volume-api-version 3.40 revert-to-snapshot snap1 b) exporting the environment variable OS_VOLUME_API_VERSION: export OS_VOLUME_API_VERSION=3.40 cinder revert-to-snapshot snap1 Please let us know if it works or not. [0] https://docs.openstack.org/releasenotes/cinder/pike.html#relnotes-11-0-0-stable-pike-new-features Lucio On Wed, 16 Sep 2020 at 09:52, Ignazio Cassano wrote: > Hello Stackers, > does cinder revert to snapshot works on stein ? > The command cinder revert-to-snapshot returns: > > error: argument : invalid choice: u'revert-to-snapshot' > > *I also tried using cinder v3 api* post /v3/{project_id}/volumes/ > {volume_id}/action > but it returns : > {"itemNotFound": {"message": "The resource could not be found.", "code": > 404}} > > Please, anyone could help me ? > Ignazio > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Wed Sep 16 13:15:05 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 16 Sep 2020 15:15:05 +0200 Subject: [openstack][cinder] volume revert to snapshot on stein In-Reply-To: References: Message-ID: I will chek. 
Thanks Lucio I'll keep in touch Il giorno mer 16 set 2020 alle ore 15:05 Lucio Seki ha scritto: > Hi Ignazio, > > The feature was introduced in Pike release [0], so it should work. > Please specify the Volume API version >= 3.40 either by: > a) passing `--os-volume-api-version` parameter to the command line: > > cinder --os-volume-api-version 3.40 revert-to-snapshot snap1 > > > b) exporting the environment variable OS_VOLUME_API_VERSION: > > export OS_VOLUME_API_VERSION=3.40 > > cinder revert-to-snapshot snap1 > > > Please let us know if it works or not. > > [0] > https://docs.openstack.org/releasenotes/cinder/pike.html#relnotes-11-0-0-stable-pike-new-features > > Lucio > > On Wed, 16 Sep 2020 at 09:52, Ignazio Cassano > wrote: > >> Hello Stackers, >> does cinder revert to snapshot works on stein ? >> The command cinder revert-to-snapshot returns: >> >> error: argument : invalid choice: u'revert-to-snapshot' >> >> *I also tried using cinder v3 api* post /v3/{project_id}/volumes/ >> {volume_id}/action >> but it returns : >> {"itemNotFound": {"message": "The resource could not be found.", "code": >> 404}} >> >> Please, anyone could help me ? >> Ignazio >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Wed Sep 16 13:23:58 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 16 Sep 2020 09:23:58 -0400 Subject: [cinder] propose Lucio Seki for cinder core In-Reply-To: <2f42f092-5037-5f9d-48d6-cc52097f2479@gmail.com> References: <2f42f092-5037-5f9d-48d6-cc52097f2479@gmail.com> Message-ID: <55b06273-fbdf-5f72-d20e-ab7a3b9ba7d8@gmail.com> On 9/10/20 9:51 PM, Brian Rosmaita wrote: > Lucio Seki (lseki on IRC) has been very active this cycle doing reviews, > answering questions in IRC, and participating in the Cinder weekly > meetings and at the midcycles.  He's been particularly thorough and > helpful in his reviews of backend drivers, and has also been helpful in > giving pointers to new driver maintainers who are setting up third party > CI for their drivers.  Having Lucio as a core reviewer will help improve > the team's review bandwidth without sacrificing review quality. > > In the absence of objections, I'll add Lucio to the core team just > before the next Cinder team meeting (Wednesday, 16 September at 1400 UTC > in #openstack-meeting-alt).  Please communicate any concerns to me > before that time. Having heard only positive responses, I've added Lucio to the cinder core team, with all the privileges and responsibilities pertaining thereto. Congratulations, Lucio! > cheers, > brian From anlin.kong at gmail.com Wed Sep 16 13:29:36 2020 From: anlin.kong at gmail.com (Lingxian Kong) Date: Thu, 17 Sep 2020 01:29:36 +1200 Subject: [Requirements] [FFE] python-troveclient 5.1.1 Message-ID: Hi, I'd like to ask FFE for bumping python-troveclient to 5.1.1[1] in Victoria. The patch version 5.1.1 includes a fix for multi-region support for python-troveclient[2] OSC plugin. [1]: https://review.opendev.org/#/c/752122/ [2]: https://review.opendev.org/#/q/Ia0580a599fc2385d54def4e18e0780209b82eff7 --- Lingxian Kong Senior Software Engineer Catalyst Cloud www.catalystcloud.nz -------------- next part -------------- An HTML attachment was scrubbed... 
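For anyone who wants to exercise the multi-region path once the bump lands, the region is selected with the standard client option (RegionTwo is only an example name):

openstack --os-region-name RegionTwo database instance list

Commands from the trove OSC plugin such as the one above are presumably the sort of calls affected by the multi-region fix referenced above.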
URL: From lucioseki at gmail.com Wed Sep 16 13:35:27 2020 From: lucioseki at gmail.com (Lucio Seki) Date: Wed, 16 Sep 2020 10:35:27 -0300 Subject: [cinder] propose Lucio Seki for cinder core In-Reply-To: <55b06273-fbdf-5f72-d20e-ab7a3b9ba7d8@gmail.com> References: <2f42f092-5037-5f9d-48d6-cc52097f2479@gmail.com> <55b06273-fbdf-5f72-d20e-ab7a3b9ba7d8@gmail.com> Message-ID: Hello Cinderinos, It's an honor for me to have this opportunity & responsibility. I'll do my best to keep contributing to the Cinder community. Thank you all, Lucio Seki On Wed, 16 Sep 2020 at 10:32, Brian Rosmaita wrote: > On 9/10/20 9:51 PM, Brian Rosmaita wrote: > > Lucio Seki (lseki on IRC) has been very active this cycle doing reviews, > > answering questions in IRC, and participating in the Cinder weekly > > meetings and at the midcycles. He's been particularly thorough and > > helpful in his reviews of backend drivers, and has also been helpful in > > giving pointers to new driver maintainers who are setting up third party > > CI for their drivers. Having Lucio as a core reviewer will help improve > > the team's review bandwidth without sacrificing review quality. > > > > In the absence of objections, I'll add Lucio to the core team just > > before the next Cinder team meeting (Wednesday, 16 September at 1400 UTC > > in #openstack-meeting-alt). Please communicate any concerns to me > > before that time. > > Having heard only positive responses, I've added Lucio to the cinder > core team, with all the privileges and responsibilities pertaining thereto. > > Congratulations, Lucio! > > > cheers, > > brian > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Wed Sep 16 13:35:32 2020 From: marios at redhat.com (Marios Andreou) Date: Wed, 16 Sep 2020 16:35:32 +0300 Subject: [tripleo] stepping down as PTL In-Reply-To: References: Message-ID: On Wed, Sep 16, 2020 at 3:20 PM Wesley Hayutin wrote: > Greetings, > > Thank you for the opportunity to be the TripleO PTL. This has been a > great learning opportunity to work with a pure upstream community, other > projects and the OpenStack leadership. Thank you to the TripleO team for > your help and dedication in adding features, fixing bugs, and responding to > whatever has come our way upstream. Lastly.. Thank you to Alex and Emilien > for all the assistance throughout!! > Wes, thanks for all your hard work over the past two cycles as PTL! > > Managing the work required here with Covid-19, home schooling is a little > much for me at this time, I would like to encourage others to volunteer for > the opportunity in Wallaby. > I'd like to put my name forward as 'interested'. As discussed offline, I think it may be a good time to consider adopting the distributed ptl [1] for TripleO assuming there is consensus in doing so. That should probably be its own thread though so I won't analyse it any further here beyond introducing the idea, thanks, marios > > 0/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mtint.stfc at gmail.com Wed Sep 16 13:39:03 2020 From: mtint.stfc at gmail.com (Michael STFC) Date: Wed, 16 Sep 2020 06:39:03 -0700 Subject: core os password reset In-Reply-To: References: <1bd16422-05f0-42bf-b73d-f08b5bb9d5ec@Spark> Message-ID: Our openstack env automatically injects SSH keys) and already does that with all other images I have downloaded to deployed e.g fedora cloud images and ceros cloud image. 
However core os is different and I have tried to edit grub added coreos.autologin=tty1 but nothing. Also tried to do this via cloud-config #cloud-config coreos: units: - name: etcd.service command: start users: - name: core passwd: coreos ssh_authorized_keys: - "ssh-rsa xxxxx" And not luck - when vm boots it hangs. On 16 Sep 2020 at 13:31:10, Florian Rommel wrote: > Hi Michael. > So, if I remember coreOS correctly, its the same as all of the cloud based > images. It uses SSH keys to authenticate. If you have a an SSH public key > in there where you do no longer have the private key for, you can “easily” > reset it by 2 ways. > 1. If its volume based instance, delete the instance but not the volume. > Create the instance again by adding your own ssh key into the boot process. > This will ADD the ssh key, but not overwrite the existing one in the > authorized_key file > 2. If it is normal ephermal disk based instance, make a snapshot and > create a new instance from the snapshot, adding your own ssh key into it. > > Either or, if they are ssh key authenticated (which they should be), there > isn’t really an EASY way unless you want to have the volume directly. > > Best regards, > //Florian > > On 16. Sep 2020, at 13.53, Michael STFC wrote: > > > Hi > > > New to openstack and wanting to know how to get boot core os and reset > user core password. > > > Please advise. > > > Michael > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johfulto at redhat.com Wed Sep 16 13:40:37 2020 From: johfulto at redhat.com (John Fulton) Date: Wed, 16 Sep 2020 09:40:37 -0400 Subject: [tripleo] stepping down as PTL In-Reply-To: References: Message-ID: On Wed, Sep 16, 2020 at 9:38 AM Marios Andreou wrote: > > > > On Wed, Sep 16, 2020 at 3:20 PM Wesley Hayutin wrote: >> >> Greetings, >> >> Thank you for the opportunity to be the TripleO PTL. This has been a great learning opportunity to work with a pure upstream community, other projects and the OpenStack leadership. Thank you to the TripleO team for your help and dedication in adding features, fixing bugs, and responding to whatever has come our way upstream. Lastly.. Thank you to Alex and Emilien for all the assistance throughout!! > > > Wes, > thanks for all your hard work over the past two cycles as PTL! +100 > >> >> >> Managing the work required here with Covid-19, home schooling is a little much for me at this time, I would like to encourage others to volunteer for the opportunity in Wallaby. > > > I'd like to put my name forward as 'interested'. > > As discussed offline, I think it may be a good time to consider adopting the distributed ptl [1] for TripleO assuming there is consensus in doing so. That should probably be its own thread though so I won't analyse it any further here beyond introducing the idea, > > thanks, marios > > >> >> >> 0/ From smooney at redhat.com Wed Sep 16 13:45:42 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 16 Sep 2020 14:45:42 +0100 Subject: core os password reset In-Reply-To: References: <1bd16422-05f0-42bf-b73d-f08b5bb9d5ec@Spark> Message-ID: <6478bb0818a2edcf3cfd68e3ed68436428567927.camel@redhat.com> On Wed, 2020-09-16 at 06:39 -0700, Michael STFC wrote: > Our openstack env automatically injects SSH keys) and already does that > with all other images I have downloaded to deployed e.g fedora cloud images > and ceros cloud image. > > However core os is different and I have tried to edit grub added > coreos.autologin=tty1 > but nothing. 
> > Also tried to do this via cloud-config > > #cloud-config > > coreos: > units: > - name: etcd.service > command: start > > users: > - name: core > passwd: coreos > ssh_authorized_keys: > - "ssh-rsa xxxxx" > > > And not luck - when vm boots it hangs. Coreos does not use cloud config by default it uses ignition. i belive you can still configure it with cloud init but you have to do it slightly differnet then normal. https://coreos.com/os/docs/latest/booting-on-openstack.html#container-linux-configs has the detail you need. basically you have to either pass an ignition script as the user data or Container Linux Config format. cloud init wont work. e.g. nova boot \ --user-data ./config.ign \ --image cdf3874c-c27f-4816-bc8c-046b240e0edd \ --key-name coreos \ --flavor m1.medium \ --min-count 3 \ --security-groups default,coreos were ./config.ign is an ignition file. > > On 16 Sep 2020 at 13:31:10, Florian Rommel wrote: > > > Hi Michael. > > So, if I remember coreOS correctly, its the same as all of the cloud based > > images. It uses SSH keys to authenticate. If you have a an SSH public key > > in there where you do no longer have the private key for, you can “easily” > > reset it by 2 ways. > > 1. If its volume based instance, delete the instance but not the volume. > > Create the instance again by adding your own ssh key into the boot process. > > This will ADD the ssh key, but not overwrite the existing one in the > > authorized_key file > > 2. If it is normal ephermal disk based instance, make a snapshot and > > create a new instance from the snapshot, adding your own ssh key into it. > > > > Either or, if they are ssh key authenticated (which they should be), there > > isn’t really an EASY way unless you want to have the volume directly. > > > > Best regards, > > //Florian > > > > On 16. Sep 2020, at 13.53, Michael STFC wrote: > > > > > > Hi > > > > > > New to openstack and wanting to know how to get boot core os and reset > > user core password. > > > > > > Please advise. > > > > > > Michael > > > > > > > > From marios at redhat.com Wed Sep 16 13:51:10 2020 From: marios at redhat.com (Marios Andreou) Date: Wed, 16 Sep 2020 16:51:10 +0300 Subject: [tripleo] distributed-project-leadership RFC In-Reply-To: References: Message-ID: I already replied on your other thread [1] but I think *this* is the right one ! I'd like to nominate myself for the PTL role - ideally working with a team of liaisons. Of course we may not find enough folks to fill all the roles documented in the spec, in which case the PTL will have to ensure this is done by some other means (in the worst case doing it themself); so let's see what the level of interest is here. Perhaps some folks will even disagree with the distributed PTL. Otherwise I'm also interested in any of the liaison - I have some prior experience cutting tripleo releases so possibly release liaison. thanks, marios [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017326.html On Wed, Sep 16, 2020 at 3:34 PM Wesley Hayutin wrote: > Greetings, > > The TripleO project would like to hear your thoughts with regards to > breaking down the typical PTL role into smaller roles. Please read through > this document [1] and share your thoughts in this thread. > > If you are interested in growing your leadership skills and being > responsible for any of the roles listed there please also indicate that in > your response. > > Thanks all! 
> > > [1] > https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Wed Sep 16 13:54:53 2020 From: jungleboyj at gmail.com (Jay Bryant) Date: Wed, 16 Sep 2020 08:54:53 -0500 Subject: [cinder] propose Lucio Seki for cinder core In-Reply-To: References: <2f42f092-5037-5f9d-48d6-cc52097f2479@gmail.com> <55b06273-fbdf-5f72-d20e-ab7a3b9ba7d8@gmail.com> Message-ID: <06149649-b20e-3dda-866f-343d7cac2593@gmail.com> Lucio, Congratulations and welcome to the team! Jay On 9/16/2020 8:35 AM, Lucio Seki wrote: > Hello Cinderinos, > > It's an honor for me to have this opportunity & responsibility. > I'll do my best to keep contributing to the Cinder community. > > Thank you all, > Lucio Seki > > On Wed, 16 Sep 2020 at 10:32, Brian Rosmaita > > wrote: > > On 9/10/20 9:51 PM, Brian Rosmaita wrote: > > Lucio Seki (lseki on IRC) has been very active this cycle doing > reviews, > > answering questions in IRC, and participating in the Cinder weekly > > meetings and at the midcycles.  He's been particularly thorough and > > helpful in his reviews of backend drivers, and has also been > helpful in > > giving pointers to new driver maintainers who are setting up > third party > > CI for their drivers.  Having Lucio as a core reviewer will help > improve > > the team's review bandwidth without sacrificing review quality. > > > > In the absence of objections, I'll add Lucio to the core team just > > before the next Cinder team meeting (Wednesday, 16 September at > 1400 UTC > > in #openstack-meeting-alt).  Please communicate any concerns to me > > before that time. > > Having heard only positive responses, I've added Lucio to the cinder > core team, with all the privileges and responsibilities pertaining > thereto. > > Congratulations, Lucio! > > > cheers, > > brian > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Sep 16 14:10:24 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 16 Sep 2020 14:10:24 +0000 Subject: [tripleo] distributed-project-leadership RFC In-Reply-To: References: Message-ID: <20200916141024.ajwqeyz7zvwem25n@yuggoth.org> On 2020-09-16 16:51:10 +0300 (+0300), Marios Andreou wrote: [...] > I'd like to nominate myself for the PTL role - ideally working with a team > of liaisons. Of course we may not find enough folks to fill all the roles > documented in the spec, in which case the PTL will have to ensure this is > done by some other means (in the worst case doing it themself); so let's > see what the level of interest is here. Perhaps some folks will even > disagree with the distributed PTL. [...] I think there may be some misunderstanding due to how that document is laid out. It briefly describes our traditional "PTL with Liaisons" project leadership model for purposes of comparison (and to indicate that it's still the default), but then goes on to describe a new "Distributed Leadership" model which has no PTL. If you want to nominate yourself as PTL candidate for TripleO, you would do that as usual next week when PTL nominations open (see https://governance.openstack.org/election/ for instructions, but we'll also send another announcement to the ML at the start of nomination week). Whoever is elected TripleO PTL can delegate whatever duties to as many liaisons as they like, this has always been a strong recommendation anyway for a number of very good reasons. 
If the TripleO wants "PTL-less" distributed leadership instead (that is, a group of liaisons reporting to the team and to the TC, with no PTL as an intermediary), then that's what the 2020-08-03 Distributed Project Leadership resolution is intended to allow. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From marios at redhat.com Wed Sep 16 14:18:35 2020 From: marios at redhat.com (Marios Andreou) Date: Wed, 16 Sep 2020 17:18:35 +0300 Subject: [tripleo] distributed-project-leadership RFC In-Reply-To: <20200916141024.ajwqeyz7zvwem25n@yuggoth.org> References: <20200916141024.ajwqeyz7zvwem25n@yuggoth.org> Message-ID: On Wed, Sep 16, 2020 at 5:12 PM Jeremy Stanley wrote: > On 2020-09-16 16:51:10 +0300 (+0300), Marios Andreou wrote: > [...] > > I'd like to nominate myself for the PTL role - ideally working with a > team > > of liaisons. Of course we may not find enough folks to fill all the roles > > documented in the spec, in which case the PTL will have to ensure this is > > done by some other means (in the worst case doing it themself); so let's > > see what the level of interest is here. Perhaps some folks will even > > disagree with the distributed PTL. > [...] > > I think there may be some misunderstanding due to how that document > is laid out. It briefly describes our traditional "PTL with > Liaisons" project leadership model for purposes of comparison (and > to indicate that it's still the default), but then goes on to > describe a new "Distributed Leadership" model which has no PTL. > > If you want to nominate yourself as PTL candidate for TripleO, you > would do that as usual next week when PTL nominations open (see > https://governance.openstack.org/election/ for instructions, but > we'll also send another announcement to the ML at the start of > nomination week). Whoever is elected TripleO PTL can delegate > whatever duties to as many liaisons as they like, this has always > been a strong recommendation anyway for a number of very good > reasons. > > If the TripleO wants "PTL-less" distributed leadership instead (that > is, a group of liaisons reporting to the team and to the TC, with no > PTL as an intermediary), then that's what the 2020-08-03 Distributed > Project Leadership resolution is intended to allow. > thanks for clarifying - so I'm in fact describing PTL with liaisons above. I thought even with distributed PTL there is still a central figure that is to be held accountable should any of the liaisons be unable to fulfill their responsibilities. I still think the distributed leadership model can work for TripleO but that will depend on how many folks step forward to signal interest here. I guess what we do ultimately will depend on that so let's give it a couple of days, thanks, marios > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From florian at datalounges.com Wed Sep 16 14:25:11 2020 From: florian at datalounges.com (Florian Rommel) Date: Wed, 16 Sep 2020 17:25:11 +0300 Subject: core os password reset In-Reply-To: <6478bb0818a2edcf3cfd68e3ed68436428567927.camel@redhat.com> References: <1bd16422-05f0-42bf-b73d-f08b5bb9d5ec@Spark> <6478bb0818a2edcf3cfd68e3ed68436428567927.camel@redhat.com> Message-ID: <69CEE84A-7BCA-4920-B161-683F2856E4A9@datalounges.com> Just as a side question, is there a benefit of ignition over cloud-init? (Not trying to start a flame war.. 
genuinely interested, and not trying to hijack the thread either) //florian > On 16. Sep 2020, at 16.45, Sean Mooney wrote: > > On Wed, 2020-09-16 at 06:39 -0700, Michael STFC wrote: >> Our openstack env automatically injects SSH keys) and already does that >> with all other images I have downloaded to deployed e.g fedora cloud images >> and ceros cloud image. >> >> However core os is different and I have tried to edit grub added >> coreos.autologin=tty1 >> but nothing. >> >> Also tried to do this via cloud-config >> >> #cloud-config >> >> coreos: >> units: >> - name: etcd.service >> command: start >> >> users: >> - name: core >> passwd: coreos >> ssh_authorized_keys: >> - "ssh-rsa xxxxx" >> >> >> And not luck - when vm boots it hangs. > Coreos does not use cloud config by default it uses ignition. > i belive you can still configure it with cloud init but you have to do it > slightly differnet then normal. > https://coreos.com/os/docs/latest/booting-on-openstack.html#container-linux-configs > has the detail you need. basically you have to either pass an ignition script as the user > data or Container Linux Config format. > > cloud init wont work. > > e.g. > nova boot \ > --user-data ./config.ign \ > --image cdf3874c-c27f-4816-bc8c-046b240e0edd \ > --key-name coreos \ > --flavor m1.medium \ > --min-count 3 \ > --security-groups default,coreos > > were ./config.ign is an ignition file. > >> >> On 16 Sep 2020 at 13:31:10, Florian Rommel wrote: >> >>> Hi Michael. >>> So, if I remember coreOS correctly, its the same as all of the cloud based >>> images. It uses SSH keys to authenticate. If you have a an SSH public key >>> in there where you do no longer have the private key for, you can “easily” >>> reset it by 2 ways. >>> 1. If its volume based instance, delete the instance but not the volume. >>> Create the instance again by adding your own ssh key into the boot process. >>> This will ADD the ssh key, but not overwrite the existing one in the >>> authorized_key file >>> 2. If it is normal ephermal disk based instance, make a snapshot and >>> create a new instance from the snapshot, adding your own ssh key into it. >>> >>> Either or, if they are ssh key authenticated (which they should be), there >>> isn’t really an EASY way unless you want to have the volume directly. >>> >>> Best regards, >>> //Florian >>> >>> On 16. Sep 2020, at 13.53, Michael STFC wrote: >>> >>> >>> Hi >>> >>> >>> New to openstack and wanting to know how to get boot core os and reset >>> user core password. >>> >>> >>> Please advise. >>> >>> >>> Michael >>> >>> >>> >>> > > From smooney at redhat.com Wed Sep 16 14:34:09 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 16 Sep 2020 15:34:09 +0100 Subject: core os password reset In-Reply-To: <69CEE84A-7BCA-4920-B161-683F2856E4A9@datalounges.com> References: <1bd16422-05f0-42bf-b73d-f08b5bb9d5ec@Spark> <6478bb0818a2edcf3cfd68e3ed68436428567927.camel@redhat.com> <69CEE84A-7BCA-4920-B161-683F2856E4A9@datalounges.com> Message-ID: <67a107feee8376e0093e13a112bbf9bba95b9145.camel@redhat.com> On Wed, 2020-09-16 at 17:25 +0300, Florian Rommel wrote: > Just as a side question, is there a benefit of ignition over cloud-init? (Not trying to start a flame war.. genuinely > interested, and not trying to hijack the thread either) im not really sure. i think core os created ignition because of some limitation with cloud init they use it for a lot more then jsut basic first boot setup. 
its used for all system configtion in container linux so i imagin they hit edgecases and developed ignition to adress those. most cloud image like ubuntu or fedora i think dont ship with ignition support out of the box. i did not have anything in my history on this topic but i did fine this https://coreos.com/ignition/docs/latest/what-is-ignition.html coreos are generally pretty good at documenting there deision are thigns like this also https://coreos.com/ignition/docs/latest/rationale.html i really have not had much interaction with it however so in practis i dont know which is "better" in general. > > //florian > > > On 16. Sep 2020, at 16.45, Sean Mooney wrote: > > > > On Wed, 2020-09-16 at 06:39 -0700, Michael STFC wrote: > > > Our openstack env automatically injects SSH keys) and already does that > > > with all other images I have downloaded to deployed e.g fedora cloud images > > > and ceros cloud image. > > > > > > However core os is different and I have tried to edit grub added > > > coreos.autologin=tty1 > > > but nothing. > > > > > > Also tried to do this via cloud-config > > > > > > #cloud-config > > > > > > coreos: > > > units: > > > - name: etcd.service > > > command: start > > > > > > users: > > > - name: core > > > passwd: coreos > > > ssh_authorized_keys: > > > - "ssh-rsa xxxxx" > > > > > > > > > And not luck - when vm boots it hangs. > > > > Coreos does not use cloud config by default it uses ignition. > > i belive you can still configure it with cloud init but you have to do it > > slightly differnet then normal. > > https://coreos.com/os/docs/latest/booting-on-openstack.html#container-linux-configs > > has the detail you need. basically you have to either pass an ignition script as the user > > data or Container Linux Config format. > > > > cloud init wont work. > > > > e.g. > > nova boot \ > > --user-data ./config.ign \ > > --image cdf3874c-c27f-4816-bc8c-046b240e0edd \ > > --key-name coreos \ > > --flavor m1.medium \ > > --min-count 3 \ > > --security-groups default,coreos > > > > were ./config.ign is an ignition file. > > > > > > > > On 16 Sep 2020 at 13:31:10, Florian Rommel wrote: > > > > > > > Hi Michael. > > > > So, if I remember coreOS correctly, its the same as all of the cloud based > > > > images. It uses SSH keys to authenticate. If you have a an SSH public key > > > > in there where you do no longer have the private key for, you can “easily” > > > > reset it by 2 ways. > > > > 1. If its volume based instance, delete the instance but not the volume. > > > > Create the instance again by adding your own ssh key into the boot process. > > > > This will ADD the ssh key, but not overwrite the existing one in the > > > > authorized_key file > > > > 2. If it is normal ephermal disk based instance, make a snapshot and > > > > create a new instance from the snapshot, adding your own ssh key into it. > > > > > > > > Either or, if they are ssh key authenticated (which they should be), there > > > > isn’t really an EASY way unless you want to have the volume directly. > > > > > > > > Best regards, > > > > //Florian > > > > > > > > On 16. Sep 2020, at 13.53, Michael STFC wrote: > > > > > > > > > > > > Hi > > > > > > > > > > > > New to openstack and wanting to know how to get boot core os and reset > > > > user core password. > > > > > > > > > > > > Please advise. 
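To make the Ignition route described above concrete, a minimal config that only installs an SSH key for the core user, written against the Container Linux spec and passed as user data, looks roughly like this (the key, image and flavor names are placeholders):

cat > config.ign <<'EOF'
{
  "ignition": { "version": "2.2.0" },
  "passwd": {
    "users": [
      { "name": "core", "sshAuthorizedKeys": ["ssh-rsa AAAA...your-public-key"] }
    ]
  }
}
EOF
openstack server create --image coreos-container-linux --flavor m1.small --user-data config.ign coreos-test

This is the same idea as the nova boot example quoted above, just with a minimal config.ign spelled out.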
> > > > > > > > > > > > Michael > > > > > > > > > > > > > > > > > > > > > > From fungi at yuggoth.org Wed Sep 16 14:46:01 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 16 Sep 2020 14:46:01 +0000 Subject: [tripleo] distributed-project-leadership RFC In-Reply-To: References: <20200916141024.ajwqeyz7zvwem25n@yuggoth.org> Message-ID: <20200916144600.6puhsmibrjw222fv@yuggoth.org> On 2020-09-16 17:18:35 +0300 (+0300), Marios Andreou wrote: [...] > thanks for clarifying - so I'm in fact describing PTL with > liaisons above. I thought even with distributed PTL there is still > a central figure that is to be held accountable should any of the > liaisons be unable to fulfill their responsibilities. [...] Yep, just to make sure everyone's on the same page about what https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html allows, it's a means of having no PTL at all and instead formally assigning volunteers to at least these duties which a PTL would normally have performed or delegated: Release liaison, TaCT SIG liaison, Security SIG/VMT liaison (also optionally: Events liaison, Project Update/Onboarding liaison, Meeting Facilitator, Bug Deputy, RFE Coordinator). These are all tasks a traditional PTL can already (and is encouraged to) delegate to volunteers on their team if they are so inclined, the 2020-08-03 Distributed Project Leadership resolution merely recognizes a sanctioned way of doing that without any PTL whatsoever. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Wed Sep 16 14:49:18 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 16 Sep 2020 14:49:18 +0000 Subject: core os password reset In-Reply-To: <69CEE84A-7BCA-4920-B161-683F2856E4A9@datalounges.com> References: <1bd16422-05f0-42bf-b73d-f08b5bb9d5ec@Spark> <6478bb0818a2edcf3cfd68e3ed68436428567927.camel@redhat.com> <69CEE84A-7BCA-4920-B161-683F2856E4A9@datalounges.com> Message-ID: <20200916144918.oe2ialfoqqcufrs6@yuggoth.org> On 2020-09-16 17:25:11 +0300 (+0300), Florian Rommel wrote: > Just as a side question, is there a benefit of ignition over > cloud-init? (Not trying to start a flame war.. genuinely > interested, and not trying to hijack the thread either) [...] Throwing fuel on the fire, the OpenDev Collaboratory uses this alternative on their CI nodes, mainly as a reaction to cloud-init's massive dependency sprawl: https://docs.openstack.org/infra/glean/ -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From florian at datalounges.com Wed Sep 16 14:49:51 2020 From: florian at datalounges.com (Florian Rommel) Date: Wed, 16 Sep 2020 17:49:51 +0300 Subject: core os password reset In-Reply-To: <67a107feee8376e0093e13a112bbf9bba95b9145.camel@redhat.com> References: <67a107feee8376e0093e13a112bbf9bba95b9145.camel@redhat.com> Message-ID: <2C209110-996B-4C48-9407-0B853DFEBC65@datalounges.com> Thank you Sean, appreciate it. Learned something new !:) //Florian > On 16. Sep 2020, at 17.36, Sean Mooney wrote: > > On Wed, 2020-09-16 at 17:25 +0300, Florian Rommel wrote: >> Just as a side question, is there a benefit of ignition over cloud-init? (Not trying to start a flame war.. genuinely >> interested, and not trying to hijack the thread either) > im not really sure. 
i think core os created ignition because of some limitation with cloud init > > they use it for a lot more then jsut basic first boot setup. its used for all system configtion in > container linux so i imagin they hit edgecases and developed ignition to adress those. > most cloud image like ubuntu or fedora i think dont ship with ignition support out of the box. > i did not have anything in my history on this topic but i did fine this > https://coreos.com/ignition/docs/latest/what-is-ignition.html > coreos are generally pretty good at documenting there deision are thigns like this > also https://coreos.com/ignition/docs/latest/rationale.html > > i really have not had much interaction with it however so in practis i dont know which is "better" in general. >> >> //florian >> >>>> On 16. Sep 2020, at 16.45, Sean Mooney wrote: >>> >>> On Wed, 2020-09-16 at 06:39 -0700, Michael STFC wrote: >>>> Our openstack env automatically injects SSH keys) and already does that >>>> with all other images I have downloaded to deployed e.g fedora cloud images >>>> and ceros cloud image. >>>> >>>> However core os is different and I have tried to edit grub added >>>> coreos.autologin=tty1 >>>> but nothing. >>>> >>>> Also tried to do this via cloud-config >>>> >>>> #cloud-config >>>> >>>> coreos: >>>> units: >>>> - name: etcd.service >>>> command: start >>>> >>>> users: >>>> - name: core >>>> passwd: coreos >>>> ssh_authorized_keys: >>>> - "ssh-rsa xxxxx" >>>> >>>> >>>> And not luck - when vm boots it hangs. >>> >>> Coreos does not use cloud config by default it uses ignition. >>> i belive you can still configure it with cloud init but you have to do it >>> slightly differnet then normal. >>> https://coreos.com/os/docs/latest/booting-on-openstack.html#container-linux-configs >>> has the detail you need. basically you have to either pass an ignition script as the user >>> data or Container Linux Config format. >>> >>> cloud init wont work. >>> >>> e.g. >>> nova boot \ >>> --user-data ./config.ign \ >>> --image cdf3874c-c27f-4816-bc8c-046b240e0edd \ >>> --key-name coreos \ >>> --flavor m1.medium \ >>> --min-count 3 \ >>> --security-groups default,coreos >>> >>> were ./config.ign is an ignition file. >>> >>>> >>>> On 16 Sep 2020 at 13:31:10, Florian Rommel wrote: >>>> >>>>> Hi Michael. >>>>> So, if I remember coreOS correctly, its the same as all of the cloud based >>>>> images. It uses SSH keys to authenticate. If you have a an SSH public key >>>>> in there where you do no longer have the private key for, you can “easily” >>>>> reset it by 2 ways. >>>>> 1. If its volume based instance, delete the instance but not the volume. >>>>> Create the instance again by adding your own ssh key into the boot process. >>>>> This will ADD the ssh key, but not overwrite the existing one in the >>>>> authorized_key file >>>>> 2. If it is normal ephermal disk based instance, make a snapshot and >>>>> create a new instance from the snapshot, adding your own ssh key into it. >>>>> >>>>> Either or, if they are ssh key authenticated (which they should be), there >>>>> isn’t really an EASY way unless you want to have the volume directly. >>>>> >>>>> Best regards, >>>>> //Florian >>>>> >>>>> On 16. Sep 2020, at 13.53, Michael STFC wrote: >>>>> >>>>> >>>>> Hi >>>>> >>>>> >>>>> New to openstack and wanting to know how to get boot core os and reset >>>>> user core password. >>>>> >>>>> >>>>> Please advise. 
>>>>> >>>>> >>>>> Michael >>>>> >>>>> >>>>> >>>>> >>> >>> >> >> > > From ignaziocassano at gmail.com Wed Sep 16 15:01:53 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 16 Sep 2020 17:01:53 +0200 Subject: [openstack][cinder] volume revert to snapshot on stein In-Reply-To: References: Message-ID: Hello Lucio, it works fine. Thank you Ignazio Il giorno mer 16 set 2020 alle ore 15:05 Lucio Seki ha scritto: > Hi Ignazio, > > The feature was introduced in Pike release [0], so it should work. > Please specify the Volume API version >= 3.40 either by: > a) passing `--os-volume-api-version` parameter to the command line: > > cinder --os-volume-api-version 3.40 revert-to-snapshot snap1 > > > b) exporting the environment variable OS_VOLUME_API_VERSION: > > export OS_VOLUME_API_VERSION=3.40 > > cinder revert-to-snapshot snap1 > > > Please let us know if it works or not. > > [0] > https://docs.openstack.org/releasenotes/cinder/pike.html#relnotes-11-0-0-stable-pike-new-features > > Lucio > > On Wed, 16 Sep 2020 at 09:52, Ignazio Cassano > wrote: > >> Hello Stackers, >> does cinder revert to snapshot works on stein ? >> The command cinder revert-to-snapshot returns: >> >> error: argument : invalid choice: u'revert-to-snapshot' >> >> *I also tried using cinder v3 api* post /v3/{project_id}/volumes/ >> {volume_id}/action >> but it returns : >> {"itemNotFound": {"message": "The resource could not be found.", "code": >> 404}} >> >> Please, anyone could help me ? >> Ignazio >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Sep 16 15:50:51 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 16 Sep 2020 10:50:51 -0500 Subject: vPTG October 2020 Team Signup Reminder In-Reply-To: References: <5F13B10F-C0C5-4761-8AD2-9B3A55F67441@openstack.org> Message-ID: <174979c8581.106aa3e3163305.7375182827390736305@ghanshyammann.com> Hi Kendal, Sorry for the late, Is it possible to book a slot for Policy popup team now? While discussing with amotoki on the possible way for Horizon to adopt the new policy, I thought of having a PTG session will good to speed up the process even for other projects also. -gmann ---- On Wed, 09 Sep 2020 13:28:18 -0500 Kendall Nelson wrote ---- > Hello Everyone! > This is your final reminder! You have until September 11th at 7:00 UTC to sign up your team for the PTG! You must complete BOTH the survey[1] AND reserve time in the ethercalc[2] to sign up your team. > And don't forget to register! [3] > - TheKendalls (diablo_rojo & wendallkaters) > [1] Team Survey: https://openstackfoundation.formstack.com/forms/oct2020_vptg_survey[2] Ethercalc Signup: https://ethercalc.openstack.org/7xp2pcbh1ncb[3] PTG Registration: https://october2020ptg.eventbrite.com > On Mon, Aug 31, 2020 at 10:39 AM Kendall Waters wrote: > Hello Everyone! > Wanted to give you all a reminder that the deadline for signing up teams for the PTG is approaching! > The virtual PTG will be held from Monday October 26th to Friday October 30th, 2020. > To signup your team, you must complete BOTH the survey[1] AND reserve time in the ethercalc[2] by September 11th at 7:00 UTC. > We ask that the PTL/SIG Chair/Team lead sign up for time to have their discussions in with 4 rules/guidelines. > 1. Cross project discussions (like SIGs or support project teams) should be scheduled towards the start of the week so that any discussions that might shape those of other teams happen first.2. 
No team should sign up for more than 4 hours per UTC day to help keep participants actively engaged. 3. No team should sign up for more than 16 hours across all time slots to avoid burning out our contributors and to enable participation in multiple teams discussions. > Once your team is signed up, please register[3]! And remind your team to register! Registration is free, but since it will be how we contact you with passwords, event details, etc. it is still important! > If you have any questions, please let us know. > -The Kendalls (diablo_rojo & wendallkaters) > [1] Team Survey: https://openstackfoundation.formstack.com/forms/oct2020_vptg_survey[2] Ethercalc Signup: https://ethercalc.openstack.org/7xp2pcbh1ncb[3] PTG Registration: https://october2020ptg.eventbrite.com > > > > > From mthode at mthode.org Wed Sep 16 15:55:42 2020 From: mthode at mthode.org (Matthew Thode) Date: Wed, 16 Sep 2020 10:55:42 -0500 Subject: [Requirements] [FFE] python-troveclient 5.1.1 In-Reply-To: References: Message-ID: <20200916155542.sdqhlmnhhekhi3xq@mthode.org> On 20-09-17 01:29:36, Lingxian Kong wrote: > Hi, > > I'd like to ask FFE for bumping python-troveclient to 5.1.1[1] in Victoria. > > The patch version 5.1.1 includes a fix for multi-region support for > python-troveclient[2] OSC plugin. > > [1]: https://review.opendev.org/#/c/752122/ > [2]: > https://review.opendev.org/#/q/Ia0580a599fc2385d54def4e18e0780209b82eff7 > > --- > Lingxian Kong > Senior Software Engineer > Catalyst Cloud > www.catalystcloud.nz Looks fine to me (reqs hat) -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From kendall at openstack.org Wed Sep 16 15:59:14 2020 From: kendall at openstack.org (Kendall Waters) Date: Wed, 16 Sep 2020 10:59:14 -0500 Subject: vPTG October 2020 Team Signup Reminder In-Reply-To: <174979c8581.106aa3e3163305.7375182827390736305@ghanshyammann.com> References: <5F13B10F-C0C5-4761-8AD2-9B3A55F67441@openstack.org> <174979c8581.106aa3e3163305.7375182827390736305@ghanshyammann.com> Message-ID: Hey gmann, Definitely! Please book the slot in the ethercalc. Cheers, Kendall Kendall Waters Perez OpenStack Marketing & Events kendall at openstack.org > On Sep 16, 2020, at 10:50 AM, Ghanshyam Mann wrote: > > Hi Kendal, > > Sorry for the late, Is it possible to book a slot for Policy popup team now? > While discussing with amotoki on the possible way for Horizon to adopt the new policy, I thought > of having a PTG session will good to speed up the process even for other projects also. > > -gmann > > > ---- On Wed, 09 Sep 2020 13:28:18 -0500 Kendall Nelson wrote ---- >> Hello Everyone! >> This is your final reminder! You have until September 11th at 7:00 UTC to sign up your team for the PTG! You must complete BOTH the survey[1] AND reserve time in the ethercalc[2] to sign up your team. >> And don't forget to register! [3] >> - TheKendalls (diablo_rojo & wendallkaters) >> [1] Team Survey: https://openstackfoundation.formstack.com/forms/oct2020_vptg_survey[2] Ethercalc Signup: https://ethercalc.openstack.org/7xp2pcbh1ncb[3] PTG Registration: https://october2020ptg.eventbrite.com >> On Mon, Aug 31, 2020 at 10:39 AM Kendall Waters wrote: >> Hello Everyone! >> Wanted to give you all a reminder that the deadline for signing up teams for the PTG is approaching! >> The virtual PTG will be held from Monday October 26th to Friday October 30th, 2020. 
>> To signup your team, you must complete BOTH the survey[1] AND reserve time in the ethercalc[2] by September 11th at 7:00 UTC. >> We ask that the PTL/SIG Chair/Team lead sign up for time to have their discussions in with 4 rules/guidelines. >> 1. Cross project discussions (like SIGs or support project teams) should be scheduled towards the start of the week so that any discussions that might shape those of other teams happen first.2. No team should sign up for more than 4 hours per UTC day to help keep participants actively engaged. 3. No team should sign up for more than 16 hours across all time slots to avoid burning out our contributors and to enable participation in multiple teams discussions. >> Once your team is signed up, please register[3]! And remind your team to register! Registration is free, but since it will be how we contact you with passwords, event details, etc. it is still important! >> If you have any questions, please let us know. >> -The Kendalls (diablo_rojo & wendallkaters) >> [1] Team Survey: https://openstackfoundation.formstack.com/forms/oct2020_vptg_survey[2] Ethercalc Signup: https://ethercalc.openstack.org/7xp2pcbh1ncb[3] PTG Registration: https://october2020ptg.eventbrite.com >> >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Sep 16 16:21:04 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 16 Sep 2020 11:21:04 -0500 Subject: vPTG October 2020 Team Signup Reminder In-Reply-To: References: <5F13B10F-C0C5-4761-8AD2-9B3A55F67441@openstack.org> <174979c8581.106aa3e3163305.7375182827390736305@ghanshyammann.com> Message-ID: <17497b82df4.eac2c07364729.7051020562530278629@ghanshyammann.com> ---- On Wed, 16 Sep 2020 10:59:14 -0500 Kendall Waters wrote ---- > Hey gmann, > Definitely! Please book the slot in the ethercalc. Thanks Kendal. Filled the survey and booked the slot. -gmann > Cheers,Kendall > Kendall Waters Perez > OpenStack Marketing & Events > kendall at openstack.org > > > > On Sep 16, 2020, at 10:50 AM, Ghanshyam Mann wrote: > Hi Kendal, > > Sorry for the late, Is it possible to book a slot for Policy popup team now? > While discussing with amotoki on the possible way for Horizon to adopt the new policy, I thought > of having a PTG session will good to speed up the process even for other projects also. > > -gmann > > > ---- On Wed, 09 Sep 2020 13:28:18 -0500 Kendall Nelson wrote ---- > Hello Everyone! > This is your final reminder! You have until September 11th at 7:00 UTC to sign up your team for the PTG! You must complete BOTH the survey[1] AND reserve time in the ethercalc[2] to sign up your team. > And don't forget to register! [3] > - TheKendalls (diablo_rojo & wendallkaters) > [1] Team Survey: https://openstackfoundation.formstack.com/forms/oct2020_vptg_survey[2] Ethercalc Signup: https://ethercalc.openstack.org/7xp2pcbh1ncb[3] PTG Registration: https://october2020ptg.eventbrite.com > On Mon, Aug 31, 2020 at 10:39 AM Kendall Waters wrote: > Hello Everyone! > Wanted to give you all a reminder that the deadline for signing up teams for the PTG is approaching! > The virtual PTG will be held from Monday October 26th to Friday October 30th, 2020. > To signup your team, you must complete BOTH the survey[1] AND reserve time in the ethercalc[2] by September 11th at 7:00 UTC. > We ask that the PTL/SIG Chair/Team lead sign up for time to have their discussions in with 4 rules/guidelines. > 1. 
Cross project discussions (like SIGs or support project teams) should be scheduled towards the start of the week so that any discussions that might shape those of other teams happen first.2. No team should sign up for more than 4 hours per UTC day to help keep participants actively engaged. 3. No team should sign up for more than 16 hours across all time slots to avoid burning out our contributors and to enable participation in multiple teams discussions. > Once your team is signed up, please register[3]! And remind your team to register! Registration is free, but since it will be how we contact you with passwords, event details, etc. it is still important! > If you have any questions, please let us know. > -The Kendalls (diablo_rojo & wendallkaters) > [1] Team Survey: https://openstackfoundation.formstack.com/forms/oct2020_vptg_survey[2] Ethercalc Signup: https://ethercalc.openstack.org/7xp2pcbh1ncb[3] PTG Registration: https://october2020ptg.eventbrite.com > > > > > > > From radoslaw.piliszek at gmail.com Wed Sep 16 17:07:29 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 16 Sep 2020 19:07:29 +0200 Subject: [masakari] Wallaby PTG planning Message-ID: Hello, Masakarians*! *Anyone interested in Masakari. It's again this time of the century when a new OpenStack release is up on the horizon. As always, it means there will be a PTG (Project Team Gathering) to discuss the new cycle (and summarise the previous). This time it's going to be fully virtual so we call it a vPTG. I've booked us some time for the Wallaby PTG. [1] There are 4 (four) 2-hour sessions to accommodate both APAC and EMEA/Americas: Thursday October 29, 2020 06:00 - 08:00 (UTC) Thursday October 29, 2020 13:00 - 15:00 (UTC) Friday October 30, 2020 06:00 - 08:00 (UTC) Friday October 30, 2020 13:00 - 15:00 (UTC) I have also created an initial PTG page on Etherpad. [2] Please fill in your names if you are going to participate - with session times that suit you (please consider being flexible, it seems we are spanning the globe now). Add your proposals even if you can't participate directly (add your names to proposals). Please register for the event itself as well, to let the organisers know how many people are going to be there. [3] See YOU there! [1] https://ethercalc.openstack.org/7xp2pcbh1ncb [2] https://etherpad.opendev.org/p/masakari-wallaby-vptg [3] https://www.eventbrite.com/e/project-teams-gathering-october-2020-tickets-116136313841 -yoctozepto From thomas.king at gmail.com Tue Sep 15 19:27:50 2020 From: thomas.king at gmail.com (Thomas King) Date: Tue, 15 Sep 2020 13:27:50 -0600 Subject: [Openstack-mentoring] Neutron subnet with DHCP relay - continued In-Reply-To: References: Message-ID: I got the metadata network accessible over a routed subnet! The breakthrough was making sure I pointed 169.254.169.254 to the Neutron DHCP address rather than the physical or bridge IP address, this route added to the physical router (L3 switch) that serves the subnet hosting the Ironic controller. Also, I made sure to add a host route for 169.254.169.254 to the baremetal remote subnet in Neutron so the baremetal nodes will have a proper route to the metadata subnet. There is still some network tweaking left to do. We have a separate subnet for ESXi mgmt vmkernels since they use virtual MAC addresses and neutron doesn't recognize them. However, enabling DHCP relay for that separate subnet breaks routing which also breaks cleaning and deploying, and I need to find out where. 
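(To make the Neutron half of the routing described above concrete, a sketch with placeholder names -- the subnet name and the next-hop address are assumptions that depend on the topology, so this is illustrative rather than a copy of the working config:)

# append a host route for the metadata IP to the remote baremetal subnet;
# the gateway is whatever next hop carries traffic from that subnet toward
# the network hosting the Neutron DHCP port (e.g. the L3 switch interface)
openstack subnet set \
  --host-route destination=169.254.169.254/32,gateway=<next-hop-ip> \
  <remote-baremetal-subnet>

# the matching static route on the physical L3 switch (169.254.169.254/32
# via the Neutron DHCP port address) is vendor-specific and not shown here.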
This is huge for our metal-as-a-service offering! Thanks, Tom King On Sat, Aug 22, 2020 at 12:53 AM Thomas King wrote: > Ok, thanks. > > On Fri, Aug 21, 2020, 11:39 PM Ruslanas Gžibovskis > wrote: > >> No, I didn't. In the beginning I wanted to say "yes" but that is for >> overcloud. >> >> Now I am thinking, maybe even I do no have metadata on undercloud.... >> >> Will need to check on Monday. >> >> 2020-08-22, št 00:51, Thomas King rašė: >> >>> Finally got it worked out except for reaching the metadata service IP >>> (169.254.169.254) from a remote network. Did you put specific routes in >>> place for that IP address on your physical network? >>> >>> Tom >>> >>> On Wed, Jul 22, 2020 at 3:19 AM Ruslanas Gžibovskis >>> wrote: >>> >>>> Ok >>>> >>>> here is a small copy paste: >>>> http://paste.openstack.org/show/o4Uay0DYbAdkcfUJGOLV/ >>>> >>>> also have relaunched previous commands, as I have redeployed overcloud >>>> :) >>>> >>>> and maybe you will be interested, in undercloud containers running: >>>> >>>> (undercloud) [stack at remote-u overcloud]$ sudo podman ps >>>> CONTAINER ID IMAGE >>>> COMMAND CREATED STATUS >>>> PORTS NAMES >>>> 39c684172ce7 >>>> docker.io/tripleomaster/centos-binary-neutron-dhcp-agent:current-tripleo >>>> /usr/sbin/dnsmasq... 7 days ago Up 7 days ago >>>> neutron-dnsmasq-qdhcp-e6d6c50b-fb69-4375-bb7d-0f6e0cfed5cb >>>> 30ccb169e666 >>>> docker.io/tripleomaster/centos-binary-ironic-pxe:current-tripleo >>>> kolla_start 7 days ago Up 7 days ago >>>> ironic_pxe_http >>>> 6a62e08e8e10 >>>> docker.io/tripleomaster/centos-binary-ironic-pxe:current-tripleo >>>> /bin/bash -c BIND... 7 days ago Up 7 days ago >>>> ironic_pxe_tftp >>>> e4cbad7f7488 >>>> docker.io/tripleomaster/centos-binary-neutron-l3-agent:current-tripleo >>>> kolla_start 7 days ago Up 7 days ago >>>> neutron_l3_agent >>>> d78d3828c420 >>>> docker.io/tripleomaster/centos-binary-neutron-openvswitch-agent:current-tripleo >>>> kolla_start 7 days ago Up 7 days ago neutron_ovs_agent >>>> 3b67b6a69a87 >>>> docker.io/tripleomaster/centos-binary-neutron-dhcp-agent:current-tripleo >>>> kolla_start 7 days ago Up 7 days ago >>>> neutron_dhcp >>>> 46d403293b54 >>>> docker.io/tripleomaster/centos-binary-swift-proxy-server:current-tripleo >>>> kolla_start 7 days ago Up 7 days ago >>>> swift_proxy >>>> 15a9792d6f62 >>>> docker.io/tripleomaster/centos-binary-swift-object:current-tripleo >>>> kolla_start 7 days ago Up 7 days ago >>>> swift_rsync >>>> 0e6cd094d6dc >>>> docker.io/tripleomaster/centos-binary-swift-object:current-tripleo >>>> kolla_start 7 days ago Up 7 days ago >>>> swift_object_updater >>>> 80b7f26e742d >>>> docker.io/tripleomaster/centos-binary-swift-object:current-tripleo >>>> kolla_start 7 days ago Up 7 days ago >>>> swift_object_server >>>> 765f79633499 >>>> docker.io/tripleomaster/centos-binary-swift-proxy-server:current-tripleo >>>> kolla_start 7 days ago Up 7 days ago >>>> swift_object_expirer >>>> 0e53da61af88 >>>> docker.io/tripleomaster/centos-binary-swift-container:current-tripleo >>>> kolla_start 7 days ago Up 7 days ago >>>> swift_container_updater >>>> 8c14cef58eb3 >>>> docker.io/tripleomaster/centos-binary-swift-account:current-tripleo >>>> kolla_start 7 days ago Up 7 days ago >>>> swift_account_server >>>> 12930a63dc12 >>>> docker.io/tripleomaster/centos-binary-swift-container:current-tripleo >>>> kolla_start 7 days ago Up 7 days ago >>>> swift_container_server >>>> 8a5cd45208c0 >>>> docker.io/tripleomaster/centos-binary-swift-account:current-tripleo >>>> kolla_start 7 days ago Up 7 days ago >>>> 
swift_account_reaper >>>> b871400bfdd9 >>>> docker.io/tripleomaster/centos-binary-mistral-executor:current-tripleo >>>> kolla_start 7 days ago Up 7 days ago >>>> mistral_executor >>>> 3e98e5f83f09 >>>> docker.io/tripleomaster/centos-binary-mistral-event-engine:current-tripleo >>>> kolla_start 7 days ago Up 7 days ago >>>> mistral_event_engine >>>> 52d711dffca3 >>>> docker.io/tripleomaster/centos-binary-mistral-engine:current-tripleo >>>> kolla_start 7 days ago Up 7 days ago >>>> mistral_engine >>>> 23dec12e650f >>>> docker.io/tripleomaster/centos-binary-iscsid:current-tripleo >>>> kolla_start 7 days ago Up 7 days ago iscsid >>>> bafff227d9d7 >>>> docker.io/tripleomaster/centos-binary-haproxy:current-tripleo >>>> kolla_start 7 days ago Up 7 days ago haproxy >>>> 4d17cbd60698 >>>> docker.io/tripleomaster/centos-binary-ironic-inspector:current-tripleo >>>> kolla_start 7 days ago Up 7 days ago >>>> ironic_inspector_dnsmasq >>>> f77e49246c35 >>>> docker.io/tripleomaster/centos-binary-ironic-inspector:current-tripleo >>>> kolla_start 7 days ago Up 7 days ago >>>> ironic_inspector >>>> ec70f40bba04 >>>> docker.io/tripleomaster/centos-binary-rabbitmq:current-tripleo >>>> kolla_start 5 weeks ago Up 4 weeks ago rabbitmq >>>> f09c9a129d85 >>>> docker.io/tripleomaster/centos-binary-memcached:current-tripleo >>>> kolla_start 5 weeks ago Up 4 weeks ago >>>> memcached >>>> 0eb1e953dcaa >>>> docker.io/tripleomaster/centos-binary-nova-compute-ironic:current-tripleo >>>> kolla_start 7 weeks ago Up 4 weeks ago >>>> nova_compute >>>> ac78778d2cb0 >>>> docker.io/tripleomaster/centos-binary-ironic-neutron-agent:current-tripleo >>>> kolla_start 7 weeks ago Up 4 weeks ago >>>> ironic_neutron_agent >>>> 8660bf80fd9c >>>> docker.io/tripleomaster/centos-binary-ironic-conductor:current-tripleo >>>> kolla_start 7 weeks ago Up 4 weeks ago >>>> ironic_conductor >>>> 9dead5068168 >>>> docker.io/tripleomaster/centos-binary-mistral-api:current-tripleo >>>> kolla_start 7 weeks ago Up 4 weeks ago >>>> mistral_api >>>> f5c1c9d6166c >>>> docker.io/tripleomaster/centos-binary-ironic-api:current-tripleo >>>> kolla_start 7 weeks ago Up 4 weeks ago >>>> ironic_api >>>> 48899d782dd5 >>>> docker.io/tripleomaster/centos-binary-nova-api:current-tripleo >>>> kolla_start 7 weeks ago Up 4 weeks ago >>>> nova_api_cron >>>> 1b19c94834e8 >>>> docker.io/tripleomaster/centos-binary-nova-api:current-tripleo >>>> kolla_start 7 weeks ago Up 4 weeks ago nova_api >>>> 5b66d128c930 >>>> docker.io/tripleomaster/centos-binary-glance-api:current-tripleo >>>> kolla_start 7 weeks ago Up 4 weeks ago >>>> glance_api >>>> 7e63cdb8d6b4 >>>> docker.io/tripleomaster/centos-binary-placement-api:current-tripleo >>>> kolla_start 7 weeks ago Up 4 weeks ago >>>> placement_api >>>> 2db105676f8b >>>> docker.io/tripleomaster/centos-binary-zaqar-wsgi:current-tripleo >>>> kolla_start 7 weeks ago Up 4 weeks ago >>>> zaqar_websocket >>>> d3e5ae7368e6 >>>> docker.io/tripleomaster/centos-binary-zaqar-wsgi:current-tripleo >>>> kolla_start 7 weeks ago Up 4 weeks ago zaqar >>>> 6332e26b40ec >>>> docker.io/tripleomaster/centos-binary-nova-scheduler:current-tripleo >>>> kolla_start 7 weeks ago Up 4 weeks ago >>>> nova_scheduler >>>> a5b0e0904f0c >>>> docker.io/tripleomaster/centos-binary-nova-conductor:current-tripleo >>>> kolla_start 7 weeks ago Up 4 weeks ago >>>> nova_conductor >>>> 4460302d97c8 >>>> docker.io/tripleomaster/centos-binary-neutron-server:current-tripleo >>>> kolla_start 7 weeks ago Up 4 weeks ago >>>> neutron_api >>>> d418deb9ea13 >>>> 
docker.io/tripleomaster/centos-binary-cron:current-tripleo >>>> kolla_start 7 weeks ago Up 4 weeks ago >>>> logrotate_crond >>>> 7225ff80b26d >>>> docker.io/tripleomaster/centos-binary-heat-engine:current-tripleo >>>> kolla_start 7 weeks ago Up 4 weeks ago >>>> heat_engine >>>> 84d22b3b1663 >>>> docker.io/tripleomaster/centos-binary-heat-api:current-tripleo >>>> kolla_start 7 weeks ago Up 4 weeks ago >>>> heat_api_cron >>>> 226e0d839772 >>>> docker.io/tripleomaster/centos-binary-heat-api:current-tripleo >>>> kolla_start 7 weeks ago Up 4 weeks ago heat_api >>>> 6f7871a64325 >>>> docker.io/tripleomaster/centos-binary-keystone:current-tripleo >>>> kolla_start 7 weeks ago Up 4 weeks ago keystone >>>> 8402ff823012 >>>> docker.io/tripleomaster/centos-binary-mariadb:current-tripleo >>>> kolla_start 7 weeks ago Up 4 weeks ago mysql >>>> (undercloud) [stack at remote-u overcloud]$ >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From 417142204 at qq.com Wed Sep 16 06:46:43 2020 From: 417142204 at qq.com (=?utf-8?B?5a6L6LS6?=) Date: Wed, 16 Sep 2020 14:46:43 +0800 Subject: neutron-l3-agent:l3-agent has been creating the "keepalived -P -f..." process Message-ID: we create a '--distributed=True,--ha=True" router in OpenStack Newton. We found that l3-agent cannot read the pid file of keepalived, thus creating the keepalived process again. As a result, a large number of keepalived processes appear on the host node.As picture 1 in the attachment. This problem can be solved when we execute the following content and modify the permissions of the pid file [root at fusion0 ~]# chown neutron:neutron /var/lib/neutron/ha_confs/2a20ab37-ffe8-4955-8148-cefdcb5b6be3.pid [root at fusion0 ~]# chown neutron:neutron /var/lib/neutron/ha_confs/2a20ab37-ffe8-4955-8148-cefdcb5b6be3.pid-vrrp I want to know how to solve this problem completely from the perspective of neutron. neutron-l3-agent.log file is in the attachment. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 0916_1.jpg Type: application/octet-stream Size: 51333 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 0916_2.jpg Type: application/octet-stream Size: 262777 bytes Desc: not available URL: From stefan.bujack at desy.de Wed Sep 16 17:49:23 2020 From: stefan.bujack at desy.de (Bujack, Stefan) Date: Wed, 16 Sep 2020 19:49:23 +0200 (CEST) Subject: [Octavia] Please help with deployment of octavia unbound lb port when creating LB Message-ID: <1941612305.56654725.1600278563865.JavaMail.zimbra@desy.de> Hello, I am a little lost here. Hopefully some of you nice people could help me with this issue please. We have an Openstack Ussuri deployment on Ubuntu 20.04. Our network is configured in an "Open vSwitch: High availability using VRRP" way. I have gone through the official Install and configure procedure on "https://docs.openstack.org/octavia/ussuri/install/install-ubuntu.html" We have one public network. When I want to "Deploy a basic HTTP load balancer" like described in the official documentation "https://docs.openstack.org/octavia/ussuri/user/guides/basic-cookbook.html" I see a problem with the created lb port. The port is down and unbound and the VIP is not reachable. 
root at keystone04:~# openstack loadbalancer create --name lb1 --vip-subnet-id DESY-VLAN-46 +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | availability_zone | None | | created_at | 2020-09-16T17:19:37 | | description | | | flavor_id | None | | id | cd3b28f4-62f6-48e0-bc3a-b52fcb36e073 | | listeners | | | name | lb1 | | operating_status | OFFLINE | | pools | | | project_id | 0c6318a1c2414c9f805059788db47bb6 | | provider | amphora | | provisioning_status | PENDING_CREATE | | updated_at | None | | vip_address | 131.169.46.214 | | vip_network_id | 94b6986f-7035-4b35-bee9-739451fa1871 | | vip_port_id | bae4ffe6-a1dc-4a8a-9b1e-cc727a4b763b | | vip_qos_policy_id | None | | vip_subnet_id | f2a2d8d2-363e-45e7-80f8-f751a24eed8c | +---------------------+--------------------------------------+ root at keystone04:~# openstack loadbalancer show cd3b28f4-62f6-48e0-bc3a-b52fcb36e073 +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | availability_zone | None | | created_at | 2020-09-16T17:19:37 | | description | | | flavor_id | None | | id | cd3b28f4-62f6-48e0-bc3a-b52fcb36e073 | | listeners | | | name | lb1 | | operating_status | OFFLINE | | pools | | | project_id | 0c6318a1c2414c9f805059788db47bb6 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2020-09-16T17:20:22 | | vip_address | 131.169.46.214 | | vip_network_id | 94b6986f-7035-4b35-bee9-739451fa1871 | | vip_port_id | bae4ffe6-a1dc-4a8a-9b1e-cc727a4b763b | | vip_qos_policy_id | None | | vip_subnet_id | f2a2d8d2-363e-45e7-80f8-f751a24eed8c | +---------------------+--------------------------------------+ root at keystone04:~# openstack port list +--------------------------------------+------------------------------------------------------+-------------------+--------------------------------------------------------------------------------+--------+ | ID | Name | MAC Address | Fixed IP Addresses | Status | +--------------------------------------+------------------------------------------------------+-------------------+--------------------------------------------------------------------------------+--------+ | 020210e3-546a-4372-a91b-cc3e7a5cbab0 | HA port tenant 0c6318a1c2414c9f805059788db47bb6 | fa:16:3e:0b:d4:a9 | ip_address='169.254.192.26', subnet_id='4de6a91e-bb53-4869-976b-67815769bb12' | ACTIVE | | 20fe9c50-6c89-4ebd-bbfa-25bdf0e716fd | | fa:16:3e:f5:c3:a4 | ip_address='131.169.46.201', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | N/A | | 2ae5a87f-803a-4e1d-9e7c-e874f200a3f4 | | fa:16:3e:57:57:ef | ip_address='131.169.46.31', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | ACTIVE | | 6948989b-40e8-40fe-9216-16f82d8071cd | | fa:16:3e:8b:59:0c | ip_address='172.16.1.1', subnet_id='2ed9de2d-ea68-4f25-a925-fdfe6c4d5fd8' | ACTIVE | | 784fa499-2f64-4026-a26b-732acd2f328c | | fa:16:3e:57:ec:23 | ip_address='131.169.46.128', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | ACTIVE | | 8baf7abb-fa03-446b-8ca2-6d026cce75d6 | octavia-lb-vrrp-e50c5b05-69eb-45c4-a670-dc34331443f5 | fa:16:3e:1b:c1:7d | ip_address='131.169.46.40', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | ACTIVE | | 8fa76adf-0a4b-400d-ae29-874cbd055f88 | | fa:16:3e:3f:92:14 | ip_address='172.16.0.100', subnet_id='5443e5a0-996f-465c-acb8-14128f423b1d' | ACTIVE | | 906f5713-c2b6-4d05-8c89-b084e09c744c | | 
fa:16:3e:ba:d7:74 | ip_address='172.16.1.112', subnet_id='2ed9de2d-ea68-4f25-a925-fdfe6c4d5fd8' | ACTIVE | | a08d5c5f-dacb-4a96-b0f4-7e1a3fd1c536 | | fa:16:3e:86:f9:d6 | ip_address='172.16.0.219', subnet_id='5443e5a0-996f-465c-acb8-14128f423b1d' | ACTIVE | | b5ad6738-8805-4f20-8084-a94ffacfff89 | | fa:16:3e:00:80:79 | ip_address='131.169.46.60', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | ACTIVE | | bae4ffe6-a1dc-4a8a-9b1e-cc727a4b763b | octavia-lb-cd3b28f4-62f6-48e0-bc3a-b52fcb36e073 | fa:16:3e:bb:0f:f3 | ip_address='131.169.46.214', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | DOWN | | bf1476d0-0327-4c4f-8b79-d767c8a7dba5 | | fa:16:3e:24:79:cb | ip_address='131.169.46.126', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | ACTIVE | | c15b142f-c06c-426a-83db-46e98e4839d6 | | fa:16:3e:c7:60:d1 | ip_address='172.16.1.141', subnet_id='2ed9de2d-ea68-4f25-a925-fdfe6c4d5fd8' | ACTIVE | | cb75004a-aa57-4250-93be-1bb03bdc2a1b | | fa:16:3e:7e:9c:9f | ip_address='131.169.46.84', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | ACTIVE | | dc956e9a-a905-417a-b234-14782bf182d3 | HA port tenant 0c6318a1c2414c9f805059788db47bb6 | fa:16:3e:40:87:e3 | ip_address='169.254.194.172', subnet_id='4de6a91e-bb53-4869-976b-67815769bb12' | ACTIVE | | dd48e315-2cb1-4716-8bc5-e892a948cb5f | | fa:16:3e:b0:4a:eb | ip_address='172.16.1.2', subnet_id='2ed9de2d-ea68-4f25-a925-fdfe6c4d5fd8' | ACTIVE | | e25ee538-7938-4992-a4f7-51f35f6831b5 | octavia-health-manager-listen-port | fa:16:3e:5c:b3:2f | ip_address='172.16.0.2', subnet_id='5443e5a0-996f-465c-acb8-14128f423b1d' | ACTIVE | | e91a5135-b076-4043-add4-21073109a730 | | fa:16:3e:4d:b8:56 | ip_address='131.169.46.102', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | ACTIVE | +--------------------------------------+------------------------------------------------------+-------------------+--------------------------------------------------------------------------------+--------+ root at keystone04:~# openstack port show bae4ffe6-a1dc-4a8a-9b1e-cc727a4b763b +-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ | admin_state_up | DOWN | | allowed_address_pairs | | | binding_host_id | | | binding_profile | | | binding_vif_details | | | binding_vif_type | unbound | | binding_vnic_type | normal | | created_at | 2020-09-16T17:19:37Z | | data_plane_status | None | | description | | | device_id | lb-cd3b28f4-62f6-48e0-bc3a-b52fcb36e073 | | device_owner | Octavia | | dns_assignment | None | | dns_domain | None | | dns_name | None | | extra_dhcp_opts | | | fixed_ips | ip_address='131.169.46.214', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | | id | bae4ffe6-a1dc-4a8a-9b1e-cc727a4b763b | | ip_allocation | None | | location | cloud='', project.domain_id=, project.domain_name=, project.id='0c6318a1c2414c9f805059788db47bb6', project.name=, region_name='', zone= | | mac_address | fa:16:3e:bb:0f:f3 | | name | octavia-lb-cd3b28f4-62f6-48e0-bc3a-b52fcb36e073 | | network_id | 94b6986f-7035-4b35-bee9-739451fa1871 | | port_security_enabled | True | | project_id | 0c6318a1c2414c9f805059788db47bb6 | | propagate_uplink_status | None | | qos_network_policy_id | None | | qos_policy_id | None | | resource_request | None | | revision_number | 2 | | security_group_ids | 
0964090c-0299-401a-9156-bafbb040e345 | | status | DOWN | | tags | | | trunk_details | None | | updated_at | 2020-09-16T17:19:39Z | +-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ I also keep getting this error on the octavia node: Sep 16 19:41:46 octavia04.desy.de octavia-health-manager[3009]: 2020-09-16 19:41:46.217 3009 WARNING octavia.amphorae.drivers.health.heartbeat_udp [-] Health Manager experienced an exception processing a heartbeat message from ('172.16.0.219', 8660). Ignoring this packet. Exception: 'NoneType' object has no attribute 'encode' My security groups look like this: root at octavia04:~# openstack security group list +--------------------------------------+-----------------------------------------+------------------------+----------------------------------+------+ | ID | Name | Description | Project | Tags | +--------------------------------------+-----------------------------------------+------------------------+----------------------------------+------+ | 0964090c-0299-401a-9156-bafbb040e345 | lb-cd3b28f4-62f6-48e0-bc3a-b52fcb36e073 | | f89517ee676f4618bd55849477442aca | [] | | 0cda6134-0574-430b-9250-f71b81587a53 | default | Default security group | | [] | | 2236e82c-13fe-42e3-9fcf-bea43917f231 | lb-mgmt-sec-grp | lb-mgmt-sec-grp | f89517ee676f4618bd55849477442aca | [] | | 85ab9c91-9241-4ab4-ad01-368518ab1a51 | default | Default security group | 35609e3390ce45be83a31cac47057efb | [] | | e4f59cd4-75c6-4abf-9ab6-b97b4ae199b4 | lb-health-mgr-sec-grp | lb-health-mgr-sec-grp | f89517ee676f4618bd55849477442aca | [] | | ef91fcfb-fe20-4d45-bfe8-dfb7375462a3 | default | Default security group | f89517ee676f4618bd55849477442aca | [] | | efff8138-bffd-4e96-8318-2b13b4294f0b | default | Default security group | 0c6318a1c2414c9f805059788db47bb6 | [] | +--------------------------------------+-----------------------------------------+------------------------+----------------------------------+------+ root at octavia04:~# openstack security group rule list e4f59cd4-75c6-4abf-9ab6-b97b4ae199b4 +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ | ID | IP Protocol | Ethertype | IP Range | Port Range | Remote Security Group | +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ | 20ef3407-0df0-4dcc-96cc-2693b9cdc6aa | udp | IPv4 | 0.0.0.0/0 | 5555:5555 | None | | 3e9feb44-c548-4889-aa30-1792ea89d675 | None | IPv4 | 0.0.0.0/0 | | None | | 6cfd295f-6544-4bb6-bb51-00960e4753bb | None | IPv6 | ::/0 | | None | +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ root at octavia04:~# openstack security group rule list 2236e82c-13fe-42e3-9fcf-bea43917f231 +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ | ID | IP Protocol | Ethertype | IP Range | Port Range | Remote Security Group | +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ | 29e20b2b-6626-48c4-a06c-85d9dd6e6d61 | tcp | IPv4 | 0.0.0.0/0 | 22:22 | None | | 419ab26c-9cdf-4fda-bec3-95501f6bfa7d | icmp | IPv4 | 0.0.0.0/0 | | None | | a4c70060-3580-46a6-8735-bca7046298f1 | None | IPv6 | ::/0 | | None | | b1122fa8-1699-434f-b810-36abc0ea4ab8 | tcp | IPv4 | 0.0.0.0/0 | 9443:9443 | None | | 
cdc91572-afa9-4401-9212-a46414ea01ae | None | IPv4 | 0.0.0.0/0 | | None | +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ root at octavia04:~# openstack security group rule list 0964090c-0299-401a-9156-bafbb040e345 +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ | ID | IP Protocol | Ethertype | IP Range | Port Range | Remote Security Group | +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ | 07529aae-7732-409f-af37-c9b5287bbb16 | None | IPv6 | ::/0 | | None | | 35701c1b-f739-4a44-a8c6-1d8f9ca82a7e | None | IPv4 | 0.0.0.0/0 | | None | +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ My network agents lokk like this root at keystone04:~# openstack network agent list +--------------------------------------+--------------------+-------------------------+-------------------+-------+-------+---------------------------+ | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | +--------------------------------------+--------------------+-------------------------+-------------------+-------+-------+---------------------------+ | 0b3fd449-c123-4d82-994e-adf4aa588292 | Open vSwitch agent | neutron04-node1.desy.de | None | :-) | UP | neutron-openvswitch-agent | | 195b08ff-0b89-48d8-9ada-b59b5ff2b8ab | Open vSwitch agent | openstack04.desy.de | None | :-) | UP | neutron-openvswitch-agent | | 3346b86a-80f9-4397-8f55-9d1ff28285dd | L3 agent | neutron04-node1.desy.de | nova | :-) | UP | neutron-l3-agent | | 36547753-59d7-4184-9a76-5317abf9a3aa | DHCP agent | openstack04.desy.de | nova | :-) | UP | neutron-dhcp-agent | | 56ae1056-72b6-4a65-8bab-7f837c264777 | Metadata agent | openstack04.desy.de | None | :-) | UP | neutron-metadata-agent | | 6678b278-6acb-439a-92a8-e2c7f932607c | L3 agent | octavia04.desy.de | nova | :-) | UP | neutron-l3-agent | | 6681247b-3633-45cd-9017-e548fbd13e73 | Open vSwitch agent | neutron04.desy.de | None | :-) | UP | neutron-openvswitch-agent | | 6d4ed4ed-5a8f-42ee-9052-ff9279a9dada | L3 agent | openstack04.desy.de | nova | :-) | UP | neutron-l3-agent | | 8254d653-aff1-40e3-ade6-890d0a6b0617 | L3 agent | neutron04.desy.de | nova | :-) | UP | neutron-l3-agent | | c4ce7df7-a682-4e2d-b841-73577f0abe80 | Open vSwitch agent | octavia04.desy.de | None | :-) | UP | neutron-openvswitch-agent | +--------------------------------------+--------------------+-------------------------+-------------------+-------+-------+---------------------------+ Thanks in advance, Stefan Bujack From tbishop at liquidweb.com Wed Sep 16 18:23:04 2020 From: tbishop at liquidweb.com (Tyler Bishop) Date: Wed, 16 Sep 2020 14:23:04 -0400 Subject: Should ports created by ironic have PXE parameters after deployment? In-Reply-To: References: <1bd16422-05f0-42bf-b73d-f08b5bb9d5ec@Spark> Message-ID: <63c115dd-4521-4563-af5d-841d419a8974@Spark> Normally yes but I am having the PXE added to NON provision ports as well. I tore down the dnsmasq and inspector containers, rediscovered the hosts and it hasn’t came back.. but that still doesn’t answer how that could happen. 
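(Two quick checks that may help narrow down where the options are coming from -- illustrative only, the port and node identifiers are placeholders and the node fields shown need a reasonably recent ironic API microversion:)

# show which DHCP options, if any, are attached to a given tenant port
openstack port show <port-uuid> -c extra_dhcp_opts -f value

# confirm the node's boot and network interface drivers, since the boot
# interface is what writes PXE options onto ports on the provisioning network
openstack baremetal node show <node> -f value -c boot_interface -c network_interface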
On Sep 16, 2020, 3:53 AM -0400, Mark Goddard , wrote: > On Tue, 15 Sep 2020 at 20:13, Tyler Bishop wrote: > > > > Hi, > > > > My issue is i have a neutron network (not discovery or cleaning) that is adding the PXE entries for the ironic pxe server and my baremetal host are rebooting into discovery upon successful deployment. > > > > I am curious how the driver implementation works for adding the PXE options to neutron-dhcp-agent configuration and if that is being done to help non flat networks where no SDN is being used? I have several environments using Kolla-Ansible and this one seems to be the only behaving like this. My neutron-dhcp-agent dnsmasq opt file looks like this after a host is deployed. > > > > dhcp/7d0b7e78-6506-4f4a-b524-d5c03e4ca4a8/opts cat /var/lib/neutron/dhcp/ffdf5f9b-b4ad-4a53-b154-69eb3b4a81c5/opts > > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,option:dns-server,10.60.3.240,10.60.10.240,10.60.1.240 > > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,option:classless-static-route,169.254.169.254/32,10.60.66.2,0.0.0.0/0,10.60.66.1 > > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,249,169.254.169.254/32,10.60.66.2,0.0.0.0/0,10.60.66.1 > > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,option:router,10.60.66.1 > > tag:port-08908db1-360b-4973-87c7-15049a484ac6,150,10.60.66.11 > > tag:port-08908db1-360b-4973-87c7-15049a484ac6,210,/tftpboot/ > > tag:port-08908db1-360b-4973-87c7-15049a484ac6,66,10.60.66.11 > > tag:port-08908db1-360b-4973-87c7-15049a484ac6,67,pxelinux.0 > > tag:port-08908db1-360b-4973-87c7-15049a484ac6,option:server-ip-address,10.60.66.11 > > Hi Tyler, Ironic adds DHCP options to the neutron port on the > provisioning network. Specifically, the boot interface in ironic is > responsible for adding DHCP options. See the PXEBaseMixin class. -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Wed Sep 16 18:52:23 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 16 Sep 2020 11:52:23 -0700 Subject: Should ports created by ironic have PXE parameters after deployment? In-Reply-To: <63c115dd-4521-4563-af5d-841d419a8974@Spark> References: <1bd16422-05f0-42bf-b73d-f08b5bb9d5ec@Spark> <63c115dd-4521-4563-af5d-841d419a8974@Spark> Message-ID: I guess we need to understand if your machines are set to network boot by default in ironic's configuration? If it is set to the flat network_interface and the instances are configured for network booting? If so, I'd expect this to happen for a deployed instance. Out of curiosity, is this master branch code? Ussuri? Are the other environments the same? -Julia On Wed, Sep 16, 2020 at 11:33 AM Tyler Bishop wrote: > > Normally yes but I am having the PXE added to NON provision ports as well. > > I tore down the dnsmasq and inspector containers, rediscovered the hosts and it hasn’t came back.. but that still doesn’t answer how that could happen. > On Sep 16, 2020, 3:53 AM -0400, Mark Goddard , wrote: > > On Tue, 15 Sep 2020 at 20:13, Tyler Bishop wrote: > > > Hi, > > My issue is i have a neutron network (not discovery or cleaning) that is adding the PXE entries for the ironic pxe server and my baremetal host are rebooting into discovery upon successful deployment. > > I am curious how the driver implementation works for adding the PXE options to neutron-dhcp-agent configuration and if that is being done to help non flat networks where no SDN is being used? 
I have several environments using Kolla-Ansible and this one seems to be the only behaving like this. My neutron-dhcp-agent dnsmasq opt file looks like this after a host is deployed. > > dhcp/7d0b7e78-6506-4f4a-b524-d5c03e4ca4a8/opts cat /var/lib/neutron/dhcp/ffdf5f9b-b4ad-4a53-b154-69eb3b4a81c5/opts > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,option:dns-server,10.60.3.240,10.60.10.240,10.60.1.240 > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,option:classless-static-route,169.254.169.254/32,10.60.66.2,0.0.0.0/0,10.60.66.1 > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,249,169.254.169.254/32,10.60.66.2,0.0.0.0/0,10.60.66.1 > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,option:router,10.60.66.1 > tag:port-08908db1-360b-4973-87c7-15049a484ac6,150,10.60.66.11 > tag:port-08908db1-360b-4973-87c7-15049a484ac6,210,/tftpboot/ > tag:port-08908db1-360b-4973-87c7-15049a484ac6,66,10.60.66.11 > tag:port-08908db1-360b-4973-87c7-15049a484ac6,67,pxelinux.0 > tag:port-08908db1-360b-4973-87c7-15049a484ac6,option:server-ip-address,10.60.66.11 > > > Hi Tyler, Ironic adds DHCP options to the neutron port on the > provisioning network. Specifically, the boot interface in ironic is > responsible for adding DHCP options. See the PXEBaseMixin class. From yasufum.o at gmail.com Wed Sep 16 19:06:09 2020 From: yasufum.o at gmail.com (yasufum) Date: Thu, 17 Sep 2020 04:06:09 +0900 Subject: [tacker] Propose Toshiaki Takahashi for tacker core Message-ID: Toshiaki Takahashi (takahashi-tsc) has been so active for reviewing, fixing bugs and answering questions in the recent releases [1][2] and  had several sessions on summits for Tacker. In addition, he is now well distinguished as one of the responsibility from ETSI-NFV standard community as a contributor between the standard and implementation for the recent contributions for both of OpenStack and ETSI. I'd appreciate if we add Toshiaki to the core team. [1] https://www.stackalytics.com/?company=nec&module=tacker [2] https://www.stackalytics.com/?user_id=t-takahashi%40ig.jp.nec.com&metric=marks Regards, Yasufumi From skaplons at redhat.com Wed Sep 16 20:21:21 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 16 Sep 2020 22:21:21 +0200 Subject: [Neutron][FFE][requirements] request for QoS policy update for bound ports feature In-Reply-To: References: <0e0641c7-75d3-f4c7-1334-aa6710e369c5@gmx.com> Message-ID: <20200916202121.GA232945@p1> Hi, For me personally it seems ok to merge approve this FFE as this change isn't very big and is limited only to the QoS service plugin. So IMHO risk of merging that isn't very big. There is also scenario test proposed for that feature in [1] so we can ensure that it is working fine. On Wed, Sep 16, 2020 at 02:39:27PM +0200, Lajos Katona wrote: > Hi > The neutron-lib patch (https://review.opendev.org/750349 ) is a bug fix > (see [1]) which as do not touch db or API can be backported later in the > worst case. Is ther neutron-lib patch necessary to make all of that working so that without backporting this fix and releasing new version feature in neutron will not work at all? > The fix itself doesn't affect other Neutron features, so no harm. > > Thanks for your help. > Regards > Lajos Katona (lajoskatona) > > [1] https://launchpad.net/bugs/1894825 > > Sean McGinnis ezt írta (időpont: 2020. szept. 15., > K, 18:05): > > > > I would like to ask for FFE for the RFE "allow replacing the QoS > > > policy of bound port", [1]. 
> > > This feature adds the extra step to port update operation to change > > > the allocation in Placement to the min_kbps values of the new QoS > > > policy, if the port has a QoS policy with minimum_bandwidth rule and > > > is bound and used by a server. > > > > > > In neutron there's one open patch: > > > https://review.opendev.org/747774 > > > > > > There's an open bug report for the neutron-lib side: > > > https://bugs.launchpad.net/neutron/+bug/1894825 (placement story: > > > https://storyboard.openstack.org/#!/story/2008111 ) and a fix for that: > > > https://review.opendev.org/750349 > > > > > > [1] https://bugs.launchpad.net/neutron/+bug/1882804 > > > > > Since this requires an update to neutron-lib, adding [requirements] to > > the subject. Non-client library freeze was two weeks ago now, so it's a > > bit late. > > > > The fix looks fairly minor, but I don't know that code. Can you comment > > on the potential risks of this change? We should be stabilizing as much > > as possible at this point as we approach the final victoria release date. > > > > Sean > > > > > > > > [1] https://review.opendev.org/#/c/743695 -- Slawek Kaplonski Senior software engineer Red Hat From rafaelweingartner at gmail.com Wed Sep 16 20:39:29 2020 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Wed, 16 Sep 2020 17:39:29 -0300 Subject: [Neutron][python-openstackclient][openstacksdk][FFE] Add source_ip_prefix and destination_ip_prefix to metering label rules Message-ID: Hello guys, I would like to ask for FFE for the RFE "Add source_ip_prefix and destination_ip_prefix to metering label rules", [1]. This feature adds source and destination filtering options to Neutron metering label rules. Most of the patches (PRs) relating to this feature have already been reviewed, and are ready to be merged [2]. Moreover, the feature has already been partially merged into Neutron and Neutron-lib. Therefore, it might be interesting to finish the merging process and get the feature into Victoria. [1] https://bugs.launchpad.net/neutron/+bug/1889431 [2] https://review.opendev.org/#/q/topic:bug/1889431+(status:open+OR+status:merged) Thanks and Regards, -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Wed Sep 16 20:50:32 2020 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 16 Sep 2020 13:50:32 -0700 Subject: [Octavia] Please help with deployment of octavia unbound lb port when creating LB In-Reply-To: <1941612305.56654725.1600278563865.JavaMail.zimbra@desy.de> References: <1941612305.56654725.1600278563865.JavaMail.zimbra@desy.de> Message-ID: Hi Stefan, The ports look ok in your output. The VIP is configured as an "allowed address pair" in neutron to allow failovers. "allowed address pair" ports in neutron is how you can have a secondary IP on a port. Each load balancer (In standalone topology) will show two ports in neutron. A "base" port, which is a normal neutron port, and a VRRP/VIP port which is the "allowed address pair" port. 
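(A quick way to see that pairing from the CLI -- a sketch with placeholder IDs, substitute the base and VIP port IDs from your listing:)

# the base port carries the VIP address as an allowed address pair
openstack port show <base-port-id> -c allowed_address_pairs -f value

# the VIP port itself is expected to stay DOWN/unbound; it only reserves the
# address and holds the security group used for the allowed-address-pairs entry
openstack port show <vip-port-id> -c status -c device_owner -c fixed_ips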
In the output above, your base port is: | 8baf7abb-fa03-446b-8ca2-6d026cce75d6 | octavia-lb-vrrp-e50c5b05-69eb-45c4-a670-dc34331443f5 | fa:16:3e:1b:c1:7d | ip_address='131.169.46.40', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | ACTIVE | And your VRRP/VIP port is: | bae4ffe6-a1dc-4a8a-9b1e-cc727a4b763b | octavia-lb-cd3b28f4-62f6-48e0-bc3a-b52fcb36e073 | fa:16:3e:bb:0f:f3 | ip_address='131.169.46.214', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | DOWN | If you do a "openstack port show 8baf7abb-fa03-446b-8ca2-6d026cce75d6" (the base port) you will see at the top the allowed address pairs configuration that points to the other port. The allowed address pairs port will never show as ACTIVE as it is not a "real" neutron port. Octavia also manages the security groups for you, so I don't think security groups are likely an issue here. I see on the load balancer output that you do not have a listener configured on the load balancer. The VIP port will not respond to any requests until a listener has been configured (The listener defines the TCP/UDP port to accept connections on). This is also why the load balancer is reporting operating_status as OFFLINE. If you create an HTTP listener on port 80, once the load balancer becomes ACTIVE, you should be able to curl to the VIP and get back an HTTP 503 response. This is because there is no pool or members configured to service the request. Let me know if that doesn't solve your issue and we can debug it further. Michael On Wed, Sep 16, 2020 at 11:00 AM Bujack, Stefan wrote: > > Hello, > > I am a little lost here. Hopefully some of you nice people could help me with this issue please. > > We have an Openstack Ussuri deployment on Ubuntu 20.04. > > Our network is configured in an "Open vSwitch: High availability using VRRP" way. > > I have gone through the official Install and configure procedure on "https://docs.openstack.org/octavia/ussuri/install/install-ubuntu.html" > > We have one public network. > > When I want to "Deploy a basic HTTP load balancer" like described in the official documentation "https://docs.openstack.org/octavia/ussuri/user/guides/basic-cookbook.html" > > I see a problem with the created lb port. The port is down and unbound and the VIP is not reachable. 
> > root at keystone04:~# openstack loadbalancer create --name lb1 --vip-subnet-id DESY-VLAN-46 > +---------------------+--------------------------------------+ > | Field | Value | > +---------------------+--------------------------------------+ > | admin_state_up | True | > | availability_zone | None | > | created_at | 2020-09-16T17:19:37 | > | description | | > | flavor_id | None | > | id | cd3b28f4-62f6-48e0-bc3a-b52fcb36e073 | > | listeners | | > | name | lb1 | > | operating_status | OFFLINE | > | pools | | > | project_id | 0c6318a1c2414c9f805059788db47bb6 | > | provider | amphora | > | provisioning_status | PENDING_CREATE | > | updated_at | None | > | vip_address | 131.169.46.214 | > | vip_network_id | 94b6986f-7035-4b35-bee9-739451fa1871 | > | vip_port_id | bae4ffe6-a1dc-4a8a-9b1e-cc727a4b763b | > | vip_qos_policy_id | None | > | vip_subnet_id | f2a2d8d2-363e-45e7-80f8-f751a24eed8c | > +---------------------+--------------------------------------+ > > root at keystone04:~# openstack loadbalancer show cd3b28f4-62f6-48e0-bc3a-b52fcb36e073 > +---------------------+--------------------------------------+ > | Field | Value | > +---------------------+--------------------------------------+ > | admin_state_up | True | > | availability_zone | None | > | created_at | 2020-09-16T17:19:37 | > | description | | > | flavor_id | None | > | id | cd3b28f4-62f6-48e0-bc3a-b52fcb36e073 | > | listeners | | > | name | lb1 | > | operating_status | OFFLINE | > | pools | | > | project_id | 0c6318a1c2414c9f805059788db47bb6 | > | provider | amphora | > | provisioning_status | ACTIVE | > | updated_at | 2020-09-16T17:20:22 | > | vip_address | 131.169.46.214 | > | vip_network_id | 94b6986f-7035-4b35-bee9-739451fa1871 | > | vip_port_id | bae4ffe6-a1dc-4a8a-9b1e-cc727a4b763b | > | vip_qos_policy_id | None | > | vip_subnet_id | f2a2d8d2-363e-45e7-80f8-f751a24eed8c | > +---------------------+--------------------------------------+ > > root at keystone04:~# openstack port list > +--------------------------------------+------------------------------------------------------+-------------------+--------------------------------------------------------------------------------+--------+ > | ID | Name | MAC Address | Fixed IP Addresses | Status | > +--------------------------------------+------------------------------------------------------+-------------------+--------------------------------------------------------------------------------+--------+ > | 020210e3-546a-4372-a91b-cc3e7a5cbab0 | HA port tenant 0c6318a1c2414c9f805059788db47bb6 | fa:16:3e:0b:d4:a9 | ip_address='169.254.192.26', subnet_id='4de6a91e-bb53-4869-976b-67815769bb12' | ACTIVE | > | 20fe9c50-6c89-4ebd-bbfa-25bdf0e716fd | | fa:16:3e:f5:c3:a4 | ip_address='131.169.46.201', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | N/A | > | 2ae5a87f-803a-4e1d-9e7c-e874f200a3f4 | | fa:16:3e:57:57:ef | ip_address='131.169.46.31', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | ACTIVE | > | 6948989b-40e8-40fe-9216-16f82d8071cd | | fa:16:3e:8b:59:0c | ip_address='172.16.1.1', subnet_id='2ed9de2d-ea68-4f25-a925-fdfe6c4d5fd8' | ACTIVE | > | 784fa499-2f64-4026-a26b-732acd2f328c | | fa:16:3e:57:ec:23 | ip_address='131.169.46.128', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | ACTIVE | > | 8baf7abb-fa03-446b-8ca2-6d026cce75d6 | octavia-lb-vrrp-e50c5b05-69eb-45c4-a670-dc34331443f5 | fa:16:3e:1b:c1:7d | ip_address='131.169.46.40', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | ACTIVE | > | 8fa76adf-0a4b-400d-ae29-874cbd055f88 | | fa:16:3e:3f:92:14 | 
ip_address='172.16.0.100', subnet_id='5443e5a0-996f-465c-acb8-14128f423b1d' | ACTIVE | > | 906f5713-c2b6-4d05-8c89-b084e09c744c | | fa:16:3e:ba:d7:74 | ip_address='172.16.1.112', subnet_id='2ed9de2d-ea68-4f25-a925-fdfe6c4d5fd8' | ACTIVE | > | a08d5c5f-dacb-4a96-b0f4-7e1a3fd1c536 | | fa:16:3e:86:f9:d6 | ip_address='172.16.0.219', subnet_id='5443e5a0-996f-465c-acb8-14128f423b1d' | ACTIVE | > | b5ad6738-8805-4f20-8084-a94ffacfff89 | | fa:16:3e:00:80:79 | ip_address='131.169.46.60', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | ACTIVE | > | bae4ffe6-a1dc-4a8a-9b1e-cc727a4b763b | octavia-lb-cd3b28f4-62f6-48e0-bc3a-b52fcb36e073 | fa:16:3e:bb:0f:f3 | ip_address='131.169.46.214', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | DOWN | > | bf1476d0-0327-4c4f-8b79-d767c8a7dba5 | | fa:16:3e:24:79:cb | ip_address='131.169.46.126', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | ACTIVE | > | c15b142f-c06c-426a-83db-46e98e4839d6 | | fa:16:3e:c7:60:d1 | ip_address='172.16.1.141', subnet_id='2ed9de2d-ea68-4f25-a925-fdfe6c4d5fd8' | ACTIVE | > | cb75004a-aa57-4250-93be-1bb03bdc2a1b | | fa:16:3e:7e:9c:9f | ip_address='131.169.46.84', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | ACTIVE | > | dc956e9a-a905-417a-b234-14782bf182d3 | HA port tenant 0c6318a1c2414c9f805059788db47bb6 | fa:16:3e:40:87:e3 | ip_address='169.254.194.172', subnet_id='4de6a91e-bb53-4869-976b-67815769bb12' | ACTIVE | > | dd48e315-2cb1-4716-8bc5-e892a948cb5f | | fa:16:3e:b0:4a:eb | ip_address='172.16.1.2', subnet_id='2ed9de2d-ea68-4f25-a925-fdfe6c4d5fd8' | ACTIVE | > | e25ee538-7938-4992-a4f7-51f35f6831b5 | octavia-health-manager-listen-port | fa:16:3e:5c:b3:2f | ip_address='172.16.0.2', subnet_id='5443e5a0-996f-465c-acb8-14128f423b1d' | ACTIVE | > | e91a5135-b076-4043-add4-21073109a730 | | fa:16:3e:4d:b8:56 | ip_address='131.169.46.102', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | ACTIVE | > +--------------------------------------+------------------------------------------------------+-------------------+--------------------------------------------------------------------------------+--------+ > root at keystone04:~# openstack port show bae4ffe6-a1dc-4a8a-9b1e-cc727a4b763b > +-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ > | Field | Value | > +-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ > | admin_state_up | DOWN | > | allowed_address_pairs | | > | binding_host_id | | > | binding_profile | | > | binding_vif_details | | > | binding_vif_type | unbound | > | binding_vnic_type | normal | > | created_at | 2020-09-16T17:19:37Z | > | data_plane_status | None | > | description | | > | device_id | lb-cd3b28f4-62f6-48e0-bc3a-b52fcb36e073 | > | device_owner | Octavia | > | dns_assignment | None | > | dns_domain | None | > | dns_name | None | > | extra_dhcp_opts | | > | fixed_ips | ip_address='131.169.46.214', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | > | id | bae4ffe6-a1dc-4a8a-9b1e-cc727a4b763b | > | ip_allocation | None | > | location | cloud='', project.domain_id=, project.domain_name=, project.id='0c6318a1c2414c9f805059788db47bb6', project.name=, region_name='', zone= | > | mac_address | fa:16:3e:bb:0f:f3 | > | name | octavia-lb-cd3b28f4-62f6-48e0-bc3a-b52fcb36e073 | > | network_id | 94b6986f-7035-4b35-bee9-739451fa1871 | > | port_security_enabled | True | > | project_id | 
0c6318a1c2414c9f805059788db47bb6 | > | propagate_uplink_status | None | > | qos_network_policy_id | None | > | qos_policy_id | None | > | resource_request | None | > | revision_number | 2 | > | security_group_ids | 0964090c-0299-401a-9156-bafbb040e345 | > | status | DOWN | > | tags | | > | trunk_details | None | > | updated_at | 2020-09-16T17:19:39Z | > +-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ > > I also keep getting this error on the octavia node: > > Sep 16 19:41:46 octavia04.desy.de octavia-health-manager[3009]: 2020-09-16 19:41:46.217 3009 WARNING octavia.amphorae.drivers.health.heartbeat_udp [-] Health Manager experienced an exception processing a heartbeat message from ('172.16.0.219', 8660). Ignoring this packet. Exception: 'NoneType' object has no attribute 'encode' > > My security groups look like this: > > root at octavia04:~# openstack security group list > +--------------------------------------+-----------------------------------------+------------------------+----------------------------------+------+ > | ID | Name | Description | Project | Tags | > +--------------------------------------+-----------------------------------------+------------------------+----------------------------------+------+ > | 0964090c-0299-401a-9156-bafbb040e345 | lb-cd3b28f4-62f6-48e0-bc3a-b52fcb36e073 | | f89517ee676f4618bd55849477442aca | [] | > | 0cda6134-0574-430b-9250-f71b81587a53 | default | Default security group | | [] | > | 2236e82c-13fe-42e3-9fcf-bea43917f231 | lb-mgmt-sec-grp | lb-mgmt-sec-grp | f89517ee676f4618bd55849477442aca | [] | > | 85ab9c91-9241-4ab4-ad01-368518ab1a51 | default | Default security group | 35609e3390ce45be83a31cac47057efb | [] | > | e4f59cd4-75c6-4abf-9ab6-b97b4ae199b4 | lb-health-mgr-sec-grp | lb-health-mgr-sec-grp | f89517ee676f4618bd55849477442aca | [] | > | ef91fcfb-fe20-4d45-bfe8-dfb7375462a3 | default | Default security group | f89517ee676f4618bd55849477442aca | [] | > | efff8138-bffd-4e96-8318-2b13b4294f0b | default | Default security group | 0c6318a1c2414c9f805059788db47bb6 | [] | > +--------------------------------------+-----------------------------------------+------------------------+----------------------------------+------+ > root at octavia04:~# openstack security group rule list e4f59cd4-75c6-4abf-9ab6-b97b4ae199b4 > +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ > | ID | IP Protocol | Ethertype | IP Range | Port Range | Remote Security Group | > +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ > | 20ef3407-0df0-4dcc-96cc-2693b9cdc6aa | udp | IPv4 | 0.0.0.0/0 | 5555:5555 | None | > | 3e9feb44-c548-4889-aa30-1792ea89d675 | None | IPv4 | 0.0.0.0/0 | | None | > | 6cfd295f-6544-4bb6-bb51-00960e4753bb | None | IPv6 | ::/0 | | None | > +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ > root at octavia04:~# openstack security group rule list 2236e82c-13fe-42e3-9fcf-bea43917f231 > +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ > | ID | IP Protocol | Ethertype | IP Range | Port Range | Remote Security Group | > +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ > | 29e20b2b-6626-48c4-a06c-85d9dd6e6d61 | tcp 
| IPv4 | 0.0.0.0/0 | 22:22 | None | > | 419ab26c-9cdf-4fda-bec3-95501f6bfa7d | icmp | IPv4 | 0.0.0.0/0 | | None | > | a4c70060-3580-46a6-8735-bca7046298f1 | None | IPv6 | ::/0 | | None | > | b1122fa8-1699-434f-b810-36abc0ea4ab8 | tcp | IPv4 | 0.0.0.0/0 | 9443:9443 | None | > | cdc91572-afa9-4401-9212-a46414ea01ae | None | IPv4 | 0.0.0.0/0 | | None | > +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ > root at octavia04:~# openstack security group rule list 0964090c-0299-401a-9156-bafbb040e345 > +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ > | ID | IP Protocol | Ethertype | IP Range | Port Range | Remote Security Group | > +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ > | 07529aae-7732-409f-af37-c9b5287bbb16 | None | IPv6 | ::/0 | | None | > | 35701c1b-f739-4a44-a8c6-1d8f9ca82a7e | None | IPv4 | 0.0.0.0/0 | | None | > +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ > > My network agents lokk like this > > root at keystone04:~# openstack network agent list > +--------------------------------------+--------------------+-------------------------+-------------------+-------+-------+---------------------------+ > | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | > +--------------------------------------+--------------------+-------------------------+-------------------+-------+-------+---------------------------+ > | 0b3fd449-c123-4d82-994e-adf4aa588292 | Open vSwitch agent | neutron04-node1.desy.de | None | :-) | UP | neutron-openvswitch-agent | > | 195b08ff-0b89-48d8-9ada-b59b5ff2b8ab | Open vSwitch agent | openstack04.desy.de | None | :-) | UP | neutron-openvswitch-agent | > | 3346b86a-80f9-4397-8f55-9d1ff28285dd | L3 agent | neutron04-node1.desy.de | nova | :-) | UP | neutron-l3-agent | > | 36547753-59d7-4184-9a76-5317abf9a3aa | DHCP agent | openstack04.desy.de | nova | :-) | UP | neutron-dhcp-agent | > | 56ae1056-72b6-4a65-8bab-7f837c264777 | Metadata agent | openstack04.desy.de | None | :-) | UP | neutron-metadata-agent | > | 6678b278-6acb-439a-92a8-e2c7f932607c | L3 agent | octavia04.desy.de | nova | :-) | UP | neutron-l3-agent | > | 6681247b-3633-45cd-9017-e548fbd13e73 | Open vSwitch agent | neutron04.desy.de | None | :-) | UP | neutron-openvswitch-agent | > | 6d4ed4ed-5a8f-42ee-9052-ff9279a9dada | L3 agent | openstack04.desy.de | nova | :-) | UP | neutron-l3-agent | > | 8254d653-aff1-40e3-ade6-890d0a6b0617 | L3 agent | neutron04.desy.de | nova | :-) | UP | neutron-l3-agent | > | c4ce7df7-a682-4e2d-b841-73577f0abe80 | Open vSwitch agent | octavia04.desy.de | None | :-) | UP | neutron-openvswitch-agent | > +--------------------------------------+--------------------+-------------------------+-------------------+-------+-------+---------------------------+ > > Thanks in advance, > > Stefan Bujack > From juliaashleykreger at gmail.com Wed Sep 16 22:38:46 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 16 Sep 2020 15:38:46 -0700 Subject: Should ports created by ironic have PXE parameters after deployment? 
In-Reply-To: <4a748376-e7e3-4e28-b70f-99f0fb6dfb7a@Spark> References: <1bd16422-05f0-42bf-b73d-f08b5bb9d5ec@Spark> <63c115dd-4521-4563-af5d-841d419a8974@Spark> <4a748376-e7e3-4e28-b70f-99f0fb6dfb7a@Spark> Message-ID: (Resending after stripping the image out for the mailing list, hopefully!) Well, I <3 that your using the ironic horizon plugin! Can you confirm the contents of the flavors you are using for scheduling, specifically the capabilities field. Are you using whole disk images, or are you using partition images? On Wed, Sep 16, 2020 at 2:13 PM Tyler Bishop wrote: > > Hi Julia, > > All of these are latest stable train using Kolla-ansible. > > I am using local disk booting for all deployed instances and we utilize neutron networking with plans. > > Attached screenshot of driver config. > > On Sep 16, 2020, 2:53 PM -0400, Julia Kreger , wrote: > > I guess we need to understand if your machines are set to network boot > by default in ironic's configuration? If it is set to the flat > network_interface and the instances are configured for network > booting? If so, I'd expect this to happen for a deployed instance. > > Out of curiosity, is this master branch code? Ussuri? Are the other > environments the same? > > -Julia > > On Wed, Sep 16, 2020 at 11:33 AM Tyler Bishop wrote: > > > Normally yes but I am having the PXE added to NON provision ports as well. > > I tore down the dnsmasq and inspector containers, rediscovered the hosts and it hasn’t came back.. but that still doesn’t answer how that could happen. > On Sep 16, 2020, 3:53 AM -0400, Mark Goddard , wrote: > > On Tue, 15 Sep 2020 at 20:13, Tyler Bishop wrote: > > > Hi, > > My issue is i have a neutron network (not discovery or cleaning) that is adding the PXE entries for the ironic pxe server and my baremetal host are rebooting into discovery upon successful deployment. > > I am curious how the driver implementation works for adding the PXE options to neutron-dhcp-agent configuration and if that is being done to help non flat networks where no SDN is being used? I have several environments using Kolla-Ansible and this one seems to be the only behaving like this. My neutron-dhcp-agent dnsmasq opt file looks like this after a host is deployed. > > dhcp/7d0b7e78-6506-4f4a-b524-d5c03e4ca4a8/opts cat /var/lib/neutron/dhcp/ffdf5f9b-b4ad-4a53-b154-69eb3b4a81c5/opts > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,option:dns-server,10.60.3.240,10.60.10.240,10.60.1.240 > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,option:classless-static-route,169.254.169.254/32,10.60.66.2,0.0.0.0/0,10.60.66.1 > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,249,169.254.169.254/32,10.60.66.2,0.0.0.0/0,10.60.66.1 > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,option:router,10.60.66.1 > tag:port-08908db1-360b-4973-87c7-15049a484ac6,150,10.60.66.11 > tag:port-08908db1-360b-4973-87c7-15049a484ac6,210,/tftpboot/ > tag:port-08908db1-360b-4973-87c7-15049a484ac6,66,10.60.66.11 > tag:port-08908db1-360b-4973-87c7-15049a484ac6,67,pxelinux.0 > tag:port-08908db1-360b-4973-87c7-15049a484ac6,option:server-ip-address,10.60.66.11 > > > Hi Tyler, Ironic adds DHCP options to the neutron port on the > provisioning network. Specifically, the boot interface in ironic is > responsible for adding DHCP options. See the PXEBaseMixin class. From tbishop at liquidweb.com Thu Sep 17 00:16:46 2020 From: tbishop at liquidweb.com (Tyler Bishop) Date: Wed, 16 Sep 2020 20:16:46 -0400 Subject: Should ports created by ironic have PXE parameters after deployment? 
In-Reply-To: References: <1bd16422-05f0-42bf-b73d-f08b5bb9d5ec@Spark> <63c115dd-4521-4563-af5d-841d419a8974@Spark> <4a748376-e7e3-4e28-b70f-99f0fb6dfb7a@Spark> Message-ID: <171abb8c-3c3f-441b-b857-945e9001471b@Spark> Thanks Julia While investigating this further I found that if I delete the host then re-discovery it the issue goes away.  So something I’ve done in my deployment has broken this on the other hosts.   I need to open up the database and start digging around the tables to figure out if there is any differences in the two enrolled host. Yes I use kolla-ansible which natively deploys the ironic dashboard.  It seems to work pretty good but its very noisy with errors during host state change constantly throwing errors on the page even though things are progressing as normal. On Sep 16, 2020, 6:38 PM -0400, Julia Kreger , wrote: > (Resending after stripping the image out for the mailing list, hopefully!) > > Well, I <3 that your using the ironic horizon plugin! > > Can you confirm the contents of the flavors you are using for > scheduling, specifically the capabilities field. > > Are you using whole disk images, or are you using partition images? > > On Wed, Sep 16, 2020 at 2:13 PM Tyler Bishop wrote: > > > > Hi Julia, > > > > All of these are latest stable train using Kolla-ansible. > > > > I am using local disk booting for all deployed instances and we utilize neutron networking with plans. > > > > Attached screenshot of driver config. > > > > On Sep 16, 2020, 2:53 PM -0400, Julia Kreger , wrote: > > > > I guess we need to understand if your machines are set to network boot > > by default in ironic's configuration? If it is set to the flat > > network_interface and the instances are configured for network > > booting? If so, I'd expect this to happen for a deployed instance. > > > > Out of curiosity, is this master branch code? Ussuri? Are the other > > environments the same? > > > > -Julia > > > > On Wed, Sep 16, 2020 at 11:33 AM Tyler Bishop wrote: > > > > > > Normally yes but I am having the PXE added to NON provision ports as well. > > > > I tore down the dnsmasq and inspector containers, rediscovered the hosts and it hasn’t came back.. but that still doesn’t answer how that could happen. > > On Sep 16, 2020, 3:53 AM -0400, Mark Goddard , wrote: > > > > On Tue, 15 Sep 2020 at 20:13, Tyler Bishop wrote: > > > > > > Hi, > > > > My issue is i have a neutron network (not discovery or cleaning) that is adding the PXE entries for the ironic pxe server and my baremetal host are rebooting into discovery upon successful deployment. > > > > I am curious how the driver implementation works for adding the PXE options to neutron-dhcp-agent configuration and if that is being done to help non flat networks where no SDN is being used? I have several environments using Kolla-Ansible and this one seems to be the only behaving like this. My neutron-dhcp-agent dnsmasq opt file looks like this after a host is deployed. 
> > > > dhcp/7d0b7e78-6506-4f4a-b524-d5c03e4ca4a8/opts cat /var/lib/neutron/dhcp/ffdf5f9b-b4ad-4a53-b154-69eb3b4a81c5/opts > > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,option:dns-server,10.60.3.240,10.60.10.240,10.60.1.240 > > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,option:classless-static-route,169.254.169.254/32,10.60.66.2,0.0.0.0/0,10.60.66.1 > > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,249,169.254.169.254/32,10.60.66.2,0.0.0.0/0,10.60.66.1 > > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,option:router,10.60.66.1 > > tag:port-08908db1-360b-4973-87c7-15049a484ac6,150,10.60.66.11 > > tag:port-08908db1-360b-4973-87c7-15049a484ac6,210,/tftpboot/ > > tag:port-08908db1-360b-4973-87c7-15049a484ac6,66,10.60.66.11 > > tag:port-08908db1-360b-4973-87c7-15049a484ac6,67,pxelinux.0 > > tag:port-08908db1-360b-4973-87c7-15049a484ac6,option:server-ip-address,10.60.66.11 > > > > > > Hi Tyler, Ironic adds DHCP options to the neutron port on the > > provisioning network. Specifically, the boot interface in ironic is > > responsible for adding DHCP options. See the PXEBaseMixin class. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Sep 17 00:56:34 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 16 Sep 2020 19:56:34 -0500 Subject: [all][tc][goals] Migrate CI/CD jobs to new Ubuntu LTS Focal: Week R-4 Update In-Reply-To: <1746feef403.c41c848411058.2421030943223377212@ghanshyammann.com> References: <17468f8f785.1123a07e6307801.3844845549431858647@ghanshyammann.com> <1746feef403.c41c848411058.2421030943223377212@ghanshyammann.com> Message-ID: <1749990229b.bfd8d46673892.9090899423267334607@ghanshyammann.com> ---- On Tue, 08 Sep 2020 17:56:05 -0500 Ghanshyam Mann wrote ---- > Updates: > After working more on failing one today and listing the blocking one, I think we are good to switch tox based testing today > and discuss the integration testing switch tomorrow in TC office hours. > > > * Part1: Migrating tox base job tomorrow (8th Sept): This is done and almost all the projects are fixed or at least fixes are up to merge. cinder and keystone l-c job on Focal also working fine and ready to merge. Few python clients have not yet merged the fixes so I have backported those to Victoria to fix master as well as Victoria gate. [...] > > * Part2: Migrating devstack/tempest base job on 10th sept: > We have three blocking open bugs here so I would like to discuss it in tomorrow's TC office hour also about how to proceed on this. > 1. Nova: https://bugs.launchpad.net/nova/+bug/1882521 (https://bugs.launchpad.net/qemu/+bug/1894804) As of now, no work needed on Nova side and after QEMU fix, we will see if tests pass. > 2. Barbican: https://storyboard.openstack.org/#!/story/2007732 The only blocker left for Focal migration. > 3. Ceilometer: https://storyboard.openstack.org/#!/story/2008121 This worked fine with mariadb - https://review.opendev.org/#/c/752294/ There are many fixes still not merged yet and their gate is also failing, I request to merge the fixes on priority: - https://review.opendev.org/#/q/topic:migrate-to-focal+status:open -gmann > > > -gmann > > > ---- On Mon, 07 Sep 2020 09:29:40 -0500 Ghanshyam Mann wrote ---- > > Hello Everyone, > > > > Please find the week R-4 updates on 'Ubuntu Focal migration' community goal. Its time to force the base jobs migration which can > > break the projects gate if not yet taken care of. Read below for the plan. 
> > > > Tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > Progress: > > ======= > > * We are close to V-3 release and this is time we have to complete this migration otherwise doing it in RC period can add > > unnecessary and last min delay. I am going to plan this migration in two-part. This will surely break some projects gate > > which is not yet finished the migration but we have to do at some time. Please let me know if any objection to the below > > plan. > > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > > > ** I am going to open tox base jobs migration (doc, unit, functional, lower-constraints etc) to merge by tomorrow. which is this > > series (all base patches of this): https://review.opendev.org/#/c/738328/ . > > > > **There are few repos still failing on requirements lower-constraints job specifically which I tried my best to fix as many as possible. > > Many are ready to merge also. Please merge or work on your projects repo testing before that or fix on priority if failing. > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > > > * We have few open bugs for this which are not yet resolved, we will see how it goes but the current plan is to migrate by 10th Sept. > > > > ** Bug#1882521 > > ** DB migration issues, > > *** alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 > > > > > > Testing Till now: > > ============ > > * ~200 repos gate have been tested or fixed till now. > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+(status:abandoned+OR+status:merged) > > > > * ~100 repos are under test and failing. Debugging and fixing are in progress (If you would like to help, please check your > > project repos if I am late to fix them): > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open > > > > * ~30repos fixes ready to merge: > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open+label%3AVerified%3E%3D1%2Czuul+NOT+label%3AWorkflow%3C%3D-1 > > > > > > Bugs Report: > > ========== > > > > 1. Bug#1882521. (IN-PROGRESS) > > There is open bug for nova/cinder where three tempest tests are failing for > > volume detach operation. There is no clear root cause found yet > > -https://bugs.launchpad.net/cinder/+bug/1882521 > > We have skipped the tests in tempest base patch to proceed with the other > > projects testing but this is blocking things for the migration. > > > > 2. DB migration issues (IN-PROGRESS) > > * alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 > > > > 3. We encountered the nodeset name conflict with x/tobiko. (FIXED) > > nodeset conflict is resolved now and devstack provides all focal nodes now. > > > > 4. Bug#1886296. (IN-PROGRESS) > > pyflakes till 2.1.0 is not compatible with python 3.8 which is the default python version > > on ubuntu focal[1]. With pep8 job running on focal faces the issue and fail. We need to bump > > the pyflakes to 2.1.1 as min version to run pep8 jobs on py3.8. > > As of now, many projects are using old hacking version so I am explicitly adding pyflakes>=2.1.1 > > on the project side[2] but for the long term easy maintenance, I am doing it in 'hacking' requirements.txt[3] > > nd will release a new hacking version. After that project can move to new hacking and do not need > > to maintain pyflakes version compatibility. > > > > 5. Bug#1886298. 
(IN-PROGRESS) > > 'Markupsafe' 1.0 is not compatible with the latest version of setuptools[4], > > We need to bump the lower-constraint for Markupsafe to 1.1.1 to make it work. > > There are a few more issues[5] with lower-constraint jobs which I am debugging. > > > > > > What work to be done on the project side: > > ================================ > > This goal is more of testing the jobs on focal and fixing bugs if any otherwise > > migrate jobs by switching the nodeset to focal node sets defined in devstack. > > > > 1. Start a patch in your repo by making depends-on on either of below: > > devstack base patch if you are using only devstack base jobs not tempest: > > > > Depends-on: https://review.opendev.org/#/c/731207/ > > OR > > tempest base patch if you are using the tempest base job (like devstack-tempest): > > Depends-on: https://review.opendev.org/#/c/734700/ > > > > Both have depends-on on the series where I am moving unit/functional/doc/cover/nodejs tox jobs to focal. So > > you can test the complete gate jobs(unit/functional/doc/integration) together. > > This and its base patches - https://review.opendev.org/#/c/738328/ > > > > Example: https://review.opendev.org/#/c/738126/ > > > > 2. If none of your project jobs override the nodeset then above patch will be > > testing patch(do not merge) otherwise change the nodeset to focal. > > Example: https://review.opendev.org/#/c/737370/ > > > > 3. If the jobs are defined in branchless repo and override the nodeset then you need to override the branches > > variant to adjust the nodeset so that those jobs run on Focal on victoria onwards only. If no nodeset > > is overridden then devstack being branched and stable base job using bionic/xenial will take care of > > this. > > Example: https://review.opendev.org/#/c/744056/2 > > > > 4. If no updates need you can abandon the testing patch (https://review.opendev.org/#/c/744341/). If it need > > updates then modify the same patch with proper commit msg, once it pass the gate then remove the Depends-On > > so that you can merge your patch before base jobs are switched to focal. This way we make sure no gate downtime in > > this migration. > > Example: https://review.opendev.org/#/c/744056/1..2//COMMIT_MSG > > > > Once we finish the testing on projects side and no failure then we will merge the devstack and tempest > > base patches. > > > > > > Important things to note: > > =================== > > * Do not forgot to add the story and task link to your patch so that we can track it smoothly. > > * Use gerrit topic 'migrate-to-focal' > > * Do not backport any of the patches. > > > > > > References: > > ========= > > Goal doc: https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal.html > > Storyboard tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > [1] https://github.com/PyCQA/pyflakes/issues/367 > > [2] https://review.opendev.org/#/c/739315/ > > [3] https://review.opendev.org/#/c/739334/ > > [4] https://github.com/pallets/markupsafe/issues/116 > > [5] https://zuul.opendev.org/t/openstack/build/7ecd9cf100194bc99b3b70fa1e6de032 > > > > -gmann > > > > > From masayuki.igawa at gmail.com Thu Sep 17 01:15:13 2020 From: masayuki.igawa at gmail.com (Masayuki Igawa) Date: Thu, 17 Sep 2020 10:15:13 +0900 Subject: [qa] Wallaby PTG planning In-Reply-To: References: Message-ID: <287b218b-ed1f-446b-87dc-c2c218c41de3@www.fastmail.com> Hi yoctozepto, Oh, I'm sorry to hear that. 
But unfortunately, I don't see any other good slots for us due to the timezone and conflicts with the other projects. So, I think we have to keep the current time slots at this moment. -- Masayuki Igawa On Wed, Sep 16, 2020, at 16:30, Radosław Piliszek wrote: > Hey, Masayuki et al, > > I just noticed QA got in the same time slots as Kolla this time again. > Could we move at least one session not to conflict? > > -yoctozepto > > On Tue, Aug 25, 2020 at 1:45 PM Radosław Piliszek > > wrote: > > > > Thanks, Masayuki. > > I added myself. > > > > I hope we can get it non-colliding with Kolla meetings this time. > > I'll try to do a better job at early collision detection. :-) > > > > -yoctozepto > > > > On Tue, Aug 25, 2020 at 1:16 PM Masayuki Igawa wrote: > > > > > > Hi, > > > > > > We need to start thinking about the next cycle already. > > > As you probably know, next virtual PTG will be held in October 26-30[0]. > > > > > > I prepared an etherpad[1] to discuss and track our topics. So, please add > > > your name if you are going to attend the PTG session. And also, please add > > > your proposals of the topics which you want to discuss during the PTG. > > > > > > I also made a doodle[2] with possible time slots.
Please put your best days and hours > > > > so that we can try to schedule and book our sessions in the time slots. > > > > > > > > [0] https://www.openstack.org/ptg/ > > > > [1] https://etherpad.opendev.org/p/qa-wallaby-ptg > > > > [2] https://doodle.com/poll/qqd7ayz3i4ubnsbb > > > > > > > > Best Regards, > > > > -- Masayuki Igawa > > > > Key fingerprint = C27C 2F00 3A2A 999A 903A 753D 290F 53ED C899 BF89 > > > > > > > > From yoshito.itou.dr at hco.ntt.co.jp Thu Sep 17 04:52:55 2020 From: yoshito.itou.dr at hco.ntt.co.jp (Yoshito Ito) Date: Thu, 17 Sep 2020 13:52:55 +0900 Subject: [tacker] Propose Toshiaki Takahashi for tacker core In-Reply-To: References: Message-ID: <9943505f-ed55-0081-5e07-50c4e17877c6@hco.ntt.co.jp_1> +1. Toshiaki has been active to give great contributions to Tacker. Regards, Yoshito Ito On 2020/09/17 4:06, yasufum wrote: > Toshiaki Takahashi (takahashi-tsc) has been so active for reviewing, > fixing bugs and answering questions in the recent releases [1][2] and > had several sessions on summits for Tacker. In addition, he is now well > distinguished as one of the responsibility from ETSI-NFV standard > community as a contributor between the standard and implementation for > the recent contributions for both of OpenStack and ETSI. > > I'd appreciate if we add Toshiaki to the core team. > > [1] https://www.stackalytics.com/?company=nec&module=tacker > [2] > https://www.stackalytics.com/?user_id=t-takahashi%40ig.jp.nec.com&metric=marks > > > Regards, > Yasufumi > > From radoslaw.piliszek at gmail.com Thu Sep 17 06:46:34 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 17 Sep 2020 08:46:34 +0200 Subject: [qa] Wallaby PTG planning In-Reply-To: <17499b450a9.b5b357df73988.6020931887716432673@ghanshyammann.com> References: <287b218b-ed1f-446b-87dc-c2c218c41de3@www.fastmail.com> <17499b450a9.b5b357df73988.6020931887716432673@ghanshyammann.com> Message-ID: Hey Ghanshyam, that would be great if only I had not scheduled Masakari for that slot already... :-) Let's not worry about me. I can catch up later and we can have some discussion/decisions happening asynchronously on IRC. -yoctozepto On Thu, Sep 17, 2020 at 3:36 AM Ghanshyam Mann wrote: > > ---- On Wed, 16 Sep 2020 20:15:13 -0500 Masayuki Igawa wrote ---- > > Hi yoctozepto, > > > > Oh, I'm sorry to hear that. > > But unfortunately, I don't see any other good slots for us > > due to the timezone and the other projects conflict.. > > Or let's move one slot to Thursday 13-14 UTC. > > -gmann > > > > > So, I think we have to keep the current time slots at this moment. > > > > -- Masayuki Igawa > > > > On Wed, Sep 16, 2020, at 16:30, Radosław Piliszek wrote: > > > Hey, Masayuki et al, > > > > > > I just noticed QA got in the same time slots as Kolla this time again. > > > Could we move at least one session not to conflict? > > > > > > -yoctozepto > > > > > > On Tue, Aug 25, 2020 at 1:45 PM Radosław Piliszek > > > wrote: > > > > > > > > Thanks, Masayuki. > > > > I added myself. > > > > > > > > I hope we can get it non-colliding with Kolla meetings this time. > > > > I'll try to do a better job at early collision detection. :-) > > > > > > > > -yoctozepto > > > > > > > > On Tue, Aug 25, 2020 at 1:16 PM Masayuki Igawa wrote: > > > > > > > > > > Hi, > > > > > > > > > > We need to start thinking about the next cycle already. > > > > > As you probably know, next virtual PTG will be held in October 26-30[0]. > > > > > > > > > > I prepared an etherpad[1] to discuss and track our topics. 
So, please add > > > > > your name if you are going to attend the PTG session. And also, please add > > > > > your proposals of the topics which you want to discuss during the PTG. > > > > > > > > > > I also made a doodle[2] with possible time slots. Please put your best days and hours > > > > > so that we can try to schedule and book our sessions in the time slots. > > > > > > > > > > [0] https://www.openstack.org/ptg/ > > > > > [1] https://etherpad.opendev.org/p/qa-wallaby-ptg > > > > > [2] https://doodle.com/poll/qqd7ayz3i4ubnsbb > > > > > > > > > > Best Regards, > > > > > -- Masayuki Igawa > > > > > Key fingerprint = C27C 2F00 3A2A 999A 903A 753D 290F 53ED C899 BF89 > > > > > > > > > > > > From katonalala at gmail.com Thu Sep 17 06:49:30 2020 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 17 Sep 2020 08:49:30 +0200 Subject: [Neutron][FFE][requirements] request for QoS policy update for bound ports feature In-Reply-To: <20200916202121.GA232945@p1> References: <0e0641c7-75d3-f4c7-1334-aa6710e369c5@gmx.com> <20200916202121.GA232945@p1> Message-ID: Hi, The neutron-lib patch is necessary for the use case when the new QoS min_kbps value for the port is 0. So would be good to have that on Victoria as well. Regards Lajos Slawek Kaplonski ezt írta (időpont: 2020. szept. 16., Sze, 22:21): > Hi, > > For me personally it seems ok to merge approve this FFE as this change > isn't > very big and is limited only to the QoS service plugin. So IMHO risk of > merging > that isn't very big. > There is also scenario test proposed for that feature in [1] so we can > ensure > that it is working fine. > > On Wed, Sep 16, 2020 at 02:39:27PM +0200, Lajos Katona wrote: > > Hi > > The neutron-lib patch (https://review.opendev.org/750349 ) is a bug fix > > (see [1]) which as do not touch db or API can be backported later in the > > worst case. > > Is ther neutron-lib patch necessary to make all of that working so that > without > backporting this fix and releasing new version feature in neutron will not > work > at all? > > > The fix itself doesn't affect other Neutron features, so no harm. > > > > Thanks for your help. > > Regards > > Lajos Katona (lajoskatona) > > > > [1] https://launchpad.net/bugs/1894825 > > > > Sean McGinnis ezt írta (időpont: 2020. szept. > 15., > > K, 18:05): > > > > > > I would like to ask for FFE for the RFE "allow replacing the QoS > > > > policy of bound port", [1]. > > > > This feature adds the extra step to port update operation to change > > > > the allocation in Placement to the min_kbps values of the new QoS > > > > policy, if the port has a QoS policy with minimum_bandwidth rule and > > > > is bound and used by a server. > > > > > > > > In neutron there's one open patch: > > > > https://review.opendev.org/747774 > > > > > > > > There's an open bug report for the neutron-lib side: > > > > https://bugs.launchpad.net/neutron/+bug/1894825 (placement story: > > > > https://storyboard.openstack.org/#!/story/2008111 ) and a fix for > that: > > > > https://review.opendev.org/750349 > > > > > > > > [1] https://bugs.launchpad.net/neutron/+bug/1882804 > > > > > > > Since this requires an update to neutron-lib, adding [requirements] to > > > the subject. Non-client library freeze was two weeks ago now, so it's a > > > bit late. > > > > > > The fix looks fairly minor, but I don't know that code. Can you comment > > > on the potential risks of this change? We should be stabilizing as much > > > as possible at this point as we approach the final victoria release > date. 
> > > > > > Sean > > > > > > > > > > > > > > [1] https://review.opendev.org/#/c/743695 > > -- > Slawek Kaplonski > Senior software engineer > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Sep 17 08:22:31 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 17 Sep 2020 10:22:31 +0200 Subject: [Neutron][FFE][requirements] request for QoS policy update for bound ports feature In-Reply-To: References: <0e0641c7-75d3-f4c7-1334-aa6710e369c5@gmx.com> <20200916202121.GA232945@p1> Message-ID: <20200917082231.GA18767@p1.internet.domowy> Hi, On Thu, Sep 17, 2020 at 08:49:30AM +0200, Lajos Katona wrote: > Hi, > The neutron-lib patch is necessary for the use case when the new QoS > min_kbps value for the port is 0. > So would be good to have that on Victoria as well. Ok, so I personally think that it would be still good to merge it now, backport neutron-lib fix and make bugfix release of neutron-lib for Victoria. > > Regards > Lajos > > Slawek Kaplonski ezt írta (időpont: 2020. szept. 16., > Sze, 22:21): > > > Hi, > > > > For me personally it seems ok to merge approve this FFE as this change > > isn't > > very big and is limited only to the QoS service plugin. So IMHO risk of > > merging > > that isn't very big. > > There is also scenario test proposed for that feature in [1] so we can > > ensure > > that it is working fine. > > > > On Wed, Sep 16, 2020 at 02:39:27PM +0200, Lajos Katona wrote: > > > Hi > > > The neutron-lib patch (https://review.opendev.org/750349 ) is a bug fix > > > (see [1]) which as do not touch db or API can be backported later in the > > > worst case. > > > > Is ther neutron-lib patch necessary to make all of that working so that > > without > > backporting this fix and releasing new version feature in neutron will not > > work > > at all? > > > > > The fix itself doesn't affect other Neutron features, so no harm. > > > > > > Thanks for your help. > > > Regards > > > Lajos Katona (lajoskatona) > > > > > > [1] https://launchpad.net/bugs/1894825 > > > > > > Sean McGinnis ezt írta (időpont: 2020. szept. > > 15., > > > K, 18:05): > > > > > > > > I would like to ask for FFE for the RFE "allow replacing the QoS > > > > > policy of bound port", [1]. > > > > > This feature adds the extra step to port update operation to change > > > > > the allocation in Placement to the min_kbps values of the new QoS > > > > > policy, if the port has a QoS policy with minimum_bandwidth rule and > > > > > is bound and used by a server. > > > > > > > > > > In neutron there's one open patch: > > > > > https://review.opendev.org/747774 > > > > > > > > > > There's an open bug report for the neutron-lib side: > > > > > https://bugs.launchpad.net/neutron/+bug/1894825 (placement story: > > > > > https://storyboard.openstack.org/#!/story/2008111 ) and a fix for > > that: > > > > > https://review.opendev.org/750349 > > > > > > > > > > [1] https://bugs.launchpad.net/neutron/+bug/1882804 > > > > > > > > > Since this requires an update to neutron-lib, adding [requirements] to > > > > the subject. Non-client library freeze was two weeks ago now, so it's a > > > > bit late. > > > > > > > > The fix looks fairly minor, but I don't know that code. Can you comment > > > > on the potential risks of this change? We should be stabilizing as much > > > > as possible at this point as we approach the final victoria release > > date. 
> > > > > > > > Sean > > > > > > > > > > > > > > > > > > > > [1] https://review.opendev.org/#/c/743695 > > > > -- > > Slawek Kaplonski > > Senior software engineer > > Red Hat > > > > -- Slawek Kaplonski Senior software engineer Red Hat From masazumi.oota.ds at hco.ntt.co.jp Thu Sep 17 09:37:17 2020 From: masazumi.oota.ds at hco.ntt.co.jp (=?UTF-8?B?5aSq55Sw5q2j57SU?=) Date: Thu, 17 Sep 2020 18:37:17 +0900 Subject: [tacker] Propose Toshiaki Takahashi for tacker core In-Reply-To: References: Message-ID: <5f43ced2-c6e3-e763-1f1e-b1c0107a2941@hco.ntt.co.jp_1> +1 from me. Toshiaki Takahashi does a lot of good contributions to Tacker. Regards, Masazumi OTA On 2020/09/17 4:06, yasufum wrote: > Toshiaki Takahashi (takahashi-tsc) has been so active for reviewing, > fixing bugs and answering questions in the recent releases [1][2] and > had several sessions on summits for Tacker. In addition, he is now well > distinguished as one of the responsibility from ETSI-NFV standard > community as a contributor between the standard and implementation for > the recent contributions for both of OpenStack and ETSI. > > I'd appreciate if we add Toshiaki to the core team. > > [1] https://www.stackalytics.com/?company=nec&module=tacker > [2] > https://www.stackalytics.com/?user_id=t-takahashi%40ig.jp.nec.com&metric=marks > > > Regards, > Yasufumi > > -- --------------------------------- Masazumi OTA NTT Network Service System lab. masazumi.oota.ds at hco.ntt.co.jp 0422-59-3396 From beagles at redhat.com Thu Sep 17 11:57:38 2020 From: beagles at redhat.com (Brent Eagles) Date: Thu, 17 Sep 2020 09:27:38 -0230 Subject: [tripleo] stepping down as PTL In-Reply-To: References: Message-ID: Hi Wes, On Wed, Sep 16, 2020 at 9:54 AM Wesley Hayutin wrote: > Greetings, > > Thank you for the opportunity to be the TripleO PTL. This has been a > great learning opportunity to work with a pure upstream community, other > projects and the OpenStack leadership. Thank you to the TripleO team for > your help and dedication in adding features, fixing bugs, and responding to > whatever has come our way upstream. Lastly.. Thank you to Alex and Emilien > for all the assistance throughout!! > > Managing the work required here with Covid-19, home schooling is a little > much for me at this time, I would like to encourage others to volunteer for > the opportunity in Wallaby. > > 0/ > Thanks so much for your hard work and support as PTL! Cheers, Brent -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Sep 17 12:42:53 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 17 Sep 2020 14:42:53 +0200 Subject: [Neutron][python-openstackclient][openstacksdk][FFE] Add source_ip_prefix and destination_ip_prefix to metering label rules In-Reply-To: References: Message-ID: <20200917124253.GA4086@p1.internet.domowy> Hi, On Wed, Sep 16, 2020 at 05:39:29PM -0300, Rafael Weingärtner wrote: > Hello guys, > > I would like to ask for FFE for the RFE "Add source_ip_prefix and > destination_ip_prefix to metering label rules", [1]. > This feature adds source and destination filtering options to Neutron > metering label rules. Most of the patches (PRs) relating to this feature > have already been reviewed, and are ready to be merged [2]. Moreover, the > feature has already been partially merged into Neutron and Neutron-lib. > Therefore, it might be interesting to finish the merging process and get > the feature into Victoria. Based on the fact that we already merged e.g. 
https://review.opendev.org/#/c/746203/ which deprecated old parameters, I think that we should accept this FFE and merge this RFE in this cycle. Also it is limited to the metering agent only and is "just" adding new parameters so old ones can still be used as before. > > [1] https://bugs.launchpad.net/neutron/+bug/1889431 > [2] > https://review.opendev.org/#/q/topic:bug/1889431+(status:open+OR+status:merged) > > Thanks and Regards, > > -- > Rafael Weingärtner -- Slawek Kaplonski Senior software engineer Red Hat From mnaser at vexxhost.com Thu Sep 17 13:03:06 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 17 Sep 2020 09:03:06 -0400 Subject: [tc] weekly meeting time Message-ID: Hi folks: Given that we've landed the change to start having weekly meetings again, it's time to start picking a time: https://doodle.com/poll/xw2wiebm2ayqxvki A few people also mentioned they wouldn't mind taking the current office hours time towards that, I guess we can try and come to an agreement if we want to do that here (or simply by voting that time on the Doodle). Thanks, Mohammed -- Mohammed Naser VEXXHOST, Inc. From gmann at ghanshyammann.com Thu Sep 17 13:29:36 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 17 Sep 2020 08:29:36 -0500 Subject: [tacker] Propose Toshiaki Takahashi for tacker core In-Reply-To: <9943505f-ed55-0081-5e07-50c4e17877c6@hco.ntt.co.jp_1> References: <9943505f-ed55-0081-5e07-50c4e17877c6@hco.ntt.co.jp_1> Message-ID: <1749c41904a.10500e4b2107105.2819584261008238588@ghanshyammann.com> I am not a Tacker core, but +1 for Takahashi; he has been active in the Tacker community for a long time. -gmann ---- On Wed, 16 Sep 2020 23:52:55 -0500 Yoshito Ito wrote ---- > +1. Toshiaki has been active to give great contributions to Tacker. > > > Regards, > > Yoshito Ito > > > On 2020/09/17 4:06, yasufum wrote: > > Toshiaki Takahashi (takahashi-tsc) has been so active for reviewing, > > fixing bugs and answering questions in the recent releases [1][2] and > > had several sessions on summits for Tacker. In addition, he is now well > > distinguished as one of the responsibility from ETSI-NFV standard > > community as a contributor between the standard and implementation for > > the recent contributions for both of OpenStack and ETSI. > > > > I'd appreciate if we add Toshiaki to the core team. > > > > [1] https://www.stackalytics.com/?company=nec&module=tacker > > [2] > > https://www.stackalytics.com/?user_id=t-takahashi%40ig.jp.nec.com&metric=marks > > > > > > Regards, > > Yasufumi > > > > > > > From katonalala at gmail.com Thu Sep 17 13:29:37 2020 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 17 Sep 2020 15:29:37 +0200 Subject: [Neutron][FFE][requirements] request for QoS policy update for bound ports feature In-Reply-To: <20200917082231.GA18767@p1.internet.domowy> References: <0e0641c7-75d3-f4c7-1334-aa6710e369c5@gmx.com> <20200916202121.GA232945@p1> <20200917082231.GA18767@p1.internet.domowy> Message-ID: Thank you Slawek Slawek Kaplonski ezt írta (időpont: 2020. szept.
> 16., > > Sze, 22:21): > > > > > Hi, > > > > > > For me personally it seems ok to merge approve this FFE as this change > > > isn't > > > very big and is limited only to the QoS service plugin. So IMHO risk of > > > merging > > > that isn't very big. > > > There is also scenario test proposed for that feature in [1] so we can > > > ensure > > > that it is working fine. > > > > > > On Wed, Sep 16, 2020 at 02:39:27PM +0200, Lajos Katona wrote: > > > > Hi > > > > The neutron-lib patch (https://review.opendev.org/750349 ) is a bug > fix > > > > (see [1]) which as do not touch db or API can be backported later in > the > > > > worst case. > > > > > > Is ther neutron-lib patch necessary to make all of that working so that > > > without > > > backporting this fix and releasing new version feature in neutron will > not > > > work > > > at all? > > > > > > > The fix itself doesn't affect other Neutron features, so no harm. > > > > > > > > Thanks for your help. > > > > Regards > > > > Lajos Katona (lajoskatona) > > > > > > > > [1] https://launchpad.net/bugs/1894825 > > > > > > > > Sean McGinnis ezt írta (időpont: 2020. > szept. > > > 15., > > > > K, 18:05): > > > > > > > > > > I would like to ask for FFE for the RFE "allow replacing the QoS > > > > > > policy of bound port", [1]. > > > > > > This feature adds the extra step to port update operation to > change > > > > > > the allocation in Placement to the min_kbps values of the new QoS > > > > > > policy, if the port has a QoS policy with minimum_bandwidth rule > and > > > > > > is bound and used by a server. > > > > > > > > > > > > In neutron there's one open patch: > > > > > > https://review.opendev.org/747774 > > > > > > > > > > > > There's an open bug report for the neutron-lib side: > > > > > > https://bugs.launchpad.net/neutron/+bug/1894825 (placement > story: > > > > > > https://storyboard.openstack.org/#!/story/2008111 ) and a fix > for > > > that: > > > > > > https://review.opendev.org/750349 > > > > > > > > > > > > [1] https://bugs.launchpad.net/neutron/+bug/1882804 > > > > > > > > > > > Since this requires an update to neutron-lib, adding > [requirements] to > > > > > the subject. Non-client library freeze was two weeks ago now, so > it's a > > > > > bit late. > > > > > > > > > > The fix looks fairly minor, but I don't know that code. Can you > comment > > > > > on the potential risks of this change? We should be stabilizing as > much > > > > > as possible at this point as we approach the final victoria release > > > date. > > > > > > > > > > Sean > > > > > > > > > > > > > > > > > > > > > > > > > > [1] https://review.opendev.org/#/c/743695 > > > > > > -- > > > Slawek Kaplonski > > > Senior software engineer > > > Red Hat > > > > > > > > -- > Slawek Kaplonski > Senior software engineer > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arne.wiebalck at cern.ch Thu Sep 17 13:40:52 2020 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Thu, 17 Sep 2020 15:40:52 +0200 Subject: [baremetal-sig][ironic] Future work and regular meetings In-Reply-To: <4f6c5ffd-0929-f516-4299-f69892b1d434@cern.ch> References: <4f6c5ffd-0929-f516-4299-f69892b1d434@cern.ch> Message-ID: <7a0a1248-efe3-61e3-b8b5-65204265b07e@cern.ch> There is a tie between Tue and Wed at 2pm UTC, so we need some more votes to decide :) If you're interested, please add your preferred time by the end of this week. Thanks! 
Arne On 26.08.20 10:30, Arne Wiebalck wrote: > Dear all, > > With the release of the bare metal white paper [0] the bare metal > SIG has completed its first target and is now ready to tackle new > challenges. > > A number of potential topics the SIG could work on were raised during > the recent opendev events. The suggestions are summarised on the bare > metal etherpad [1]. > > To select and organise the future work, we feel that it may be better to > start with regular meetings, though: the current idea is once a month, > for one hour, on zoom. > > Based on the experience with the ad-hoc meetings we had so far I have > set up a doodle to pick the exact slot: > > https://doodle.com/poll/3hpypw73455t2g24 > > If interested, please respond by the end of this week. > > Equally, if you have additional suggestions for the next focus of the > SIG, do not hesitate to add them to [1]. > > Thanks! >  Arne > > [0] > https://www.openstack.org/use-cases/bare-metal/how-ironic-delivers-abstraction-and-automation-using-open-source-infrastructure > > [1] https://etherpad.opendev.org/p/bare-metal-sig > From kklimonda at syntaxhighlighted.com Thu Sep 17 15:56:49 2020 From: kklimonda at syntaxhighlighted.com (Krzysztof Klimonda) Date: Thu, 17 Sep 2020 17:56:49 +0200 Subject: [neutron][ovn] Logical flow scaling (flow explosion in lr_in_arp_resolve) Message-ID: <1b20cc56-3c3d-4ca9-80f6-5e0e7a8f2983@www.fastmail.com> Hi, We're running some tests of an Ussuri deployment with the OVN ML2 driver and seeing some worrying numbers of logical flows generated for our test deployment. As a test, we create 400 routers and 400 private networks and connect each network to its own router. We also connect each router to an external network. After doing that, a dump of logical flows shows almost 800k logical flows, most of them in the lr_in_arp_resolve table: --8<--8<--8<-- # cat lflows.txt |grep -v Datapath |cut -d'(' -f 2 | cut -d ')' -f1 |sort | uniq -c |sort -n | tail -10 3264 lr_in_learn_neighbor 3386 ls_out_port_sec_l2 4112 lr_in_admission 4202 ls_in_port_sec_l2 4898 lr_in_lookup_neighbor 4900 lr_in_ip_routing 9144 ls_in_l2_lkup 9160 ls_in_arp_rsp 22136 lr_in_ip_input 671656 lr_in_arp_resolve # --8<--8<--8<-- ovn: 20.06.2 + patch for SNAT IP ARP reply issue openvswitch: 2.13.0 neutron: 16.1.0 I've seen some discussion about a similar issue on the OVS mailing lists: https://www.mail-archive.com/ovs-discuss at openvswitch.org/msg07014.html - is this relevant to neutron, and not just kubernetes? -- Krzysztof Klimonda kklimonda at syntaxhighlighted.com From sean.mcginnis at gmx.com Thu Sep 17 16:09:30 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 17 Sep 2020 11:09:30 -0500 Subject: [release][heat][karbor][swift][vitrage] Cycle With Intermediary Unreleased Deliverables In-Reply-To: References: Message-ID: Bumping for visibility. A couple of the teams below may want to get a final release in before we wrap up Victoria. Thanks! Sean On 9/9/20 4:19 PM, Kendall Nelson wrote: > Hello! > > Quick reminder that we'll need a release very soon for a number of > deliverables following a cycle-with-intermediary release model but > which have not done *any* release yet in the Victoria cycle: > > heat-agents > karbor-dashboard > karbor > swift > vitrage-dashboard > vitrage > > Those should be released ASAP, and in all cases before $rc1-deadline, > so that we have a release to include in the final $series release.
> > -Kendall Nelson (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From tbishop at liquidweb.com Wed Sep 16 21:13:44 2020 From: tbishop at liquidweb.com (Tyler Bishop) Date: Wed, 16 Sep 2020 17:13:44 -0400 Subject: Should ports created by ironic have PXE parameters after deployment? In-Reply-To: References: <1bd16422-05f0-42bf-b73d-f08b5bb9d5ec@Spark> <63c115dd-4521-4563-af5d-841d419a8974@Spark> Message-ID: <4a748376-e7e3-4e28-b70f-99f0fb6dfb7a@Spark> Hi Julia, All of these are latest stable train using Kolla-ansible. I am using local disk booting for all deployed instances and we utilize neutron networking with plans. Attached screenshot of driver config. On Sep 16, 2020, 2:53 PM -0400, Julia Kreger , wrote: > I guess we need to understand if your machines are set to network boot > by default in ironic's configuration? If it is set to the flat > network_interface and the instances are configured for network > booting? If so, I'd expect this to happen for a deployed instance. > > Out of curiosity, is this master branch code? Ussuri? Are the other > environments the same? > > -Julia > > On Wed, Sep 16, 2020 at 11:33 AM Tyler Bishop wrote: > > > > Normally yes but I am having the PXE added to NON provision ports as well. > > > > I tore down the dnsmasq and inspector containers, rediscovered the hosts and it hasn’t came back.. but that still doesn’t answer how that could happen. > > On Sep 16, 2020, 3:53 AM -0400, Mark Goddard , wrote: > > > > On Tue, 15 Sep 2020 at 20:13, Tyler Bishop wrote: > > > > > > Hi, > > > > My issue is i have a neutron network (not discovery or cleaning) that is adding the PXE entries for the ironic pxe server and my baremetal host are rebooting into discovery upon successful deployment. > > > > I am curious how the driver implementation works for adding the PXE options to neutron-dhcp-agent configuration and if that is being done to help non flat networks where no SDN is being used? I have several environments using Kolla-Ansible and this one seems to be the only behaving like this. My neutron-dhcp-agent dnsmasq opt file looks like this after a host is deployed. > > > > dhcp/7d0b7e78-6506-4f4a-b524-d5c03e4ca4a8/opts cat /var/lib/neutron/dhcp/ffdf5f9b-b4ad-4a53-b154-69eb3b4a81c5/opts > > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,option:dns-server,10.60.3.240,10.60.10.240,10.60.1.240 > > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,option:classless-static-route,169.254.169.254/32,10.60.66.2,0.0.0.0/0,10.60.66.1 > > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,249,169.254.169.254/32,10.60.66.2,0.0.0.0/0,10.60.66.1 > > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,option:router,10.60.66.1 > > tag:port-08908db1-360b-4973-87c7-15049a484ac6,150,10.60.66.11 > > tag:port-08908db1-360b-4973-87c7-15049a484ac6,210,/tftpboot/ > > tag:port-08908db1-360b-4973-87c7-15049a484ac6,66,10.60.66.11 > > tag:port-08908db1-360b-4973-87c7-15049a484ac6,67,pxelinux.0 > > tag:port-08908db1-360b-4973-87c7-15049a484ac6,option:server-ip-address,10.60.66.11 > > > > > > Hi Tyler, Ironic adds DHCP options to the neutron port on the > > provisioning network. Specifically, the boot interface in ironic is > > responsible for adding DHCP options. See the PXEBaseMixin class. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Attachment.png Type: image/png Size: 140105 bytes Desc: not available URL: From stefan.bujack at desy.de Thu Sep 17 06:40:48 2020 From: stefan.bujack at desy.de (Bujack, Stefan) Date: Thu, 17 Sep 2020 08:40:48 +0200 (CEST) Subject: [Octavia] Please help with deployment of octavia unbound lb port when creating LB In-Reply-To: References: <1941612305.56654725.1600278563865.JavaMail.zimbra@desy.de> Message-ID: <210953849.59798255.1600324848870.JavaMail.zimbra@desy.de> Hello, thanks for your quick help. When I create the listener, the pool and add the members to it the loadbalancer works indeed. Thank you Greets Stefan Bujack ----- Original Message ----- From: "Michael Johnson" To: "Stefan Bujack" Cc: "openstack-discuss" Sent: Wednesday, 16 September, 2020 22:50:32 Subject: Re: [Octavia] Please help with deployment of octavia unbound lb port when creating LB Hi Stefan, The ports look ok in your output. The VIP is configured as an "allowed address pair" in neutron to allow failovers. "allowed address pair" ports in neutron is how you can have a secondary IP on a port. Each load balancer (In standalone topology) will show two ports in neutron. A "base" port, which is a normal neutron port, and a VRRP/VIP port which is the "allowed address pair" port. In the output above, your base port is: | 8baf7abb-fa03-446b-8ca2-6d026cce75d6 | octavia-lb-vrrp-e50c5b05-69eb-45c4-a670-dc34331443f5 | fa:16:3e:1b:c1:7d | ip_address='131.169.46.40', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | ACTIVE | And your VRRP/VIP port is: | bae4ffe6-a1dc-4a8a-9b1e-cc727a4b763b | octavia-lb-cd3b28f4-62f6-48e0-bc3a-b52fcb36e073 | fa:16:3e:bb:0f:f3 | ip_address='131.169.46.214', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | DOWN | If you do a "openstack port show 8baf7abb-fa03-446b-8ca2-6d026cce75d6" (the base port) you will see at the top the allowed address pairs configuration that points to the other port. The allowed address pairs port will never show as ACTIVE as it is not a "real" neutron port. Octavia also manages the security groups for you, so I don't think security groups are likely an issue here. I see on the load balancer output that you do not have a listener configured on the load balancer. The VIP port will not respond to any requests until a listener has been configured (The listener defines the TCP/UDP port to accept connections on). This is also why the load balancer is reporting operating_status as OFFLINE. If you create an HTTP listener on port 80, once the load balancer becomes ACTIVE, you should be able to curl to the VIP and get back an HTTP 503 response. This is because there is no pool or members configured to service the request. Let me know if that doesn't solve your issue and we can debug it further. Michael On Wed, Sep 16, 2020 at 11:00 AM Bujack, Stefan wrote: > > Hello, > > I am a little lost here. Hopefully some of you nice people could help me with this issue please. > > We have an Openstack Ussuri deployment on Ubuntu 20.04. > > Our network is configured in an "Open vSwitch: High availability using VRRP" way. > > I have gone through the official Install and configure procedure on "https://docs.openstack.org/octavia/ussuri/install/install-ubuntu.html" > > We have one public network. > > When I want to "Deploy a basic HTTP load balancer" like described in the official documentation "https://docs.openstack.org/octavia/ussuri/user/guides/basic-cookbook.html" > > I see a problem with the created lb port. The port is down and unbound and the VIP is not reachable. 
> > root at keystone04:~# openstack loadbalancer create --name lb1 --vip-subnet-id DESY-VLAN-46 > +---------------------+--------------------------------------+ > | Field | Value | > +---------------------+--------------------------------------+ > | admin_state_up | True | > | availability_zone | None | > | created_at | 2020-09-16T17:19:37 | > | description | | > | flavor_id | None | > | id | cd3b28f4-62f6-48e0-bc3a-b52fcb36e073 | > | listeners | | > | name | lb1 | > | operating_status | OFFLINE | > | pools | | > | project_id | 0c6318a1c2414c9f805059788db47bb6 | > | provider | amphora | > | provisioning_status | PENDING_CREATE | > | updated_at | None | > | vip_address | 131.169.46.214 | > | vip_network_id | 94b6986f-7035-4b35-bee9-739451fa1871 | > | vip_port_id | bae4ffe6-a1dc-4a8a-9b1e-cc727a4b763b | > | vip_qos_policy_id | None | > | vip_subnet_id | f2a2d8d2-363e-45e7-80f8-f751a24eed8c | > +---------------------+--------------------------------------+ > > root at keystone04:~# openstack loadbalancer show cd3b28f4-62f6-48e0-bc3a-b52fcb36e073 > +---------------------+--------------------------------------+ > | Field | Value | > +---------------------+--------------------------------------+ > | admin_state_up | True | > | availability_zone | None | > | created_at | 2020-09-16T17:19:37 | > | description | | > | flavor_id | None | > | id | cd3b28f4-62f6-48e0-bc3a-b52fcb36e073 | > | listeners | | > | name | lb1 | > | operating_status | OFFLINE | > | pools | | > | project_id | 0c6318a1c2414c9f805059788db47bb6 | > | provider | amphora | > | provisioning_status | ACTIVE | > | updated_at | 2020-09-16T17:20:22 | > | vip_address | 131.169.46.214 | > | vip_network_id | 94b6986f-7035-4b35-bee9-739451fa1871 | > | vip_port_id | bae4ffe6-a1dc-4a8a-9b1e-cc727a4b763b | > | vip_qos_policy_id | None | > | vip_subnet_id | f2a2d8d2-363e-45e7-80f8-f751a24eed8c | > +---------------------+--------------------------------------+ > > root at keystone04:~# openstack port list > +--------------------------------------+------------------------------------------------------+-------------------+--------------------------------------------------------------------------------+--------+ > | ID | Name | MAC Address | Fixed IP Addresses | Status | > +--------------------------------------+------------------------------------------------------+-------------------+--------------------------------------------------------------------------------+--------+ > | 020210e3-546a-4372-a91b-cc3e7a5cbab0 | HA port tenant 0c6318a1c2414c9f805059788db47bb6 | fa:16:3e:0b:d4:a9 | ip_address='169.254.192.26', subnet_id='4de6a91e-bb53-4869-976b-67815769bb12' | ACTIVE | > | 20fe9c50-6c89-4ebd-bbfa-25bdf0e716fd | | fa:16:3e:f5:c3:a4 | ip_address='131.169.46.201', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | N/A | > | 2ae5a87f-803a-4e1d-9e7c-e874f200a3f4 | | fa:16:3e:57:57:ef | ip_address='131.169.46.31', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | ACTIVE | > | 6948989b-40e8-40fe-9216-16f82d8071cd | | fa:16:3e:8b:59:0c | ip_address='172.16.1.1', subnet_id='2ed9de2d-ea68-4f25-a925-fdfe6c4d5fd8' | ACTIVE | > | 784fa499-2f64-4026-a26b-732acd2f328c | | fa:16:3e:57:ec:23 | ip_address='131.169.46.128', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | ACTIVE | > | 8baf7abb-fa03-446b-8ca2-6d026cce75d6 | octavia-lb-vrrp-e50c5b05-69eb-45c4-a670-dc34331443f5 | fa:16:3e:1b:c1:7d | ip_address='131.169.46.40', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | ACTIVE | > | 8fa76adf-0a4b-400d-ae29-874cbd055f88 | | fa:16:3e:3f:92:14 | 
ip_address='172.16.0.100', subnet_id='5443e5a0-996f-465c-acb8-14128f423b1d' | ACTIVE | > | 906f5713-c2b6-4d05-8c89-b084e09c744c | | fa:16:3e:ba:d7:74 | ip_address='172.16.1.112', subnet_id='2ed9de2d-ea68-4f25-a925-fdfe6c4d5fd8' | ACTIVE | > | a08d5c5f-dacb-4a96-b0f4-7e1a3fd1c536 | | fa:16:3e:86:f9:d6 | ip_address='172.16.0.219', subnet_id='5443e5a0-996f-465c-acb8-14128f423b1d' | ACTIVE | > | b5ad6738-8805-4f20-8084-a94ffacfff89 | | fa:16:3e:00:80:79 | ip_address='131.169.46.60', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | ACTIVE | > | bae4ffe6-a1dc-4a8a-9b1e-cc727a4b763b | octavia-lb-cd3b28f4-62f6-48e0-bc3a-b52fcb36e073 | fa:16:3e:bb:0f:f3 | ip_address='131.169.46.214', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | DOWN | > | bf1476d0-0327-4c4f-8b79-d767c8a7dba5 | | fa:16:3e:24:79:cb | ip_address='131.169.46.126', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | ACTIVE | > | c15b142f-c06c-426a-83db-46e98e4839d6 | | fa:16:3e:c7:60:d1 | ip_address='172.16.1.141', subnet_id='2ed9de2d-ea68-4f25-a925-fdfe6c4d5fd8' | ACTIVE | > | cb75004a-aa57-4250-93be-1bb03bdc2a1b | | fa:16:3e:7e:9c:9f | ip_address='131.169.46.84', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | ACTIVE | > | dc956e9a-a905-417a-b234-14782bf182d3 | HA port tenant 0c6318a1c2414c9f805059788db47bb6 | fa:16:3e:40:87:e3 | ip_address='169.254.194.172', subnet_id='4de6a91e-bb53-4869-976b-67815769bb12' | ACTIVE | > | dd48e315-2cb1-4716-8bc5-e892a948cb5f | | fa:16:3e:b0:4a:eb | ip_address='172.16.1.2', subnet_id='2ed9de2d-ea68-4f25-a925-fdfe6c4d5fd8' | ACTIVE | > | e25ee538-7938-4992-a4f7-51f35f6831b5 | octavia-health-manager-listen-port | fa:16:3e:5c:b3:2f | ip_address='172.16.0.2', subnet_id='5443e5a0-996f-465c-acb8-14128f423b1d' | ACTIVE | > | e91a5135-b076-4043-add4-21073109a730 | | fa:16:3e:4d:b8:56 | ip_address='131.169.46.102', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | ACTIVE | > +--------------------------------------+------------------------------------------------------+-------------------+--------------------------------------------------------------------------------+--------+ > root at keystone04:~# openstack port show bae4ffe6-a1dc-4a8a-9b1e-cc727a4b763b > +-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ > | Field | Value | > +-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ > | admin_state_up | DOWN | > | allowed_address_pairs | | > | binding_host_id | | > | binding_profile | | > | binding_vif_details | | > | binding_vif_type | unbound | > | binding_vnic_type | normal | > | created_at | 2020-09-16T17:19:37Z | > | data_plane_status | None | > | description | | > | device_id | lb-cd3b28f4-62f6-48e0-bc3a-b52fcb36e073 | > | device_owner | Octavia | > | dns_assignment | None | > | dns_domain | None | > | dns_name | None | > | extra_dhcp_opts | | > | fixed_ips | ip_address='131.169.46.214', subnet_id='f2a2d8d2-363e-45e7-80f8-f751a24eed8c' | > | id | bae4ffe6-a1dc-4a8a-9b1e-cc727a4b763b | > | ip_allocation | None | > | location | cloud='', project.domain_id=, project.domain_name=, project.id='0c6318a1c2414c9f805059788db47bb6', project.name=, region_name='', zone= | > | mac_address | fa:16:3e:bb:0f:f3 | > | name | octavia-lb-cd3b28f4-62f6-48e0-bc3a-b52fcb36e073 | > | network_id | 94b6986f-7035-4b35-bee9-739451fa1871 | > | port_security_enabled | True | > | project_id | 
0c6318a1c2414c9f805059788db47bb6 | > | propagate_uplink_status | None | > | qos_network_policy_id | None | > | qos_policy_id | None | > | resource_request | None | > | revision_number | 2 | > | security_group_ids | 0964090c-0299-401a-9156-bafbb040e345 | > | status | DOWN | > | tags | | > | trunk_details | None | > | updated_at | 2020-09-16T17:19:39Z | > +-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ > > I also keep getting this error on the octavia node: > > Sep 16 19:41:46 octavia04.desy.de octavia-health-manager[3009]: 2020-09-16 19:41:46.217 3009 WARNING octavia.amphorae.drivers.health.heartbeat_udp [-] Health Manager experienced an exception processing a heartbeat message from ('172.16.0.219', 8660). Ignoring this packet. Exception: 'NoneType' object has no attribute 'encode' > > My security groups look like this: > > root at octavia04:~# openstack security group list > +--------------------------------------+-----------------------------------------+------------------------+----------------------------------+------+ > | ID | Name | Description | Project | Tags | > +--------------------------------------+-----------------------------------------+------------------------+----------------------------------+------+ > | 0964090c-0299-401a-9156-bafbb040e345 | lb-cd3b28f4-62f6-48e0-bc3a-b52fcb36e073 | | f89517ee676f4618bd55849477442aca | [] | > | 0cda6134-0574-430b-9250-f71b81587a53 | default | Default security group | | [] | > | 2236e82c-13fe-42e3-9fcf-bea43917f231 | lb-mgmt-sec-grp | lb-mgmt-sec-grp | f89517ee676f4618bd55849477442aca | [] | > | 85ab9c91-9241-4ab4-ad01-368518ab1a51 | default | Default security group | 35609e3390ce45be83a31cac47057efb | [] | > | e4f59cd4-75c6-4abf-9ab6-b97b4ae199b4 | lb-health-mgr-sec-grp | lb-health-mgr-sec-grp | f89517ee676f4618bd55849477442aca | [] | > | ef91fcfb-fe20-4d45-bfe8-dfb7375462a3 | default | Default security group | f89517ee676f4618bd55849477442aca | [] | > | efff8138-bffd-4e96-8318-2b13b4294f0b | default | Default security group | 0c6318a1c2414c9f805059788db47bb6 | [] | > +--------------------------------------+-----------------------------------------+------------------------+----------------------------------+------+ > root at octavia04:~# openstack security group rule list e4f59cd4-75c6-4abf-9ab6-b97b4ae199b4 > +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ > | ID | IP Protocol | Ethertype | IP Range | Port Range | Remote Security Group | > +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ > | 20ef3407-0df0-4dcc-96cc-2693b9cdc6aa | udp | IPv4 | 0.0.0.0/0 | 5555:5555 | None | > | 3e9feb44-c548-4889-aa30-1792ea89d675 | None | IPv4 | 0.0.0.0/0 | | None | > | 6cfd295f-6544-4bb6-bb51-00960e4753bb | None | IPv6 | ::/0 | | None | > +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ > root at octavia04:~# openstack security group rule list 2236e82c-13fe-42e3-9fcf-bea43917f231 > +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ > | ID | IP Protocol | Ethertype | IP Range | Port Range | Remote Security Group | > +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ > | 29e20b2b-6626-48c4-a06c-85d9dd6e6d61 | tcp 
| IPv4 | 0.0.0.0/0 | 22:22 | None | > | 419ab26c-9cdf-4fda-bec3-95501f6bfa7d | icmp | IPv4 | 0.0.0.0/0 | | None | > | a4c70060-3580-46a6-8735-bca7046298f1 | None | IPv6 | ::/0 | | None | > | b1122fa8-1699-434f-b810-36abc0ea4ab8 | tcp | IPv4 | 0.0.0.0/0 | 9443:9443 | None | > | cdc91572-afa9-4401-9212-a46414ea01ae | None | IPv4 | 0.0.0.0/0 | | None | > +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ > root at octavia04:~# openstack security group rule list 0964090c-0299-401a-9156-bafbb040e345 > +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ > | ID | IP Protocol | Ethertype | IP Range | Port Range | Remote Security Group | > +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ > | 07529aae-7732-409f-af37-c9b5287bbb16 | None | IPv6 | ::/0 | | None | > | 35701c1b-f739-4a44-a8c6-1d8f9ca82a7e | None | IPv4 | 0.0.0.0/0 | | None | > +--------------------------------------+-------------+-----------+-----------+------------+-----------------------+ > > My network agents lokk like this > > root at keystone04:~# openstack network agent list > +--------------------------------------+--------------------+-------------------------+-------------------+-------+-------+---------------------------+ > | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | > +--------------------------------------+--------------------+-------------------------+-------------------+-------+-------+---------------------------+ > | 0b3fd449-c123-4d82-994e-adf4aa588292 | Open vSwitch agent | neutron04-node1.desy.de | None | :-) | UP | neutron-openvswitch-agent | > | 195b08ff-0b89-48d8-9ada-b59b5ff2b8ab | Open vSwitch agent | openstack04.desy.de | None | :-) | UP | neutron-openvswitch-agent | > | 3346b86a-80f9-4397-8f55-9d1ff28285dd | L3 agent | neutron04-node1.desy.de | nova | :-) | UP | neutron-l3-agent | > | 36547753-59d7-4184-9a76-5317abf9a3aa | DHCP agent | openstack04.desy.de | nova | :-) | UP | neutron-dhcp-agent | > | 56ae1056-72b6-4a65-8bab-7f837c264777 | Metadata agent | openstack04.desy.de | None | :-) | UP | neutron-metadata-agent | > | 6678b278-6acb-439a-92a8-e2c7f932607c | L3 agent | octavia04.desy.de | nova | :-) | UP | neutron-l3-agent | > | 6681247b-3633-45cd-9017-e548fbd13e73 | Open vSwitch agent | neutron04.desy.de | None | :-) | UP | neutron-openvswitch-agent | > | 6d4ed4ed-5a8f-42ee-9052-ff9279a9dada | L3 agent | openstack04.desy.de | nova | :-) | UP | neutron-l3-agent | > | 8254d653-aff1-40e3-ade6-890d0a6b0617 | L3 agent | neutron04.desy.de | nova | :-) | UP | neutron-l3-agent | > | c4ce7df7-a682-4e2d-b841-73577f0abe80 | Open vSwitch agent | octavia04.desy.de | None | :-) | UP | neutron-openvswitch-agent | > +--------------------------------------+--------------------+-------------------------+-------------------+-------+-------+---------------------------+ > > Thanks in advance, > > Stefan Bujack > From tonyliu0592 at hotmail.com Thu Sep 17 18:31:35 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Thu, 17 Sep 2020 18:31:35 +0000 Subject: [neutron][ovn] Logical flow scaling (flow explosion in lr_in_arp_resolve) In-Reply-To: <1b20cc56-3c3d-4ca9-80f6-5e0e7a8f2983@www.fastmail.com> References: <1b20cc56-3c3d-4ca9-80f6-5e0e7a8f2983@www.fastmail.com> Message-ID: I am trying to reach 5000. 
The problem I hit is that northd is stuck in translating from NB to SB when connect router to external network. I assume all your 400 routers connect to the same subnet in that external network. I am trying another approach where one subnet is created for each router in external network. That may help to reduce the ARP flow? Thanks! Tony > -----Original Message----- > From: Krzysztof Klimonda > Sent: Thursday, September 17, 2020 8:57 AM > To: openstack-discuss at lists.openstack.org > Subject: [neutron][ovn] Logical flow scaling (flow explosion in > lr_in_arp_resolve) > > Hi, > > We're running some tests of ussuri deployment with ovn ML2 driver and > seeing some worrying numbers of logical flows generated for our test > deployment. > > As a test, we create 400 routes, 400 private networks and connect each > network to its own routers. We also connect each router to an external > network. After doing that a dump of logical flows shows almost 800k > logical flows, most of them in lr_in_arp_resolve table: > > --8<--8<--8<-- > # cat lflows.txt |grep -v Datapath |cut -d'(' -f 2 | cut -d ')' -f1 > |sort | uniq -c |sort -n | tail -10 > 3264 lr_in_learn_neighbor > 3386 ls_out_port_sec_l2 > 4112 lr_in_admission > 4202 ls_in_port_sec_l2 > 4898 lr_in_lookup_neighbor > 4900 lr_in_ip_routing > 9144 ls_in_l2_lkup > 9160 ls_in_arp_rsp > 22136 lr_in_ip_input > 671656 lr_in_arp_resolve > # > --8<--8<--8<-- > > ovn: 20.06.2 + patch for SNAT IP ARP reply issue > openvswitch: 2.13.0 > neutron: 16.1.0 > > I've seen some discussion about similar issue at OVS mailing lists: > https://www.mail-archive.com/ovs-discuss at openvswitch.org/msg07014.html - > is this relevant to neutron, and not just kubernetes? > > -- > Krzysztof Klimonda > kklimonda at syntaxhighlighted.com From amy at demarco.com Thu Sep 17 18:50:18 2020 From: amy at demarco.com (Amy Marrich) Date: Thu, 17 Sep 2020 13:50:18 -0500 Subject: [Diversity] Diversity & Inclusion WG Meeting 9/21 - Removing Divisive Language Message-ID: The Diversity and Inclusion WG will be holding a meeting to continue drafting the OSF's stance on the removal of Divisive Language within the OSF projects. The WG invites members of all OSF projects to participate in this effort and to join us at our next meeting Monday, September 21, at 17:00 UTC which will be held at https://meetpad.opendev.org/osf-diversity-and-inclusion. Drafts for the stance can be found at https://etherpad.opendev.org/p/divisivelanguage If you have any questions please let me and the team know here, on #openstack-diversity on IRC, or you can email me directly. Thanks, Amy Marrich (spotz) -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Sep 17 19:10:54 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 17 Sep 2020 12:10:54 -0700 Subject: [tc] weekly meeting time In-Reply-To: References: Message-ID: I am working on filling this out, but I had a thought: perhaps you want to wait to close the poll till the new TC is seated since they might have different schedules? Just a thought :) I'll finish filling out the poll now. 
-Kendall (diablo_rojo) On Thu, Sep 17, 2020 at 6:05 AM Mohammed Naser wrote: > Hi folks: > > Given that we've landed the change to start having weekly meetings > again, it's time to start picking a time: > > https://doodle.com/poll/xw2wiebm2ayqxvki > > A few people also mentioned they wouldn't mind taking the current > office hours time towards that, I guess we can try and come to an > agreement if we want to do that here (or simply by voting that time on > the Doodle). > > Thanks, > Mohammed > > -- > Mohammed Naser > VEXXHOST, Inc. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kklimonda at syntaxhighlighted.com Thu Sep 17 19:14:51 2020 From: kklimonda at syntaxhighlighted.com (Krzysztof Klimonda) Date: Thu, 17 Sep 2020 21:14:51 +0200 Subject: =?UTF-8?Q?Re:_[neutron][ovn]_Logical_flow_scaling_(flow_explosion_in_lr=5F?= =?UTF-8?Q?in=5Farp=5Fresolve)?= In-Reply-To: References: <1b20cc56-3c3d-4ca9-80f6-5e0e7a8f2983@www.fastmail.com> Message-ID: <6a8fca0d-65ee-4c5f-8170-b533913b872a@www.fastmail.com> Hi Tony, Indeed I forgot to mention that all routers are using the same external network (and subnet) for the external gateway. Creating separate external networks per router wouldn't really work for us, and I'm not even quite sure what the setup would look like in that case. -- Krzysztof Klimonda kklimonda at syntaxhighlighted.com On Thu, Sep 17, 2020, at 20:31, Tony Liu wrote: > I am trying to reach 5000. The problem I hit is that northd is > stuck in translating from NB to SB when connect router to external > network. > > I assume all your 400 routers connect to the same subnet in that > external network. I am trying another approach where one subnet > is created for each router in external network. That may help to > reduce the ARP flow? > > Thanks! > Tony > > -----Original Message----- > > From: Krzysztof Klimonda > > Sent: Thursday, September 17, 2020 8:57 AM > > To: openstack-discuss at lists.openstack.org > > Subject: [neutron][ovn] Logical flow scaling (flow explosion in > > lr_in_arp_resolve) > > > > Hi, > > > > We're running some tests of ussuri deployment with ovn ML2 driver and > > seeing some worrying numbers of logical flows generated for our test > > deployment. > > > > As a test, we create 400 routes, 400 private networks and connect each > > network to its own routers. We also connect each router to an external > > network. After doing that a dump of logical flows shows almost 800k > > logical flows, most of them in lr_in_arp_resolve table: > > > > --8<--8<--8<-- > > # cat lflows.txt |grep -v Datapath |cut -d'(' -f 2 | cut -d ')' -f1 > > |sort | uniq -c |sort -n | tail -10 > > 3264 lr_in_learn_neighbor > > 3386 ls_out_port_sec_l2 > > 4112 lr_in_admission > > 4202 ls_in_port_sec_l2 > > 4898 lr_in_lookup_neighbor > > 4900 lr_in_ip_routing > > 9144 ls_in_l2_lkup > > 9160 ls_in_arp_rsp > > 22136 lr_in_ip_input > > 671656 lr_in_arp_resolve > > # > > --8<--8<--8<-- > > > > ovn: 20.06.2 + patch for SNAT IP ARP reply issue > > openvswitch: 2.13.0 > > neutron: 16.1.0 > > > > I've seen some discussion about similar issue at OVS mailing lists: > > https://www.mail-archive.com/ovs-discuss at openvswitch.org/msg07014.html - > > is this relevant to neutron, and not just kubernetes? > > > > -- > > Krzysztof Klimonda > > kklimonda at syntaxhighlighted.com > > From kennelson11 at gmail.com Thu Sep 17 19:38:00 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 17 Sep 2020 12:38:00 -0700 Subject: [TC] vPTG Details Message-ID: Hello! 
As you might or might not have seen, I signed up the TC for two chunks of time (based on poll results). The times are: - 15-17 UTC Tuesday October 27th (2 hours; Grizzly Room) - 13-17 UTC Friday October 30 (4 hours; Grizzly Room Hope to see you all there! Also, please add your discussion topics to our brainstorming etherpad[1]! -Kendall Nelson (diablo_rojo) [1] https://etherpad.opendev.org/p/tc-wallaby-ptg -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Sep 17 21:04:54 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 17 Sep 2020 23:04:54 +0200 Subject: [neutron] Drivers meeting cancelled Message-ID: <20200917210454.GA575830@p1> Hi, I know that it's been already a while since we had our last drivers meeting but recently we don't have any new RFEs to discuss and because of that lets cancel it this week too. If You have any ideas or topics which You would like to discuss on the drivers meeting, please add it to the "On Demand" section at [1]. [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers -- Slawek Kaplonski Senior software engineer Red Hat From dharmendra.kushwaha at gmail.com Thu Sep 17 21:30:10 2020 From: dharmendra.kushwaha at gmail.com (Dharmendra Kushwaha) Date: Fri, 18 Sep 2020 06:30:10 +0900 Subject: [tacker] Propose Toshiaki Takahashi for tacker core In-Reply-To: <1749c41904a.10500e4b2107105.2819584261008238588@ghanshyammann.com> References: <9943505f-ed55-0081-5e07-50c4e17877c6@hco.ntt.co.jp_1> <1749c41904a.10500e4b2107105.2819584261008238588@ghanshyammann.com> Message-ID: <83F187EE-792A-4F35-AB94-01D3F538CBEB@gmail.com> +1 Thanks & Regards Dharmendra Kushwaha > On 17-Sep-2020, at 10:29 PM, Ghanshyam Mann wrote: > > I am not Tacker Core but +1for Takahashi , he has been active in Tacker community since a long. > > -gmann > > > ---- On Wed, 16 Sep 2020 23:52:55 -0500 Yoshito Ito wrote ---- >> +1. Toshiaki has been active to give great contributions to Tacker. >> >> >> Regards, >> >> Yoshito Ito >> >> >>> On 2020/09/17 4:06, yasufum wrote: >>> Toshiaki Takahashi (takahashi-tsc) has been so active for reviewing, >>> fixing bugs and answering questions in the recent releases [1][2] and >>> had several sessions on summits for Tacker. In addition, he is now well >>> distinguished as one of the responsibility from ETSI-NFV standard >>> community as a contributor between the standard and implementation for >>> the recent contributions for both of OpenStack and ETSI. >>> >>> I'd appreciate if we add Toshiaki to the core team. >>> >>> [1] https://www.stackalytics.com/?company=nec&module=tacker >>> [2] >>> https://www.stackalytics.com/?user_id=t-takahashi%40ig.jp.nec.com&metric=marks >>> >>> >>> Regards, >>> Yasufumi >>> >>> >> >> >> > From gmann at ghanshyammann.com Thu Sep 17 21:48:41 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 17 Sep 2020 16:48:41 -0500 Subject: [policy] Wallaby PTG planning Message-ID: <1749e0a7b57.101bfea9a8055.4439663812366981422@ghanshyammann.com> Hello Everyone, We have booked 2 hrs slot for the policy popup team in the Wallaby vPTG. Time is Monday 23.00-00 UTC and Tuesday 00.-01:00 UTC Tuesday in Havana room (Both are continuous slots). I have created the etherpad to collect the discussion point. Please add the topic you would like to discuss - https://etherpad.opendev.org/p/policy-popup-wallaby-ptg We request more and more projects to participate in this and discuss the various queries or plan to switch to the new policy. 
-gmann & raildo From ltoscano at redhat.com Thu Sep 17 21:52:34 2020 From: ltoscano at redhat.com (Luigi Toscano) Date: Thu, 17 Sep 2020 17:52:34 -0400 (EDT) Subject: [all][goals] Switch legacy Zuul jobs to native - update #4 In-Reply-To: <643518500.50758815.1600379483985.JavaMail.zimbra@redhat.com> Message-ID: <1344607023.50758996.1600379554953.JavaMail.zimbra@redhat.com> Hi, another round of updates. The list of the remaining legacy jobs is shortening but the Victoria branching will happen next week, so we need to speed up a bit. Status ====== As usual, the etherpad [3] lists the full status and the links to the patches. Here is a more detailed report for each remaining project. cinder ------ The only legacy job left is a bit unusual, but there is a WIP review now. It needs some more work to fix the reporting. designate --------- The patch for only legacy job has been approved, but there is an unrelated issue which requires input from the designate team. heat ------ Only one legacy job left, the heat cores are aware of it. infra ----- Only one devstack-gate job in the os-loganalyze repository, which should be probably retired. There are 2 other legacy jobs, but not devstack-gate, so less urgent. ironic ------ The port of the remaining legacy job is going to be merged soon (even though it's failing, but the legacy job is failing as well). manila ------ There is only one legacy-base job (not devstack-gate), so less urgent, but there is a patch for it. monasca ------- The monasca-transform contains a devstack-gate legacy job, but the repository is almost retired. There are 3 legacy (non devstack-gate) jobs in other repositories. murano ------ There are two legacy jobs left (murano-apps-refstackclient-unittest and murano-dashboard-sanity-check) which require some input from the team. neutron ------- Only two legacy grenade jobs are left and they are being worked on. nova ---- The work is ongoing on the remaining legacy jobs (a new job which will replace a legacy one, and a grenade job)). zaqar ----- Mostly done, it only requires a python-zaqarclient backport to be merged in Victoria. References ========== [1] the goal: https://governance.openstack.org/tc/goals/selected/victoria/native-zuulv3-jobs.html [2] the up-to-date Zuul v3 porting guide: https://docs.openstack.org/project-team-guide/zuulv3.html [3] the etherpad which tracks the current status: https://etherpad.opendev.org/p/goal-victoria-native-zuulv3-migration [4] the previous reports: http://lists.openstack.org/pipermail/openstack-discuss/2020-July/016058.html http://lists.openstack.org/pipermail/openstack-discuss/2020-August/016561.html http://lists.openstack.org/pipermail/openstack-discuss/2020-August/016892.html Ciao -- Luigi From tonyliu0592 at hotmail.com Fri Sep 18 03:40:54 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Fri, 18 Sep 2020 03:40:54 +0000 Subject: [Neutron] Not create .2 port Message-ID: Hi, When create a subnet, by default, the first address is the gateway and Neutron also allocates an address for serving DHCP and DNS. Is there any way to NOT create such port when creating subnet? Thanks! Tony From arne.wiebalck at cern.ch Fri Sep 18 06:21:44 2020 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Fri, 18 Sep 2020 08:21:44 +0200 Subject: [baremetal-sig][ironic] Redfish interoperability profiles: status meeting Message-ID: Dear all, This cycle, the Ironic community has started to work on Redfish interoperability profiles [0]. 
These profiles allow to describe requirements for a Redfish endpoint and to automatically verify them via tools like [1]. The aim of this activity is to have an easy way for both vendors and operators to validate that specific hardware is compatible with the Redfish support in Ironic. We will be meeting next week to discuss the current status of this activity. A representative from the DMTF (as the defining body for the Redfish standard) has kindly agreed to join us. Everybody interested is welcome, of course: Thu Sep 24 at 12pm UTC https://cern.zoom.us/j/94808950339 Any question, let us know! Cheers, Richard & Arne [0] https://www.dmtf.org/sites/default/files/standards/documents/DSP0272_1.0.0_0.pdf [1] https://github.com/DMTF/Redfish-Interop-Validator From skaplons at redhat.com Fri Sep 18 07:49:04 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 18 Sep 2020 09:49:04 +0200 Subject: [Neutron] Not create .2 port In-Reply-To: References: Message-ID: <20200918074904.GB701072@p1> Hi, On Fri, Sep 18, 2020 at 03:40:54AM +0000, Tony Liu wrote: > Hi, > > When create a subnet, by default, the first address is the > gateway and Neutron also allocates an address for serving > DHCP and DNS. Is there any way to NOT create such port when > creating subnet? You can specify "--gateway None" if You don't want to have gateway configured in Your subnet. And for dhcp ports, You can set "--no-dhcp" for subnet so it will not create dhcp ports in such subnet also. > > > Thanks! > Tony > > -- Slawek Kaplonski Senior software engineer Red Hat From kklimonda at syntaxhighlighted.com Fri Sep 18 08:31:50 2020 From: kklimonda at syntaxhighlighted.com (Krzysztof Klimonda) Date: Fri, 18 Sep 2020 10:31:50 +0200 Subject: =?UTF-8?Q?Re:_[neutron][ovn]_Logical_flow_scaling_(flow_explosion_in_lr=5F?= =?UTF-8?Q?in=5Farp=5Fresolve)?= In-Reply-To: <6a8fca0d-65ee-4c5f-8170-b533913b872a@www.fastmail.com> References: <1b20cc56-3c3d-4ca9-80f6-5e0e7a8f2983@www.fastmail.com> <6a8fca0d-65ee-4c5f-8170-b533913b872a@www.fastmail.com> Message-ID: So just for testing I've applied this patch to our neutron-server: --8<--8<--8<-- diff --git a/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py b/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py index 23a841d7a1..41200786f1 100644 --- a/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py +++ b/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py @@ -1141,11 +1141,15 @@ class OVNClient(object): enabled = router.get('admin_state_up') lrouter_name = utils.ovn_name(router['id']) added_gw_port = None + options = { + "always_learn_from_arp_request": "false", + "dynamic_neigh_routers": "true" + } with self._nb_idl.transaction(check_error=True) as txn: txn.add(self._nb_idl.create_lrouter(lrouter_name, external_ids=external_ids, enabled=enabled, - options={})) + options=options)) # TODO(lucasagomes): add_external_gateway is being only used # by the ovn_db_sync.py script, remove it after the database # synchronization work --8<--8<--8<-- and also executed that for each logical router in OVN: # ovn-nbctl set Logical_Router $router options=dynamic_neigh_routers=true,always_learn_from_arp_request=false This had a huge impact on both a number of logical flows and a number of ovs flows on chassis nodes: --8<--8<--8<-- # cat lflows-new.txt |grep -v Datapath |cut -d'(' -f 2 | cut -d ')' -f1 |sort | uniq -c |sort -n | tail -10 2170 ls_out_port_sec_l2 2172 lr_in_learn_neighbor 2666 lr_in_admission 2690 ls_in_port_sec_l2 3190 lr_in_ip_routing 4276 
lr_in_lookup_neighbor 4873 lr_in_arp_resolve 5864 ls_in_arp_rsp 5873 ls_in_l2_lkup 14343 lr_in_ip_input # ovn-sbctl --timeout=120 lflow-list > lflows-new.txt --8<--8<--8<-- (and this is even more routers than before - 500 vs 400). I'll have to read what impact do those options have on ARP activity though. -- Krzysztof Klimonda kklimonda at syntaxhighlighted.com On Thu, Sep 17, 2020, at 21:14, Krzysztof Klimonda wrote: > Hi Tony, > > Indeed I forgot to mention that all routers are using the same external > network (and subnet) for the external gateway. > > Creating separate external networks per router wouldn't really work for > us, and I'm not even quite sure what the setup would look like in that > case. > > -- > Krzysztof Klimonda > kklimonda at syntaxhighlighted.com > > On Thu, Sep 17, 2020, at 20:31, Tony Liu wrote: > > I am trying to reach 5000. The problem I hit is that northd is > > stuck in translating from NB to SB when connect router to external > > network. > > > > I assume all your 400 routers connect to the same subnet in that > > external network. I am trying another approach where one subnet > > is created for each router in external network. That may help to > > reduce the ARP flow? > > > > Thanks! > > Tony > > > -----Original Message----- > > > From: Krzysztof Klimonda > > > Sent: Thursday, September 17, 2020 8:57 AM > > > To: openstack-discuss at lists.openstack.org > > > Subject: [neutron][ovn] Logical flow scaling (flow explosion in > > > lr_in_arp_resolve) > > > > > > Hi, > > > > > > We're running some tests of ussuri deployment with ovn ML2 driver and > > > seeing some worrying numbers of logical flows generated for our test > > > deployment. > > > > > > As a test, we create 400 routes, 400 private networks and connect each > > > network to its own routers. We also connect each router to an external > > > network. After doing that a dump of logical flows shows almost 800k > > > logical flows, most of them in lr_in_arp_resolve table: > > > > > > --8<--8<--8<-- > > > # cat lflows.txt |grep -v Datapath |cut -d'(' -f 2 | cut -d ')' -f1 > > > |sort | uniq -c |sort -n | tail -10 > > > 3264 lr_in_learn_neighbor > > > 3386 ls_out_port_sec_l2 > > > 4112 lr_in_admission > > > 4202 ls_in_port_sec_l2 > > > 4898 lr_in_lookup_neighbor > > > 4900 lr_in_ip_routing > > > 9144 ls_in_l2_lkup > > > 9160 ls_in_arp_rsp > > > 22136 lr_in_ip_input > > > 671656 lr_in_arp_resolve > > > # > > > --8<--8<--8<-- > > > > > > ovn: 20.06.2 + patch for SNAT IP ARP reply issue > > > openvswitch: 2.13.0 > > > neutron: 16.1.0 > > > > > > I've seen some discussion about similar issue at OVS mailing lists: > > > https://www.mail-archive.com/ovs-discuss at openvswitch.org/msg07014.html - > > > is this relevant to neutron, and not just kubernetes? > > > > > > -- > > > Krzysztof Klimonda > > > kklimonda at syntaxhighlighted.com > > > > > > From dalvarez at redhat.com Fri Sep 18 09:17:01 2020 From: dalvarez at redhat.com (Daniel Alvarez Sanchez) Date: Fri, 18 Sep 2020 11:17:01 +0200 Subject: [neutron][ovn] Logical flow scaling (flow explosion in lr_in_arp_resolve) In-Reply-To: References: <1b20cc56-3c3d-4ca9-80f6-5e0e7a8f2983@www.fastmail.com> <6a8fca0d-65ee-4c5f-8170-b533913b872a@www.fastmail.com> Message-ID: Hey folks, thanks for bringing this up! 
On Fri, Sep 18, 2020 at 10:39 AM Krzysztof Klimonda < kklimonda at syntaxhighlighted.com> wrote: > So just for testing I've applied this patch to our neutron-server: > > --8<--8<--8<-- > diff --git > a/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py > b/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py > index 23a841d7a1..41200786f1 100644 > --- a/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py > +++ b/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py > @@ -1141,11 +1141,15 @@ class OVNClient(object): > enabled = router.get('admin_state_up') > lrouter_name = utils.ovn_name(router['id']) > added_gw_port = None > + options = { > + "always_learn_from_arp_request": "false", > + "dynamic_neigh_routers": "true" > + } > with self._nb_idl.transaction(check_error=True) as txn: > txn.add(self._nb_idl.create_lrouter(lrouter_name, > external_ids=external_ids, > enabled=enabled, > - options={})) > + options=options)) > # TODO(lucasagomes): add_external_gateway is being only used > # by the ovn_db_sync.py script, remove it after the database > # synchronization work > --8<--8<--8<-- > > Do you want to propose a formal patch with this change and some test? I think we may want this in. The question is if we want to make it configurable and, if so, how to expose it to an admin... When enabling this, we're going to have ARP requests between the logical routers and, depending on the topology, this can create other scaling issues due to MAC_Binding entries in the Southbound database. Also, these entries do not age out [0] so we might end up with a very large amount of rows here. This is probably an unlikely pattern in OpenStack where usually projects connect their routers to a localnet Logical Switch but routers across projects are not interconnected. Hence, the amount of E/W traffic is not a full mesh and, probably, even in the worst case it's going to be still better than what we have right now in terms of scaling. On the ARP traffic side, it should not be a big deal, they'll be sent between routers when needed and, since the entries do not age out, they're not going to be sent again. Again, thanks for bringing this up! I'd +1 this change :) [0] https://www.mail-archive.com/ovs-discuss at openvswitch.org/msg05917.html and also executed that for each logical router in OVN: > > # ovn-nbctl set Logical_Router $router > options=dynamic_neigh_routers=true,always_learn_from_arp_request=false > > This had a huge impact on both a number of logical flows and a number of > ovs flows on chassis nodes: > > --8<--8<--8<-- > # cat lflows-new.txt |grep -v Datapath |cut -d'(' -f 2 | cut -d ')' -f1 > |sort | uniq -c |sort -n | tail -10 > 2170 ls_out_port_sec_l2 > 2172 lr_in_learn_neighbor > 2666 lr_in_admission > 2690 ls_in_port_sec_l2 > 3190 lr_in_ip_routing > 4276 lr_in_lookup_neighbor > 4873 lr_in_arp_resolve > 5864 ls_in_arp_rsp > 5873 ls_in_l2_lkup > 14343 lr_in_ip_input > # ovn-sbctl --timeout=120 lflow-list > lflows-new.txt > --8<--8<--8<-- > > (and this is even more routers than before - 500 vs 400). I'll have to > read what impact do those options have on ARP activity though. > > -- > Krzysztof Klimonda > kklimonda at syntaxhighlighted.com > > On Thu, Sep 17, 2020, at 21:14, Krzysztof Klimonda wrote: > > Hi Tony, > > > > Indeed I forgot to mention that all routers are using the same external > > network (and subnet) for the external gateway. 
> > > > Creating separate external networks per router wouldn't really work for > > us, and I'm not even quite sure what the setup would look like in that > > case. > > > > -- > > Krzysztof Klimonda > > kklimonda at syntaxhighlighted.com > > > > On Thu, Sep 17, 2020, at 20:31, Tony Liu wrote: > > > I am trying to reach 5000. The problem I hit is that northd is > > > stuck in translating from NB to SB when connect router to external > > > network. > > > > > > I assume all your 400 routers connect to the same subnet in that > > > external network. I am trying another approach where one subnet > > > is created for each router in external network. That may help to > > > reduce the ARP flow? > > > > > > Thanks! > > > Tony > > > > -----Original Message----- > > > > From: Krzysztof Klimonda > > > > Sent: Thursday, September 17, 2020 8:57 AM > > > > To: openstack-discuss at lists.openstack.org > > > > Subject: [neutron][ovn] Logical flow scaling (flow explosion in > > > > lr_in_arp_resolve) > > > > > > > > Hi, > > > > > > > > We're running some tests of ussuri deployment with ovn ML2 driver and > > > > seeing some worrying numbers of logical flows generated for our test > > > > deployment. > > > > > > > > As a test, we create 400 routes, 400 private networks and connect > each > > > > network to its own routers. We also connect each router to an > external > > > > network. After doing that a dump of logical flows shows almost 800k > > > > logical flows, most of them in lr_in_arp_resolve table: > > > > > > > > --8<--8<--8<-- > > > > # cat lflows.txt |grep -v Datapath |cut -d'(' -f 2 | cut -d ')' -f1 > > > > |sort | uniq -c |sort -n | tail -10 > > > > 3264 lr_in_learn_neighbor > > > > 3386 ls_out_port_sec_l2 > > > > 4112 lr_in_admission > > > > 4202 ls_in_port_sec_l2 > > > > 4898 lr_in_lookup_neighbor > > > > 4900 lr_in_ip_routing > > > > 9144 ls_in_l2_lkup > > > > 9160 ls_in_arp_rsp > > > > 22136 lr_in_ip_input > > > > 671656 lr_in_arp_resolve > > > > # > > > > --8<--8<--8<-- > > > > > > > > ovn: 20.06.2 + patch for SNAT IP ARP reply issue > > > > openvswitch: 2.13.0 > > > > neutron: 16.1.0 > > > > > > > > I've seen some discussion about similar issue at OVS mailing lists: > > > > > https://www.mail-archive.com/ovs-discuss at openvswitch.org/msg07014.html - > > > > is this relevant to neutron, and not just kubernetes? > > > > > > > > -- > > > > Krzysztof Klimonda > > > > kklimonda at syntaxhighlighted.com > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpena at redhat.com Fri Sep 18 09:59:15 2020 From: jpena at redhat.com (Javier Pena) Date: Fri, 18 Sep 2020 05:59:15 -0400 (EDT) Subject: =?utf-8?Q?[rpm-packaging]_Proposing_Her?= =?utf-8?Q?v=C3=A9_Beraud_as_new_core_reviewer?= In-Reply-To: <1110488948.50802218.1600423119788.JavaMail.zimbra@redhat.com> Message-ID: <545442464.50802244.1600423155901.JavaMail.zimbra@redhat.com> Hi, I would like to propose Hervé Beraud as a new core reviewer for the rpm-packaging project. Hervé has been providing consistently good reviews over the last few months, and I think he would be a great addition to the core reviewer team. Existing cores, please vote on the thread! 
Regards, Javier Peña From jean-philippe at evrard.me Fri Sep 18 10:05:14 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Fri, 18 Sep 2020 12:05:14 +0200 Subject: [all][elections][tc] Stepping down from the TC Message-ID: <951e58cd-d87a-4063-845b-db47198b4fda@www.fastmail.com> Hello everyone, This is probably not a surprise for most of you, but I think it's worth writing it down: I won't be a candidate for another term at the TC. It was a pleasure to work with all of you. I am not leaving because I don't find the TC interesting anymore, quite the opposite. I will probably still lurk and follow the channels and ML. I just have switched to (yet) another duty at my employer, keeping me away from OpenStack. Next to this, I believe it's good to have some fresh members in the TC, it's been a while I am part of this family now :) For those interested by running the TC: Don't hesitate to run! We need fresh ideas and motivated people. It's not by having always the same people at the helm that OpenStack will naturally or drastically evolve. If you want to change OpenStack, be the change! Thanks to all of you. It was nice. Regards, Jean-Philippe Evrard (evrardjp) From balazs.gibizer at est.tech Fri Sep 18 10:08:22 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Fri, 18 Sep 2020 12:08:22 +0200 Subject: [TC][nova] Failing device detachments on Focal Message-ID: Hi, I want to raise a potential conflict between the fact that OpenStack Victoria officially supports Ubuntu Focal[3] and a high severity regression in Nova [1] caused by a qemu bug [2] in the qemu version that is shipped with Focal. The Nova team documented[4] it as a limitation without a known workaround in Nova. Also as far as I know Canonical does not support downgrading the qemu version so I don't see a viable workaround on Focal side either. Is the Nova reno[4] enough from the TC perspective to go forward with the switch to Focal and with the Victoria release? cheers, gibi [1] https://bugs.launchpad.net/nova/+bug/1882521 [2] https://bugs.launchpad.net/qemu/+bug/1894804 [3] https://governance.openstack.org/tc/reference/runtimes/victoria.html [4] https://review.opendev.org/#/c/752654 From radoslaw.piliszek at gmail.com Fri Sep 18 10:20:35 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 18 Sep 2020 12:20:35 +0200 Subject: [TC][nova] Failing device detachments on Focal In-Reply-To: References: Message-ID: Is there a version of qemu that works on Focal that deployment projects could use now? -yoctozepto On Fri, Sep 18, 2020 at 12:16 PM Balázs Gibizer wrote: > > Hi, > > I want to raise a potential conflict between the fact that OpenStack > Victoria officially supports Ubuntu Focal[3] and a high severity > regression in Nova [1] caused by a qemu bug [2] in the qemu version > that is shipped with Focal. > > The Nova team documented[4] it as a limitation without a known > workaround in Nova. Also as far as I know Canonical does not support > downgrading the qemu version so I don't see a viable workaround on > Focal side either. > > Is the Nova reno[4] enough from the TC perspective to go forward with > the switch to Focal and with the Victoria release? 
> > cheers, > gibi > > [1] https://bugs.launchpad.net/nova/+bug/1882521 > [2] https://bugs.launchpad.net/qemu/+bug/1894804 > [3] https://governance.openstack.org/tc/reference/runtimes/victoria.html > [4] https://review.opendev.org/#/c/752654 > > > From jean-philippe at evrard.me Fri Sep 18 10:23:07 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Fri, 18 Sep 2020 12:23:07 +0200 Subject: [rpm-packaging][packaging][sigs] Moving to a packaging SIG Message-ID: <341a7032-22a9-4011-996b-9f829d23023d@www.fastmail.com> Hello folks, It's maybe a little late, as the elections process already have started, but I think it's worth re-opening the conversation about migrating the rpm-packaging project to a SIG. Last time we tried this, we were stopped by bureaucracy and technical details (IIRC?) which shouldn't be there anymore. I don't see a reason why this team couldn't move to a SIG now, which would be a lightweight governance model for you. You would keep your repos, and can still propose releases (if I am not mistaken). Things shouldn't change much. You can decide to change/add chair people when you want, and don't have to sign off a PTL for 6 months at elections, which was the most concerning matter for your projects IIRC (which lead to missed elections deadlines in the past). I hope that this SIG can evolve in the future, to group not only RPM packaging, but also other packaging mechanism that want to follow under that new banner. That's why I proposed the SIG to be named "Packaging" instead. It's up to the community now to gather around the same banner :) So if you're interested with this change, please vote on here: [1] [2]. Regards, Jean-Philippe Evrard (evrardjp) [1]: https://review.opendev.org/752659 [2]: https://review.opendev.org/752661 From balazs.gibizer at est.tech Fri Sep 18 11:01:07 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Fri, 18 Sep 2020 13:01:07 +0200 Subject: [TC][nova] Failing device detachments on Focal In-Reply-To: References: Message-ID: On Fri, Sep 18, 2020 at 12:20, Radosław Piliszek wrote: > Is there a version of qemu that works on Focal that deployment > projects could use now? We use the train ubuntu cloud archive to get an unaffected qemu version, which is now 4.0 cheers, gibi > > -yoctozepto > > On Fri, Sep 18, 2020 at 12:16 PM Balázs Gibizer > wrote: >> >> Hi, >> >> I want to raise a potential conflict between the fact that OpenStack >> Victoria officially supports Ubuntu Focal[3] and a high severity >> regression in Nova [1] caused by a qemu bug [2] in the qemu version >> that is shipped with Focal. >> >> The Nova team documented[4] it as a limitation without a known >> workaround in Nova. Also as far as I know Canonical does not support >> downgrading the qemu version so I don't see a viable workaround on >> Focal side either. >> >> Is the Nova reno[4] enough from the TC perspective to go forward >> with >> the switch to Focal and with the Victoria release? 
>> >> cheers, >> gibi >> >> [1] https://bugs.launchpad.net/nova/+bug/1882521 >> [2] https://bugs.launchpad.net/qemu/+bug/1894804 >> [3] >> https://governance.openstack.org/tc/reference/runtimes/victoria.html >> [4] https://review.opendev.org/#/c/752654 >> >> >> From jungleboyj at gmail.com Fri Sep 18 11:07:32 2020 From: jungleboyj at gmail.com (Jay Bryant) Date: Fri, 18 Sep 2020 06:07:32 -0500 Subject: [all][elections][tc] Stepping down from the TC In-Reply-To: <951e58cd-d87a-4063-845b-db47198b4fda@www.fastmail.com> References: <951e58cd-d87a-4063-845b-db47198b4fda@www.fastmail.com> Message-ID: <8c3cc964-c640-dd79-d27d-105feaed5395@gmail.com> JP, Thank you for all your efforts as part of the TC!  Been a pleasure to work with you! Jay On 9/18/2020 5:05 AM, Jean-Philippe Evrard wrote: > Hello everyone, > > This is probably not a surprise for most of you, but I think it's worth writing it down: I won't be a candidate for another term at the TC. It was a pleasure to work with all of you. > > I am not leaving because I don't find the TC interesting anymore, quite the opposite. I will probably still lurk and follow the channels and ML. I just have switched to (yet) another duty at my employer, keeping me away from OpenStack. Next to this, I believe it's good to have some fresh members in the TC, it's been a while I am part of this family now :) > > For those interested by running the TC: Don't hesitate to run! We need fresh ideas and motivated people. It's not by having always the same people at the helm that OpenStack will naturally or drastically evolve. If you want to change OpenStack, be the change! > > Thanks to all of you. It was nice. > > Regards, > Jean-Philippe Evrard (evrardjp) > From eblock at nde.ag Fri Sep 18 11:09:26 2020 From: eblock at nde.ag (Eugen Block) Date: Fri, 18 Sep 2020 11:09:26 +0000 Subject: [Neutron] Not create .2 port In-Reply-To: Message-ID: <20200918110926.Horde.0iwwPpGUsNMZcfdsfctbqCy@webmail.nde.ag> Hi Tony, I'm not sure if that's what you mean, but during subnet creation you can disable the gateway in two ways: - Horizon: check "Disable Gateway" in "Create subnet" dialog - CLI: openstack subnet create --gateway none ... This doesn't create any ports in the subnet. Or do you mean something different? Regards, Eugen Zitat von Tony Liu : > Hi, > > When create a subnet, by default, the first address is the > gateway and Neutron also allocates an address for serving > DHCP and DNS. Is there any way to NOT create such port when > creating subnet? > > > Thanks! > Tony From smooney at redhat.com Fri Sep 18 11:11:24 2020 From: smooney at redhat.com (Sean Mooney) Date: Fri, 18 Sep 2020 12:11:24 +0100 Subject: [TC][nova] Failing device detachments on Focal In-Reply-To: References: Message-ID: On Fri, 2020-09-18 at 12:20 +0200, Radosław Piliszek wrote: > Is there a version of qemu that works on Focal that deployment > projects could use now? not that we are aware of. there are some PPAs we coudl try like https://launchpad.net/~jacob/+archive/ubuntu/virtualisation it looks like that is proving 5.0 which will be what ships in 20.10 so the victora cloud archive shoudl be updated with that eventually currenlty it does not have libvirt/qemu in the repo but it should once 20.10 is out. https://ubuntu-cloud.archive.canonical.com/ubuntu/dists/focal-proposed/victoria/main/binary-amd64/Packages thats our best bet to resolve this short term on focal i think. 
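For Focal specifically, once the Victoria cloud archive is populated, pulling in the newer qemu would look roughly like this (untested sketch; package names are the usual Ubuntu ones and the exact version depends on what lands in the UCA):

sudo add-apt-repository cloud-archive:victoria
sudo apt-get update
sudo apt-get install qemu-kvm libvirt-daemon-system
apt-cache policy qemu-kvm    # confirm the UCA candidate (5.x) is the one selected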
> > -yoctozepto > > On Fri, Sep 18, 2020 at 12:16 PM Balázs Gibizer wrote: > > > > Hi, > > > > I want to raise a potential conflict between the fact that OpenStack > > Victoria officially supports Ubuntu Focal[3] and a high severity > > regression in Nova [1] caused by a qemu bug [2] in the qemu version > > that is shipped with Focal. > > > > The Nova team documented[4] it as a limitation without a known > > workaround in Nova. Also as far as I know Canonical does not support > > downgrading the qemu version so I don't see a viable workaround on > > Focal side either. > > > > Is the Nova reno[4] enough from the TC perspective to go forward with > > the switch to Focal and with the Victoria release? > > > > cheers, > > gibi > > > > [1] https://bugs.launchpad.net/nova/+bug/1882521 > > [2] https://bugs.launchpad.net/qemu/+bug/1894804 > > [3] https://governance.openstack.org/tc/reference/runtimes/victoria.html > > [4] https://review.opendev.org/#/c/752654 > > > > > > > > From smooney at redhat.com Fri Sep 18 11:18:08 2020 From: smooney at redhat.com (Sean Mooney) Date: Fri, 18 Sep 2020 12:18:08 +0100 Subject: [TC][nova] Failing device detachments on Focal In-Reply-To: References: Message-ID: <6697571c0b2cc1990c5ba993fbbbedd849f0b978.camel@redhat.com> On Fri, 2020-09-18 at 13:01 +0200, Balázs Gibizer wrote: > > On Fri, Sep 18, 2020 at 12:20, Radosław Piliszek > wrote: > > Is there a version of qemu that works on Focal that deployment > > projects could use now? > > We use the train ubuntu cloud archive to get an unaffected qemu > version, which is now 4.0 that is what we are going to do for bionic. we cant do that for focal. there is no focal package for train. the cloud archive can only be used for backprots of new packages form new releases. we shoudl be abel to get 5.0 form the victoria cloud archive when 20.10 is released which hopefully wont have the issue we are seeing 4.3? > > cheers, > gibi > > > > > -yoctozepto > > > > On Fri, Sep 18, 2020 at 12:16 PM Balázs Gibizer > > wrote: > > > > > > Hi, > > > > > > I want to raise a potential conflict between the fact that OpenStack > > > Victoria officially supports Ubuntu Focal[3] and a high severity > > > regression in Nova [1] caused by a qemu bug [2] in the qemu version > > > that is shipped with Focal. > > > > > > The Nova team documented[4] it as a limitation without a known > > > workaround in Nova. Also as far as I know Canonical does not support > > > downgrading the qemu version so I don't see a viable workaround on > > > Focal side either. > > > > > > Is the Nova reno[4] enough from the TC perspective to go forward > > > with > > > the switch to Focal and with the Victoria release? > > > > > > cheers, > > > gibi > > > > > > [1] https://bugs.launchpad.net/nova/+bug/1882521 > > > [2] https://bugs.launchpad.net/qemu/+bug/1894804 > > > [3] > > > https://governance.openstack.org/tc/reference/runtimes/victoria.html > > > [4] https://review.opendev.org/#/c/752654 > > > > > > > > > > > > From lyarwood at redhat.com Fri Sep 18 11:19:28 2020 From: lyarwood at redhat.com (Lee Yarwood) Date: Fri, 18 Sep 2020 12:19:28 +0100 Subject: [TC][nova] Failing device detachments on Focal In-Reply-To: References: Message-ID: <20200918111928.wxwfzzigecztxt25@lyarwood.usersys.redhat.com> On 18-09-20 13:01:07, Balázs Gibizer wrote: > On Fri, Sep 18, 2020 at 12:20, Radosław Piliszek > wrote: > > Is there a version of qemu that works on Focal that deployment > > projects could use now? 
Not that I'm aware of, I don't believe Canonical have a UCA for Focal yet, let alone one that has a version of QEMU that doesn't hit this issue. It's worth noting that we haven't been able to reproduce this outside of OpenStack CI *and* similar sized nested test environments running the tox -e full Tempest target on Focal or Bionic with the Ussuri UCA. I've personally been unable to reproduce this on Fedora 32 with or without the virt-preview repo. > We use the train ubuntu cloud archive to get an unaffected qemu version, > which is now 4.0 We are proposing doing this on our current Bionic based CI hosts so we can still go ahead with a planned min version bump in V: https://review.opendev.org/#/q/status:open+topic:bump-libvirt-qemu-victoria > > On Fri, Sep 18, 2020 at 12:16 PM Balázs Gibizer > > wrote: > > > > > > Hi, > > > > > > I want to raise a potential conflict between the fact that OpenStack > > > Victoria officially supports Ubuntu Focal[3] and a high severity > > > regression in Nova [1] caused by a qemu bug [2] in the qemu version > > > that is shipped with Focal. > > > > > > The Nova team documented[4] it as a limitation without a known > > > workaround in Nova. Also as far as I know Canonical does not support > > > downgrading the qemu version so I don't see a viable workaround on > > > Focal side either. > > > > > > Is the Nova reno[4] enough from the TC perspective to go forward > > > with > > > the switch to Focal and with the Victoria release? > > > > > > cheers, > > > gibi > > > > > > [1] https://bugs.launchpad.net/nova/+bug/1882521 > > > [2] https://bugs.launchpad.net/qemu/+bug/1894804 > > > [3] > > > https://governance.openstack.org/tc/reference/runtimes/victoria.html > > > [4] https://review.opendev.org/#/c/752654 > > > > > > > > > > > > -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From sean.mcginnis at gmx.com Fri Sep 18 11:49:20 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 18 Sep 2020 06:49:20 -0500 Subject: [release] Release countdown for week R-3 Sept 21 - 25 Message-ID: <20200918114920.GA187376@sm-workstation> Development Focus ----------------- The Release Candidate (RC) deadline is next Thursday, September 24. Work should be focused on fixing any release-critical bugs. General Information ------------------- All deliverables released under a cycle-with-rc model should have a first release candidate by the end of the week, from which a stable/victoria branch will be cut. This branch will track the victoria release. Once stable/victoria has been created, master will be ready to switch to wallaby development. While master will no longer be feature-frozen, please prioritize any work necessary for completing victoria plans. Release-critical bugfixes will need to be merged in the master branch first, then backported to the stable/victoria branch before a new release candidate can be proposed. Actions ------- Early in the week, the release team will be proposing RC1 patches for all cycle-with-rc projects, using the latest commit from master. If your team is ready to go for cutting RC1, please let us know by leaving a +1 on these patches. If there are still a few more patches needed before RC1, you can -1 the patch and update it later in the week with the new commit hash you would like to use. 
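For anyone who needs to update one of the proposed RC1 patches, the piece that changes is the commit hash in the deliverable file; a made-up example of what that stanza looks like (project name, version and hash are placeholders only):

releases:
  - version: 5.0.0.0rc1
    projects:
      - repo: openstack/example-project
        hash: 0123456789abcdef0123456789abcdef01234567
branches:
  - name: stable/victoria
    location: 5.0.0.0rc1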
Remember, stable/victoria branches will be created with this, so you will want to make sure you have what you need included to avoid needing to backport changes from master (which will technically then be wallaby) to this stable branch for any additional RCs before the final release. The release team will also be proposing releases for any deliverable following a cycle-with-intermediary model that has not produced any victoria release so far. Finally, if you haven't submitted any yet, now is a good time to finalize release highlights. Release highlights help shape the messaging around the release and make sure that your work is properly represented. Upcoming Deadlines & Dates -------------------------- RC1 deadline: September 24 (R-3) Final Victoria release: October 14 Open Infra Summit: October 19-23 Wallaby PTG: October 26-30 From sean.mcginnis at gmx.com Fri Sep 18 11:53:09 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 18 Sep 2020 06:53:09 -0500 Subject: [all][elections][tc] Stepping down from the TC In-Reply-To: <951e58cd-d87a-4063-845b-db47198b4fda@www.fastmail.com> References: <951e58cd-d87a-4063-845b-db47198b4fda@www.fastmail.com> Message-ID: <20200918115309.GB187376@sm-workstation> On Fri, Sep 18, 2020 at 12:05:14PM +0200, Jean-Philippe Evrard wrote: > Hello everyone, > > This is probably not a surprise for most of you, but I think it's worth writing it down: I won't be a candidate for another term at the TC. It was a pleasure to work with all of you. > Thanks JP, for all you've done. From whayutin at redhat.com Fri Sep 18 11:55:12 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 18 Sep 2020 05:55:12 -0600 Subject: [tripleo][ci] collectd-mysql Message-ID: Greetings, More fun as usual. Error: Problem: cannot install the best candidate for the job - nothing provides libmysqlclient.so.21()(64bit) needed by collectd-mysql-5.11.0-2.el8.x86_64 - nothing provides libmysqlclient.so.21(libmysqlclient_21.0)(64bit) needed by collectd-mysql-5.11.0-2.el8.x86_64 (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages) https://bugs.launchpad.net/tripleo/+bug/1896178 https://review.rdoproject.org/r/#/c/29472/ 0/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrunge at matthias-runge.de Fri Sep 18 12:07:44 2020 From: mrunge at matthias-runge.de (Matthias Runge) Date: Fri, 18 Sep 2020 14:07:44 +0200 Subject: [tripleo][ci] collectd-mysql In-Reply-To: References: Message-ID: On 18/09/2020 13:55, Wesley Hayutin wrote: > Greetings, > > More fun as usual.  > > Error: > Problem: cannot install the best candidate for the job > - nothing provides libmysqlclient.so.21()(64bit) needed by collectd-mysql-5.11.0-2.el8.x86_64 > - nothing provides libmysqlclient.so.21(libmysqlclient_21.0)(64bit) needed by collectd-mysql-5.11.0-2.el8.x86_64 > (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages) > > > https://bugs.launchpad.net/tripleo/+bug/1896178 > > https://review.rdoproject.org/r/#/c/29472/ > > The build was done in April, and from CentOS Opstools nothing changed recently. 
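For anyone hitting the same dependency error, a quick way to see whether anything in the enabled repos still provides the missing soname, and to unblock installs in the meantime, is sketched below. The --nobest route comes straight from the hint in the dnf error output and is only a stopgap, not a fix.

# Which package/repo (if any) provides the missing MySQL client library?
dnf repoquery --whatprovides 'libmysqlclient.so.21()(64bit)'

# What does collectd-mysql itself require, and from which repo does it come?
dnf repoquery --requires collectd-mysql

# Temporary workaround suggested by the error text itself.
sudo dnf install --nobest collectd-mysql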
Meanwhile, I've proposed a patch to remove the package from the container: https://review.opendev.org/#/c/752621/ Matthias From noonedeadpunk at ya.ru Fri Sep 18 12:20:35 2020 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Fri, 18 Sep 2020 15:20:35 +0300 Subject: [all][elections][tc] Stepping down from the TC In-Reply-To: <951e58cd-d87a-4063-845b-db47198b4fda@www.fastmail.com> References: <951e58cd-d87a-4063-845b-db47198b4fda@www.fastmail.com> Message-ID: <65041600430808@mail.yandex.ru> An HTML attachment was scrubbed... URL: From kklimonda at syntaxhighlighted.com Fri Sep 18 12:22:28 2020 From: kklimonda at syntaxhighlighted.com (Krzysztof Klimonda) Date: Fri, 18 Sep 2020 14:22:28 +0200 Subject: =?UTF-8?Q?Re:_[neutron][ovn]_Logical_flow_scaling_(flow_explosion_in_lr=5F?= =?UTF-8?Q?in=5Farp=5Fresolve)?= In-Reply-To: References: <1b20cc56-3c3d-4ca9-80f6-5e0e7a8f2983@www.fastmail.com> <6a8fca0d-65ee-4c5f-8170-b533913b872a@www.fastmail.com> Message-ID: <41e3f89a-3082-4191-a4d6-999c9e2a1efa@www.fastmail.com> Hi Daniel, Sure, I've opened https://review.opendev.org/#/c/752678/ so we can move the discussion there - I'll add tests next week. -- Krzysztof Klimonda kklimonda at syntaxhighlighted.com On Fri, Sep 18, 2020, at 11:17, Daniel Alvarez Sanchez wrote: > Hey folks, thanks for bringing this up! > > On Fri, Sep 18, 2020 at 10:39 AM Krzysztof Klimonda wrote: >> So just for testing I've applied this patch to our neutron-server: >> >> --8<--8<--8<-- >> diff --git a/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py b/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py >> index 23a841d7a1..41200786f1 100644 >> --- a/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py >> +++ b/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py >> @@ -1141,11 +1141,15 @@ class OVNClient(object): >> enabled = router.get('admin_state_up') >> lrouter_name = utils.ovn_name(router['id']) >> added_gw_port = None >> + options = { >> + "always_learn_from_arp_request": "false", >> + "dynamic_neigh_routers": "true" >> + } >> with self._nb_idl.transaction(check_error=True) as txn: >> txn.add(self._nb_idl.create_lrouter(lrouter_name, >> external_ids=external_ids, >> enabled=enabled, >> - options={})) >> + options=options)) >> # TODO(lucasagomes): add_external_gateway is being only used >> # by the ovn_db_sync.py script, remove it after the database >> # synchronization work >> --8<--8<--8<-- >> > > Do you want to propose a formal patch with this change and some test? I think we may want this in. > The question is if we want to make it configurable and, if so, how to expose it to an admin... > > When enabling this, we're going to have ARP requests between the logical routers and, depending on the topology, this can create other scaling issues due to MAC_Binding entries in the Southbound database. Also, these entries do not age out [0] so we might end up with a very large amount of rows here. > > This is probably an unlikely pattern in OpenStack where usually projects connect their routers to a localnet Logical Switch but routers across projects are not interconnected. Hence, the amount of E/W traffic is not a full mesh and, probably, even in the worst case it's going to be still better than what we have right now in terms of scaling. > > On the ARP traffic side, it should not be a big deal, they'll be sent between routers when needed and, since the entries do not age out, they're not going to be sent again. > > Again, thanks for bringing this up! 
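Since MAC_Binding growth is the main scaling trade-off mentioned above when setting always_learn_from_arp_request=false and dynamic_neigh_routers=true, a rough way to keep an eye on it is to count the rows in the southbound database (this only counts, nothing is aged out) and to re-check the flow breakdown afterwards, as done earlier in this thread:

# Count MAC_Binding rows in the OVN southbound database.
ovn-sbctl list MAC_Binding | grep -c '^_uuid'

# Re-run the logical flow breakdown after toggling the router options.
ovn-sbctl --timeout=120 lflow-list > lflows.txt
grep -v Datapath lflows.txt | cut -d'(' -f2 | cut -d')' -f1 | sort | uniq -c | sort -n | tail -10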
I'd +1 this change :) > > [0] https://www.mail-archive.com/ovs-discuss at openvswitch.org/msg05917.html > > >> and also executed that for each logical router in OVN: >> >> # ovn-nbctl set Logical_Router $router options=dynamic_neigh_routers=true,always_learn_from_arp_request=false >> >> This had a huge impact on both a number of logical flows and a number of ovs flows on chassis nodes: >> >> --8<--8<--8<-- >> # cat lflows-new.txt |grep -v Datapath |cut -d'(' -f 2 | cut -d ')' -f1 |sort | uniq -c |sort -n | tail -10 >> 2170 ls_out_port_sec_l2 >> 2172 lr_in_learn_neighbor >> 2666 lr_in_admission >> 2690 ls_in_port_sec_l2 >> 3190 lr_in_ip_routing >> 4276 lr_in_lookup_neighbor >> 4873 lr_in_arp_resolve >> 5864 ls_in_arp_rsp >> 5873 ls_in_l2_lkup >> 14343 lr_in_ip_input >> # ovn-sbctl --timeout=120 lflow-list > lflows-new.txt >> --8<--8<--8<-- >> >> (and this is even more routers than before - 500 vs 400). I'll have to read what impact do those options have on ARP activity though. >> >> -- >> Krzysztof Klimonda >> kklimonda at syntaxhighlighted.com >> >> On Thu, Sep 17, 2020, at 21:14, Krzysztof Klimonda wrote: >> > Hi Tony, >> > >> > Indeed I forgot to mention that all routers are using the same external >> > network (and subnet) for the external gateway. >> > >> > Creating separate external networks per router wouldn't really work for >> > us, and I'm not even quite sure what the setup would look like in that >> > case. >> > >> > -- >> > Krzysztof Klimonda >> > kklimonda at syntaxhighlighted.com >> > >> > On Thu, Sep 17, 2020, at 20:31, Tony Liu wrote: >> > > I am trying to reach 5000. The problem I hit is that northd is >> > > stuck in translating from NB to SB when connect router to external >> > > network. >> > > >> > > I assume all your 400 routers connect to the same subnet in that >> > > external network. I am trying another approach where one subnet >> > > is created for each router in external network. That may help to >> > > reduce the ARP flow? >> > > >> > > Thanks! >> > > Tony >> > > > -----Original Message----- >> > > > From: Krzysztof Klimonda >> > > > Sent: Thursday, September 17, 2020 8:57 AM >> > > > To: openstack-discuss at lists.openstack.org >> > > > Subject: [neutron][ovn] Logical flow scaling (flow explosion in >> > > > lr_in_arp_resolve) >> > > > >> > > > Hi, >> > > > >> > > > We're running some tests of ussuri deployment with ovn ML2 driver and >> > > > seeing some worrying numbers of logical flows generated for our test >> > > > deployment. >> > > > >> > > > As a test, we create 400 routes, 400 private networks and connect each >> > > > network to its own routers. We also connect each router to an external >> > > > network. 
After doing that a dump of logical flows shows almost 800k >> > > > logical flows, most of them in lr_in_arp_resolve table: >> > > > >> > > > --8<--8<--8<-- >> > > > # cat lflows.txt |grep -v Datapath |cut -d'(' -f 2 | cut -d ')' -f1 >> > > > |sort | uniq -c |sort -n | tail -10 >> > > > 3264 lr_in_learn_neighbor >> > > > 3386 ls_out_port_sec_l2 >> > > > 4112 lr_in_admission >> > > > 4202 ls_in_port_sec_l2 >> > > > 4898 lr_in_lookup_neighbor >> > > > 4900 lr_in_ip_routing >> > > > 9144 ls_in_l2_lkup >> > > > 9160 ls_in_arp_rsp >> > > > 22136 lr_in_ip_input >> > > > 671656 lr_in_arp_resolve >> > > > # >> > > > --8<--8<--8<-- >> > > > >> > > > ovn: 20.06.2 + patch for SNAT IP ARP reply issue >> > > > openvswitch: 2.13.0 >> > > > neutron: 16.1.0 >> > > > >> > > > I've seen some discussion about similar issue at OVS mailing lists: >> > > > https://www.mail-archive.com/ovs-discuss at openvswitch.org/msg07014.html - >> > > > is this relevant to neutron, and not just kubernetes? >> > > > >> > > > -- >> > > > Krzysztof Klimonda >> > > > kklimonda at syntaxhighlighted.com >> > > >> > > >> > >> > >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From dirk at dmllr.de Fri Sep 18 12:37:11 2020 From: dirk at dmllr.de (=?UTF-8?B?RGlyayBNw7xsbGVy?=) Date: Fri, 18 Sep 2020 14:37:11 +0200 Subject: =?UTF-8?Q?Re=3A_=5Brpm=2Dpackaging=5D_Proposing_Herv=C3=A9_Beraud_as_new_c?= =?UTF-8?Q?ore_reviewer?= In-Reply-To: <545442464.50802244.1600423155901.JavaMail.zimbra@redhat.com> References: <1110488948.50802218.1600423119788.JavaMail.zimbra@redhat.com> <545442464.50802244.1600423155901.JavaMail.zimbra@redhat.com> Message-ID: Hi Javier, > Hervé has been providing consistently good reviews over the last few months, and I think he would be a great addition to the core reviewer team. +1 happy to have him on board! Greetings, Dirk From nate.johnston at redhat.com Fri Sep 18 12:49:51 2020 From: nate.johnston at redhat.com (Nate Johnston) Date: Fri, 18 Sep 2020 08:49:51 -0400 Subject: [all][elections][tc] Stepping down from the TC In-Reply-To: <951e58cd-d87a-4063-845b-db47198b4fda@www.fastmail.com> References: <951e58cd-d87a-4063-845b-db47198b4fda@www.fastmail.com> Message-ID: <20200918124951.4y32eqpsbxnpsuai@firewall> JP, It has been a real pleasure to get to know and work with you this past year. Thank you for your many invaluable contributions to the community! Nate On Fri, Sep 18, 2020 at 12:05:14PM +0200, Jean-Philippe Evrard wrote: > Hello everyone, > > This is probably not a surprise for most of you, but I think it's worth writing it down: I won't be a candidate for another term at the TC. It was a pleasure to work with all of you. > > I am not leaving because I don't find the TC interesting anymore, quite the opposite. I will probably still lurk and follow the channels and ML. I just have switched to (yet) another duty at my employer, keeping me away from OpenStack. Next to this, I believe it's good to have some fresh members in the TC, it's been a while I am part of this family now :) > > For those interested by running the TC: Don't hesitate to run! We need fresh ideas and motivated people. It's not by having always the same people at the helm that OpenStack will naturally or drastically evolve. If you want to change OpenStack, be the change! > > Thanks to all of you. It was nice. 
> > Regards, > Jean-Philippe Evrard (evrardjp) > From dirk at dmllr.de Fri Sep 18 12:52:20 2020 From: dirk at dmllr.de (=?UTF-8?B?RGlyayBNw7xsbGVy?=) Date: Fri, 18 Sep 2020 14:52:20 +0200 Subject: [rpm-packaging][packaging][sigs] Moving to a packaging SIG In-Reply-To: <341a7032-22a9-4011-996b-9f829d23023d@www.fastmail.com> References: <341a7032-22a9-4011-996b-9f829d23023d@www.fastmail.com> Message-ID: Hey JP, > Last time we tried this, we were stopped by bureaucracy and technical details (IIRC?) which shouldn't be there anymore. > I don't see a reason why this team couldn't move to a SIG now, which would be a lightweight governance model for you. > You would keep your repos, and can still propose releases (if I am not mistaken). Things shouldn't change much. You can decide to change/add chair people when you want, and don't have to sign off a PTL for 6 months at elections, which was the most concerning matter for your projects IIRC (which lead to missed elections deadlines in the past). This sounds good to me. you said "shouldn't change much". can you clarify if you know what actually changes other than not being in PTL-only activities? what about logistics like irc channel naming etc? > I hope that this SIG can evolve in the future, to group not only RPM packaging, but also other packaging mechanism that want to follow under that new banner. That's why I proposed the SIG to be named "Packaging" instead. It's up to the community now to gather around the same banner :) We always collaborated with Debian-style packaging folks as far as it seemed fit, however the real collaboration overlap has always been very very small. if we have contributors with debian-style packaging interested in joining, then yes, we should use the more generic term. Greetings, Dirk From thierry at openstack.org Fri Sep 18 13:19:47 2020 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 18 Sep 2020 15:19:47 +0200 Subject: [rpm-packaging][packaging][sigs] Moving to a packaging SIG In-Reply-To: <341a7032-22a9-4011-996b-9f829d23023d@www.fastmail.com> References: <341a7032-22a9-4011-996b-9f829d23023d@www.fastmail.com> Message-ID: <4761402a-c67f-54cc-547e-c3d9e822f5a9@openstack.org> Jean-Philippe Evrard wrote: > [...] You would keep your repos, and can still propose releases (if I am not mistaken). Clarification: you can still do releases (by pushing tags, as described in [1]), but the release management team is no longer responsible for them (so you're not using openstack/releases to propose them). The choice is either: - you go with project team, deliverables considered part of the OpenStack release, you need release liaisons and accountability (PTL), and the release management team has oversight over releases or: - you go with SIG, your deliverables are considered separate from the openstack release (but still count towards "an openstack contribution" as far as governance is concerned), and you organize yourself however you want. -- Thierry Carrez (ttx) From jpena at redhat.com Fri Sep 18 13:19:37 2020 From: jpena at redhat.com (Javier Pena) Date: Fri, 18 Sep 2020 09:19:37 -0400 (EDT) Subject: [rpm-packaging][packaging][sigs] Moving to a packaging SIG In-Reply-To: References: <341a7032-22a9-4011-996b-9f829d23023d@www.fastmail.com> Message-ID: <2054206310.50827835.1600435177215.JavaMail.zimbra@redhat.com> Hi, > Hey JP, > > > Last time we tried this, we were stopped by bureaucracy and technical > > details (IIRC?) which shouldn't be there anymore. 
> > I don't see a reason why this team couldn't move to a SIG now, which would > > be a lightweight governance model for you. > > You would keep your repos, and can still propose releases (if I am not > > mistaken). Things shouldn't change much. You can decide to change/add > > chair people when you want, and don't have to sign off a PTL for 6 months > > at elections, which was the most concerning matter for your projects IIRC > > (which lead to missed elections deadlines in the past). > > This sounds good to me. you said "shouldn't change much". can you > clarify if you know what actually changes other than not being in > PTL-only activities? what about logistics like irc channel naming etc? > I have to admit I was not very familiar with the differences between a project team and a SIG. After reading [1] and [2], I think this could work for us as a team. A SIG can own repositories, have its own IRC channel, and the accountability is reduced, since their output is not officially considered "part of the OpenStack release". Regards, Javier [1] - https://governance.openstack.org/tc/reference/comparison-of-official-group-structures.html [2] - https://governance.openstack.org/sigs/reference/sig-guideline.html > > I hope that this SIG can evolve in the future, to group not only RPM > > packaging, but also other packaging mechanism that want to follow under > > that new banner. That's why I proposed the SIG to be named "Packaging" > > instead. It's up to the community now to gather around the same banner :) > > We always collaborated with Debian-style packaging folks as far as it > seemed fit, however the real collaboration overlap has always been > very very small. if we have contributors with debian-style packaging > interested in joining, then yes, we should use the more generic term. > > > Greetings, > Dirk > > From jean-philippe at evrard.me Fri Sep 18 13:25:14 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Fri, 18 Sep 2020 15:25:14 +0200 Subject: [rpm-packaging][packaging][sigs] Moving to a packaging SIG In-Reply-To: <2054206310.50827835.1600435177215.JavaMail.zimbra@redhat.com> References: <341a7032-22a9-4011-996b-9f829d23023d@www.fastmail.com> <2054206310.50827835.1600435177215.JavaMail.zimbra@redhat.com> Message-ID: <17b36fe9-32d3-4cc6-8b1d-a70181402349@www.fastmail.com> On Fri, Sep 18, 2020, at 15:19, Javier Pena wrote: > I have to admit I was not very familiar with the differences between a > project team and a SIG. After reading [1] and [2], > I think this could work for us as a team. A SIG can own repositories, > have its own IRC channel, and the > accountability is reduced, since their output is not officially > considered "part of the OpenStack release". Hey, As you wouldn't be "part of the OpenStack release", it will change how you tag, and use the release tooling (you won't anymore). I had the impression that some freedom (and less accountability) was asked in the past (IIRC, Thomas proposed to tag by commit). That could achieve it. 
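To make the tagging difference concrete: under the SIG model discussed above, a release is cut by pushing a signed tag straight to Gerrit, which then triggers the usual Zuul release jobs with no openstack/releases review step. This assumes the team's ACLs allow pushing signed tags and that "gerrit" is the remote name git-review set up; the commit and version below are placeholders.

# Tag the commit to release and push the tag directly.
git checkout <commit-to-release>   # placeholder revision
git tag -s 1.2.3 -m "rpm-packaging 1.2.3"
git push gerrit 1.2.3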
Regards, JP From jean-philippe at evrard.me Fri Sep 18 13:26:35 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Fri, 18 Sep 2020 15:26:35 +0200 Subject: [rpm-packaging][packaging][sigs] Moving to a packaging SIG In-Reply-To: <4761402a-c67f-54cc-547e-c3d9e822f5a9@openstack.org> References: <341a7032-22a9-4011-996b-9f829d23023d@www.fastmail.com> <4761402a-c67f-54cc-547e-c3d9e822f5a9@openstack.org> Message-ID: <643e1ce8-3959-4dca-bcff-0403d1289aab@www.fastmail.com> On Fri, Sep 18, 2020, at 15:19, Thierry Carrez wrote: > The choice is either: > - you go with project team, deliverables considered part of the > OpenStack release, you need release liaisons and accountability (PTL), > and the release management team has oversight over releases > or: > - you go with SIG, your deliverables are considered separate from the > openstack release (but still count towards "an openstack contribution" > as far as governance is concerned), and you organize yourself however > you want. > Thanks for the clarification (and confirmation), Thierry! Regards, JP From thierry at openstack.org Fri Sep 18 13:36:47 2020 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 18 Sep 2020 15:36:47 +0200 Subject: [rpm-packaging][packaging][sigs] Moving to a packaging SIG In-Reply-To: <17b36fe9-32d3-4cc6-8b1d-a70181402349@www.fastmail.com> References: <341a7032-22a9-4011-996b-9f829d23023d@www.fastmail.com> <2054206310.50827835.1600435177215.JavaMail.zimbra@redhat.com> <17b36fe9-32d3-4cc6-8b1d-a70181402349@www.fastmail.com> Message-ID: Jean-Philippe Evrard wrote: > As you wouldn't be "part of the OpenStack release", it will change how you tag, and use the release tooling (you won't anymore). A precision re: release tooling -- you would still very much use the Zuul release jobs and all... You just would not go through openstack/releases and the release management team approval to actually push the tag that triggers those jobs. Hope this clarifies :) -- Thierry From juliaashleykreger at gmail.com Fri Sep 18 13:47:45 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Fri, 18 Sep 2020 06:47:45 -0700 Subject: Should ports created by ironic have PXE parameters after deployment? In-Reply-To: <171abb8c-3c3f-441b-b857-945e9001471b@Spark> References: <1bd16422-05f0-42bf-b73d-f08b5bb9d5ec@Spark> <63c115dd-4521-4563-af5d-841d419a8974@Spark> <4a748376-e7e3-4e28-b70f-99f0fb6dfb7a@Spark> <171abb8c-3c3f-441b-b857-945e9001471b@Spark> Message-ID: Well... That is new! (The different port behavior) I wonder if the port data in question was already on the port from a prior introspection, or maybe a service got unexpected in a really unexpected way. I guess without figuring out the exact state things are in nor being able to reproduce it, it is going to be difficult to pin this down. Re: The dashboard. I remember it locks some fields/options on unstable states. Is that the error you're seeing? -Julia On Wed, Sep 16, 2020 at 5:16 PM Tyler Bishop wrote: > > Thanks Julia > > While investigating this further I found that if I delete the host then re-discovery it the issue goes away. So something I’ve done in my deployment has broken this on the other hosts. I need to open up the database and start digging around the tables to figure out if there is any differences in the two enrolled host. > > Yes I use kolla-ansible which natively deploys the ironic dashboard. 
It seems to work pretty good but its very noisy with errors during host state change constantly throwing errors on the page even though things are progressing as normal. > > On Sep 16, 2020, 6:38 PM -0400, Julia Kreger , wrote: > > (Resending after stripping the image out for the mailing list, hopefully!) > > Well, I <3 that your using the ironic horizon plugin! > > Can you confirm the contents of the flavors you are using for > scheduling, specifically the capabilities field. > > Are you using whole disk images, or are you using partition images? > > On Wed, Sep 16, 2020 at 2:13 PM Tyler Bishop wrote: > > > Hi Julia, > > All of these are latest stable train using Kolla-ansible. > > I am using local disk booting for all deployed instances and we utilize neutron networking with plans. > > Attached screenshot of driver config. > > On Sep 16, 2020, 2:53 PM -0400, Julia Kreger , wrote: > > I guess we need to understand if your machines are set to network boot > by default in ironic's configuration? If it is set to the flat > network_interface and the instances are configured for network > booting? If so, I'd expect this to happen for a deployed instance. > > Out of curiosity, is this master branch code? Ussuri? Are the other > environments the same? > > -Julia > > On Wed, Sep 16, 2020 at 11:33 AM Tyler Bishop wrote: > > > Normally yes but I am having the PXE added to NON provision ports as well. > > I tore down the dnsmasq and inspector containers, rediscovered the hosts and it hasn’t came back.. but that still doesn’t answer how that could happen. > On Sep 16, 2020, 3:53 AM -0400, Mark Goddard , wrote: > > On Tue, 15 Sep 2020 at 20:13, Tyler Bishop wrote: > > > Hi, > > My issue is i have a neutron network (not discovery or cleaning) that is adding the PXE entries for the ironic pxe server and my baremetal host are rebooting into discovery upon successful deployment. > > I am curious how the driver implementation works for adding the PXE options to neutron-dhcp-agent configuration and if that is being done to help non flat networks where no SDN is being used? I have several environments using Kolla-Ansible and this one seems to be the only behaving like this. My neutron-dhcp-agent dnsmasq opt file looks like this after a host is deployed. > > dhcp/7d0b7e78-6506-4f4a-b524-d5c03e4ca4a8/opts cat /var/lib/neutron/dhcp/ffdf5f9b-b4ad-4a53-b154-69eb3b4a81c5/opts > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,option:dns-server,10.60.3.240,10.60.10.240,10.60.1.240 > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,option:classless-static-route,169.254.169.254/32,10.60.66.2,0.0.0.0/0,10.60.66.1 > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,249,169.254.169.254/32,10.60.66.2,0.0.0.0/0,10.60.66.1 > tag:subnet-57b772c1-7878-4458-8c60-cf21eac99ac2,option:router,10.60.66.1 > tag:port-08908db1-360b-4973-87c7-15049a484ac6,150,10.60.66.11 > tag:port-08908db1-360b-4973-87c7-15049a484ac6,210,/tftpboot/ > tag:port-08908db1-360b-4973-87c7-15049a484ac6,66,10.60.66.11 > tag:port-08908db1-360b-4973-87c7-15049a484ac6,67,pxelinux.0 > tag:port-08908db1-360b-4973-87c7-15049a484ac6,option:server-ip-address,10.60.66.11 > > > Hi Tyler, Ironic adds DHCP options to the neutron port on the > provisioning network. Specifically, the boot interface in ironic is > responsible for adding DHCP options. See the PXEBaseMixin class. 
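As a quick way to check this on a live system rather than digging through the database, the node's boot interface and the DHCP options ironic attached to a particular neutron port can be inspected as follows; the node and port IDs are placeholders.

# Which boot interface the node uses (pxe, ipxe, ...). This is what decides
# whether PXE DHCP options get attached to the provisioning port.
openstack baremetal node show <node> -f value -c boot_interface

# DHCP options currently set on a given neutron port; on a tenant port of a
# deployed instance this would normally be empty.
openstack port show <port-uuid> -c extra_dhcp_opts -f yaml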
From jean-philippe at evrard.me Fri Sep 18 13:52:22 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Fri, 18 Sep 2020 15:52:22 +0200 Subject: [rpm-packaging][packaging][sigs] Moving to a packaging SIG In-Reply-To: References: <341a7032-22a9-4011-996b-9f829d23023d@www.fastmail.com> <2054206310.50827835.1600435177215.JavaMail.zimbra@redhat.com> <17b36fe9-32d3-4cc6-8b1d-a70181402349@www.fastmail.com> Message-ID: <56d09804-6786-4b9c-aef4-5a33a044ea40@www.fastmail.com> On Fri, Sep 18, 2020, at 15:36, Thierry Carrez wrote: > A precision re: release tooling -- you would still very much use the > Zuul release jobs and all... You just would not go through > openstack/releases and the release management team approval to actually > push the tag that triggers those jobs. > Yup! That's what I meant. Do you think I should write something in governance to clarify this further? Regards, JP From tpb at dyncloud.net Fri Sep 18 14:22:56 2020 From: tpb at dyncloud.net (Tom Barron) Date: Fri, 18 Sep 2020 10:22:56 -0400 Subject: [all][elections][tc] Stepping down from the TC In-Reply-To: <951e58cd-d87a-4063-845b-db47198b4fda@www.fastmail.com> References: <951e58cd-d87a-4063-845b-db47198b4fda@www.fastmail.com> Message-ID: <20200918142256.5m7wyi2runkv3ysl@barron.net> On 18/09/20 12:05 +0200, Jean-Philippe Evrard wrote: >Hello everyone, > >This is probably not a surprise for most of you, but I think it's worth writing it down: I won't be a candidate for another term at the TC. It was a pleasure to work with all of you. > >I am not leaving because I don't find the TC interesting anymore, quite the opposite. I will probably still lurk and follow the channels and ML. I just have switched to (yet) another duty at my employer, keeping me away from OpenStack. Next to this, I believe it's good to have some fresh members in the TC, it's been a while I am part of this family now :) > >For those interested by running the TC: Don't hesitate to run! We need fresh ideas and motivated people. It's not by having always the same people at the helm that OpenStack will naturally or drastically evolve. If you want to change OpenStack, be the change! > >Thanks to all of you. It was nice. > >Regards, >Jean-Philippe Evrard (evrardjp) > Thanks for all your work and good judgment! From mdulko at redhat.com Fri Sep 18 14:54:36 2020 From: mdulko at redhat.com (=?UTF-8?Q?Micha=C5=82?= Dulko) Date: Fri, 18 Sep 2020 16:54:36 +0200 Subject: [all][qa][ci][kuryr] Default swap size of gate VMs is 1 GB now Message-ID: <636190de80d383f17c891b9165e3f55456eeb2be.camel@redhat.com> Hi, Last Friday Kuryr team noticed elevated rate of failures of the kuryr- kubernetes gate jobs. Later on we've tracked it down to oom-killer slaying our Amphora instances. We couldn't find a reason why that only started to happen on Friday, until now. So commit [1] reduced default swap size to 1 GB. If your jobs need more memory - you can try overwriting that in your job configs like we do in [2]. I'm posting this to save people from a week of debugging as it wasn't a fun activity. 
;) Thanks, Michał [1] https://opendev.org/openstack/openstack-zuul-jobs/commit/45f555fdf036de786b5988213b458b3b12dcef74 [2] https://review.opendev.org/#/c/752233/4/.zuul.d/base.yaml at 35 From victoria at vmartinezdelacruz.com Fri Sep 18 15:10:50 2020 From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=) Date: Fri, 18 Sep 2020 12:10:50 -0300 Subject: [manila] Bug squash event coming up next Monday (September 21st) Message-ID: Hi all, After the success of our doc-a-thon last month (details to come soon in the original thread) we decided to repeat the experience, this time, with code bugs. We are organizing a bugs squash next Monday, September 21st, with the main goal of closing several bugs in preparation for the RC (that we are cutting next week). We will gather on our Freenode channel #openstack-manila at 3pm UTC and we will use this Jitsi bridge [0] to go over a curated list (thanks Vida!) of opened bugs we have here [1] *Your* participation is truly valued, being you an already Manila contributor or if you are interested in contributing and you didn't know how, so looking forward to seeing you there :) Cheers, Victoria [0] https://meetpad.opendev.org/ManilaV-ReleaseBugSquashRC [1] http://ethercalc.openstack.org/26utlmzuk6cc -------------- next part -------------- An HTML attachment was scrubbed... URL: From nate.johnston at redhat.com Fri Sep 18 15:29:33 2020 From: nate.johnston at redhat.com (Nate Johnston) Date: Fri, 18 Sep 2020 11:29:33 -0400 Subject: [all][elections][tc] Stepping down from the TC Message-ID: <20200918152933.oz2vfzupbbzvqn7k@firewall> All, I have decided not to run for reelection to the TC. Although I remain a part of the OpenStack community my responsibilities have shifted away from engineering, and I strongly believe that the TC should be composed of people who are actively creating or operating OpenStack on a daily basis. I am deeply appreciative for all of the support I have recieved from current and former TC members. I also want to take this chance to tell my story a bit, in hopes that it will encourage others to participate more with the TC. A year ago when I joined the TC I did not have a clear idea what to expect. I had observed a few TC meetings and brought one issue to the TC's attention, but since I did not have background on the workstreams in progress, there was a lot that I did not understand or could not contextualize. So what I did was observe, gathering an understanding of the issues and initiatives and raising my hand to participate when I felt like my efforts could make a difference. I was pleasantly surprised how many times I was able to raise my hand and work on things like community goals or proposals like distributed project leadership. The fact that I have not been around since the beginning - my first significant code contributions were merged in the Mitaka cycle - and I did not already know all the names and histories did not matter much. What mattered was a willingness to actively engage, to participate in thoughtful discernment, and when the opportunity presented itself to put in the work. I feel like I made a difference. And if you don't feel the calling to join the TC, that is fine too. Be a part of the process - join the meetings, discuss the issues that cut across projects, and have your voice heard. If you are a part of creting or using OpenStack then you are a part of the TC's constituency and the meetings are to serve you. You don't have to be a member of the TC to participate in the process. 
Thanks so much, Nate Johnston From gmann at ghanshyammann.com Fri Sep 18 15:34:32 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 18 Sep 2020 10:34:32 -0500 Subject: [TC][nova] Failing device detachments on Focal In-Reply-To: References: Message-ID: <174a1da4d9d.d579ee5229415.466486132263212923@ghanshyammann.com> ---- On Fri, 18 Sep 2020 05:08:22 -0500 Balázs_Gibizer_ wrote ---- > Hi, > > I want to raise a potential conflict between the fact that OpenStack > Victoria officially supports Ubuntu Focal[3] and a high severity > regression in Nova [1] caused by a qemu bug [2] in the qemu version > that is shipped with Focal. > > The Nova team documented[4] it as a limitation without a known > workaround in Nova. Also as far as I know Canonical does not support > downgrading the qemu version so I don't see a viable workaround on > Focal side either. > > Is the Nova reno[4] enough from the TC perspective to go forward with > the switch to Focal and with the Victoria release? IMO, that is what we can do in this situation. Also, we are going to keep running one job on Bionic so that we do cover the volume detach feature test coverage somewhere. And once qemu bug is fixed we can remove this limitation in W cycle. -gmann > > cheers, > gibi > > [1] https://bugs.launchpad.net/nova/+bug/1882521 > [2] https://bugs.launchpad.net/qemu/+bug/1894804 > [3] https://governance.openstack.org/tc/reference/runtimes/victoria.html > [4] https://review.opendev.org/#/c/752654 > > > > From gmann at ghanshyammann.com Fri Sep 18 15:36:47 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 18 Sep 2020 10:36:47 -0500 Subject: [all][elections][tc] Stepping down from the TC In-Reply-To: <951e58cd-d87a-4063-845b-db47198b4fda@www.fastmail.com> References: <951e58cd-d87a-4063-845b-db47198b4fda@www.fastmail.com> Message-ID: <174a1dc5c01.e18640e629517.5188079261906470281@ghanshyammann.com> ---- On Fri, 18 Sep 2020 05:05:14 -0500 Jean-Philippe Evrard wrote ---- > Hello everyone, > > This is probably not a surprise for most of you, but I think it's worth writing it down: I won't be a candidate for another term at the TC. It was a pleasure to work with all of you. > > I am not leaving because I don't find the TC interesting anymore, quite the opposite. I will probably still lurk and follow the channels and ML. I just have switched to (yet) another duty at my employer, keeping me away from OpenStack. Next to this, I believe it's good to have some fresh members in the TC, it's been a while I am part of this family now :) > > For those interested by running the TC: Don't hesitate to run! We need fresh ideas and motivated people. It's not by having always the same people at the helm that OpenStack will naturally or drastically evolve. If you want to change OpenStack, be the change! > > Thanks to all of you. It was nice. Thanks JP for all your excellent contribution to TC. It was a pleasure working with you there and or I should say continue working :). -gmann > > Regards, > Jean-Philippe Evrard (evrardjp) > > From jungleboyj at gmail.com Fri Sep 18 15:40:28 2020 From: jungleboyj at gmail.com (Jay Bryant) Date: Fri, 18 Sep 2020 10:40:28 -0500 Subject: [all][elections][tc] Stepping down from the TC In-Reply-To: <20200918152933.oz2vfzupbbzvqn7k@firewall> References: <20200918152933.oz2vfzupbbzvqn7k@firewall> Message-ID: <70c7385d-be17-eba2-c8a2-49dec7efa296@gmail.com> On 9/18/2020 10:29 AM, Nate Johnston wrote: > All, > > I have decided not to run for reelection to the TC. 
Although I remain a part of > the OpenStack community my responsibilities have shifted away from engineering, > and I strongly believe that the TC should be composed of people who are actively > creating or operating OpenStack on a daily basis. I am deeply appreciative for > all of the support I have recieved from current and former TC members. > > I also want to take this chance to tell my story a bit, in hopes that it will > encourage others to participate more with the TC. A year ago when I joined the > TC I did not have a clear idea what to expect. I had observed a few TC meetings > and brought one issue to the TC's attention, but since I did not have background > on the workstreams in progress, there was a lot that I did not understand or > could not contextualize. So what I did was observe, gathering an understanding > of the issues and initiatives and raising my hand to participate when I felt > like my efforts could make a difference. I was pleasantly surprised how many > times I was able to raise my hand and work on things like community goals or > proposals like distributed project leadership. The fact that I have not been > around since the beginning - my first significant code contributions were merged > in the Mitaka cycle - and I did not already know all the names and histories did > not matter much. What mattered was a willingness to actively engage, to > participate in thoughtful discernment, and when the opportunity presented itself > to put in the work. I feel like I made a difference. > > And if you don't feel the calling to join the TC, that is fine too. Be a part > of the process - join the meetings, discuss the issues that cut across projects, > and have your voice heard. If you are a part of creting or using OpenStack then > you are a part of the TC's constituency and the meetings are to serve you. You > don't have to be a member of the TC to participate in the process. > > Thanks so much, > > Nate Johnston > Nate, Good story!  It has been good working with you the last year! Hope to continue working with you through the community. Jay From kennelson11 at gmail.com Fri Sep 18 16:26:10 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 18 Sep 2020 09:26:10 -0700 Subject: [all][elections][tc] Stepping down from the TC In-Reply-To: <951e58cd-d87a-4063-845b-db47198b4fda@www.fastmail.com> References: <951e58cd-d87a-4063-845b-db47198b4fda@www.fastmail.com> Message-ID: Don't be a stranger! Please keep lurking and offering your opinions. They are definitely appreciated :) -Kendall (diablo_rojo) On Fri, Sep 18, 2020 at 3:06 AM Jean-Philippe Evrard < jean-philippe at evrard.me> wrote: > Hello everyone, > > This is probably not a surprise for most of you, but I think it's worth > writing it down: I won't be a candidate for another term at the TC. It was > a pleasure to work with all of you. > > I am not leaving because I don't find the TC interesting anymore, quite > the opposite. I will probably still lurk and follow the channels and ML. I > just have switched to (yet) another duty at my employer, keeping me away > from OpenStack. Next to this, I believe it's good to have some fresh > members in the TC, it's been a while I am part of this family now :) > > For those interested by running the TC: Don't hesitate to run! We need > fresh ideas and motivated people. It's not by having always the same people > at the helm that OpenStack will naturally or drastically evolve. If you > want to change OpenStack, be the change! > > Thanks to all of you. It was nice. 
> > Regards, > Jean-Philippe Evrard (evrardjp) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Fri Sep 18 16:30:12 2020 From: amy at demarco.com (Amy Marrich) Date: Fri, 18 Sep 2020 11:30:12 -0500 Subject: [all][elections][tc] Stepping down from the TC In-Reply-To: <951e58cd-d87a-4063-845b-db47198b4fda@www.fastmail.com> References: <951e58cd-d87a-4063-845b-db47198b4fda@www.fastmail.com> Message-ID: JP, Thanks for everything you've done over the years for OpenStack and for me. Hopefully we'll see each other in the future, I'll miss hanging with you. Amy (spotz) On Fri, Sep 18, 2020 at 5:07 AM Jean-Philippe Evrard < jean-philippe at evrard.me> wrote: > Hello everyone, > > This is probably not a surprise for most of you, but I think it's worth > writing it down: I won't be a candidate for another term at the TC. It was > a pleasure to work with all of you. > > I am not leaving because I don't find the TC interesting anymore, quite > the opposite. I will probably still lurk and follow the channels and ML. I > just have switched to (yet) another duty at my employer, keeping me away > from OpenStack. Next to this, I believe it's good to have some fresh > members in the TC, it's been a while I am part of this family now :) > > For those interested by running the TC: Don't hesitate to run! We need > fresh ideas and motivated people. It's not by having always the same people > at the helm that OpenStack will naturally or drastically evolve. If you > want to change OpenStack, be the change! > > Thanks to all of you. It was nice. > > Regards, > Jean-Philippe Evrard (evrardjp) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Fri Sep 18 16:42:48 2020 From: amy at demarco.com (Amy Marrich) Date: Fri, 18 Sep 2020 11:42:48 -0500 Subject: [all][elections][tc] Stepping down from the TC In-Reply-To: <20200918152933.oz2vfzupbbzvqn7k@firewall> References: <20200918152933.oz2vfzupbbzvqn7k@firewall> Message-ID: Nate, Thanks for all your contributions and hard work. Thanks especially for your story! Thanks, Amy (spotz) On Fri, Sep 18, 2020 at 10:32 AM Nate Johnston wrote: > All, > > I have decided not to run for reelection to the TC. Although I remain a > part of > the OpenStack community my responsibilities have shifted away from > engineering, > and I strongly believe that the TC should be composed of people who are > actively > creating or operating OpenStack on a daily basis. I am deeply > appreciative for > all of the support I have recieved from current and former TC members. > > I also want to take this chance to tell my story a bit, in hopes that it > will > encourage others to participate more with the TC. A year ago when I > joined the > TC I did not have a clear idea what to expect. I had observed a few TC > meetings > and brought one issue to the TC's attention, but since I did not have > background > on the workstreams in progress, there was a lot that I did not understand > or > could not contextualize. So what I did was observe, gathering an > understanding > of the issues and initiatives and raising my hand to participate when I > felt > like my efforts could make a difference. I was pleasantly surprised how > many > times I was able to raise my hand and work on things like community goals > or > proposals like distributed project leadership. 
The fact that I have not > been > around since the beginning - my first significant code contributions were > merged > in the Mitaka cycle - and I did not already know all the names and > histories did > not matter much. What mattered was a willingness to actively engage, to > participate in thoughtful discernment, and when the opportunity presented > itself > to put in the work. I feel like I made a difference. > > And if you don't feel the calling to join the TC, that is fine too. Be a > part > of the process - join the meetings, discuss the issues that cut across > projects, > and have your voice heard. If you are a part of creting or using > OpenStack then > you are a part of the TC's constituency and the meetings are to serve > you. You > don't have to be a member of the TC to participate in the process. > > Thanks so much, > > Nate Johnston > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Fri Sep 18 16:43:39 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 18 Sep 2020 10:43:39 -0600 Subject: [tripleo] patch abandonment, cleaning up gerrit Message-ID: Greetings, FYI folks I'm going to tear through our projects today and abandon patches older than 365 days. Please read through our policy [1] if you have any questions or concerns. It will be nice to have a little clean up before the next release and PTL take over :) Thanks [1] https://specs.openstack.org/openstack/tripleo-specs/specs/policy/patch-abandonment.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Fri Sep 18 16:54:55 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 18 Sep 2020 10:54:55 -0600 Subject: [tripleo] patch abandonment, cleaning up gerrit In-Reply-To: References: Message-ID: On Fri, Sep 18, 2020 at 10:43 AM Wesley Hayutin wrote: > Greetings, > > FYI folks I'm going to tear through our projects today and abandon patches > older than 365 days. Please read through our policy [1] if you have any > questions or concerns. It will be nice to have a little clean up before > the next release and PTL take over :) > > Thanks > > > [1] > https://specs.openstack.org/openstack/tripleo-specs/specs/policy/patch-abandonment.html > OK.. done.. Unless there is a lot of objection my next pass at this will be to abandon patches older than 160 days with multiple -1's. Let me know if that is a problem for you. Thanks 0/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Sep 18 16:58:21 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 18 Sep 2020 16:58:21 +0000 Subject: [horizon][dev] Horizon.next In-Reply-To: Message-ID: <20200918165821.4mhoe5cpt2rlqxti@yuggoth.org> Related, there's a suggestion in the StarlingX community at the moment about the possibility of incorporating SAP's Elektra UI ( https://github.com/sapcc/elektra ) as an alternative to Horizon: http://lists.starlingx.io/pipermail/starlingx-discuss/2020-September/009659.html Looks like it's a mix of Apache and MIT/Expat (Apache-compatible) licensed Javascript and Ruby/Rails. Given there's not really any Ruby in OpenStack proper (except for Ruby-based configuration management and maybe SDKs), I expect that may not be a great starting point for a next-generation Horizon, but it could still serve as a source of some inspiration. 
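As an aside on the abandonment pass Wes describes earlier in this digest, a Gerrit query along the following lines lists the open changes in a given project that have not been touched for a year. The project name and username are placeholders, and the age: operator semantics are worth double-checking against the Gerrit docs for the version OpenDev runs.

# Open changes untouched for more than a year in one project.
ssh -p 29418 <username>@review.opendev.org gerrit query \
    --format=TEXT 'project:openstack/tripleo-common status:open age:1y'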
Also this highlights an interest from some of the StarlingX contributors in Horizon replacements, so maybe they'd be willing to contribute... or at least help with requirements gathering. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From kennelson11 at gmail.com Fri Sep 18 17:02:38 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 18 Sep 2020 10:02:38 -0700 Subject: [all][elections][tc] Stepping down from the TC In-Reply-To: References: <20200918152933.oz2vfzupbbzvqn7k@firewall> Message-ID: You have been an enormous asset to the TC during your cycle! I really hope your path leads you back at some point as well. You have done great work and I am very glad you aren't leaving the community entirely :) -Kendall (diablo_rojo) On Fri, Sep 18, 2020 at 9:44 AM Amy Marrich wrote: > Nate, > > Thanks for all your contributions and hard work. Thanks especially for > your story! > > Thanks, > > Amy (spotz) > > On Fri, Sep 18, 2020 at 10:32 AM Nate Johnston > wrote: > >> All, >> >> I have decided not to run for reelection to the TC. Although I remain a >> part of >> the OpenStack community my responsibilities have shifted away from >> engineering, >> and I strongly believe that the TC should be composed of people who are >> actively >> creating or operating OpenStack on a daily basis. I am deeply >> appreciative for >> all of the support I have recieved from current and former TC members. >> >> I also want to take this chance to tell my story a bit, in hopes that it >> will >> encourage others to participate more with the TC. A year ago when I >> joined the >> TC I did not have a clear idea what to expect. I had observed a few TC >> meetings >> and brought one issue to the TC's attention, but since I did not have >> background >> on the workstreams in progress, there was a lot that I did not understand >> or >> could not contextualize. So what I did was observe, gathering an >> understanding >> of the issues and initiatives and raising my hand to participate when I >> felt >> like my efforts could make a difference. I was pleasantly surprised how >> many >> times I was able to raise my hand and work on things like community goals >> or >> proposals like distributed project leadership. The fact that I have not >> been >> around since the beginning - my first significant code contributions were >> merged >> in the Mitaka cycle - and I did not already know all the names and >> histories did >> not matter much. What mattered was a willingness to actively engage, to >> participate in thoughtful discernment, and when the opportunity presented >> itself >> to put in the work. I feel like I made a difference. >> >> And if you don't feel the calling to join the TC, that is fine too. Be a >> part >> of the process - join the meetings, discuss the issues that cut across >> projects, >> and have your voice heard. If you are a part of creting or using >> OpenStack then >> you are a part of the TC's constituency and the meetings are to serve >> you. You >> don't have to be a member of the TC to participate in the process. >> >> Thanks so much, >> >> Nate Johnston >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dirk at dmllr.de Fri Sep 18 17:53:41 2020 From: dirk at dmllr.de (=?UTF-8?B?RGlyayBNw7xsbGVy?=) Date: Fri, 18 Sep 2020 19:53:41 +0200 Subject: [rpm-packaging][packaging][sigs] Moving to a packaging SIG In-Reply-To: <4761402a-c67f-54cc-547e-c3d9e822f5a9@openstack.org> References: <341a7032-22a9-4011-996b-9f829d23023d@www.fastmail.com> <4761402a-c67f-54cc-547e-c3d9e822f5a9@openstack.org> Message-ID: Am Fr., 18. Sept. 2020 um 15:20 Uhr schrieb Thierry Carrez : > Clarification: you can still do releases (by pushing tags, as described > in [1]), but the release management team is no longer responsible for > them (so you're not using openstack/releases to propose them). I think we never pushed tags or tagged releases, so thats not a problem. The only support we so far needed from the release team was to create a stable branch for every release once the stable branches of OpenStack have been created. How would that work with a SIG? Access in gerrit to create branches via the gerrit webui (which we used to do years ago) was revoked quite some time ago. Without the ability to create branches the SIG workflow would not work for us. Thanks, Dirk From dirk at dmllr.de Fri Sep 18 17:59:17 2020 From: dirk at dmllr.de (=?UTF-8?B?RGlyayBNw7xsbGVy?=) Date: Fri, 18 Sep 2020 19:59:17 +0200 Subject: [rpm-packaging][packaging][sigs] Moving to a packaging SIG In-Reply-To: References: <341a7032-22a9-4011-996b-9f829d23023d@www.fastmail.com> <4761402a-c67f-54cc-547e-c3d9e822f5a9@openstack.org> Message-ID: > How would that work with a SIG? Access in gerrit to create branches > via the gerrit webui (which we used to do years ago) was revoked quite > some time ago. Without the ability to create branches the SIG workflow > would not work for us. Okay I found the documentation, we can create a separate -release team. okay, that works. Lets go with that approach then. Greetings, Dirk From gmann at ghanshyammann.com Fri Sep 18 20:15:23 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 18 Sep 2020 15:15:23 -0500 Subject: [all][elections][tc] Stepping down from the TC In-Reply-To: <20200918152933.oz2vfzupbbzvqn7k@firewall> References: <20200918152933.oz2vfzupbbzvqn7k@firewall> Message-ID: <174a2db6d09.c8b832fd35864.8716062445069944980@ghanshyammann.com> ---- On Fri, 18 Sep 2020 10:29:33 -0500 Nate Johnston wrote ---- > All, > > I have decided not to run for reelection to the TC. Although I remain a part of > the OpenStack community my responsibilities have shifted away from engineering, > and I strongly believe that the TC should be composed of people who are actively > creating or operating OpenStack on a daily basis. I am deeply appreciative for > all of the support I have recieved from current and former TC members. > > I also want to take this chance to tell my story a bit, in hopes that it will > encourage others to participate more with the TC. A year ago when I joined the > TC I did not have a clear idea what to expect. I had observed a few TC meetings > and brought one issue to the TC's attention, but since I did not have background > on the workstreams in progress, there was a lot that I did not understand or > could not contextualize. So what I did was observe, gathering an understanding > of the issues and initiatives and raising my hand to participate when I felt > like my efforts could make a difference. I was pleasantly surprised how many > times I was able to raise my hand and work on things like community goals or > proposals like distributed project leadership. 
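Assuming such a -release group is then granted branch creation rights on the repositories (the exact ACL setup is an assumption here), the stable branch creation Dirk is after can be done over the Gerrit ssh API rather than the web UI, for example:

# Create stable/victoria from the current tip of master; username is a placeholder.
ssh -p 29418 <username>@review.opendev.org gerrit create-branch \
    openstack/rpm-packaging stable/victoria master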
The fact that I have not been > around since the beginning - my first significant code contributions were merged > in the Mitaka cycle - and I did not already know all the names and histories did > not matter much. What mattered was a willingness to actively engage, to > participate in thoughtful discernment, and when the opportunity presented itself > to put in the work. I feel like I made a difference. > > And if you don't feel the calling to join the TC, that is fine too. Be a part > of the process - join the meetings, discuss the issues that cut across projects, > and have your voice heard. If you are a part of creting or using OpenStack then > you are a part of the TC's constituency and the meetings are to serve you. You > don't have to be a member of the TC to participate in the process. Thanks Nate for your contribution in TC, I really enjoyed working with you especially on goal selections. -gmann > > Thanks so much, > > Nate Johnston > > > From hongbin034 at gmail.com Sun Sep 20 04:41:06 2020 From: hongbin034 at gmail.com (Hongbin Lu) Date: Sun, 20 Sep 2020 00:41:06 -0400 Subject: [Neutron] Bug Deputy Report (Sep 14 - Sep 19) Message-ID: Hi all, Below is the bug deputy report: High: * https://bugs.launchpad.net/neutron/+bug/1895671 [OVN] test_add_interface_in_use fail with OVN Midium: * https://bugs.launchpad.net/neutron/+bug/1895933 Admin user can do anything without the control of policy.json * https://bugs.launchpad.net/neutron/+bug/1895950 keepalived can't perform failover if the l3 agent is down * https://bugs.launchpad.net/neutron/+bug/1895972 IPv6 prefix delegation for OVN routers Low: * https://bugs.launchpad.net/neutron/+bug/1896217 [OVS] When "explicitly_egress_direct" is enabled, egress flows are not deleted when the port is removed Undecided: * https://bugs.launchpad.net/neutron/+bug/1895677 [OVS][FW] In some cases, OVS FW tries to set a OF rule when ofport = -1 * https://bugs.launchpad.net/neutron/+bug/1896203 TypeError: Cannot look up record by empty string in check_for_igmp_snoop_support task * https://bugs.launchpad.net/neutron/+bug/1896205 AttributeError: 'NoneType' object has no attribute 'db_find_rows' during neutron-server startup * https://bugs.launchpad.net/neutron/+bug/1896226 The vnics are disappearing in the vm Invalid: * https://bugs.launchpad.net/neutron/+bug/1895636 'NoneType' object has no attribute 'address_scope_id' -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Sun Sep 20 06:43:28 2020 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Sun, 20 Sep 2020 14:43:28 +0800 Subject: [tc][heat] PTL not available next week Message-ID: Dear all Due to some personal matters, I'm not available next week 9/21-9/25. for releases, I already propose the patch [1]. And I'm fine with any renew on the patch as long as there's no new feature included (will try to keep update [1] too). [1] https://review.opendev.org/#/c/752809/ -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From ops at clustspace.com Sun Sep 20 20:38:04 2020 From: ops at clustspace.com (ops at clustspace.com) Date: Sun, 20 Sep 2020 23:38:04 +0300 Subject: Openstack version 18.04 Message-ID: <2f122a4574d73155d658c811058fc356@clustspace.com> Hello, We are decided to change Apache CloudStack to OpenStack. 
Trying to install, but only Stein....As you can see we have repositories ussuri root at ops:~# apt update Hit:1 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-updates/ussuri InRelease Hit:2 http://us.archive.ubuntu.com/ubuntu bionic InRelease Hit:3 http://security.ubuntu.com/ubuntu bionic-security InRelease Hit:4 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease Hit:5 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease Reading package lists... Done Building dependency tree Reading state information... Done All packages are up to date. root at ops:~# But after installation we got only Stein root at ops:~# openstack --version openstack 5.2.0 Do'es anyone have idea why we can install only Stein? From xavpaice at gmail.com Sun Sep 20 21:02:18 2020 From: xavpaice at gmail.com (Xav Paice) Date: Mon, 21 Sep 2020 09:02:18 +1200 Subject: Openstack version 18.04 In-Reply-To: <2f122a4574d73155d658c811058fc356@clustspace.com> References: <2f122a4574d73155d658c811058fc356@clustspace.com> Message-ID: What you're looking at is the OpenStack client version: root at ops:~# openstack --version If you want to know the versions of the various OpenStack components, you need to be looking at the versions of packages like Nova, Keystone, etc, and comparing with https://releases.openstack.org/ussuri/index.html. On Mon, 21 Sep 2020 at 08:39, wrote: > Hello, > > We are decided to change Apache CloudStack to OpenStack. > > Trying to install, but only Stein....As you can see we have repositories > ussuri > > root at ops:~# apt update > Hit:1 http://ubuntu-cloud.archive.canonical.com/ubuntu > bionic-updates/ussuri InRelease > Hit:2 http://us.archive.ubuntu.com/ubuntu bionic InRelease > Hit:3 http://security.ubuntu.com/ubuntu bionic-security InRelease > Hit:4 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease > Hit:5 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease > Reading package lists... Done > Building dependency tree > Reading state information... Done > All packages are up to date. > root at ops:~# > > > But after installation we got only Stein > > root at ops:~# openstack --version > openstack 5.2.0 > > > Do'es anyone have idea why we can install only Stein? > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Mon Sep 21 07:53:43 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Mon, 21 Sep 2020 09:53:43 +0200 Subject: [all][elections][tc] Stepping down from the TC In-Reply-To: <20200918152933.oz2vfzupbbzvqn7k@firewall> References: <20200918152933.oz2vfzupbbzvqn7k@firewall> Message-ID: <41f5b79d-7710-4529-a8e1-8d2a5a17d9dd@www.fastmail.com> Thanks Nate for the work you've done for OpenStack. It was nice working with you on the TC. I hope our paths will cross again soon! Regards, JP From ruslanas at lpic.lt Mon Sep 21 08:32:34 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Mon, 21 Sep 2020 11:32:34 +0300 Subject: [tripleo][ansible-ceph][ussuri]ceph-ansiible fails at chown stack: /tmp/ceph_ansible_tmp on distributed compute HCI node Message-ID: Hi all, using tripleo 13.4.0-1.el8 and ceph-ansible 4.0.25-1.el8 on CentOS8 from @centos-openstack-ussuri repo. I get error [1]. When ansi playbook tries to chown to stack user dir: /tmp/ceph_ansible_tmp, but not able to find user stack. As I understand the deployment process, should it have tripleo-admin user? 
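Coming back to the version question Xav answers above: on the same Bionic host, the service package versions (which are what actually determine the cloud release) can be compared against the Ussuri release page with something like the following. The package names vary with which services are installed, so treat them as examples.

# The python client and the services are versioned independently.
apt policy python3-openstackclient

# Representative service packages to compare against
# https://releases.openstack.org/ussuri/index.html
dpkg -l | grep -E 'nova-common|keystone|neutron-common|glance-common'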
Also executed ansible.sh file with ansible-playbook -vvvv [2] it also has /home/stack/config-download/v3/ceph-ansible/create_ceph_ansible_remote_tmp.log I see in default file, it has the correct things set: owner: "{{ ansible_user | default('tripleo-admin', true) }}" In undercloud.conf I do not have deployment user set, that option is commented out. I have not set any in overcloud config files... [1] http://paste.openstack.org/show/798126/ [2] http://paste.openstack.org/show/798127 -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From fpantano at redhat.com Mon Sep 21 08:49:04 2020 From: fpantano at redhat.com (Francesco Pantano) Date: Mon, 21 Sep 2020 10:49:04 +0200 Subject: [tripleo][ansible-ceph][ussuri]ceph-ansiible fails at chown stack: /tmp/ceph_ansible_tmp on distributed compute HCI node In-Reply-To: References: Message-ID: Hi Ruslanas, I think you just hit [1][2] that should be solved by [3]. Can you just redeploy including the patch [3]? Thanks, Francesco [1] https://bugs.launchpad.net/tripleo/+bug/1887708 [2] https://bugs.launchpad.net/tripleo/+bug/1886497 [3] https://review.opendev.org/#/c/742287/ On Mon, Sep 21, 2020 at 10:37 AM Ruslanas Gžibovskis wrote: > Hi all, > > using tripleo 13.4.0-1.el8 and ceph-ansible 4.0.25-1.el8 on CentOS8 from @centos-openstack-ussuri > repo. I get error [1]. When ansi playbook tries to chown to stack user > dir: /tmp/ceph_ansible_tmp, but not able to find user stack. As I > understand the deployment process, should it have tripleo-admin user? Also > executed ansible.sh file with ansible-playbook -vvvv [2] it also has > /home/stack/config-download/v3/ceph-ansible/create_ceph_ansible_remote_tmp.log > > I see in default file, it has the correct things set: > owner: "{{ ansible_user | default('tripleo-admin', true) }}" > > In undercloud.conf I do not have deployment user set, that option is > commented out. > I have not set any in overcloud config files... > > [1] http://paste.openstack.org/show/798126/ > [2] http://paste.openstack.org/show/798127 > > > -- > Ruslanas Gžibovskis > +370 6030 7030 > -- Francesco Pantano GPG KEY: F41BD75C -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliver.wenz at dhbw-mannheim.de Mon Sep 21 09:11:12 2020 From: oliver.wenz at dhbw-mannheim.de (Oliver Wenz) Date: Mon, 21 Sep 2020 11:11:12 +0200 Subject: [openstack-ansible] OpenStack Ansible deployment fails due to lxc containers not having network connection In-Reply-To: <60f6d937-d184-8768-897d-f81ddc414a34@dhbw-mannheim.de> References: <60f6d937-d184-8768-897d-f81ddc414a34@dhbw-mannheim.de> Message-ID: <72d9b521-5149-29ee-912c-587ee6b72741@dhbw-mannheim.de> Hi Jonathan, thank you for your reply! I probably should have specified that I already changed some default values in /etc/ansible/roles/lxc_hosts/defaults/main.yml to prevent a conflict with my storage network. Here's the part that I changed: ``` lxc_net_address: 10.255.255.1 lxc_net_netmask: 255.255.255.0 lxc_net_dhcp_range: 10.255.255.2,10.255.255.253 ``` Could there be some other reference to the original default address range which causes the error? I'm also confused about dnsmasq: Running 'apt-get install dnsmasq' I discovered that it wasn't installed on the infra host yet (though installing it also didn't solve the problem). Moreover, I couldn't find dnsmasq in the prerequisites in the OSA deployment guide. 
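Coming back to the overrides above: a minimal sketch, assuming the usual OSA
pattern of deployer overrides rather than edits to the role defaults, of
carrying the same three values in /etc/openstack_deploy/user_variables.yml
(whether lxc_hosts picks lxc_net_* up from there in this exact release is an
assumption worth verifying against the role):

```yaml
# /etc/openstack_deploy/user_variables.yml -- sketch only, same values as
# quoted above; kept outside /etc/ansible/roles so a role re-sync does not
# silently undo them.
lxc_net_address: 10.255.255.1
lxc_net_netmask: 255.255.255.0
lxc_net_dhcp_range: 10.255.255.2,10.255.255.253
```

That keeps the customization out of the cloned role directories, which get
overwritten on minor/major upgrades.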
Kind regards, Oliver On 03/09/2020 18:38, openstack-discuss-request at lists.openstack.org wrote: > Message: 1 > Date: Thu, 3 Sep 2020 16:51:51 +0100 > From: Jonathan Rosser > To: openstack-discuss at lists.openstack.org > Subject: Re: [openstack-ansible] OpenStack Ansible deployment fails > due to lxc containers not having network connection > Message-ID: > Content-Type: text/plain; charset=utf-8; format=flowed > > Hi Oliver, > > The default route would normally be via eth0 in the container, which I > suspect has some issue. > > This is given an address by dnsmasq/dhcp on the host and attached to > lxcbr0. This is where I would start to look. I am straight seeing that > the default address range used for eth0 is in conflict with your storage > network, so perhaps this is also something to look at. See > https://github.com/openstack/openstack-ansible-lxc_hosts/blob/master/defaults/main.yml#L104 > > > You join us on irc at #openstack-ansible for some 'real-time' assistance > if necessary. > > Regards, > Jonathan. > > On 03/09/2020 16:18, Oliver Wenz wrote: >> I'm trying to deploy OpenStack Ansible. When running the first playbook >> ```openstack-ansible setup-hosts.yml```, there are errors for all >> containers during the task ```[openstack_hosts : Remove the blacklisted >> packages]``` (see below) and the playbook fails. >> >> ``` >> fatal: [infra1_repo_container-1f1565cd]: FAILED! => {"changed": false, >> "cmd": "apt-get update", "msg": "E: The repository >> 'http://ubuntu.mirror.lrz.de/ubuntu bionic Release' no longer has a >> Release file. >> E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-updates >> Release' no longer has a Release file. >> E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-backports >> Release' no longer has a Release file. >> E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-security >> Release' no longer has a Release file.", "rc": 100, "stderr": "E: The >> repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic Release' no longer >> has a Release file. >> E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-updates >> Release' no longer has a Release file. >> E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-backports >> Release' no longer has a Release file. >> E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-security >> Release' no longer has a Release file. >> ", "stderr_lines": ["E: The repository >> 'http://ubuntu.mirror.lrz.de/ubuntu bionic Release' no longer has a >> Release file.", "E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu >> bionic-updates Release' no longer has a Release file.", "E: The >> repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-backports Release' >> no longer has a Release file.", "E: The repository >> 'http://ubuntu.mirror.lrz.de/ubuntu bionic-security Release' no longer >> has a Release file."], "stdout": "Ign:1 >> http://ubuntu.mirror.lrz.de/ubuntu bionic InRelease >> Ign:2 http://ubuntu.mirror.lrz.de/ubuntu bionic-updates InRelease >> Ign:3 http://ubuntu.mirror.lrz.de/ubuntu bionic-backports InRelease >> Ign:4 http://ubuntu.mirror.lrz.de/ubuntu bionic-security InRelease >> Err:5 http://ubuntu.mirror.lrz.de/ubuntu bionic Release >>   Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). >> - connect (101: Network is unreachable) >> Err:6 http://ubuntu.mirror.lrz.de/ubuntu bionic-updates Release >>   Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). 
>> - connect (101: Network is unreachable) >> Err:7 http://ubuntu.mirror.lrz.de/ubuntu bionic-backports Release >>   Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). >> - connect (101: Network is unreachable) >> Err:8 http://ubuntu.mirror.lrz.de/ubuntu bionic-security Release >>   Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). >> - connect (101: Network is unreachable) >> Reading package lists... >> ", "stdout_lines": ["Ign:1 http://ubuntu.mirror.lrz.de/ubuntu bionic >> InRelease", "Ign:2 http://ubuntu.mirror.lrz.de/ubuntu bionic-updates >> InRelease", "Ign:3 http://ubuntu.mirror.lrz.de/ubuntu bionic-backports >> InRelease", "Ign:4 http://ubuntu.mirror.lrz.de/ubuntu bionic-security >> InRelease", "Err:5 http://ubuntu.mirror.lrz.de/ubuntu bionic Release", >> "  Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). >> - connect (101: Network is unreachable)", "Err:6 >> http://ubuntu.mirror.lrz.de/ubuntu bionic-updates Release", "  Cannot >> initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect >> (101: Network is unreachable)", "Err:7 >> http://ubuntu.mirror.lrz.de/ubuntu bionic-backports Release", "  Cannot >> initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect >> (101: Network is unreachable)", "Err:8 >> http://ubuntu.mirror.lrz.de/ubuntu bionic-security Release", "  Cannot >> initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect >> (101: Network is unreachable)", "Reading package lists..."]} >> >> ``` >> >> When I attach to any container and run ```ping 192.168.100.6``` (local >> DNS), I get the same error (```connect: Network is unreachable```). >> However, when I specify an interface by running ```ping -I eth1 >> 192.168.100.6``` there is a successful connection. >> Running ```ip r``` on the infra_cinder container yields: >> ``` >> 10.0.3.0/24 dev eth2 proto kernel scope link src 10.0.3.5 >> 192.168.110.0/24 dev eth1 proto kernel scope link src 192.168.110.232 >> ``` >> so there seems to be no default route which is why the connection fails >> (similar for the other infra containers). Shouldn't OSA automatically >> configure this? I didn't find anything regarding a default route on >> containers in the Docs. >> >> Here's my openstack_user_config.yml: >> >> ``` >> cidr_networks: >>   container: 192.168.110.0/24 >>   tunnel: 192.168.32.0/24 >>   storage: 10.0.3.0/24 >> >> used_ips: >>   - "192.168.110.1,192.168.110.2" >>   - "192.168.110.111" >>   - "192.168.110.115" >>   - "192.168.110.117,192.168.110.118" >>   - "192.168.110.131,192.168.110.140" >>   - "192.168.110.201,192.168.110.207" >>   - "192.168.32.1,192.168.32.2" >>   - "192.168.32.201,192.168.32.207" >>   - "10.0.3.1" >>   - "10.0.3.11,10.0.3.14" >>   - "10.0.3.21,10.0.3.24" >>   - "10.0.3.31,10.0.3.42" >>   - "10.0.3.201,10.0.3.207" >> >> global_overrides: >>   # The internal and external VIP should be different IPs, however they >>   # do not need to be on separate networks. 
>>   external_lb_vip_address: 192.168.100.168 >>   internal_lb_vip_address: 192.168.110.201 >>   management_bridge: "br-mgmt" >>   provider_networks: >>     - network: >>         container_bridge: "br-mgmt" >>         container_type: "veth" >>         container_interface: "eth1" >>         ip_from_q: "container" >>         type: "raw" >>         group_binds: >>           - all_containers >>           - hosts >>         is_container_address: true >>     - network: >>         container_bridge: "br-vxlan" >>         container_type: "veth" >>         container_interface: "eth10" >>         ip_from_q: "tunnel" >>         type: "vxlan" >>         range: "1:1000" >>         net_name: "vxlan" >>         group_binds: >>           - neutron_linuxbridge_agent >>     - network: >>         container_bridge: "br-ext1" >>         container_type: "veth" >>         container_interface: "eth12" >>         host_bind_override: "eth12" >>         type: "flat" >>         net_name: "ext_net" >>         group_binds: >>           - neutron_linuxbridge_agent >>     - network: >>         container_bridge: "br-storage" >>         container_type: "veth" >>         container_interface: "eth2" >>         ip_from_q: "storage" >>         type: "raw" >>         group_binds: >>           - glance_api >>           - cinder_api >>           - cinder_volume >>           - nova_compute >>           - swift-proxy >> >> ### >> ### Infrastructure >> ### >> >> # galera, memcache, rabbitmq, utility >> shared-infra_hosts: >>   infra1: >>     ip: 192.168.110.201 >> >> # repository (apt cache, python packages, etc) >> repo-infra_hosts: >>   infra1: >>     ip: 192.168.110.201 >> >> # load balancer >> haproxy_hosts: >>   infra1: >>     ip: 192.168.110.201 >> >> ### >> ### OpenStack >> ### >> >> os-infra_hosts: >>    infra1: >>      ip: 192.168.110.201 >> >> identity_hosts: >>    infra1: >>      ip: 192.168.110.201 >> >> network_hosts: >>    infra1: >>      ip: 192.168.110.201 >> >> compute_hosts: >>    compute1: >>      ip: 192.168.110.204 >>    compute2: >>      ip: 192.168.110.205 >>    compute3: >>      ip: 192.168.110.206 >>    compute4: >>      ip: 192.168.110.207 >> >> storage-infra_hosts: >>    infra1: >>      ip: 192.168.110.201 >> >> storage_hosts: >>    lvm-storage1: >>      ip: 192.168.110.202 >>      container_vars: >>        cinder_backends: >>          lvm: >>            volume_backend_name: LVM_iSCSI >>            volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver >>            volume_group: cinder_volumes >>            iscsi_ip_address: "{{ cinder_storage_address }}" >>          limit_container_types: cinder_volume >> >> ``` >> >> I also asked this question on the server fault stackexchange: >> https://serverfault.com/questions/1032573/openstack-ansible-deployment-fails-due-to-lxc-containers-not-having-network-conn >> >> >> Kind regards, >> Oliver >> >> >> > From ruslanas at lpic.lt Mon Sep 21 09:32:32 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Mon, 21 Sep 2020 12:32:32 +0300 Subject: [tripleo][ansible-ceph][ussuri]ceph-ansiible fails at chown stack: /tmp/ceph_ansible_tmp on distributed compute HCI node In-Reply-To: References: Message-ID: Yes I am applying it now, BUT I do not like, that group has exact same message: "lookup('env','ANSIBLE_REMOTE_USER') | default(ansible_user, true) }}" Sould it search for Group: lookup('env','ANSIBLE_REMOTE_GROUP') | default(ansible_group, true) }} ? 
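To make the question concrete, here is a sketch only (not the actual
tripleo-ansible task, and not the content of the fix being applied) of the
kind of task under discussion, with both owner and group currently falling
back through the same user lookup:

```yaml
# Sketch for illustration; the task name is an assumption, not the real
# tripleo-ansible task. Only the path and the lookup pattern come from the
# thread above.
- name: create ceph-ansible remote tmp directory
  file:
    path: /tmp/ceph_ansible_tmp
    state: directory
    owner: "{{ lookup('env', 'ANSIBLE_REMOTE_USER') | default(ansible_user, true) }}"
    group: "{{ lookup('env', 'ANSIBLE_REMOTE_USER') | default(ansible_user, true) }}"
```

Whether there is a separate ANSIBLE_REMOTE_GROUP / ansible_group value to
fall back on is exactly the open question above; ansible_group is not a
built-in Ansible variable the way ansible_user is.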
On Mon, 21 Sep 2020 at 11:49, Francesco Pantano wrote: > Hi Ruslanas, > I think you just hit [1][2] that should be solved by [3]. > Can you just redeploy including the patch [3]? > > Thanks, > Francesco > > [1] https://bugs.launchpad.net/tripleo/+bug/1887708 > [2] https://bugs.launchpad.net/tripleo/+bug/1886497 > [3] https://review.opendev.org/#/c/742287/ > > On Mon, Sep 21, 2020 at 10:37 AM Ruslanas Gžibovskis > wrote: > >> Hi all, >> >> using tripleo 13.4.0-1.el8 and ceph-ansible 4.0.25-1.el8 on CentOS8 from @centos-openstack-ussuri >> repo. I get error [1]. When ansi playbook tries to chown to stack user >> dir: /tmp/ceph_ansible_tmp, but not able to find user stack. As I >> understand the deployment process, should it have tripleo-admin user? Also >> executed ansible.sh file with ansible-playbook -vvvv [2] it also has >> /home/stack/config-download/v3/ceph-ansible/create_ceph_ansible_remote_tmp.log >> >> I see in default file, it has the correct things set: >> owner: "{{ ansible_user | default('tripleo-admin', true) }}" >> >> In undercloud.conf I do not have deployment user set, that option is >> commented out. >> I have not set any in overcloud config files... >> >> [1] http://paste.openstack.org/show/798126/ >> [2] http://paste.openstack.org/show/798127 >> >> >> -- >> Ruslanas Gžibovskis >> +370 6030 7030 >> > > > -- > Francesco Pantano > GPG KEY: F41BD75C > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Sep 21 12:20:45 2020 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 21 Sep 2020 14:20:45 +0200 Subject: [largescale-sig] Next meeting: September 23, 8utc Message-ID: <220d05cf-b9f8-50db-187a-8934f24bd772@openstack.org> Hi everyone, Our next meeting will be a EU-APAC-friendly meeting, this Wednesday, September 23 at 8 UTC[1] in the #openstack-meeting-3 channel on IRC: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20200923T08 Feel free to add topics to our agenda at: https://etherpad.openstack.org/p/large-scale-sig-meeting A reminder of the TODOs we had from last meeting, in case you have time to make progress on them: - all to describe briefly how you solved metrics/billing in your deployment in https://etherpad.openstack.org/p/large-scale-sig-documentation - ttx to look into a basic test framework for oslo.metrics - masahito to push latest patches to oslo.metrics - amorin to see if oslo.metrics could be tested at OVH - ttx to file Scaling Stories forum session, with amorin and someone from penick's team to help get it off the ground Talk to you all later, -- Thierry Carrez From mnaser at vexxhost.com Mon Sep 21 13:04:55 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 21 Sep 2020 09:04:55 -0400 Subject: [tc] weekly meeting time In-Reply-To: References: Message-ID: On Thu, Sep 17, 2020 at 3:11 PM Kendall Nelson wrote: > > I am working on filling this out, but I had a thought: perhaps you want to wait to close the poll till the new TC is seated since they might have different schedules? I agree, I'll put this on hold until the elections are held. Thanks! > Just a thought :) > > I'll finish filling out the poll now. 
> > -Kendall (diablo_rojo) > > On Thu, Sep 17, 2020 at 6:05 AM Mohammed Naser wrote: >> >> Hi folks: >> >> Given that we've landed the change to start having weekly meetings >> again, it's time to start picking a time: >> >> https://doodle.com/poll/xw2wiebm2ayqxvki >> >> A few people also mentioned they wouldn't mind taking the current >> office hours time towards that, I guess we can try and come to an >> agreement if we want to do that here (or simply by voting that time on >> the Doodle). >> >> Thanks, >> Mohammed >> >> -- >> Mohammed Naser >> VEXXHOST, Inc. >> -- Mohammed Naser VEXXHOST, Inc. From ruslanas at lpic.lt Mon Sep 21 13:12:12 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Mon, 21 Sep 2020 16:12:12 +0300 Subject: [tripleo][ansible-ceph][ussuri]ceph-ansiible fails at chown stack: /tmp/ceph_ansible_tmp on distributed compute HCI node In-Reply-To: References: Message-ID: ok, it works now, with user in both places. On Mon, 21 Sep 2020 at 12:32, Ruslanas Gžibovskis wrote: > Yes I am applying it now, BUT I do not like, that group has exact same > message: "lookup('env','ANSIBLE_REMOTE_USER') | default(ansible_user, true) > }}" Sould it search for Group: lookup('env','ANSIBLE_REMOTE_GROUP') | > default(ansible_group, true) }} ? > > On Mon, 21 Sep 2020 at 11:49, Francesco Pantano > wrote: > >> Hi Ruslanas, >> I think you just hit [1][2] that should be solved by [3]. >> Can you just redeploy including the patch [3]? >> >> Thanks, >> Francesco >> >> [1] https://bugs.launchpad.net/tripleo/+bug/1887708 >> [2] https://bugs.launchpad.net/tripleo/+bug/1886497 >> [3] https://review.opendev.org/#/c/742287/ >> >> On Mon, Sep 21, 2020 at 10:37 AM Ruslanas Gžibovskis >> wrote: >> >>> Hi all, >>> >>> using tripleo 13.4.0-1.el8 and ceph-ansible 4.0.25-1.el8 on CentOS8 >>> from @centos-openstack-ussuri repo. I get error [1]. When ansi >>> playbook tries to chown to stack user dir: /tmp/ceph_ansible_tmp, but >>> not able to find user stack. As I understand the deployment process, should >>> it have tripleo-admin user? Also executed ansible.sh file with >>> ansible-playbook -vvvv [2] it also has >>> /home/stack/config-download/v3/ceph-ansible/create_ceph_ansible_remote_tmp.log >>> >>> I see in default file, it has the correct things set: >>> owner: "{{ ansible_user | default('tripleo-admin', true) >>> }}" >>> >>> In undercloud.conf I do not have deployment user set, that option is >>> commented out. >>> I have not set any in overcloud config files... >>> >>> [1] http://paste.openstack.org/show/798126/ >>> [2] http://paste.openstack.org/show/798127 >>> >>> >>> -- >>> Ruslanas Gžibovskis >>> +370 6030 7030 >>> >> >> >> -- >> Francesco Pantano >> GPG KEY: F41BD75C >> > > > -- > Ruslanas Gžibovskis > +370 6030 7030 > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Mon Sep 21 13:41:46 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Mon, 21 Sep 2020 16:41:46 +0300 Subject: [tripleo][ansible-ceph[ussuri][rdo][centos8] fails on ansible-ceph execution. Message-ID: Hi all, using tripleo 13.4.0-1.el8 and ceph-ansible 4.0.25-1.el8 on CentOS8 from @centos-openstack-ussuri repo. but get [1] error. I thought it was due to not found python exec, but I saw later, when added verbosity 4, that it is able to find python3. 
It looks like output in paste.openstack.org is too short, not sure what was an issue, but here is a second place [2] for same full output [1] http://paste.openstack.org/show/i2XpSBiSVjuL69Ahm1sl/ [2] https://proxy.qwq.lt/ceph-ansible.html -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From johfulto at redhat.com Mon Sep 21 13:54:39 2020 From: johfulto at redhat.com (John Fulton) Date: Mon, 21 Sep 2020 09:54:39 -0400 Subject: [tripleo][ansible-ceph[ussuri][rdo][centos8] fails on ansible-ceph execution. In-Reply-To: References: Message-ID: In [2] I see Error: Could not stat device /dev/vdb - No such file or directory. /dev/vdb is the default and as per the logs it doesn't exist on your HCI node. For your HCI node you need to have a block device (usually a dedicated disk) which can be configured as an OSD and you need to pass the path to it as described in the following section of the doc. https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/ceph_config.html#configure-osd-settings-with-ceph-ansible Also, ensure your disk is factory clean or the ceph tools won't initialize it as an OSD. The easiest way to do this is to configure ironic's automatic node cleaning. John On Mon, Sep 21, 2020 at 9:45 AM Ruslanas Gžibovskis wrote: > > Hi all, > > using tripleo 13.4.0-1.el8 and ceph-ansible 4.0.25-1.el8 on CentOS8 from @centos-openstack-ussuri repo. > > but get [1] error. I thought it was due to not found python exec, but I saw later, when added verbosity 4, that it is able to find python3. > > It looks like output in paste.openstack.org is too short, not sure what was an issue, but here is a second place [2] for same full output > > [1] http://paste.openstack.org/show/i2XpSBiSVjuL69Ahm1sl/ > [2] https://proxy.qwq.lt/ceph-ansible.html > > > -- > Ruslanas Gžibovskis > +370 6030 7030 From donny at fortnebula.com Mon Sep 21 15:04:06 2020 From: donny at fortnebula.com (Donny Davis) Date: Mon, 21 Sep 2020 11:04:06 -0400 Subject: Openstack version 18.04 In-Reply-To: References: <2f122a4574d73155d658c811058fc356@clustspace.com> Message-ID: https://wiki.ubuntu.com/OpenStack/CloudArchive On Sun, Sep 20, 2020 at 5:05 PM Xav Paice wrote: > What you're looking at is the OpenStack client version: > > root at ops:~# openstack --version > > If you want to know the versions of the various OpenStack components, you > need to be looking at the versions of packages like Nova, Keystone, etc, > and comparing with https://releases.openstack.org/ussuri/index.html. > > On Mon, 21 Sep 2020 at 08:39, wrote: > >> Hello, >> >> We are decided to change Apache CloudStack to OpenStack. >> >> Trying to install, but only Stein....As you can see we have repositories >> ussuri >> >> root at ops:~# apt update >> Hit:1 http://ubuntu-cloud.archive.canonical.com/ubuntu >> bionic-updates/ussuri InRelease >> Hit:2 http://us.archive.ubuntu.com/ubuntu bionic InRelease >> Hit:3 http://security.ubuntu.com/ubuntu bionic-security InRelease >> Hit:4 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease >> Hit:5 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease >> Reading package lists... Done >> Building dependency tree >> Reading state information... Done >> All packages are up to date. >> root at ops:~# >> >> >> But after installation we got only Stein >> >> root at ops:~# openstack --version >> openstack 5.2.0 >> >> >> Do'es anyone have idea why we can install only Stein? >> >> -- ~/DonnyD C: 805 814 6800 "No mission too difficult. 
No sacrifice too great. Duty First" -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Mon Sep 21 15:11:27 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Mon, 21 Sep 2020 18:11:27 +0300 Subject: [tripleo][ansible-ceph[ussuri][rdo][centos8] fails on ansible-ceph execution. In-Reply-To: References: Message-ID: Yes, I do not have vdb... I have sda sdb sdc sde sdd... and I believe it might have come from journal_size: 16384 ? here is a part of conf file... CephAnsibleDiskConfig: devices: - /dev/sdc - /dev/sde - /dev/sdd osd_scenario: lvm osd_objectstore: bluestore journal_size: 16384 # commented this out now. Yes, undercloud node cleaning is the first option I enable/configure in undercloud.conf ;) after that I configure IP addresses/subnets :) On Mon, 21 Sep 2020 at 16:55, John Fulton wrote: > In [2] I see Error: Could not stat device /dev/vdb - No such file or > directory. > > /dev/vdb is the default and as per the logs it doesn't exist on your > HCI node. For your HCI node you need to have a block device (usually a > dedicated disk) which can be configured as an OSD and you need to pass > the path to it as described in the following section of the doc. > > > https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/ceph_config.html#configure-osd-settings-with-ceph-ansible > > Also, ensure your disk is factory clean or the ceph tools won't > initialize it as an OSD. The easiest way to do this is to configure > ironic's automatic node cleaning. > > John > > > On Mon, Sep 21, 2020 at 9:45 AM Ruslanas Gžibovskis > wrote: > > > > Hi all, > > > > using tripleo 13.4.0-1.el8 and ceph-ansible 4.0.25-1.el8 on CentOS8 from > @centos-openstack-ussuri repo. > > > > but get [1] error. I thought it was due to not found python exec, but I > saw later, when added verbosity 4, that it is able to find python3. > > > > It looks like output in paste.openstack.org is too short, not sure what > was an issue, but here is a second place [2] for same full output > > > > [1] http://paste.openstack.org/show/i2XpSBiSVjuL69Ahm1sl/ > > [2] https://proxy.qwq.lt/ceph-ansible.html > > > > > > -- > > Ruslanas Gžibovskis > > +370 6030 7030 > > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From johfulto at redhat.com Mon Sep 21 15:27:04 2020 From: johfulto at redhat.com (John Fulton) Date: Mon, 21 Sep 2020 11:27:04 -0400 Subject: [tripleo][ansible-ceph[ussuri][rdo][centos8] fails on ansible-ceph execution. In-Reply-To: References: Message-ID: On Mon, Sep 21, 2020 at 11:11 AM Ruslanas Gžibovskis wrote: > > Yes, I do not have vdb... I have sda sdb sdc sde sdd... and I believe it might have come from journal_size: 16384 ? here is a part of conf file... > CephAnsibleDiskConfig: > devices: > - /dev/sdc > - /dev/sde > - /dev/sdd > osd_scenario: lvm > osd_objectstore: bluestore > journal_size: 16384 # commented this out now. If you used the above, perhaps in foo.yaml, but got the error message you shared, then I suspect you are deploying with your parameters in the wrong order. 
You should use the following order: openstack overcloud deploy --templates \ -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \ -e foo.yaml If the order of arguments is such that foo.yaml precedes /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml then the CephAnsibleDisksConfig will override what was set in foo.yaml and instead use the default in /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml which uses a disk you don't have. Also, please do not use journal_size it is deprecated and that parameter doesn't make sense for bluestore. As linked from the documentation ceph-volume batch mode (https://docs.ceph.com/en/latest/ceph-volume/lvm/batch/) should do the right thing if you modify the above (and just drop journal size). John > Yes, undercloud node cleaning is the first option I enable/configure in undercloud.conf ;) after that I configure IP addresses/subnets :) > > On Mon, 21 Sep 2020 at 16:55, John Fulton wrote: >> >> In [2] I see Error: Could not stat device /dev/vdb - No such file or directory. >> >> /dev/vdb is the default and as per the logs it doesn't exist on your >> HCI node. For your HCI node you need to have a block device (usually a >> dedicated disk) which can be configured as an OSD and you need to pass >> the path to it as described in the following section of the doc. >> >> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/ceph_config.html#configure-osd-settings-with-ceph-ansible >> >> Also, ensure your disk is factory clean or the ceph tools won't >> initialize it as an OSD. The easiest way to do this is to configure >> ironic's automatic node cleaning. >> >> John >> >> >> On Mon, Sep 21, 2020 at 9:45 AM Ruslanas Gžibovskis wrote: >> > >> > Hi all, >> > >> > using tripleo 13.4.0-1.el8 and ceph-ansible 4.0.25-1.el8 on CentOS8 from @centos-openstack-ussuri repo. >> > >> > but get [1] error. I thought it was due to not found python exec, but I saw later, when added verbosity 4, that it is able to find python3. >> > >> > It looks like output in paste.openstack.org is too short, not sure what was an issue, but here is a second place [2] for same full output >> > >> > [1] http://paste.openstack.org/show/i2XpSBiSVjuL69Ahm1sl/ >> > [2] https://proxy.qwq.lt/ceph-ansible.html >> > >> > >> > -- >> > Ruslanas Gžibovskis >> > +370 6030 7030 >> > > > -- > Ruslanas Gžibovskis > +370 6030 7030 From ruslanas at lpic.lt Mon Sep 21 15:49:14 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Mon, 21 Sep 2020 18:49:14 +0300 Subject: [tripleo][ansible-ceph[ussuri][rdo][centos8] fails on ansible-ceph execution. In-Reply-To: References: Message-ID: Hmm, looks like a good point, I even thought I forgot to sort it. BUT, I double checked now, and my node-info.yaml is the last one... 
only network_data and roles_data are above default configs: _THT="/usr/share/openstack-tripleo-heat-templates" _LTHT="$(pwd)" time openstack --verbose overcloud deploy \ --force-postconfig --templates \ --stack v3 \ -r ${_LTHT}/roles_data.yaml \ -n ${_LTHT}/network_data.yaml \ -e ${_LTHT}/containers-prepare-parameter.yaml \ -e ${_LTHT}/overcloud_images.yaml \ -e ${_THT}/environments/disable-telemetry.yaml \ -e ${_THT}/environments/ceph-ansible/ceph-ansible.yaml \ -e ${_THT}/environments/ceph-ansible/ceph-rgw.yaml \ -e ${_THT}/environments/ceph-ansible/ceph-mds.yaml \ -e ${_THT}/environments/ceph-ansible/ceph-dashboard.yaml \ -e ${_LTHT}/node-info.yaml \ --ntp-server 8.8.8.8 all the config, can be found here[1]. meanwhile I will comment out my journal option. [1] https://github.com/qw3r3wq/OSP-ussuri/blob/master/v3/node-info.yaml -------------- next part -------------- An HTML attachment was scrubbed... URL: From johfulto at redhat.com Mon Sep 21 15:55:29 2020 From: johfulto at redhat.com (John Fulton) Date: Mon, 21 Sep 2020 11:55:29 -0400 Subject: [tripleo][ansible-ceph[ussuri][rdo][centos8] fails on ansible-ceph execution. In-Reply-To: References: Message-ID: Your config-download directory, per stack, will have a ceph-ansible sub directory; check the devices list there. It will contain the resultant of your overrides. What devices are listed? If /dev/vdb is still listed then something is breaking the expected override pattern. John On Mon, Sep 21, 2020 at 11:49 AM Ruslanas Gžibovskis wrote: > > Hmm, > > looks like a good point, I even thought I forgot to sort it. BUT, I double checked now, and my node-info.yaml is the last one... only network_data and roles_data are above default configs: > > _THT="/usr/share/openstack-tripleo-heat-templates" > _LTHT="$(pwd)" > time openstack --verbose overcloud deploy \ > --force-postconfig --templates \ > --stack v3 \ > -r ${_LTHT}/roles_data.yaml \ > -n ${_LTHT}/network_data.yaml \ > -e ${_LTHT}/containers-prepare-parameter.yaml \ > -e ${_LTHT}/overcloud_images.yaml \ > -e ${_THT}/environments/disable-telemetry.yaml \ > -e ${_THT}/environments/ceph-ansible/ceph-ansible.yaml \ > -e ${_THT}/environments/ceph-ansible/ceph-rgw.yaml \ > -e ${_THT}/environments/ceph-ansible/ceph-mds.yaml \ > -e ${_THT}/environments/ceph-ansible/ceph-dashboard.yaml \ > -e ${_LTHT}/node-info.yaml \ > --ntp-server 8.8.8.8 > > > all the config, can be found here[1]. > > meanwhile I will comment out my journal option. > > [1] https://github.com/qw3r3wq/OSP-ussuri/blob/master/v3/node-info.yaml > > > From jonathan.rosser at rd.bbc.co.uk Mon Sep 21 16:10:27 2020 From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser) Date: Mon, 21 Sep 2020 17:10:27 +0100 Subject: [openstack-ansible] OpenStack Ansible deployment fails due to lxc containers not having network connection In-Reply-To: <72d9b521-5149-29ee-912c-587ee6b72741@dhbw-mannheim.de> References: <60f6d937-d184-8768-897d-f81ddc414a34@dhbw-mannheim.de> <72d9b521-5149-29ee-912c-587ee6b72741@dhbw-mannheim.de> Message-ID: <91ecace1-c38b-01ba-b700-fccdfeaba19d@rd.bbc.co.uk> Hi Oliver, The dnsmasq dependancy will be pulled in by lxc, which in turn needs lxc-utils, that then wants dnsmasq-base as you can see here https://packages.ubuntu.com/bionic/lxc-utils. You will not find LXC itself as a per-requisite in the documentation as the setup is handled completely by the lxc_hosts ansible role. For openstack-ansible it is not necessarily a good idea to adjust the variables in /etc/ansible/roles/.... 
because these repositories will be overwritten any time you do a minor/major upgrade. There is a reference here https://docs.openstack.org/openstack-ansible/latest/reference/configuration/using-overrides.html for overriding variables, and the most common starting point would be to create /etc/openstack_deploy/user_variables.yml and put your customization there. I would recommend always building an All-In-One deployment in a virtual machine so that you have a reference to compare against when moving away from the 'stock config'. Documentation for the AIO can be found here https://docs.openstack.org/openstack-ansible/ussuri/user/aio/quickstart.html Regards, Jonathan. On 21/09/2020 10:11, Oliver Wenz wrote: > Hi Jonathan, > thank you for your reply! I probably should have specified that I > already changed some default values in > /etc/ansible/roles/lxc_hosts/defaults/main.yml to prevent a conflict > with my storage network. > Here's the part that I changed: > > ``` > lxc_net_address: 10.255.255.1 > lxc_net_netmask: 255.255.255.0 > lxc_net_dhcp_range: 10.255.255.2,10.255.255.253 > ``` > > Could there be some other reference to the original default address > range which causes the error? > > I'm also confused about dnsmasq: Running 'apt-get install dnsmasq' I > discovered that it wasn't installed on the infra host yet (though > installing it also didn't solve the problem). Moreover, I couldn't find > dnsmasq in the prerequisites in the OSA deployment guide. > > > Kind regards, > Oliver > > From ruslanas at lpic.lt Mon Sep 21 16:12:19 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Mon, 21 Sep 2020 19:12:19 +0300 Subject: [tripleo][ansible-ceph[ussuri][rdo][centos8] fails on ansible-ceph execution. In-Reply-To: References: Message-ID: it's in: ./external_deploy_steps_tasks.yaml and: (undercloud) [stack at undercloudv3 v3]$ cat ./ceph-ansible/group_vars/osds.yml devices: - /dev/vdb osd_objectstore: bluestore osd_scenario: lvm (undercloud) [stack at undercloudv3 v3]$ And you ARE right. Thank you for helping to notice it, there is no my list of devices... those sdc sde sdd... clearing out and redeploying my OpenStack now. but node-info is always the last one. Maybe I should add it before and after, 2 times Just for fun? (added, I will see how it will go) By the way, just a small notice, but I believe that should not be a problem, that I have stack named v3, not overcloud... I believe it is ok, yes? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Mon Sep 21 16:33:54 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Mon, 21 Sep 2020 19:33:54 +0300 Subject: [tripleo][ansible-ceph[ussuri][rdo][centos8] fails on ansible-ceph execution. In-Reply-To: References: Message-ID: I have one thought. stack at undercloudv3 v3]$ cat /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml resource_registry: OS::TripleO::Services::CephMgr: ../../deployment/ceph-ansible/ceph-mgr.yaml OS::TripleO::Services::CephMon: ../../deployment/ceph-ansible/ceph-mon.yaml OS::TripleO::Services::CephOSD: ../../deployment/ceph-ansible/ceph-osd.yaml OS::TripleO::Services::CephClient: ../../deployment/ceph-ansible/ceph-client.yaml parameter_defaults: # Ensure that if user overrides CephAnsiblePlaybook via some env # file, we go back to default when they stop passing their env file. 
CephAnsiblePlaybook: ['default'] CinderEnableIscsiBackend: false CinderEnableRbdBackend: true CinderBackupBackend: ceph NovaEnableRbdBackend: true GlanceBackend: rbd ## Uncomment below if enabling legacy telemetry # GnocchiBackend: rbd [stack at undercloudv3 v3]$ And my deploy has: -e ${_THT}/environments/ceph-ansible/ceph-ansible.yaml \ -e ${_THT}/environments/ceph-ansible/ceph-rgw.yaml \ -e ${_THT}/environments/ceph-ansible/ceph-mds.yaml \ -e ${_THT}/environments/ceph-ansible/ceph-dashboard.yaml \ generally the same files, BUT, they are specified by user, and it "might feel like" the user overwrote default settings? Also I am thinking on the things you helped me tho find, John. And I recalled, what I have found strange. NFS part. That it was trying to configure CephNfs... Or it should even I do not have it specified? From the output [1] here is the small part of it: "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml", [1] https://proxy.qwq.lt/ceph-ansible.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Mon Sep 21 17:05:31 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Mon, 21 Sep 2020 20:05:31 +0300 Subject: [tripleo][ansible-ceph[ussuri][rdo][centos8] fails on ansible-ceph execution. In-Reply-To: References: Message-ID: Also another thing, cat ./ceph-ansible/group_vars/osds.yml looks that have not been modified over last re-deployments. delete'ing it again and removing config-download and everything from swift... I do not like it do not override everything... especially when launching deployment, when there is no stack (I mean in undercloud host, as overcloud nodes should be cleaned up by undercloud). Thank you, will keep updated. On Mon, 21 Sep 2020 at 19:33, Ruslanas Gžibovskis wrote: > I have one thought. > > stack at undercloudv3 v3]$ cat > /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml > > resource_registry: > OS::TripleO::Services::CephMgr: > ../../deployment/ceph-ansible/ceph-mgr.yaml > OS::TripleO::Services::CephMon: > ../../deployment/ceph-ansible/ceph-mon.yaml > OS::TripleO::Services::CephOSD: > ../../deployment/ceph-ansible/ceph-osd.yaml > OS::TripleO::Services::CephClient: > ../../deployment/ceph-ansible/ceph-client.yaml > > parameter_defaults: > # Ensure that if user overrides CephAnsiblePlaybook via some env > # file, we go back to default when they stop passing their env file. > CephAnsiblePlaybook: ['default'] > > CinderEnableIscsiBackend: false > CinderEnableRbdBackend: true > CinderBackupBackend: ceph > NovaEnableRbdBackend: true > GlanceBackend: rbd > ## Uncomment below if enabling legacy telemetry > # GnocchiBackend: rbd > [stack at undercloudv3 v3]$ > > > And my deploy has: > -e ${_THT}/environments/ceph-ansible/ceph-ansible.yaml \ > -e ${_THT}/environments/ceph-ansible/ceph-rgw.yaml \ > -e ${_THT}/environments/ceph-ansible/ceph-mds.yaml \ > -e ${_THT}/environments/ceph-ansible/ceph-dashboard.yaml \ > > generally the same files, BUT, they are specified by user, and it "might > feel like" the user overwrote default settings? > > Also I am thinking on the things you helped me tho find, John. And I > recalled, what I have found strange. NFS part. > That it was trying to configure CephNfs... 
Or it should even I do not have > it specified? From the output [1] here is the small part of it: > "statically imported: > /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml", > "statically imported: > /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml", > "statically imported: > /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml", > > > [1] https://proxy.qwq.lt/ceph-ansible.html > > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From johfulto at redhat.com Mon Sep 21 17:16:19 2020 From: johfulto at redhat.com (John Fulton) Date: Mon, 21 Sep 2020 13:16:19 -0400 Subject: [tripleo][ansible-ceph[ussuri][rdo][centos8] fails on ansible-ceph execution. In-Reply-To: References: Message-ID: On Mon, Sep 21, 2020 at 12:12 PM Ruslanas Gžibovskis wrote: > > it's in: ./external_deploy_steps_tasks.yaml > and: > (undercloud) [stack at undercloudv3 v3]$ cat ./ceph-ansible/group_vars/osds.yml > devices: > - /dev/vdb > osd_objectstore: bluestore > osd_scenario: lvm > (undercloud) [stack at undercloudv3 v3]$ > > And you ARE right. Thank you for helping to notice it, there is no my list of devices... those sdc sde sdd... > > clearing out and redeploying my OpenStack now. but node-info is always the last one. Maybe I should add it before and after, 2 times Just for fun? (added, I will see how it will go) Then for whatever reason in the series of overrides the default CephAnsibleDisksConfig devices list is getting used and not your overrides. I'm very confident the override order works correctly if the templates are in the right order. I recommend simplifying by removing templates and then adding in only what you need in iterative layers. You node overrides look complex. > By the way, just a small notice, but I believe that should not be a problem, that I have stack named v3, not overcloud... I believe it is ok, yes? Yes, you can call the stack whatever you like by using the --stack option. John > > > From johfulto at redhat.com Mon Sep 21 17:22:58 2020 From: johfulto at redhat.com (John Fulton) Date: Mon, 21 Sep 2020 13:22:58 -0400 Subject: [tripleo][ansible-ceph[ussuri][rdo][centos8] fails on ansible-ceph execution. In-Reply-To: References: Message-ID: On Mon, Sep 21, 2020 at 12:34 PM Ruslanas Gžibovskis wrote: > > I have one thought. > > stack at undercloudv3 v3]$ cat /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml > resource_registry: > OS::TripleO::Services::CephMgr: ../../deployment/ceph-ansible/ceph-mgr.yaml > OS::TripleO::Services::CephMon: ../../deployment/ceph-ansible/ceph-mon.yaml > OS::TripleO::Services::CephOSD: ../../deployment/ceph-ansible/ceph-osd.yaml > OS::TripleO::Services::CephClient: ../../deployment/ceph-ansible/ceph-client.yaml > > parameter_defaults: > # Ensure that if user overrides CephAnsiblePlaybook via some env > # file, we go back to default when they stop passing their env file. > CephAnsiblePlaybook: ['default'] > > CinderEnableIscsiBackend: false > CinderEnableRbdBackend: true > CinderBackupBackend: ceph > NovaEnableRbdBackend: true > GlanceBackend: rbd > ## Uncomment below if enabling legacy telemetry > # GnocchiBackend: rbd > [stack at undercloudv3 v3]$ > > > And my deploy has: > -e ${_THT}/environments/ceph-ansible/ceph-ansible.yaml \ > -e ${_THT}/environments/ceph-ansible/ceph-rgw.yaml \ > -e ${_THT}/environments/ceph-ansible/ceph-mds.yaml \ > -e ${_THT}/environments/ceph-ansible/ceph-dashboard.yaml \ The above is normal. 
Looks like you're using it as expected. > generally the same files, BUT, they are specified by user, and it "might feel like" the user overwrote default settings? I assume that ${_THT} refers to /usr/share/openstack-tripleo-heat-templates. I don't recommend editing the THT shipped with TripleO. If it has been modified then I recommend restoring it to the original from the RPM. > Also I am thinking on the things you helped me tho find, John. And I recalled, what I have found strange. NFS part. > That it was trying to configure CephNfs... Or it should even I do not have it specified? From the output [1] here is the small part of it: > "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml", > "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml", > "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml", Those roles will be used if you're also trying to configure manila: https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/deploy_manila.html#deploying-the-overcloud-with-the-internal-ceph-backend It will get the OSD running first however and that's failing the the vdb issue in your log below ([1]). John > > [1] https://proxy.qwq.lt/ceph-ansible.html > From johfulto at redhat.com Mon Sep 21 17:29:43 2020 From: johfulto at redhat.com (John Fulton) Date: Mon, 21 Sep 2020 13:29:43 -0400 Subject: [tripleo][ansible-ceph[ussuri][rdo][centos8] fails on ansible-ceph execution. In-Reply-To: References: Message-ID: On Mon, Sep 21, 2020 at 1:05 PM Ruslanas Gžibovskis wrote: > > Also another thing, cat ./ceph-ansible/group_vars/osds.yml > looks that have not been modified over last re-deployments. delete'ing it again and removing config-download and everything from swift... The tripleo-ansible role tripleo_ceph_work_dir will manage that directory for you (recreate it when needed to reflect what is in Heat). It is run when config-download is run. https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/roles/tripleo_ceph_work_dir > I do not like it do not override everything... especially when launching deployment, when there is no stack (I mean in undercloud host, as overcloud nodes should be cleaned up by undercloud). If there is no stack, the stack will be created when you deploy and config-download's directory of playbooks will also be recreated. You shouldn't need to worry about cleaning up the existing config-download directory. You can, but you don't have to. https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/ansible_config_download.html#ansible-project-directory John > Thank you, will keep updated. > > On Mon, 21 Sep 2020 at 19:33, Ruslanas Gžibovskis wrote: >> >> I have one thought. >> >> stack at undercloudv3 v3]$ cat /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml >> resource_registry: >> OS::TripleO::Services::CephMgr: ../../deployment/ceph-ansible/ceph-mgr.yaml >> OS::TripleO::Services::CephMon: ../../deployment/ceph-ansible/ceph-mon.yaml >> OS::TripleO::Services::CephOSD: ../../deployment/ceph-ansible/ceph-osd.yaml >> OS::TripleO::Services::CephClient: ../../deployment/ceph-ansible/ceph-client.yaml >> >> parameter_defaults: >> # Ensure that if user overrides CephAnsiblePlaybook via some env >> # file, we go back to default when they stop passing their env file. 
>> CephAnsiblePlaybook: ['default'] >> >> CinderEnableIscsiBackend: false >> CinderEnableRbdBackend: true >> CinderBackupBackend: ceph >> NovaEnableRbdBackend: true >> GlanceBackend: rbd >> ## Uncomment below if enabling legacy telemetry >> # GnocchiBackend: rbd >> [stack at undercloudv3 v3]$ >> >> >> And my deploy has: >> -e ${_THT}/environments/ceph-ansible/ceph-ansible.yaml \ >> -e ${_THT}/environments/ceph-ansible/ceph-rgw.yaml \ >> -e ${_THT}/environments/ceph-ansible/ceph-mds.yaml \ >> -e ${_THT}/environments/ceph-ansible/ceph-dashboard.yaml \ >> >> generally the same files, BUT, they are specified by user, and it "might feel like" the user overwrote default settings? >> >> Also I am thinking on the things you helped me tho find, John. And I recalled, what I have found strange. NFS part. >> That it was trying to configure CephNfs... Or it should even I do not have it specified? From the output [1] here is the small part of it: >> "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml", >> "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml", >> "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml", >> >> >> [1] https://proxy.qwq.lt/ceph-ansible.html >> > > > -- > Ruslanas Gžibovskis > +370 6030 7030 From johfulto at redhat.com Mon Sep 21 17:51:39 2020 From: johfulto at redhat.com (John Fulton) Date: Mon, 21 Sep 2020 13:51:39 -0400 Subject: [tripleo][ansible-ceph[ussuri][rdo][centos8] fails on ansible-ceph execution. In-Reply-To: References: Message-ID: Just wanted to share a few observations from your https://github.com/qw3r3wq/OSP-ussuri/blob/master/v3/node-info.yaml 1. Your mon_max_pg_per_osd should be closer to 100 or 200. You have it set at 4k: CephConfigOverrides: global: mon_max_pg_per_osd: 4096 Maybe you set this to workaround https://ceph.com/community/new-luminous-pg-overdose-protection/ but this is not a good way to do it for any production data. This check was added to avoid setting this value too high so working around it increases the chances you can have the problems the check was made to avoid. I assume this is just a test cluster (1 mon) but I wanted to let you know. 2. Replicas If you only have one OSD node you need to set "CephPoolDefaultSize: 1" (that should help you with the pg overdose issue too). 3. metrics pool If you're deploying with telemetry disabled then you don't need a metrics pool. 4. Backend overrides You shouldn't need GlanceBackend: rbd, GnocchiBackend: rbd, or NovaEnableRbdBackend: true as that gets set by default by using the ceph-ansible env file we've been talking about. 5. DistributedComputeHCICount role This role is meant to be used with distributed compute nodes which don't run in the same stack as the controller node. They are meant to be used as described in [1] I think the ComputeHCI node would be a better role to deploy in the same stack as the Controller. Not saying you can't do this but it doesn't look like you're using the role for what it was designed for so I at least wanted to point that out. [1] https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/distributed_multibackend_storage.html John On Mon, Sep 21, 2020 at 1:29 PM John Fulton wrote: > > On Mon, Sep 21, 2020 at 1:05 PM Ruslanas Gžibovskis wrote: > > > > Also another thing, cat ./ceph-ansible/group_vars/osds.yml > > looks that have not been modified over last re-deployments. delete'ing it again and removing config-download and everything from swift... 
> > The tripleo-ansible role tripleo_ceph_work_dir will manage that > directory for you (recreate it when needed to reflect what is in > Heat). It is run when config-download is run. > > https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/roles/tripleo_ceph_work_dir > > > I do not like it do not override everything... especially when launching deployment, when there is no stack (I mean in undercloud host, as overcloud nodes should be cleaned up by undercloud). > > If there is no stack, the stack will be created when you deploy and > config-download's directory of playbooks will also be recreated. You > shouldn't need to worry about cleaning up the existing config-download > directory. You can, but you don't have to. > > https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/ansible_config_download.html#ansible-project-directory > > John > > > Thank you, will keep updated. > > > > On Mon, 21 Sep 2020 at 19:33, Ruslanas Gžibovskis wrote: > >> > >> I have one thought. > >> > >> stack at undercloudv3 v3]$ cat /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml > >> resource_registry: > >> OS::TripleO::Services::CephMgr: ../../deployment/ceph-ansible/ceph-mgr.yaml > >> OS::TripleO::Services::CephMon: ../../deployment/ceph-ansible/ceph-mon.yaml > >> OS::TripleO::Services::CephOSD: ../../deployment/ceph-ansible/ceph-osd.yaml > >> OS::TripleO::Services::CephClient: ../../deployment/ceph-ansible/ceph-client.yaml > >> > >> parameter_defaults: > >> # Ensure that if user overrides CephAnsiblePlaybook via some env > >> # file, we go back to default when they stop passing their env file. > >> CephAnsiblePlaybook: ['default'] > >> > >> CinderEnableIscsiBackend: false > >> CinderEnableRbdBackend: true > >> CinderBackupBackend: ceph > >> NovaEnableRbdBackend: true > >> GlanceBackend: rbd > >> ## Uncomment below if enabling legacy telemetry > >> # GnocchiBackend: rbd > >> [stack at undercloudv3 v3]$ > >> > >> > >> And my deploy has: > >> -e ${_THT}/environments/ceph-ansible/ceph-ansible.yaml \ > >> -e ${_THT}/environments/ceph-ansible/ceph-rgw.yaml \ > >> -e ${_THT}/environments/ceph-ansible/ceph-mds.yaml \ > >> -e ${_THT}/environments/ceph-ansible/ceph-dashboard.yaml \ > >> > >> generally the same files, BUT, they are specified by user, and it "might feel like" the user overwrote default settings? > >> > >> Also I am thinking on the things you helped me tho find, John. And I recalled, what I have found strange. NFS part. > >> That it was trying to configure CephNfs... Or it should even I do not have it specified? From the output [1] here is the small part of it: > >> "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml", > >> "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml", > >> "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml", > >> > >> > >> [1] https://proxy.qwq.lt/ceph-ansible.html > >> > > > > > > -- > > Ruslanas Gžibovskis > > +370 6030 7030 From gr at ham.ie Mon Sep 21 17:53:17 2020 From: gr at ham.ie (Graham Hayes) Date: Mon, 21 Sep 2020 18:53:17 +0100 Subject: [tc][all] Wallaby Cycle Community Goals Message-ID: <40b8e159-826c-cc07-d0f8-a9b21532d544@ham.ie> Hi All It is that time of year / release again - and we need to choose the community goals for Wallaby. 
Myself and Nate looked over the list of goals [1][2][3], and we are
suggesting one of the following:

- Finish moving legacy python-*client CLIs to python-openstackclient
- Move from oslo.rootwrap to oslo.privsep
- Implement the API reference guide changes
- All APIs to provide a /healthcheck URL like Keystone (and others) provide

Some of these goals have champions signed up already, but we need to
make sure they are still available to do them. If you are interested in
helping drive any of the goals, please speak up!

We need to select goals in time for the new release cycle - so please
reply if there are goals you think should be included in this list, or
not included.

Next steps after this will be helping people write a proposed goal and
then the TC selecting the ones we will pursue during Wallaby.

Additionally, we have traditionally selected 2 goals per cycle - however
with the people available to do the work across projects, Nate and I
briefly discussed reducing that to one for this cycle.

What does the community think about this?

Thanks,

Graham

1 - https://etherpad.opendev.org/p/community-goals
2 - https://governance.openstack.org/tc/goals/proposed/index.html
3 - https://etherpad.opendev.org/p/community-w-series-goals
4 - https://governance.openstack.org/tc/goals/index.html#goal-selection-schedule

From ignaziocassano at gmail.com  Mon Sep 21 17:54:32 2020
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Mon, 21 Sep 2020 19:54:32 +0200
Subject: [openstack][cinder] volume_copy_bps_limit
Message-ID:

Hello stackers,

I am using the cinder NetApp NFS driver and I have VM performance problems
during the backup of volumes. My backup tool makes snapshots of volumes
and then, on the compute nodes, it uses virt-convert to create a volume
backup on another NFS backend using a different VLAN. But during the
virt-convert a lot of reads are done on the NetApp and my virtual machines
become very slow. I am asking if volume_copy_bps_limit can help me
mitigate this. Please, could anyone help me?

Ignazio
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From amy at demarco.com  Mon Sep 21 18:35:54 2020
From: amy at demarco.com (Amy Marrich)
Date: Mon, 21 Sep 2020 13:35:54 -0500
Subject: [Diversity] Community feedback on Divisive Language stance
Message-ID:

The OSF Diversity & Inclusion WG has been working on creating the OSF's
stance concerning divisive language. We will be holding one more meeting
before sending the stance to the OSF Board for any changes before bringing
it back to the Community. Our goal however is to get your input now to
reduce any concerns in the future!

Please check out Draft 4 on the etherpad[0] and place your comments there,
and join us on October 5th (meeting information will be sent out closer to
the meeting).

Thanks,

Amy (spotz)

0 - https://etherpad.opendev.org/p/divisivelanguage
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From victoria at vmartinezdelacruz.com  Mon Sep 21 19:01:01 2020
From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=)
Date: Mon, 21 Sep 2020 16:01:01 -0300
Subject: [manila][FFE] Request for "User messages panel"
Message-ID:

Hi,

I would like to ask for an FFE for the RFE "User messages panel" [0]

This feature adds support for manila's user messages API to the user
interface. It's implemented by this single patch [1] that already has
reviews.
Thanks, Victoria [0] https://blueprints.launchpad.net/manila-ui/+spec/ui-user-messages [1] https://review.opendev.org/#/c/742550/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Mon Sep 21 19:14:27 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 21 Sep 2020 15:14:27 -0400 Subject: [tc] weekly update Message-ID: Hi everyone, Here's an update for what happened in the OpenStack TC this week. You can get more information by checking for changes in openstack/governance repository. # Patches ## Open Reviews - Retire devstack-plugin-pika project https://review.opendev.org/748730 - Clarify impact on releases for SIGs https://review.opendev.org/752699 - Define TC-approved release in a resolution https://review.opendev.org/752256 - Add assert:supports-standalone https://review.opendev.org/722399 - Add election schedule exceptions in charter https://review.opendev.org/751941 - Remove tc:approved-release tag https://review.opendev.org/749363 ## Project Updates - Add openstack/osops to Ops Docs and Tooling SIG https://review.opendev.org/749835 - Add python-dracclient to be owned by Hardware Vendor SIG https://review.opendev.org/745564 - Retire the devstack-plugin-zmq project https://review.opendev.org/748731 - Migrate rpm-packaging to a SIG https://review.opendev.org/752661 ## General Changes - Reinstate weekly meetings https://review.opendev.org/749279 - Resolution to define distributed leadership for projects https://review.opendev.org/744995 - Add exception for Sept 2020 term election https://review.opendev.org/751936 # Other Reminders - PTG Brainstorming: https://etherpad.opendev.org/p/tc-wallaby-ptg Thanks for reading! Mohammed & Kendall -- Mohammed Naser VEXXHOST, Inc. From kennelson11 at gmail.com Mon Sep 21 22:16:49 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 21 Sep 2020 15:16:49 -0700 Subject: [all] Forum Schedule is now LIVE! Message-ID: Hello Everyone! The forum schedule is now live: https://www.openstack.org/summit/2020/summit-schedule/global-search?t=forum Please let myself or Jimmy McArthur know as soon as possible if you see any conflicts. Speakers will be contacted shortly (if not already) with more details about the event and your sessions. Thanks! -Kendall Nelson (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From bindbn at gmail.com Mon Sep 21 17:34:12 2020 From: bindbn at gmail.com (Alexander Usov) Date: Mon, 21 Sep 2020 20:34:12 +0300 Subject: [nova] update field cpu_info.features value Message-ID: Hi guys, Several servers in the cluster have an older version of Intel microcode. I updated one of the servers and rebooted it. 
After reboot: root at xxx3:~# cat /proc/cpuinfo | grep flags | tail -1 flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts flush_l1d root at xxx3:~# cat /proc/cpuinfo | grep micro | tail -1 microcode : 0xb000038 But the value on the controller didn't update: ~$ nova hypervisor-show 99 | grep ssbd xxx at admin:~$ I updated the value in the database, but it was immediately reverted If the way to update cpu_info.features value without deleting and adding the hypervisor? Nova version: 2:16.1.0 Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Sep 21 22:37:21 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 21 Sep 2020 17:37:21 -0500 Subject: [all][tc][goals] Migrate CI/CD jobs to new Ubuntu LTS Focal: Week R-4 Update In-Reply-To: <1749990229b.bfd8d46673892.9090899423267334607@ghanshyammann.com> References: <17468f8f785.1123a07e6307801.3844845549431858647@ghanshyammann.com> <1746feef403.c41c848411058.2421030943223377212@ghanshyammann.com> <1749990229b.bfd8d46673892.9090899423267334607@ghanshyammann.com> Message-ID: <174b2d0795a.b733bbbf111764.6640064513583537393@ghanshyammann.com> Updates: * Ceilometer fix is ready to merge with +A - https://review.opendev.org/#/c/752294/ * Barbican fix is not yet merged but I think we should move on and switch the integration testing to Focal and keeping the barbican based job keep running on Bionic. * If you have any of the job failing on Focal and can not be fixed quickly then set the bionic nodeset for those to avoid gate block. Examples are below: - https://review.opendev.org/#/c/743079/4/.zuul.yaml at 31 - https://review.opendev.org/#/c/743124/3/.zuul.d/base.yaml * I am planning to switch the devstck and tempest base job by tomorrow or the day after tomorrow, please take appropriate action in advance if your project is not tested or failing. Testing Status: =========== * ~270 repos gate have been tested green or fixed till now. ** https://review.opendev.org/#/q/topic:migrate-to-focal+(status:abandoned+OR+status:merged) * ~28 repos are failing. Need immediate action mentioned above. ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open * ~18repos fixes ready to merge: ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open+label%3AVerified%3E%3D1%2Czuul+NOT+label%3AWorkflow%3C%3D-1 -gmann > > -gmann > > > > > > > -gmann > > > > > > ---- On Mon, 07 Sep 2020 09:29:40 -0500 Ghanshyam Mann wrote ---- > > > Hello Everyone, > > > > > > Please find the week R-4 updates on 'Ubuntu Focal migration' community goal. Its time to force the base jobs migration which can > > > break the projects gate if not yet taken care of. Read below for the plan. 
> > > > > > Tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > > > Progress: > > > ======= > > > * We are close to V-3 release and this is time we have to complete this migration otherwise doing it in RC period can add > > > unnecessary and last min delay. I am going to plan this migration in two-part. This will surely break some projects gate > > > which is not yet finished the migration but we have to do at some time. Please let me know if any objection to the below > > > plan. > > > > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > > > > > ** I am going to open tox base jobs migration (doc, unit, functional, lower-constraints etc) to merge by tomorrow. which is this > > > series (all base patches of this): https://review.opendev.org/#/c/738328/ . > > > > > > **There are few repos still failing on requirements lower-constraints job specifically which I tried my best to fix as many as possible. > > > Many are ready to merge also. Please merge or work on your projects repo testing before that or fix on priority if failing. > > > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > > > > > * We have few open bugs for this which are not yet resolved, we will see how it goes but the current plan is to migrate by 10th Sept. > > > > > > ** Bug#1882521 > > > ** DB migration issues, > > > *** alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 > > > > > > > > > Testing Till now: > > > ============ > > > * ~200 repos gate have been tested or fixed till now. > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+(status:abandoned+OR+status:merged) > > > > > > * ~100 repos are under test and failing. Debugging and fixing are in progress (If you would like to help, please check your > > > project repos if I am late to fix them): > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open > > > > > > * ~30repos fixes ready to merge: > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open+label%3AVerified%3E%3D1%2Czuul+NOT+label%3AWorkflow%3C%3D-1 > > > > > > > > > Bugs Report: > > > ========== > > > > > > 1. Bug#1882521. (IN-PROGRESS) > > > There is open bug for nova/cinder where three tempest tests are failing for > > > volume detach operation. There is no clear root cause found yet > > > -https://bugs.launchpad.net/cinder/+bug/1882521 > > > We have skipped the tests in tempest base patch to proceed with the other > > > projects testing but this is blocking things for the migration. > > > > > > 2. DB migration issues (IN-PROGRESS) > > > * alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 > > > > > > 3. We encountered the nodeset name conflict with x/tobiko. (FIXED) > > > nodeset conflict is resolved now and devstack provides all focal nodes now. > > > > > > 4. Bug#1886296. (IN-PROGRESS) > > > pyflakes till 2.1.0 is not compatible with python 3.8 which is the default python version > > > on ubuntu focal[1]. With pep8 job running on focal faces the issue and fail. We need to bump > > > the pyflakes to 2.1.1 as min version to run pep8 jobs on py3.8. > > > As of now, many projects are using old hacking version so I am explicitly adding pyflakes>=2.1.1 > > > on the project side[2] but for the long term easy maintenance, I am doing it in 'hacking' requirements.txt[3] > > > nd will release a new hacking version. 
After that project can move to new hacking and do not need > > > to maintain pyflakes version compatibility. > > > > > > 5. Bug#1886298. (IN-PROGRESS) > > > 'Markupsafe' 1.0 is not compatible with the latest version of setuptools[4], > > > We need to bump the lower-constraint for Markupsafe to 1.1.1 to make it work. > > > There are a few more issues[5] with lower-constraint jobs which I am debugging. > > > > > > > > > What work to be done on the project side: > > > ================================ > > > This goal is more of testing the jobs on focal and fixing bugs if any otherwise > > > migrate jobs by switching the nodeset to focal node sets defined in devstack. > > > > > > 1. Start a patch in your repo by making depends-on on either of below: > > > devstack base patch if you are using only devstack base jobs not tempest: > > > > > > Depends-on: https://review.opendev.org/#/c/731207/ > > > OR > > > tempest base patch if you are using the tempest base job (like devstack-tempest): > > > Depends-on: https://review.opendev.org/#/c/734700/ > > > > > > Both have depends-on on the series where I am moving unit/functional/doc/cover/nodejs tox jobs to focal. So > > > you can test the complete gate jobs(unit/functional/doc/integration) together. > > > This and its base patches - https://review.opendev.org/#/c/738328/ > > > > > > Example: https://review.opendev.org/#/c/738126/ > > > > > > 2. If none of your project jobs override the nodeset then above patch will be > > > testing patch(do not merge) otherwise change the nodeset to focal. > > > Example: https://review.opendev.org/#/c/737370/ > > > > > > 3. If the jobs are defined in branchless repo and override the nodeset then you need to override the branches > > > variant to adjust the nodeset so that those jobs run on Focal on victoria onwards only. If no nodeset > > > is overridden then devstack being branched and stable base job using bionic/xenial will take care of > > > this. > > > Example: https://review.opendev.org/#/c/744056/2 > > > > > > 4. If no updates need you can abandon the testing patch (https://review.opendev.org/#/c/744341/). If it need > > > updates then modify the same patch with proper commit msg, once it pass the gate then remove the Depends-On > > > so that you can merge your patch before base jobs are switched to focal. This way we make sure no gate downtime in > > > this migration. > > > Example: https://review.opendev.org/#/c/744056/1..2//COMMIT_MSG > > > > > > Once we finish the testing on projects side and no failure then we will merge the devstack and tempest > > > base patches. > > > > > > > > > Important things to note: > > > =================== > > > * Do not forgot to add the story and task link to your patch so that we can track it smoothly. > > > * Use gerrit topic 'migrate-to-focal' > > > * Do not backport any of the patches. 
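For the project-side steps quoted above, a rough local pre-check can catch the Python 3.8 problems before pushing a Depends-On test patch. This is only a sketch and assumes the repository defines the usual OpenStack tox environments (pep8, py38, lower-constraints); the Depends-On patches described in the email remain the authoritative test:

    # Run on a Python 3.8 host such as Ubuntu Focal (an assumption, not the official workflow).
    python3 --version            # Focal ships Python 3.8 by default
    tox -e pep8                  # pyflakes older than 2.1.1 breaks here on py3.8 (Bug#1886296)
    tox -e py38                  # unit tests under Python 3.8
    tox -e lower-constraints     # surfaces minimum-version breaks like Markupsafe 1.0 (Bug#1886298)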
> > > > > > > > > References: > > > ========= > > > Goal doc: https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal.html > > > Storyboard tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > > > [1] https://github.com/PyCQA/pyflakes/issues/367 > > > [2] https://review.opendev.org/#/c/739315/ > > > [3] https://review.opendev.org/#/c/739334/ > > > [4] https://github.com/pallets/markupsafe/issues/116 > > > [5] https://zuul.opendev.org/t/openstack/build/7ecd9cf100194bc99b3b70fa1e6de032 > > > > > > -gmann > > > > > > > > > > From rui.zang at yandex.com Tue Sep 22 03:34:37 2020 From: rui.zang at yandex.com (rui zang) Date: Tue, 22 Sep 2020 11:34:37 +0800 Subject: [nova] update field cpu_info.features value In-Reply-To: References: Message-ID: <184941600745673@mail.yandex.com> An HTML attachment was scrubbed... URL: From imtiaz.chowdhury at workday.com Tue Sep 22 04:11:18 2020 From: imtiaz.chowdhury at workday.com (Imtiaz Chowdhury) Date: Tue, 22 Sep 2020 04:11:18 +0000 Subject: [ops][neutron]: Anyone using Calico with neutron in production? Message-ID: <67EC9A38-094C-4DC5-BA87-20C39E25A35D@workdayinternal.com> Hi all, I have some Calico related questions and am wondering if any operators in this forum have experience in running Calico with OpenStack in production environment. Thanks and regards, Imtiaz -------------- next part -------------- An HTML attachment was scrubbed... URL: From Istvan.Szabo at agoda.com Tue Sep 22 04:35:54 2020 From: Istvan.Szabo at agoda.com (Szabo, Istvan (Agoda)) Date: Tue, 22 Sep 2020 04:35:54 +0000 Subject: Can't delete a vm Message-ID: <1600749354619.71543@agoda.com> Hello, I have vm which is stuck in the build phase: openstack server list --all-projects --long --limit -1 | grep 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 | 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 | hk-qaciapp-2020 | BUILD | scheduling | NOSTATE | | CentOS-7-x86_64-1511 (14.11.2017) | 2caaef01-e2f9-4e93-961a-c2b2d4a6be82 | QA_VLAN62 | None I've tried to delete with openstack server delete, nova delete force, restarted the nova services on all management nodes, restarted the nova compute service where it was originally spawned but still visible. I see in the database in couple of places either the id and either the hostname, like in instance mapping table, instances table, nova_cell0 database ... I have an idea how to delete, so I'd spawn just a vm and check which tables are created the entries, and I would go through all tables with the problematic one and delete 1 by 1 but this will takes me hours .... Any faster way you might suggest me please? Thank you ________________________________ This message is confidential and is for the sole use of the intended recipient(s). It may also be privileged or otherwise protected by copyright or other legal rules. If you have received it by mistake please let us know by reply email and delete it from your system. It is prohibited to copy this message or disclose its content to anyone. Any confidentiality or privilege is not waived or lost by any mistaken delivery or unauthorized disclosure of the message. All messages sent to and from Agoda may be monitored to ensure compliance with company policies, to protect the company's interests and to remove potential malware. Electronic messages may be intercepted, amended, lost or deleted, or contain viruses. 
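Before hand-editing rows for the stuck instance above, it helps to first confirm which cell the record actually lives in; an instance stuck in BUILD/scheduling that never reached a host is normally mapped to cell0. The queries below are a read-only sketch only -- the database names, credentials and columns are assumptions based on a default nova_api/nova_cell0 layout, so adjust them to your deployment before acting on the result:

    UUID=94ab55d3-3e39-48d3-a9b2-505838ceb6e7

    # Which cell does the API database map the instance to?
    mysql nova_api -e "SELECT instance_uuid, cell_id, project_id
                       FROM instance_mappings WHERE instance_uuid='$UUID';"

    # Translate that cell_id into an actual database.
    mysql nova_api -e "SELECT id, name, database_connection FROM cell_mappings;"

    # An instance that failed scheduling usually exists only in cell0.
    mysql nova_cell0 -e "SELECT uuid, vm_state, task_state, deleted
                         FROM instances WHERE uuid='$UUID';"

With that answer in hand, the scoping check suggested later in this thread is worth ruling out before any manual row surgery.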
From ignaziocassano at gmail.com Tue Sep 22 05:21:23 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 22 Sep 2020 07:21:23 +0200 Subject: Can't delete a vm In-Reply-To: <1600749354619.71543@agoda.com> References: <1600749354619.71543@agoda.com> Message-ID: Hello, before feeling instance, try to use nova reset-state to change the state of in instance to available. Then try to remove. Ignazio Il Mar 22 Set 2020, 06:43 Szabo, Istvan (Agoda) ha scritto: > Hello, > > > I have vm which is stuck in the build phase: > > > openstack server list --all-projects --long --limit -1 | grep > 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > | 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 | > hk-qaciapp-2020 | BUILD | scheduling | NOSTATE > | | CentOS-7-x86_64-1511 > (14.11.2017) | 2caaef01-e2f9-4e93-961a-c2b2d4a6be82 | QA_VLAN62 > | None > > > I've tried to delete with openstack server delete, nova delete force, > restarted the nova services on all management nodes, restarted the nova > compute service where it was originally spawned but still visible. > > > I see in the database in couple of places either the id and either the > hostname, like in instance mapping table, instances table, nova_cell0 > database ... > > > I have an idea how to delete, so I'd spawn just a vm and check which > tables are created the entries, and I would go through all tables with the > problematic one and delete 1 by 1 but this will takes me hours .... > > Any faster way you might suggest me please? > > > Thank you > > > ________________________________ > This message is confidential and is for the sole use of the intended > recipient(s). It may also be privileged or otherwise protected by copyright > or other legal rules. If you have received it by mistake please let us know > by reply email and delete it from your system. It is prohibited to copy > this message or disclose its content to anyone. Any confidentiality or > privilege is not waived or lost by any mistaken delivery or unauthorized > disclosure of the message. All messages sent to and from Agoda may be > monitored to ensure compliance with company policies, to protect the > company's interests and to remove potential malware. Electronic messages > may be intercepted, amended, lost or deleted, or contain viruses. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Istvan.Szabo at agoda.com Tue Sep 22 05:32:00 2020 From: Istvan.Szabo at agoda.com (Szabo, Istvan (Agoda)) Date: Tue, 22 Sep 2020 05:32:00 +0000 Subject: Can't delete a vm In-Reply-To: References: <1600749354619.71543@agoda.com>, Message-ID: <1600752720008.34304@agoda.com> The problem is that in opestack you can't find this vm if I want to delete: nova reset-state 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 --active Reset state for server 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 failed: No server with a name or ID of '94ab55d3-3e39-48d3-a9b2-505838ceb6e7' exists. ERROR (CommandError): Unable to reset the state for the specified server(s). openstack server delete 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 No server with a name or ID of '94ab55d3-3e39-48d3-a9b2-505838ceb6e7' exists. nova force-delete 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 ERROR (CommandError): No server with a name or ID of '94ab55d3-3e39-48d3-a9b2-505838ceb6e7' exists. openstack server delete hk-qaciapp-2020 No server with a name or ID of 'hk-qaciapp-2020' exists. 
But if I list it like this can see: openstack server list --all-projects --long --limit -1 | grep 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 | 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 | hk-qaciapp-2020 | BUILD | scheduling | NOSTATE | | CentOS-7-x86_64-1511 (14.11.2017) | 2caaef01-e2f9-4e93-961a-c2b2d4a6be82 | QA_VLAN62 | None | :/ It was created with the api before, maybe should try to delete with the API? Not sure :/ ________________________________ From: Ignazio Cassano Sent: Tuesday, September 22, 2020 12:21 PM To: Szabo, Istvan (Agoda) Cc: openstack-discuss Subject: Re: Can't delete a vm Email received from outside the company. If in doubt don't click links nor open attachments! ________________________________ Hello, before feeling instance, try to use nova reset-state to change the state of in instance to available. Then try to remove. Ignazio Il Mar 22 Set 2020, 06:43 Szabo, Istvan (Agoda) > ha scritto: Hello, I have vm which is stuck in the build phase: openstack server list --all-projects --long --limit -1 | grep 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 | 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 | hk-qaciapp-2020 | BUILD | scheduling | NOSTATE | | CentOS-7-x86_64-1511 (14.11.2017) | 2caaef01-e2f9-4e93-961a-c2b2d4a6be82 | QA_VLAN62 | None I've tried to delete with openstack server delete, nova delete force, restarted the nova services on all management nodes, restarted the nova compute service where it was originally spawned but still visible. I see in the database in couple of places either the id and either the hostname, like in instance mapping table, instances table, nova_cell0 database ... I have an idea how to delete, so I'd spawn just a vm and check which tables are created the entries, and I would go through all tables with the problematic one and delete 1 by 1 but this will takes me hours .... Any faster way you might suggest me please? Thank you ________________________________ This message is confidential and is for the sole use of the intended recipient(s). It may also be privileged or otherwise protected by copyright or other legal rules. If you have received it by mistake please let us know by reply email and delete it from your system. It is prohibited to copy this message or disclose its content to anyone. Any confidentiality or privilege is not waived or lost by any mistaken delivery or unauthorized disclosure of the message. All messages sent to and from Agoda may be monitored to ensure compliance with company policies, to protect the company's interests and to remove potential malware. Electronic messages may be intercepted, amended, lost or deleted, or contain viruses. From ignaziocassano at gmail.com Tue Sep 22 05:49:26 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 22 Sep 2020 07:49:26 +0200 Subject: Can't delete a vm In-Reply-To: <1600752720008.34304@agoda.com> References: <1600749354619.71543@agoda.com> <1600752720008.34304@agoda.com> Message-ID: Probably you need to see the instances table on nova db. Ignazio Il Mar 22 Set 2020, 07:32 Szabo, Istvan (Agoda) ha scritto: > The problem is that in opestack you can't find this vm if I want to delete: > > > nova reset-state 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 --active > Reset state for server 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 failed: No > server with a name or ID of '94ab55d3-3e39-48d3-a9b2-505838ceb6e7' exists. > ERROR (CommandError): Unable to reset the state for the specified > server(s). 
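One cheap thing to rule out before going to the database (it comes up again later in the thread): openstackclient acts in whatever project the session is scoped to, and if the environment points at a different project or stale credentials, the --all-projects listing and the per-server lookup can behave differently. A hedged sketch -- <owning-project> is a placeholder for whichever project the instance belongs to:

    # What is the client actually scoped to?
    env | grep -E '^OS_(CLOUD|PROJECT|TENANT|USERNAME)'

    # Retry the delete explicitly scoped to the owning project (needs the admin role there).
    openstack --os-project-name <owning-project> server delete 94ab55d3-3e39-48d3-a9b2-505838ceb6e7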
> > > openstack server delete 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > No server with a name or ID of '94ab55d3-3e39-48d3-a9b2-505838ceb6e7' > exists. > > > nova force-delete 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > ERROR (CommandError): No server with a name or ID of > '94ab55d3-3e39-48d3-a9b2-505838ceb6e7' exists. > > > openstack server delete hk-qaciapp-2020 > No server with a name or ID of 'hk-qaciapp-2020' exists. > > > But if I list it like this can see: > > openstack server list --all-projects --long --limit -1 | grep > 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > | 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 | hk-qaciapp-2020 > | BUILD | scheduling | NOSTATE | > | CentOS-7-x86_64-1511 (14.11.2017) | > 2caaef01-e2f9-4e93-961a-c2b2d4a6be82 | QA_VLAN62 | None | > > > :/ > > > It was created with the api before, maybe should try to delete with the > API? Not sure :/ > > ________________________________ > From: Ignazio Cassano > Sent: Tuesday, September 22, 2020 12:21 PM > To: Szabo, Istvan (Agoda) > Cc: openstack-discuss > Subject: Re: Can't delete a vm > > Email received from outside the company. If in doubt don't click links nor > open attachments! > ________________________________ > Hello, before feeling instance, try to use nova reset-state to change the > state of in instance to available. > Then try to remove. > Ignazio > > Il Mar 22 Set 2020, 06:43 Szabo, Istvan (Agoda) > ha scritto: > Hello, > > > I have vm which is stuck in the build phase: > > > openstack server list --all-projects --long --limit -1 | grep > 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > | 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 | > hk-qaciapp-2020 | BUILD | scheduling | NOSTATE > | | CentOS-7-x86_64-1511 > (14.11.2017) | 2caaef01-e2f9-4e93-961a-c2b2d4a6be82 | QA_VLAN62 > | None > > > I've tried to delete with openstack server delete, nova delete force, > restarted the nova services on all management nodes, restarted the nova > compute service where it was originally spawned but still visible. > > > I see in the database in couple of places either the id and either the > hostname, like in instance mapping table, instances table, nova_cell0 > database ... > > > I have an idea how to delete, so I'd spawn just a vm and check which > tables are created the entries, and I would go through all tables with the > problematic one and delete 1 by 1 but this will takes me hours .... > > Any faster way you might suggest me please? > > > Thank you > > > ________________________________ > This message is confidential and is for the sole use of the intended > recipient(s). It may also be privileged or otherwise protected by copyright > or other legal rules. If you have received it by mistake please let us know > by reply email and delete it from your system. It is prohibited to copy > this message or disclose its content to anyone. Any confidentiality or > privilege is not waived or lost by any mistaken delivery or unauthorized > disclosure of the message. All messages sent to and from Agoda may be > monitored to ensure compliance with company policies, to protect the > company's interests and to remove potential malware. Electronic messages > may be intercepted, amended, lost or deleted, or contain viruses. > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gouthampravi at gmail.com Tue Sep 22 06:05:21 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Mon, 21 Sep 2020 23:05:21 -0700 Subject: [manila][FFE] Request for "User messages panel" In-Reply-To: References: Message-ID: Hello Victoria, all, I discussed this with the reviewers - it's unorthodox to have a feature change like this merge before RC, but there have been several contributing factors here, including my personal confusion regarding the feature freeze deadline for the UI plugin, given we switched its release model very recently (https://review.opendev.org/#/c/746197/); and further because Victoria Martinez and the reviewers were coordinating a common user experience with the corresponding cinder feature in horizon ( https://review.opendev.org/#/c/734161/), and getting reviews from horizon cores. This change has been tested to the team's satisfaction; and the risk here is having the newly introduced UI panel fields lack translations because we're in a hard string freeze. We thoroughly value the translators' time and energy - we have had ~16% of this project translated in the past releases and these translations are valuable to us. However, I would like to approve this request for two reasons - no existing translatable strings have been modified, and this change implements something that end users have been waiting for, completing the self-service piece of consuming actionable error messages. Thanks, Goutham On Mon, Sep 21, 2020 at 12:10 PM Victoria Martínez de la Cruz < victoria at vmartinezdelacruz.com> wrote: > Hi, > > I would like to ask for an FFE for the RFE "User messages panel" [0] > > This feature adds support for manila's user messages API to the user > interface. > > It's implemented by this single patch [1] that already has reviews. > > Thanks, > > Victoria > > [0] https://blueprints.launchpad.net/manila-ui/+spec/ui-user-messages > [1] https://review.opendev.org/#/c/742550/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Tue Sep 22 07:12:22 2020 From: eblock at nde.ag (Eugen Block) Date: Tue, 22 Sep 2020 07:12:22 +0000 Subject: Can't delete a vm In-Reply-To: <1600752720008.34304@agoda.com> References: <1600749354619.71543@agoda.com> <1600752720008.34304@agoda.com> Message-ID: <20200922071222.Horde.JlEFVaEaikzlnKQ1zmchtFz@webmail.nde.ag> It seems to be that you're in the wrong project. If you see the vm with '--all-projects' but can't reset its state then I would recommend to check your environment variables (OS_PROJECT_ID etc.), see if there's some mismatch. Zitat von "Szabo, Istvan (Agoda)" : > The problem is that in opestack you can't find this vm if I want to delete: > > > nova reset-state 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 --active > Reset state for server 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 failed: > No server with a name or ID of > '94ab55d3-3e39-48d3-a9b2-505838ceb6e7' exists. > ERROR (CommandError): Unable to reset the state for the specified server(s). > > > openstack server delete 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > No server with a name or ID of '94ab55d3-3e39-48d3-a9b2-505838ceb6e7' exists. > > > nova force-delete 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > ERROR (CommandError): No server with a name or ID of > '94ab55d3-3e39-48d3-a9b2-505838ceb6e7' exists. > > > openstack server delete hk-qaciapp-2020 > No server with a name or ID of 'hk-qaciapp-2020' exists. 
> > > But if I list it like this can see: > > openstack server list --all-projects --long --limit -1 | grep > 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > | 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 | hk-qaciapp-2020 > | BUILD | scheduling | NOSTATE | > | CentOS-7-x86_64-1511 (14.11.2017) | > 2caaef01-e2f9-4e93-961a-c2b2d4a6be82 | QA_VLAN62 | None > | > > > :/ > > > It was created with the api before, maybe should try to delete with > the API? Not sure :/ > > ________________________________ > From: Ignazio Cassano > Sent: Tuesday, September 22, 2020 12:21 PM > To: Szabo, Istvan (Agoda) > Cc: openstack-discuss > Subject: Re: Can't delete a vm > > Email received from outside the company. If in doubt don't click > links nor open attachments! > ________________________________ > Hello, before feeling instance, try to use nova reset-state to > change the state of in instance to available. > Then try to remove. > Ignazio > > Il Mar 22 Set 2020, 06:43 Szabo, Istvan (Agoda) > > ha scritto: > Hello, > > > I have vm which is stuck in the build phase: > > > openstack server list --all-projects --long --limit -1 | grep > 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > | 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > | hk-qaciapp-2020 | BUILD | scheduling | > NOSTATE | | > CentOS-7-x86_64-1511 (14.11.2017) | > 2caaef01-e2f9-4e93-961a-c2b2d4a6be82 | QA_VLAN62 | None > > > I've tried to delete with openstack server delete, nova delete > force, restarted the nova services on all management nodes, > restarted the nova compute service where it was originally spawned > but still visible. > > > I see in the database in couple of places either the id and either > the hostname, like in instance mapping table, instances table, > nova_cell0 database ... > > > I have an idea how to delete, so I'd spawn just a vm and check which > tables are created the entries, and I would go through all tables > with the problematic one and delete 1 by 1 but this will takes me > hours .... > > Any faster way you might suggest me please? > > > Thank you > > > ________________________________ > This message is confidential and is for the sole use of the intended > recipient(s). It may also be privileged or otherwise protected by > copyright or other legal rules. If you have received it by mistake > please let us know by reply email and delete it from your system. It > is prohibited to copy this message or disclose its content to > anyone. Any confidentiality or privilege is not waived or lost by > any mistaken delivery or unauthorized disclosure of the message. All > messages sent to and from Agoda may be monitored to ensure > compliance with company policies, to protect the company's interests > and to remove potential malware. Electronic messages may be > intercepted, amended, lost or deleted, or contain viruses. From Istvan.Szabo at agoda.com Tue Sep 22 07:33:40 2020 From: Istvan.Szabo at agoda.com (Szabo, Istvan (Agoda)) Date: Tue, 22 Sep 2020 07:33:40 +0000 Subject: Can't delete a vm In-Reply-To: <20200922071222.Horde.JlEFVaEaikzlnKQ1zmchtFz@webmail.nde.ag> References: <1600749354619.71543@agoda.com> <1600752720008.34304@agoda.com>, <20200922071222.Horde.JlEFVaEaikzlnKQ1zmchtFz@webmail.nde.ag> Message-ID: <1600760020649.72616@agoda.com> Yeah but I guess as an admin can delete. ________________________________________ From: Eugen Block Sent: Tuesday, September 22, 2020 2:12 PM To: openstack-discuss at lists.openstack.org Subject: Re: Can't delete a vm Email received from outside the company. If in doubt don't click links nor open attachments! 
________________________________ It seems to be that you're in the wrong project. If you see the vm with '--all-projects' but can't reset its state then I would recommend to check your environment variables (OS_PROJECT_ID etc.), see if there's some mismatch. Zitat von "Szabo, Istvan (Agoda)" : > The problem is that in opestack you can't find this vm if I want to delete: > > > nova reset-state 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 --active > Reset state for server 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 failed: > No server with a name or ID of > '94ab55d3-3e39-48d3-a9b2-505838ceb6e7' exists. > ERROR (CommandError): Unable to reset the state for the specified server(s). > > > openstack server delete 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > No server with a name or ID of '94ab55d3-3e39-48d3-a9b2-505838ceb6e7' exists. > > > nova force-delete 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > ERROR (CommandError): No server with a name or ID of > '94ab55d3-3e39-48d3-a9b2-505838ceb6e7' exists. > > > openstack server delete hk-qaciapp-2020 > No server with a name or ID of 'hk-qaciapp-2020' exists. > > > But if I list it like this can see: > > openstack server list --all-projects --long --limit -1 | grep > 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > | 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 | hk-qaciapp-2020 > | BUILD | scheduling | NOSTATE | > | CentOS-7-x86_64-1511 (14.11.2017) | > 2caaef01-e2f9-4e93-961a-c2b2d4a6be82 | QA_VLAN62 | None > | > > > :/ > > > It was created with the api before, maybe should try to delete with > the API? Not sure :/ > > ________________________________ > From: Ignazio Cassano > Sent: Tuesday, September 22, 2020 12:21 PM > To: Szabo, Istvan (Agoda) > Cc: openstack-discuss > Subject: Re: Can't delete a vm > > Email received from outside the company. If in doubt don't click > links nor open attachments! > ________________________________ > Hello, before feeling instance, try to use nova reset-state to > change the state of in instance to available. > Then try to remove. > Ignazio > > Il Mar 22 Set 2020, 06:43 Szabo, Istvan (Agoda) > > ha scritto: > Hello, > > > I have vm which is stuck in the build phase: > > > openstack server list --all-projects --long --limit -1 | grep > 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > | 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > | hk-qaciapp-2020 | BUILD | scheduling | > NOSTATE | | > CentOS-7-x86_64-1511 (14.11.2017) | > 2caaef01-e2f9-4e93-961a-c2b2d4a6be82 | QA_VLAN62 | None > > > I've tried to delete with openstack server delete, nova delete > force, restarted the nova services on all management nodes, > restarted the nova compute service where it was originally spawned > but still visible. > > > I see in the database in couple of places either the id and either > the hostname, like in instance mapping table, instances table, > nova_cell0 database ... > > > I have an idea how to delete, so I'd spawn just a vm and check which > tables are created the entries, and I would go through all tables > with the problematic one and delete 1 by 1 but this will takes me > hours .... > > Any faster way you might suggest me please? > > > Thank you > > > ________________________________ > This message is confidential and is for the sole use of the intended > recipient(s). It may also be privileged or otherwise protected by > copyright or other legal rules. If you have received it by mistake > please let us know by reply email and delete it from your system. It > is prohibited to copy this message or disclose its content to > anyone. 
Any confidentiality or privilege is not waived or lost by > any mistaken delivery or unauthorized disclosure of the message. All > messages sent to and from Agoda may be monitored to ensure > compliance with company policies, to protect the company's interests > and to remove potential malware. Electronic messages may be > intercepted, amended, lost or deleted, or contain viruses. ________________________________ This message is confidential and is for the sole use of the intended recipient(s). It may also be privileged or otherwise protected by copyright or other legal rules. If you have received it by mistake please let us know by reply email and delete it from your system. It is prohibited to copy this message or disclose its content to anyone. Any confidentiality or privilege is not waived or lost by any mistaken delivery or unauthorized disclosure of the message. All messages sent to and from Agoda may be monitored to ensure compliance with company policies, to protect the company's interests and to remove potential malware. Electronic messages may be intercepted, amended, lost or deleted, or contain viruses. From ralonsoh at redhat.com Tue Sep 22 07:49:27 2020 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Tue, 22 Sep 2020 09:49:27 +0200 Subject: [Neutron][QoS] Meeting cancelled today Message-ID: Hello: Due to the lack of agenda, the Neutron QoS meeting will be cancelled today (22/9/2020). Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Tue Sep 22 12:36:10 2020 From: geguileo at redhat.com (Gorka Eguileor) Date: Tue, 22 Sep 2020 14:36:10 +0200 Subject: [openstack][cinder] volume_copy_bps_limit In-Reply-To: References: Message-ID: <20200922123610.3tdbntoe7dtcor4e@localhost> On 21/09, Ignazio Cassano wrote: > Hello stackers, > I am using cinder netapp nfs driver and I have vm performances problems > during the backup of volumes. > My backup tool make snapshots of volumes, then, on compute notes, it use > virt-convert for creating a volume backup on another nfs backend using a > different vlan. > But during the virt-convert a lot of read are done on netapp and my virtual > machines become very slow. > I am asking if volume_copy_bps_limit can help me to mitigate. > > Please, anyone could help me ? > Ignazio Hi, The volume_copy_bps_limit is a Cinder configuration parameter, and it is used only when Cinder does the copying, which doesn't seem to be your case. In your case it looks like the backup tool is accessing directly the compute hosts, so the throttling configuration should be done in the backup tool. If the backup tool doesn't support throttling, you can try using cgroups... Cheers, Gorka. From ignaziocassano at gmail.com Tue Sep 22 12:56:58 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 22 Sep 2020 14:56:58 +0200 Subject: [openstack][cinder] volume_copy_bps_limit In-Reply-To: <20200922123610.3tdbntoe7dtcor4e@localhost> References: <20200922123610.3tdbntoe7dtcor4e@localhost> Message-ID: Thanks, I'll check it out. Il giorno mar 22 set 2020 alle ore 14:36 Gorka Eguileor ha scritto: > On 21/09, Ignazio Cassano wrote: > > Hello stackers, > > I am using cinder netapp nfs driver and I have vm performances problems > > during the backup of volumes. > > My backup tool make snapshots of volumes, then, on compute notes, it use > > virt-convert for creating a volume backup on another nfs backend using a > > different vlan. 
> > But during the virt-convert a lot of read are done on netapp and my > virtual > > machines become very slow. > > I am asking if volume_copy_bps_limit can help me to mitigate. > > > > Please, anyone could help me ? > > Ignazio > > Hi, > > The volume_copy_bps_limit is a Cinder configuration parameter, and it is > used only when Cinder does the copying, which doesn't seem to be your > case. > > In your case it looks like the backup tool is accessing directly the > compute hosts, so the throttling configuration should be done in the > backup tool. > > If the backup tool doesn't support throttling, you can try using > cgroups... > > Cheers, > Gorka. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliver.weinmann at me.com Tue Sep 22 13:09:27 2020 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Tue, 22 Sep 2020 13:09:27 -0000 Subject: =?utf-8?B?UmU6wqBVc3N1cmkgQ2VudE9TIDggYWRkIG1wdHNhcyBkcml2ZXIgdG8gaW50?= =?utf-8?B?cm9zcGVjdGlvbiBpbml0cmFtZnM=?= Message-ID: <050e91ec-b07c-42fa-bf74-166d10e7fea1@me.com> Hi, thanks a lot. That did the trick. Now it loads the module automatically. I even added another driver and it works like a charm. :) Best Regards, Oliver Am 15. September 2020 um 0:19 schrieb Donny Davis : On Fri, Sep 11, 2020 at 3:25 PM Oliver Weinmann wrote: Hi, I already asked this question on serverfault. But I guess here is a better place. I have a very ancient hardware with a MPTSAS controller. I use this for TripleO deployment testing. With the release of Ussuri which is running CentOS8, I can no longer provision my overcloud nodes as the MPTSAS driver has been removed in CentOS8: https://www.reddit.com/r/CentOS/comments/d93unk/centos8_and_removal_mpt2sas_dell_sas_drivers/ I managed to include the driver provided from ELrepo in the introspection image but It is not loaded automatically: All commands are run as user "stack". Extract the introspection image: cd ~ mkdir imagesnew cd imagesnew tar xvf ../ironic-python-agent.tar mkdir ~/ipa-tmp cd ~/ipa-tmp /usr/lib/dracut/skipcpio ~/imagesnew/ironic-python-agent.initramfs | zcat | cpio -ivd | pax -r Extract the contents of the mptsas driver rpm: rpm2cpio ~/kmod-mptsas-3.04.20-3.el8_2.elrepo.x86_64.rpm | pax -r Put the kernel module in the right places. To figure out where the module has to reside I installed the rpm on a already deployed node and used find to locate it. xz -c ./usr/lib/modules/4.18.0-193.el8.x86_64/extra/mptsas/mptsas.ko > ./usr/lib/modules/4.18.0-193.6.3.el8_2.x86_64/kernel/drivers/message/fusion/mptsas.ko.xz mkdir ./usr/lib/modules/4.18.0-193.6.3.el8_2.x86_64/weak-updates/mptsas sudo ln -sf /lib/modules/4.18.0-193.el8.x86_64/extra/mptsas/mptsas.ko lib/modules/4.18.0-193.6.3.el8_2.x86_64/weak-updates/mptsas.ko sudo chown root . -R find . 
2>/dev/null | sudo cpio --quiet -c -o | gzip -8  > ~/images/ironic-python-agent.initramfs Upload the new image cd ~/images openstack overcloud image upload --update-existing --image-path /home/stack/images/ Now when I start the introspection and ssh into the host I see no disks: [root at localhost ~]# fdisk -l [root at localhost ~]# lsmod | grep mptsas Once i manually load the driver, I can see the disks: [root at localhost ~]# modprobe mptsas [root at localhost ~]# lsmod | grep mptsas mptsas                 69632  0 mptscsih               45056  1 mptsas mptbase                98304  2 mptsas,mptscsih scsi_transport_sas     45056  1 mptsas [root at localhost ~]# fdisk -l Disk /dev/sda: 67.1 GiB, 71999422464 bytes, 140623872 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes But how can I make it so that it will automatically load on boot? Best Regards, Oliver I guess you could try using modules-load to load the module at boot.  > sudo ln -sf /lib/modules/4.18.0-193.el8.x86_64/extra/mptsas/mptsas.ko lib/modules/4.18.0-193.6.3.el8_2.x86_64/weak-updates/mptsas.ko echo "mptsas" > ./etc/modules-load.d/mptsas.conf > sudo chown root . -R Also I would have a look see at these docs to build an image using ipa builder  https://docs.openstack.org/ironic-python-agent-builder/latest/ -- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. Duty First" -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Tue Sep 22 14:40:08 2020 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 22 Sep 2020 10:40:08 -0400 Subject: [all][elections][tc] Stepping down from the TC In-Reply-To: <20200918152933.oz2vfzupbbzvqn7k@firewall> References: <20200918152933.oz2vfzupbbzvqn7k@firewall> Message-ID: On 18/09/20 11:29 am, Nate Johnston wrote: > I also want to take this chance to tell my story a bit, in hopes that it will > encourage others to participate more with the TC. A year ago when I joined the > TC I did not have a clear idea what to expect. I had observed a few TC meetings > and brought one issue to the TC's attention, but since I did not have background > on the workstreams in progress, there was a lot that I did not understand or > could not contextualize. So what I did was observe, gathering an understanding > of the issues and initiatives and raising my hand to participate when I felt > like my efforts could make a difference. I was pleasantly surprised how many > times I was able to raise my hand and work on things like community goals or > proposals like distributed project leadership. The fact that I have not been > around since the beginning - my first significant code contributions were merged > in the Mitaka cycle - and I did not already know all the names and histories did > not matter much. What mattered was a willingness to actively engage, to > participate in thoughtful discernment, and when the opportunity presented itself > to put in the work. I feel like I made a difference. > > And if you don't feel the calling to join the TC, that is fine too. Be a part > of the process - join the meetings, discuss the issues that cut across projects, > and have your voice heard. If you are a part of creting or using OpenStack then > you are a part of the TC's constituency and the meetings are to serve you. You > don't have to be a member of the TC to participate in the process. Great advice! Thanks Nate for stepping up and helping. 
cheers, Zane. From zbitter at redhat.com Tue Sep 22 14:42:12 2020 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 22 Sep 2020 10:42:12 -0400 Subject: [all][elections][tc] Stepping down from the TC In-Reply-To: <951e58cd-d87a-4063-845b-db47198b4fda@www.fastmail.com> References: <951e58cd-d87a-4063-845b-db47198b4fda@www.fastmail.com> Message-ID: On 18/09/20 6:05 am, Jean-Philippe Evrard wrote: > Hello everyone, > > This is probably not a surprise for most of you, but I think it's worth writing it down: I won't be a candidate for another term at the TC. It was a pleasure to work with all of you. > > I am not leaving because I don't find the TC interesting anymore, quite the opposite. I will probably still lurk and follow the channels and ML. I just have switched to (yet) another duty at my employer, keeping me away from OpenStack. Next to this, I believe it's good to have some fresh members in the TC, it's been a while I am part of this family now :) > > For those interested by running the TC: Don't hesitate to run! We need fresh ideas and motivated people. It's not by having always the same people at the helm that OpenStack will naturally or drastically evolve. If you want to change OpenStack, be the change! > > Thanks to all of you. It was nice. Thanks for all your work JP! It was a pleasure serving with you. Hopefully we'll run into each other at some future event, in a future where events are a thing again. cheers, Zane. From mdulko at redhat.com Tue Sep 22 14:45:35 2020 From: mdulko at redhat.com (=?UTF-8?Q?Micha=C5=82?= Dulko) Date: Tue, 22 Sep 2020 16:45:35 +0200 Subject: [Kuryr] Proposing Roman Dobosz for kuryr-kubernetes core Message-ID: <2b71e3bd55187b6df7794dbc03d73d1604eeaba5.camel@redhat.com> Hello, I'd like to propose Roman for the core reviewer role in kuryr- kubernetes. Roman was leading several successful development activities during Train and Ussuri cycles: * Switch to use openstacksdk as our OpenStack API client. * Support for IPv6. * Moving VIF data from pod annotations to a KuryrPort CR. He also demonstrated code reviewing skills of an experienced Python developer. In the absence of objections, I'll proceed with adding Roman to the core team next week. Thanks, Michał From mdulko at redhat.com Tue Sep 22 15:07:06 2020 From: mdulko at redhat.com (=?UTF-8?Q?Micha=C5=82?= Dulko) Date: Tue, 22 Sep 2020 17:07:06 +0200 Subject: [Kuryr] Stepping down as PTL Message-ID: Hi, Given that it's considered healthy to have rotation in the PTL role, after serving 2.5 cycle as Kuryr PTL, I won't be a candidating in Wallaby cycle. I'm not going anywhere and I will continue to work on Kuryr projects. Thanks, Michał From dmellado at redhat.com Tue Sep 22 15:07:10 2020 From: dmellado at redhat.com (Daniel Mellado) Date: Tue, 22 Sep 2020 17:07:10 +0200 Subject: [Kuryr] Proposing Roman Dobosz for kuryr-kubernetes core In-Reply-To: <2b71e3bd55187b6df7794dbc03d73d1604eeaba5.camel@redhat.com> References: <2b71e3bd55187b6df7794dbc03d73d1604eeaba5.camel@redhat.com> Message-ID: +1 from my side, given he's also able to provide beer ;) On 22/9/20 16:45, Michał Dulko wrote: > Hello, > > I'd like to propose Roman for the core reviewer role in kuryr- > kubernetes. Roman was leading several successful development activities > during Train and Ussuri cycles: > > * Switch to use openstacksdk as our OpenStack API client. > * Support for IPv6. > * Moving VIF data from pod annotations to a KuryrPort CR. > > He also demonstrated code reviewing skills of an experienced Python > developer. 
> > In the absence of objections, I'll proceed with adding Roman to the > core team next week. > > Thanks, > Michał > > From pierre at stackhpc.com Tue Sep 22 15:31:21 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 22 Sep 2020 17:31:21 +0200 Subject: [cloudkitty][election][ptl] PTL non-candidacy Message-ID: Hello, Late in the Victoria cycle, I volunteered to help with the then inactive CloudKitty project, which resulted in becoming its PTL. While I plan to continue contributing to CloudKitty, I will have very limited availability during the beginning of the Wallaby cycle. In particular, I may not even be able to join the PTG. Thus it would be best if someone else ran for CloudKitty PTL this cycle. If you are interested in nominating yourself but aren't sure what is involved, don't hesitate to reach out to me by email or IRC. Thanks, Pierre Riteau (priteau) From ltomasbo at redhat.com Tue Sep 22 15:38:58 2020 From: ltomasbo at redhat.com (Luis Tomas Bolivar) Date: Tue, 22 Sep 2020 17:38:58 +0200 Subject: [Kuryr] Proposing Roman Dobosz for kuryr-kubernetes core In-Reply-To: References: <2b71e3bd55187b6df7794dbc03d73d1604eeaba5.camel@redhat.com> Message-ID: +1 from my side, well deserved! On Tue, Sep 22, 2020 at 5:15 PM Daniel Mellado wrote: > +1 from my side, given he's also able to provide beer ;) > > > On 22/9/20 16:45, Michał Dulko wrote: > > Hello, > > > > I'd like to propose Roman for the core reviewer role in kuryr- > > kubernetes. Roman was leading several successful development activities > > during Train and Ussuri cycles: > > > > * Switch to use openstacksdk as our OpenStack API client. > > * Support for IPv6. > > * Moving VIF data from pod annotations to a KuryrPort CR. > > > > He also demonstrated code reviewing skills of an experienced Python > > developer. > > > > In the absence of objections, I'll proceed with adding Roman to the > > core team next week. > > > > Thanks, > > Michał > > > > > > > -- LUIS TOMÁS BOLÍVAR Senior Software Engineer Red Hat Madrid, Spain ltomasbo at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdemaced at redhat.com Tue Sep 22 15:54:09 2020 From: mdemaced at redhat.com (Maysa De Macedo Souza) Date: Tue, 22 Sep 2020 17:54:09 +0200 Subject: [Kuryr] Proposing Roman Dobosz for kuryr-kubernetes core In-Reply-To: References: <2b71e3bd55187b6df7794dbc03d73d1604eeaba5.camel@redhat.com> Message-ID: +1 from my side. Happy to have him on board! On Tue, Sep 22, 2020 at 5:44 PM Luis Tomas Bolivar wrote: > +1 from my side, well deserved! > > On Tue, Sep 22, 2020 at 5:15 PM Daniel Mellado > wrote: > >> +1 from my side, given he's also able to provide beer ;) >> >> >> On 22/9/20 16:45, Michał Dulko wrote: >> > Hello, >> > >> > I'd like to propose Roman for the core reviewer role in kuryr- >> > kubernetes. Roman was leading several successful development activities >> > during Train and Ussuri cycles: >> > >> > * Switch to use openstacksdk as our OpenStack API client. >> > * Support for IPv6. >> > * Moving VIF data from pod annotations to a KuryrPort CR. >> > >> > He also demonstrated code reviewing skills of an experienced Python >> > developer. >> > >> > In the absence of objections, I'll proceed with adding Roman to the >> > core team next week. >> > >> > Thanks, >> > Michał >> > >> > >> >> >> > > -- > LUIS TOMÁS BOLÍVAR > Senior Software Engineer > Red Hat > Madrid, Spain > ltomasbo at redhat.com > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliver.weinmann at icloud.com Tue Sep 22 13:04:26 2020 From: oliver.weinmann at icloud.com (Oliver Weinmann) Date: Tue, 22 Sep 2020 13:04:26 -0000 Subject: =?utf-8?B?UmU6wqBVc3N1cmkgQ2VudE9TIDggYWRkIG1wdHNhcyBkcml2ZXIgdG8gaW50?= =?utf-8?B?cm9zcGVjdGlvbiBpbml0cmFtZnM=?= Message-ID: Hi, thanks a lot. That did the trick. Now it loads the module automatically. I even added another driver and it works like a charm. :) Best Regards, Oliver Am 15. September 2020 um 0:19 schrieb Donny Davis : On Fri, Sep 11, 2020 at 3:25 PM Oliver Weinmann wrote: Hi, I already asked this question on serverfault. But I guess here is a better place. I have a very ancient hardware with a MPTSAS controller. I use this for TripleO deployment testing. With the release of Ussuri which is running CentOS8, I can no longer provision my overcloud nodes as the MPTSAS driver has been removed in CentOS8: https://www.reddit.com/r/CentOS/comments/d93unk/centos8_and_removal_mpt2sas_dell_sas_drivers/ I managed to include the driver provided from ELrepo in the introspection image but It is not loaded automatically: All commands are run as user "stack". Extract the introspection image: cd ~ mkdir imagesnew cd imagesnew tar xvf ../ironic-python-agent.tar mkdir ~/ipa-tmp cd ~/ipa-tmp /usr/lib/dracut/skipcpio ~/imagesnew/ironic-python-agent.initramfs | zcat | cpio -ivd | pax -r Extract the contents of the mptsas driver rpm: rpm2cpio ~/kmod-mptsas-3.04.20-3.el8_2.elrepo.x86_64.rpm | pax -r Put the kernel module in the right places. To figure out where the module has to reside I installed the rpm on a already deployed node and used find to locate it. xz -c ./usr/lib/modules/4.18.0-193.el8.x86_64/extra/mptsas/mptsas.ko > ./usr/lib/modules/4.18.0-193.6.3.el8_2.x86_64/kernel/drivers/message/fusion/mptsas.ko.xz mkdir ./usr/lib/modules/4.18.0-193.6.3.el8_2.x86_64/weak-updates/mptsas sudo ln -sf /lib/modules/4.18.0-193.el8.x86_64/extra/mptsas/mptsas.ko lib/modules/4.18.0-193.6.3.el8_2.x86_64/weak-updates/mptsas.ko sudo chown root . -R find . 2>/dev/null | sudo cpio --quiet -c -o | gzip -8  > ~/images/ironic-python-agent.initramfs Upload the new image cd ~/images openstack overcloud image upload --update-existing --image-path /home/stack/images/ Now when I start the introspection and ssh into the host I see no disks: [root at localhost ~]# fdisk -l [root at localhost ~]# lsmod | grep mptsas Once i manually load the driver, I can see the disks: [root at localhost ~]# modprobe mptsas [root at localhost ~]# lsmod | grep mptsas mptsas                 69632  0 mptscsih               45056  1 mptsas mptbase                98304  2 mptsas,mptscsih scsi_transport_sas     45056  1 mptsas [root at localhost ~]# fdisk -l Disk /dev/sda: 67.1 GiB, 71999422464 bytes, 140623872 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes But how can I make it so that it will automatically load on boot? Best Regards, Oliver I guess you could try using modules-load to load the module at boot.  > sudo ln -sf /lib/modules/4.18.0-193.el8.x86_64/extra/mptsas/mptsas.ko lib/modules/4.18.0-193.6.3.el8_2.x86_64/weak-updates/mptsas.ko echo "mptsas" > ./etc/modules-load.d/mptsas.conf > sudo chown root . -R Also I would have a look see at these docs to build an image using ipa builder  https://docs.openstack.org/ironic-python-agent-builder/latest/ -- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. 
Duty First" -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyliu0592 at hotmail.com Tue Sep 22 18:57:46 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Tue, 22 Sep 2020 18:57:46 +0000 Subject: [Neutron] Not create .2 port In-Reply-To: <20200918074904.GB701072@p1> References: <20200918074904.GB701072@p1> Message-ID: I create a subnet with --no-dhcp, the .2 address is not allocated, but a port is still created without any address. Is this expected? Since DHCP is disabled, what's this port for? Thanks! Tony > -----Original Message----- > From: Slawek Kaplonski > Sent: Friday, September 18, 2020 12:49 AM > To: Tony Liu > Cc: openstack-discuss at lists.openstack.org > Subject: Re: [Neutron] Not create .2 port > > Hi, > > On Fri, Sep 18, 2020 at 03:40:54AM +0000, Tony Liu wrote: > > Hi, > > > > When create a subnet, by default, the first address is the gateway and > > Neutron also allocates an address for serving DHCP and DNS. Is there > > any way to NOT create such port when creating subnet? > > You can specify "--gateway None" if You don't want to have gateway > configured in Your subnet. > And for dhcp ports, You can set "--no-dhcp" for subnet so it will not > create dhcp ports in such subnet also. > > > > > > > Thanks! > > Tony > > > > > > -- > Slawek Kaplonski > Senior software engineer > Red Hat From openstack at nemebean.com Tue Sep 22 19:13:34 2020 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 22 Sep 2020 14:13:34 -0500 Subject: [all][elections][tc] Stepping down from the TC In-Reply-To: <951e58cd-d87a-4063-845b-db47198b4fda@www.fastmail.com> References: <951e58cd-d87a-4063-845b-db47198b4fda@www.fastmail.com> Message-ID: <22ceda9f-88e1-26c9-1999-7224312b3576@nemebean.com> Everyone else has pretty much covered all of the general well-wishes, so I wanted to share a quick anecdote from my time working with JP. As many of you know, I've gone through some tough times in my personal life over the past year or so. During one of the worst parts of that, JP pinged me claiming he wanted me to review something. First, he asked how I was doing and we had a chat about that. He never did ask for a review. Now, maybe he simply forgot. ;-) But I choose to believe he was just looking for an excuse to check in on me because that's the kind of person he is. It's been a pleasure, JP, and I hope our paths cross again in the future. -Ben On 9/18/20 5:05 AM, Jean-Philippe Evrard wrote: > Hello everyone, > > This is probably not a surprise for most of you, but I think it's worth writing it down: I won't be a candidate for another term at the TC. It was a pleasure to work with all of you. > > I am not leaving because I don't find the TC interesting anymore, quite the opposite. I will probably still lurk and follow the channels and ML. I just have switched to (yet) another duty at my employer, keeping me away from OpenStack. Next to this, I believe it's good to have some fresh members in the TC, it's been a while I am part of this family now :) > > For those interested by running the TC: Don't hesitate to run! We need fresh ideas and motivated people. It's not by having always the same people at the helm that OpenStack will naturally or drastically evolve. If you want to change OpenStack, be the change! > > Thanks to all of you. It was nice. 
> > Regards, > Jean-Philippe Evrard (evrardjp) > From openstack at nemebean.com Tue Sep 22 19:26:39 2020 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 22 Sep 2020 14:26:39 -0500 Subject: [oslo] PTL Non-candidacy: I mean it this time :-P Message-ID: <8a380a4a-a6f1-f67b-9262-9f6f5f3761f8@nemebean.com> I'm eminently familiar with midwest goodbyes, but an entire cycle to actually leave is taking it a bit far, don't you think? :-) Seriously though, this is two cycles in a row that we've had to scramble late in a cycle to finish basic community goals, mostly because of my lack of attention. Now that we've all sort of adapted to the whole global pandemic thing I'm hoping my abdication will stick this time. I'll still be around and try to help out when I can, but it's past time for me to turn the leadership of Oslo over to someone else. We've got a great team and I'm sure any one of them will do a great job! Thanks. -Ben From amy at demarco.com Tue Sep 22 19:35:58 2020 From: amy at demarco.com (Amy Marrich) Date: Tue, 22 Sep 2020 14:35:58 -0500 Subject: [oslo] PTL Non-candidacy: I mean it this time :-P In-Reply-To: <8a380a4a-a6f1-f67b-9262-9f6f5f3761f8@nemebean.com> References: <8a380a4a-a6f1-f67b-9262-9f6f5f3761f8@nemebean.com> Message-ID: But Ben!!! But more seriously thanks for sticking around and leading Oslo with your plate already full with things. AMy (spotz) On Tue, Sep 22, 2020 at 2:29 PM Ben Nemec wrote: > I'm eminently familiar with midwest goodbyes, but an entire cycle to > actually leave is taking it a bit far, don't you think? :-) > > Seriously though, this is two cycles in a row that we've had to scramble > late in a cycle to finish basic community goals, mostly because of my > lack of attention. Now that we've all sort of adapted to the whole > global pandemic thing I'm hoping my abdication will stick this time. > > I'll still be around and try to help out when I can, but it's past time > for me to turn the leadership of Oslo over to someone else. We've got a > great team and I'm sure any one of them will do a great job! > > Thanks. > > -Ben > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Sep 22 23:45:02 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 22 Sep 2020 23:45:02 +0000 Subject: [all][elections][ptl][tc] Combined PTL/TC Nominations Kickoff Message-ID: <20200922234501.doi5xyjtj2dwx7sz@yuggoth.org> Nominations for OpenStack PTLs (Project Team Leads) and TC (Technical Committee) positions (4 positions) are now open and will remain open until Sep 29, 2020 23:45 UTC. All nominations must be submitted as a text file to the openstack/election repository as explained at https://governance.openstack.org/election/#how-to-submit-a-candidacy Please make sure to follow the candidacy file naming convention: candidates/wallaby// (for example, "candidates/wallaby/TC/stacker at example.org"). The name of the file should match an email address for your current OpenStack Foundation Individual Membership. Take this opportunity to ensure that your OSF member profile contains current information: https://www.openstack.org/profile/ Any OpenStack Foundation Individual Member can propose their candidacy for an available, directly-elected seat on the Technical Committee. In order to be an eligible candidate for PTL you must be an OpenStack Foundation Individual Member. 
PTL candidates must also have contributed to the corresponding team during the Ussuri to Victoria timeframe, Sep 27, 2019 00:00 UTC - Sep 29, 2020 00:00 UTC. Your Gerrit account must also have a verified email address matching the one used in your candidacy filename. Both PTL and TC elections will be held from Oct 06, 2020 23:45 UTC through to Oct 13, 2020 23:45 UTC. The electorate for the TC election are the OpenStack Foundation Individual Members who have a code contribution to one of the official teams over the Ussuri to Victoria timeframe, Sep 27, 2019 00:00 UTC - Sep 29, 2020 00:00 UTC, as well as any Extra ATCs who are acknowledged by the TC. The electorate for a PTL election are the OpenStack Foundation Individual Members who have a code contribution over the Ussuri to Victoria timeframe, Sep 27, 2019 00:00 UTC - Sep 29, 2020 00:00 UTC, in a deliverable repository maintained by the team which the PTL would lead, as well as the Extra ATCs who are acknowledged by the TC for that specific team. The list of project teams can be found at https://governance.openstack.org/tc/reference/projects/ and their individual team pages include lists of corresponding Extra ATCs. Please find below the timeline: nomination starts @ Sep 22, 2020 23:45 UTC nomination ends @ Sep 29, 2020 23:45 UTC campaigning starts @ Sep 29, 2020 23:45 UTC campaigning ends @ Oct 06, 2020 23:45 UTC elections start @ Oct 06, 2020 23:45 UTC elections end @ Oct 13, 2020 23:45 UTC Shortly after election officials approve candidates, they will be listed on the https://governance.openstack.org/election/ page. The electorate is requested to confirm their email addresses in Gerrit prior to 2020-09-29 00:00:00+00:00, so that the emailed ballots are sent to the correct email address. This email address should match one which was provided in your foundation member profile as well. Gerrit account information and OSF member profiles can be updated at https://review.openstack.org/#/settings/contact and https://www.openstack.org/profile/ accordingly. If you have any questions please be sure to either ask them on the mailing list or to the elections officials: https://governance.openstack.org/election/#election-officials -- Jeremy Stanley on behalf of the OpenStack Technical Election Officials -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From tetsuro.nakamura.bc at hco.ntt.co.jp Wed Sep 23 02:05:19 2020 From: tetsuro.nakamura.bc at hco.ntt.co.jp (Tetsuro Nakamura) Date: Wed, 23 Sep 2020 11:05:19 +0900 Subject: [elections][placement][blazar] Placement PTL Non-candidacy: Stepping down Message-ID: Hello everyone, Due to my current responsibilities, I'm not able to keep up with my duties either as a Placement PTL, core reviewer, or as a Blazar core reviewer in Wallaby cycle. Thank you so much to everyone that has supported. I won't be able to checking ML or IRC, but I'll still be checking my emails. Please ping me via email if you need help. Thanks. 
- Tetsuro -- Tetsuro Nakamura NTT Network Service Systems Laboratories TEL:0422 59 6914(National)/+81 422 59 6914(International) 3-9-11, Midori-Cho Musashino-Shi, Tokyo 180-8585 Japan From tkajinam at redhat.com Wed Sep 23 04:10:51 2020 From: tkajinam at redhat.com (Takashi Kajinami) Date: Wed, 23 Sep 2020 13:10:51 +0900 Subject: [storlets] Wallaby PTG planning In-Reply-To: References: Message-ID: Hi, According to the planning etherpad, we expect members only from APAC so far, so I'll request changing our time slots to earlier time, UTC 6:00-8:00, regarding timezone of attendance. However feel free to reach out to me (in private or in this ml) if you are interested in joining the session and have concerns with that change. Thank you Takashi On Wed, Sep 16, 2020 at 8:47 AM Takashi Kajinami wrote: > Hello, > > > After discussion in IRC I booked a slot for Storlets project in the > Wallaby vPTG. > I created a planning etherpad[1] so please put your name/nick if you are > interested to join , > and also put any topics you want to discuss there. > [1] https://etherpad.opendev.org/p/storlets-ptg-wallaby > > Currently we have a slot from 13:00 UTC booked, but if we don't see > attendance > from EMEA/NA, we might reschedule it to "earlier" slots since current > active > cores are based in APAC. > > Please let me know if you have any questions. > > Thank you > Takashi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnasiadka at gmail.com Wed Sep 23 07:18:10 2020 From: mnasiadka at gmail.com (=?UTF-8?Q?Micha=C5=82_Nasiadka?=) Date: Wed, 23 Sep 2020 09:18:10 +0200 Subject: [kolla] Ussuri -source images now use stable branches tarballs Message-ID: Hello, Since [1] was merged - Kolla source type images are using stable branch tarballs for OpenStack projects - compared to latest point releases - as previously. There are some projects though, that kolla-build uses point releases for those, the list can be found in version-check tool source [2]. Users that want to build source images using latest point releases - can run tools/version-check.py with --versioned-releases option (-v) - and it will adapt sources list accordingly. [1]: https://review.opendev.org/#/c/750297/ [2]: https://opendev.org/openstack/kolla/src/branch/master/tools/version-check.py#L54 -- Michał Nasiadka mnasiadka at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Wed Sep 23 07:56:32 2020 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 23 Sep 2020 08:56:32 +0100 Subject: [kolla] Kolla klub meeting Message-ID: Hi, Tomorrow (Thursday) sees the return of the Kolla Klub! As usual, the meeting will be at 15:00 UTC. Lets kick off with the following topics: * Quick upstream update * Improving engagement with the community in non EU/US timezones * Operator feedback Look forward to seeing you there. https://docs.google.com/document/d/1EwQs2GXF-EvJZamEx9vQAOSDB5tCjsDCJyHQN5_4_Sw/edit# Thanks, Mark From hberaud at redhat.com Wed Sep 23 08:08:06 2020 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 23 Sep 2020 10:08:06 +0200 Subject: [oslo] PTL Non-candidacy: I mean it this time :-P In-Reply-To: References: <8a380a4a-a6f1-f67b-9262-9f6f5f3761f8@nemebean.com> Message-ID: Thanks for your commitment, your dedication and for all the things you have done in this team. Le mar. 22 sept. 2020 à 21:39, Amy Marrich a écrit : > But Ben!!! But more seriously thanks for sticking around and leading Oslo > with your plate already full with things. 
> > AMy (spotz) > > On Tue, Sep 22, 2020 at 2:29 PM Ben Nemec wrote: > >> I'm eminently familiar with midwest goodbyes, but an entire cycle to >> actually leave is taking it a bit far, don't you think? :-) >> >> Seriously though, this is two cycles in a row that we've had to scramble >> late in a cycle to finish basic community goals, mostly because of my >> lack of attention. Now that we've all sort of adapted to the whole >> global pandemic thing I'm hoping my abdication will stick this time. >> >> I'll still be around and try to help out when I can, but it's past time >> for me to turn the leadership of Oslo over to someone else. We've got a >> great team and I'm sure any one of them will do a great job! >> >> Thanks. >> >> -Ben >> >> -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmellado at redhat.com Wed Sep 23 08:52:56 2020 From: dmellado at redhat.com (Daniel Mellado) Date: Wed, 23 Sep 2020 10:52:56 +0200 Subject: [Kuryr] Stepping down as PTL In-Reply-To: References: Message-ID: <46586170-bf0d-fc40-10d5-92bb125d1904@redhat.com> Thanks for all your hard work, Michał! Welcome to the former PTLs VIP club ;) On 22/9/20 17:07, Michał Dulko wrote: > Hi, > > Given that it's considered healthy to have rotation in the PTL role, > after serving 2.5 cycle as Kuryr PTL, I won't be a candidating in > Wallaby cycle. > > I'm not going anywhere and I will continue to work on Kuryr projects. > > Thanks, > Michał > > From pierre at stackhpc.com Wed Sep 23 08:55:59 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Wed, 23 Sep 2020 10:55:59 +0200 Subject: [elections][placement][blazar] Placement PTL Non-candidacy: Stepping down In-Reply-To: References: Message-ID: Hi Tetsuro, Thank you so much for all the work that you've done on Blazar (and of course also on Placement). Your contributions to integrate Blazar with Placement provided major improvements to the project. And your code reviews were always helpful, you didn't hesitate to say no when a patch wasn't quite ready. I am sad that we won't get to work together anymore. I wish you good luck for the future. Cheers, Pierre On Wed, 23 Sep 2020 at 04:07, Tetsuro Nakamura wrote: > > Hello everyone, > > Due to my current responsibilities, > I'm not able to keep up with my duties > either as a Placement PTL, core reviewer, > or as a Blazar core reviewer in Wallaby cycle. > > Thank you so much to everyone that has supported. > > I won't be able to checking ML or IRC, > but I'll still be checking my emails. > Please ping me via email if you need help. > > Thanks. 
> > - Tetsuro > > -- > Tetsuro Nakamura > NTT Network Service Systems Laboratories > TEL:0422 59 6914(National)/+81 422 59 6914(International) > 3-9-11, Midori-Cho Musashino-Shi, Tokyo 180-8585 Japan > > From thierry at openstack.org Wed Sep 23 09:20:40 2020 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 23 Sep 2020 11:20:40 +0200 Subject: [largescale-sig] Next meeting: September 23, 8utc In-Reply-To: <220d05cf-b9f8-50db-187a-8934f24bd772@openstack.org> References: <220d05cf-b9f8-50db-187a-8934f24bd772@openstack.org> Message-ID: <0f02bdd1-64d0-aa78-a81b-66131e302a97@openstack.org> During our meeting today we discussed creating a new workstream on Meaningful monitoring, status updates on the other workstreams, and how to structure our upcoming Forum session (see etherpad at https://etherpad.opendev.org/p/w-forum-scaling-stories). Meeting logs at: http://eavesdrop.openstack.org/meetings/large_scale_sig/2020/large_scale_sig.2020-09-23-08.00.html TODOs: - ttx to draft a plan to tackle "meaningful monitoring" as a new SIG workstream - all to describe briefly how you solved metrics/billing in your deployment in https://etherpad.openstack.org/p/large-scale-sig-documentation - masahito to push latest patches to oslo.metrics Next meeting: Oct 7, 16:00UTC (#openstack-meeting-3) -- Thierry Carrez (ttx) From balazs.gibizer at est.tech Wed Sep 23 11:22:56 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Wed, 23 Sep 2020 13:22:56 +0200 Subject: [election][nova] PTL Candidacy for Wallaby Message-ID: <8MZ3HQ.156NWU1JICTX@est.tech> Hi, I would like to continue serving as a Nova PTL in the Wallaby cycle. In Victoria I helped keeping Nova alive and kicking: * The team merged 56% (9/16) of the blueprints we approved for Victoria. I think the renewed runway process helped making that achievement happen. * We was also managed to reduce Nova's non triaged bug backlog from more than 100 to less than 10. * We had a productive virtual PTG and the organization of the second PTG is already underway. I intend to continue what I have started. Cheers, gibi From massimo.sgaravatto at gmail.com Wed Sep 23 12:59:28 2020 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Wed, 23 Sep 2020 14:59:28 +0200 Subject: Can't delete a vm In-Reply-To: <1600752720008.34304@agoda.com> References: <1600749354619.71543@agoda.com> <1600752720008.34304@agoda.com> Message-ID: >From time to time (in particular where there are problems with the Rabbit cluster) I also see this problem. In such cases I "mark" the instance as deleted in the database. I don't know if there are cleaner solutions Cheers, Massimo [*] update instances set deleted=1 where uuid=""; On Tue, Sep 22, 2020 at 7:35 AM Szabo, Istvan (Agoda) < Istvan.Szabo at agoda.com> wrote: > The problem is that in opestack you can't find this vm if I want to delete: > > > nova reset-state 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 --active > Reset state for server 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 failed: No > server with a name or ID of '94ab55d3-3e39-48d3-a9b2-505838ceb6e7' exists. > ERROR (CommandError): Unable to reset the state for the specified > server(s). > > > openstack server delete 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > No server with a name or ID of '94ab55d3-3e39-48d3-a9b2-505838ceb6e7' > exists. > > > nova force-delete 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > ERROR (CommandError): No server with a name or ID of > '94ab55d3-3e39-48d3-a9b2-505838ceb6e7' exists. 
> > > openstack server delete hk-qaciapp-2020 > No server with a name or ID of 'hk-qaciapp-2020' exists. > > > But if I list it like this can see: > > openstack server list --all-projects --long --limit -1 | grep > 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > | 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 | hk-qaciapp-2020 > | BUILD | scheduling | NOSTATE | > | CentOS-7-x86_64-1511 (14.11.2017) | > 2caaef01-e2f9-4e93-961a-c2b2d4a6be82 | QA_VLAN62 | None | > > > :/ > > > It was created with the api before, maybe should try to delete with the > API? Not sure :/ > > ________________________________ > From: Ignazio Cassano > Sent: Tuesday, September 22, 2020 12:21 PM > To: Szabo, Istvan (Agoda) > Cc: openstack-discuss > Subject: Re: Can't delete a vm > > Email received from outside the company. If in doubt don't click links nor > open attachments! > ________________________________ > Hello, before feeling instance, try to use nova reset-state to change the > state of in instance to available. > Then try to remove. > Ignazio > > Il Mar 22 Set 2020, 06:43 Szabo, Istvan (Agoda) > ha scritto: > Hello, > > > I have vm which is stuck in the build phase: > > > openstack server list --all-projects --long --limit -1 | grep > 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > | 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 | > hk-qaciapp-2020 | BUILD | scheduling | NOSTATE > | | CentOS-7-x86_64-1511 > (14.11.2017) | 2caaef01-e2f9-4e93-961a-c2b2d4a6be82 | QA_VLAN62 > | None > > > I've tried to delete with openstack server delete, nova delete force, > restarted the nova services on all management nodes, restarted the nova > compute service where it was originally spawned but still visible. > > > I see in the database in couple of places either the id and either the > hostname, like in instance mapping table, instances table, nova_cell0 > database ... > > > I have an idea how to delete, so I'd spawn just a vm and check which > tables are created the entries, and I would go through all tables with the > problematic one and delete 1 by 1 but this will takes me hours .... > > Any faster way you might suggest me please? > > > Thank you > > > ________________________________ > This message is confidential and is for the sole use of the intended > recipient(s). It may also be privileged or otherwise protected by copyright > or other legal rules. If you have received it by mistake please let us know > by reply email and delete it from your system. It is prohibited to copy > this message or disclose its content to anyone. Any confidentiality or > privilege is not waived or lost by any mistaken delivery or unauthorized > disclosure of the message. All messages sent to and from Agoda may be > monitored to ensure compliance with company policies, to protect the > company's interests and to remove potential malware. Electronic messages > may be intercepted, amended, lost or deleted, or contain viruses. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Sep 23 13:19:21 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 23 Sep 2020 14:19:21 +0100 Subject: Can't delete a vm In-Reply-To: References: <1600749354619.71543@agoda.com> <1600752720008.34304@agoda.com> Message-ID: <73adb4b31f5699e9adbd30c4f1f7b3094556c090.camel@redhat.com> On Wed, 2020-09-23 at 14:59 +0200, Massimo Sgaravatto wrote: > From time to time (in particular where there are problems with the Rabbit > cluster) I also see this problem. In such cases I "mark" the instance as > deleted in the database. 
I don't know if there are cleaner solutions > > Cheers, Massimo > > [*] > update instances set deleted=1 where uuid=""; this is not how you mark a vm as deleted. you set deleted=id not 1 > > On Tue, Sep 22, 2020 at 7:35 AM Szabo, Istvan (Agoda) < > Istvan.Szabo at agoda.com> wrote: > > > The problem is that in opestack you can't find this vm if I want to delete: > > > > > > nova reset-state 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 --active > > Reset state for server 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 failed: No > > server with a name or ID of '94ab55d3-3e39-48d3-a9b2-505838ceb6e7' exists. > > ERROR (CommandError): Unable to reset the state for the specified > > server(s). > > > > > > openstack server delete 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > > No server with a name or ID of '94ab55d3-3e39-48d3-a9b2-505838ceb6e7' > > exists. > > > > > > nova force-delete 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > > ERROR (CommandError): No server with a name or ID of > > '94ab55d3-3e39-48d3-a9b2-505838ceb6e7' exists. > > > > > > openstack server delete hk-qaciapp-2020 > > No server with a name or ID of 'hk-qaciapp-2020' exists. > > > > > > But if I list it like this can see: > > > > openstack server list --all-projects --long --limit -1 | grep > > 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > > > 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 | hk-qaciapp-2020 > > > > | BUILD | scheduling | NOSTATE | > > | CentOS-7-x86_64-1511 (14.11.2017) | > > 2caaef01-e2f9-4e93-961a-c2b2d4a6be82 | QA_VLAN62 | None | > > > > > > :/ > > > > > > It was created with the api before, maybe should try to delete with the > > API? Not sure :/ > > > > ________________________________ > > From: Ignazio Cassano > > Sent: Tuesday, September 22, 2020 12:21 PM > > To: Szabo, Istvan (Agoda) > > Cc: openstack-discuss > > Subject: Re: Can't delete a vm > > > > Email received from outside the company. If in doubt don't click links nor > > open attachments! > > ________________________________ > > Hello, before feeling instance, try to use nova reset-state to change the > > state of in instance to available. > > Then try to remove. > > Ignazio > > > > Il Mar 22 Set 2020, 06:43 Szabo, Istvan (Agoda) > > ha scritto: > > Hello, > > > > > > I have vm which is stuck in the build phase: > > > > > > openstack server list --all-projects --long --limit -1 | grep > > 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > > | 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 | > > hk-qaciapp-2020 | BUILD | scheduling | NOSTATE > > | | CentOS-7-x86_64-1511 > > (14.11.2017) | 2caaef01-e2f9-4e93-961a-c2b2d4a6be82 | QA_VLAN62 > > | None > > > > > > I've tried to delete with openstack server delete, nova delete force, > > restarted the nova services on all management nodes, restarted the nova > > compute service where it was originally spawned but still visible. > > > > > > I see in the database in couple of places either the id and either the > > hostname, like in instance mapping table, instances table, nova_cell0 > > database ... > > > > > > I have an idea how to delete, so I'd spawn just a vm and check which > > tables are created the entries, and I would go through all tables with the > > problematic one and delete 1 by 1 but this will takes me hours .... > > > > Any faster way you might suggest me please? > > > > > > Thank you > > > > > > ________________________________ > > This message is confidential and is for the sole use of the intended > > recipient(s). It may also be privileged or otherwise protected by copyright > > or other legal rules. 
If you have received it by mistake please let us know > > by reply email and delete it from your system. It is prohibited to copy > > this message or disclose its content to anyone. Any confidentiality or > > privilege is not waived or lost by any mistaken delivery or unauthorized > > disclosure of the message. All messages sent to and from Agoda may be > > monitored to ensure compliance with company policies, to protect the > > company's interests and to remove potential malware. Electronic messages > > may be intercepted, amended, lost or deleted, or contain viruses. > > > > > > From massimo.sgaravatto at gmail.com Wed Sep 23 13:33:00 2020 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Wed, 23 Sep 2020 15:33:00 +0200 Subject: Can't delete a vm In-Reply-To: <73adb4b31f5699e9adbd30c4f1f7b3094556c090.camel@redhat.com> References: <1600749354619.71543@agoda.com> <1600752720008.34304@agoda.com> <73adb4b31f5699e9adbd30c4f1f7b3094556c090.camel@redhat.com> Message-ID: On Wed, Sep 23, 2020 at 3:19 PM Sean Mooney wrote: > On Wed, 2020-09-23 at 14:59 +0200, Massimo Sgaravatto wrote: > > From time to time (in particular where there are problems with the Rabbit > > cluster) I also see this problem. In such cases I "mark" the instance as > > deleted in the database. I don't know if there are cleaner solutions > > > > Cheers, Massimo > > > > [*] > > update instances set deleted=1 where uuid=""; > this is not how you mark a vm as deleted. > you set deleted=id not 1 > > I (wongly) thought that any number different than 0 was fine What is the 'id' that should be specified ? Thanks, Massimo > > > > On Tue, Sep 22, 2020 at 7:35 AM Szabo, Istvan (Agoda) < > > Istvan.Szabo at agoda.com> wrote: > > > > > The problem is that in opestack you can't find this vm if I want to > delete: > > > > > > > > > nova reset-state 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 --active > > > Reset state for server 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 failed: No > > > server with a name or ID of '94ab55d3-3e39-48d3-a9b2-505838ceb6e7' > exists. > > > ERROR (CommandError): Unable to reset the state for the specified > > > server(s). > > > > > > > > > openstack server delete 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > > > No server with a name or ID of '94ab55d3-3e39-48d3-a9b2-505838ceb6e7' > > > exists. > > > > > > > > > nova force-delete 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > > > ERROR (CommandError): No server with a name or ID of > > > '94ab55d3-3e39-48d3-a9b2-505838ceb6e7' exists. > > > > > > > > > openstack server delete hk-qaciapp-2020 > > > No server with a name or ID of 'hk-qaciapp-2020' exists. > > > > > > > > > But if I list it like this can see: > > > > > > openstack server list --all-projects --long --limit -1 | grep > > > 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > > > > 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 | hk-qaciapp-2020 > > > > > > | BUILD | scheduling | NOSTATE | > > > | CentOS-7-x86_64-1511 (14.11.2017) | > > > 2caaef01-e2f9-4e93-961a-c2b2d4a6be82 | QA_VLAN62 | None > | > > > > > > > > > :/ > > > > > > > > > It was created with the api before, maybe should try to delete with the > > > API? Not sure :/ > > > > > > ________________________________ > > > From: Ignazio Cassano > > > Sent: Tuesday, September 22, 2020 12:21 PM > > > To: Szabo, Istvan (Agoda) > > > Cc: openstack-discuss > > > Subject: Re: Can't delete a vm > > > > > > Email received from outside the company. If in doubt don't click links > nor > > > open attachments! 
> > > ________________________________ > > > Hello, before feeling instance, try to use nova reset-state to change > the > > > state of in instance to available. > > > Then try to remove. > > > Ignazio > > > > > > Il Mar 22 Set 2020, 06:43 Szabo, Istvan (Agoda) < > Istvan.Szabo at agoda.com > > > > ha scritto: > > > Hello, > > > > > > > > > I have vm which is stuck in the build phase: > > > > > > > > > openstack server list --all-projects --long --limit -1 | grep > > > 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > > > | 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 | > > > hk-qaciapp-2020 | BUILD | scheduling | NOSTATE > > > | | CentOS-7-x86_64-1511 > > > (14.11.2017) | 2caaef01-e2f9-4e93-961a-c2b2d4a6be82 | > QA_VLAN62 > > > | None > > > > > > > > > I've tried to delete with openstack server delete, nova delete force, > > > restarted the nova services on all management nodes, restarted the nova > > > compute service where it was originally spawned but still visible. > > > > > > > > > I see in the database in couple of places either the id and either the > > > hostname, like in instance mapping table, instances table, nova_cell0 > > > database ... > > > > > > > > > I have an idea how to delete, so I'd spawn just a vm and check which > > > tables are created the entries, and I would go through all tables with > the > > > problematic one and delete 1 by 1 but this will takes me hours .... > > > > > > Any faster way you might suggest me please? > > > > > > > > > Thank you > > > > > > > > > ________________________________ > > > This message is confidential and is for the sole use of the intended > > > recipient(s). It may also be privileged or otherwise protected by > copyright > > > or other legal rules. If you have received it by mistake please let us > know > > > by reply email and delete it from your system. It is prohibited to copy > > > this message or disclose its content to anyone. Any confidentiality or > > > privilege is not waived or lost by any mistaken delivery or > unauthorized > > > disclosure of the message. All messages sent to and from Agoda may be > > > monitored to ensure compliance with company policies, to protect the > > > company's interests and to remove potential malware. Electronic > messages > > > may be intercepted, amended, lost or deleted, or contain viruses. > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Wed Sep 23 13:34:54 2020 From: jungleboyj at gmail.com (Jay Bryant) Date: Wed, 23 Sep 2020 08:34:54 -0500 Subject: [oslo] PTL Non-candidacy: I mean it this time :-P In-Reply-To: <8a380a4a-a6f1-f67b-9262-9f6f5f3761f8@nemebean.com> References: <8a380a4a-a6f1-f67b-9262-9f6f5f3761f8@nemebean.com> Message-ID: Ben, Thanks for all you have done to lead Oslo!  It is always a pleasure to work with you. Jay On 9/22/2020 2:26 PM, Ben Nemec wrote: > I'm eminently familiar with midwest goodbyes, but an entire cycle to > actually leave is taking it a bit far, don't you think? :-) > > Seriously though, this is two cycles in a row that we've had to > scramble late in a cycle to finish basic community goals, mostly > because of my lack of attention. Now that we've all sort of adapted to > the whole global pandemic thing I'm hoping my abdication will stick > this time. > > I'll still be around and try to help out when I can, but it's past > time for me to turn the leadership of Oslo over to someone else. We've > got a great team and I'm sure any one of them will do a great job! > > Thanks. 
> > -Ben > From smooney at redhat.com Wed Sep 23 14:10:16 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 23 Sep 2020 15:10:16 +0100 Subject: Can't delete a vm In-Reply-To: References: <1600749354619.71543@agoda.com> <1600752720008.34304@agoda.com> <73adb4b31f5699e9adbd30c4f1f7b3094556c090.camel@redhat.com> Message-ID: On Wed, 2020-09-23 at 15:33 +0200, Massimo Sgaravatto wrote: > On Wed, Sep 23, 2020 at 3:19 PM Sean Mooney wrote: > > > On Wed, 2020-09-23 at 14:59 +0200, Massimo Sgaravatto wrote: > > > From time to time (in particular where there are problems with the Rabbit > > > cluster) I also see this problem. In such cases I "mark" the instance as > > > deleted in the database. I don't know if there are cleaner solutions > > > > > > Cheers, Massimo > > > > > > [*] > > > update instances set deleted=1 where uuid=""; > > > > this is not how you mark a vm as deleted. > > you set deleted=id not 1 > > > > > > I (wongly) thought that any number different than 0 was fine > What is the 'id' that should be specified ? each instance has an internal id so of you select id form instances where uuid= it will tell you the id to use or just change update instances set deleted=1 where uuid="" to update instances set deleted=id where uuid="" this is related to how soft delete works if i remember correctly if deleted is not equal to id or 0 its soft deleted i think. > > Thanks, Massimo > > > > > > > > On Tue, Sep 22, 2020 at 7:35 AM Szabo, Istvan (Agoda) < > > > Istvan.Szabo at agoda.com> wrote: > > > > > > > The problem is that in opestack you can't find this vm if I want to > > > > delete: > > > > > > > > > > > > nova reset-state 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 --active > > > > Reset state for server 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 failed: No > > > > server with a name or ID of '94ab55d3-3e39-48d3-a9b2-505838ceb6e7' > > > > exists. > > > > ERROR (CommandError): Unable to reset the state for the specified > > > > server(s). > > > > > > > > > > > > openstack server delete 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > > > > No server with a name or ID of '94ab55d3-3e39-48d3-a9b2-505838ceb6e7' > > > > exists. > > > > > > > > > > > > nova force-delete 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > > > > ERROR (CommandError): No server with a name or ID of > > > > '94ab55d3-3e39-48d3-a9b2-505838ceb6e7' exists. > > > > > > > > > > > > openstack server delete hk-qaciapp-2020 > > > > No server with a name or ID of 'hk-qaciapp-2020' exists. > > > > > > > > > > > > But if I list it like this can see: > > > > > > > > openstack server list --all-projects --long --limit -1 | grep > > > > 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > > > > > 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 | hk-qaciapp-2020 > > > > > > > > | BUILD | scheduling | NOSTATE | > > > > | CentOS-7-x86_64-1511 (14.11.2017) | > > > > 2caaef01-e2f9-4e93-961a-c2b2d4a6be82 | QA_VLAN62 | None > > > > | > > > > > > > > > > > > :/ > > > > > > > > > > > > It was created with the api before, maybe should try to delete with the > > > > API? Not sure :/ > > > > > > > > ________________________________ > > > > From: Ignazio Cassano > > > > Sent: Tuesday, September 22, 2020 12:21 PM > > > > To: Szabo, Istvan (Agoda) > > > > Cc: openstack-discuss > > > > Subject: Re: Can't delete a vm > > > > > > > > Email received from outside the company. If in doubt don't click links > > > > nor > > > > open attachments! 
> > > > ________________________________ > > > > Hello, before feeling instance, try to use nova reset-state to change > > > > the > > > > state of in instance to available. > > > > Then try to remove. > > > > Ignazio > > > > > > > > Il Mar 22 Set 2020, 06:43 Szabo, Istvan (Agoda) < > > > > Istvan.Szabo at agoda.com > > > > > ha scritto: > > > > Hello, > > > > > > > > > > > > I have vm which is stuck in the build phase: > > > > > > > > > > > > openstack server list --all-projects --long --limit -1 | grep > > > > 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 > > > > | 94ab55d3-3e39-48d3-a9b2-505838ceb6e7 | > > > > hk-qaciapp-2020 | BUILD | scheduling | NOSTATE > > > > | | CentOS-7-x86_64-1511 > > > > (14.11.2017) | 2caaef01-e2f9-4e93-961a-c2b2d4a6be82 | > > > > QA_VLAN62 > > > > | None > > > > > > > > > > > > I've tried to delete with openstack server delete, nova delete force, > > > > restarted the nova services on all management nodes, restarted the nova > > > > compute service where it was originally spawned but still visible. > > > > > > > > > > > > I see in the database in couple of places either the id and either the > > > > hostname, like in instance mapping table, instances table, nova_cell0 > > > > database ... > > > > > > > > > > > > I have an idea how to delete, so I'd spawn just a vm and check which > > > > tables are created the entries, and I would go through all tables with > > > > the > > > > problematic one and delete 1 by 1 but this will takes me hours .... > > > > > > > > Any faster way you might suggest me please? > > > > > > > > > > > > Thank you > > > > > > > > > > > > ________________________________ > > > > This message is confidential and is for the sole use of the intended > > > > recipient(s). It may also be privileged or otherwise protected by > > > > copyright > > > > or other legal rules. If you have received it by mistake please let us > > > > know > > > > by reply email and delete it from your system. It is prohibited to copy > > > > this message or disclose its content to anyone. Any confidentiality or > > > > privilege is not waived or lost by any mistaken delivery or > > > > unauthorized > > > > disclosure of the message. All messages sent to and from Agoda may be > > > > monitored to ensure compliance with company policies, to protect the > > > > company's interests and to remove potential malware. Electronic > > > > messages > > > > may be intercepted, amended, lost or deleted, or contain viruses. > > > > > > > > > > > > > > > > From ashlee at openstack.org Wed Sep 23 16:06:35 2020 From: ashlee at openstack.org (Ashlee Ferguson) Date: Wed, 23 Sep 2020 11:06:35 -0500 Subject: Community voting for the 2020 Open Infrastructure Summit Superuser Awards is now open! Message-ID: <73E5BA39-5E60-4B05-BA60-4C2382DBF196@openstack.org> Hi everyone, It’s time for the community to help determine the winner of the 2020 Open Infrastructure Summit Superuser Awards. The Superuser Editorial Advisory Board will review the nominees and determine the finalists and overall winner after the community has had a chance to review and rate nominees. Now, it’s your turn. Review the nominees and rate them before the deadline September 28 at 11:59 p.m. PDT . Share your top pick for the Superuser Awards on Twitter , Facebook , and LinkedIn with #OpenInfraSummit! Cheers, Ashlee Ashlee Ferguson Community & Events Coordinator OpenStack Foundation -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sean.mcginnis at gmx.com Wed Sep 23 17:08:14 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 23 Sep 2020 12:08:14 -0500 Subject: [ptl][release] RC1 Deadline Tomorrow Message-ID: <4e92ffee-b965-3df8-b792-2a773f79d3de@gmx.com> Hey everyone, Just a reminder that tomorrow, September 24, is the RC1 deadline for the Victoria cycle. If you have any cycle-based deliverables, there should have been a patch proposed earlier this week. If your project is ready, and if you've merged any final critical patches for Victoria, please +1 that patch so the release team knows it is safe to proceed. If you are working on any final patches, please update the release patch as soon as those merge to point to the commit hash for the tag and branch. After RC1 and the stable/victoria branch is created, any additional bugfixes will need to merge to the master branch first, then be backported to stable/victoria. Thanks, Sean From tonyliu0592 at hotmail.com Wed Sep 23 19:39:11 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Wed, 23 Sep 2020 19:39:11 +0000 Subject: [neutron][ovn] N/S traffic that needs SNAT (without floating IPs) will always pass through the centralized gateway nodes Message-ID: Hi, I read "N/S traffic that needs SNAT (without floating IPs) will always pass through the centralized gateway nodes," in [1]. Why is that? Is SNAT on each compute node not supported by OVN, or it's about ML2 driver or provisioning? [1] https://docs.openstack.org/networking-ovn/latest/admin/refarch/refarch.html Thanks! Tony From gmann at ghanshyammann.com Wed Sep 23 20:34:30 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 23 Sep 2020 15:34:30 -0500 Subject: [election][tc] TC Candidacy Message-ID: <174bcacb8b4.db7973b540538.866197788207131177@ghanshyammann.com> Hello Everyone, I want to announce my candidacy for another term on the TC. First of all, thanks for giving me the opportunity as the technical committee in the previous terms. It's my 7th year in upstream developer activities and overall 9th year associated with OpenStack. I really enjoy my current role in the community. I've served as QA PTL for 2 years and currently a core developer in QA projects and Nova. Also, helping Tempest plugins for bug/Tempest compatible fixes and co-leader in the policy-popup team to make Openstack API policy more consistent and secure. With my QA, Nova, policy popup team role, I mainly target cross projects work as my TC responsibility; helping all projects team for the common work like Migrating the OpenStack CI/CD from Ubuntu Xenial to Bionic in Stein cycle and Ubuntu Bionic to Focal in Victoria, dropping python2.7 and, IPv6 deployment testing are the key activities in recent cycles. In my 3rd continuous cycle as community-wide goal champion, I tried my best to help in coding/investigation across many projects. I am actively involved in programs helping new contributors in OpenStack like a mentor in the Upstream Institute Training since Barcelona Summit (Oct 2016)[1] and an active member in FirstContact SIG [2]. It's always a great experience to introduce OpenStack upstream workflow to new contributors and encourage them to start contributing which is very much needed in OpenStack. TC direction has always been valuable and helps to keep the common standards in OpenStack. There is always room for improvement and so does in TC. I expect more involvement from the community and share what they expect from TC, and bring up discussions during TC office hours or meetings. 
In my next term, I would like to continue doing cross-project teams work for solving the common problem in OpenStack. In the next one or two-cycle, my main target is to make OpenStack API policy more consistent and secure as part of the policy popup team. Also, improving TC and team interaction. More interaction and involvement is the key to improve OpenStack. With the merging of UC into TC, it's a great opportunity to work together with the users/operators and collect valuable feedback to implement the required or priority things. Thank you for reading and considering my candidacy. Refernce: * Blogs: https://ghanshyammann.com * Review: http://stackalytics.com/?release=all&metric=marks&user_id=ghanshyammann&project_type=all * Commit: http://stackalytics.com/?release=all&metric=commits&user_id=ghanshyammann&project_type=all * Foundation Profile: https://www.openstack.org/community/members/profile/6461 * IRC (Freenode): gmann [1] https://wiki.openstack.org/wiki/OpenStack_Upstream_Institute_Occasions https://wiki.openstack.org/wiki/OpenStack_Upstream_Institute [2] https://wiki.openstack.org/wiki/First_Contact_SIG - Ghanshyam Mann (gmann) From tonyliu0592 at hotmail.com Wed Sep 23 23:20:11 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Wed, 23 Sep 2020 23:20:11 +0000 Subject: [neutron][ovn] enable_distributed_floating_ip Message-ID: Hi, Could anyone elaborate what difference this option makes to Neutron OVN ML2 driver? I assume the driver will program OVN NB differently based on true or false? Any detail is appreciated. Thanks! Tony From pfb29 at cam.ac.uk Wed Sep 23 23:56:24 2020 From: pfb29 at cam.ac.uk (Paul Browne) Date: Thu, 24 Sep 2020 00:56:24 +0100 Subject: [ironic] Recovering IPMI-type baremetal nodes in 'error' state Message-ID: Hello all, I have a handful of baremetal nodes enrolled in Ironic that use the IPMI hardware type, whose motherboards were recently replaced in a hardware recall by the vendor. After the replacement, the BMC IPMI-over-LAN feature was accidentally left disabled on the nodes, and future attempts to control them with Ironic has put these nodes into the ERROR provisioning state. The IPMI-over-LAN feature on the boards has been enabled again as expected, but is there now any easy way to get the BM nodes back out of that ERROR state, without first deleting and re-enrolling them? -- ******************* Paul Browne Research Computing Platforms University Information Services Roger Needham Building JJ Thompson Avenue University of Cambridge Cambridge United Kingdom E-Mail: pfb29 at cam.ac.uk Tel: 0044-1223-746548 ******************* -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyliu0592 at hotmail.com Thu Sep 24 01:35:56 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Thu, 24 Sep 2020 01:35:56 +0000 Subject: [neutron][ovn] N/S traffic that needs SNAT (without floating IPs) will always pass through the centralized gateway nodes In-Reply-To: References: Message-ID: Figured it out. Tony > -----Original Message----- > From: Tony Liu > Sent: Wednesday, September 23, 2020 12:39 PM > To: openstack-discuss at lists.openstack.org > Subject: [neutron][ovn] N/S traffic that needs SNAT (without floating > IPs) will always pass through the centralized gateway nodes > > Hi, > > I read "N/S traffic that needs SNAT (without floating IPs) will always > pass through the centralized gateway nodes," in [1]. > > Why is that? Is SNAT on each compute node not supported by OVN, or it's > about ML2 driver or provisioning? 
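A minimal illustrative sketch of the [ovn] setting asked about in the enable_distributed_floating_ip thread above (the section and option names are those used by the Neutron OVN ML2 driver; the file path is an assumption, adjust to your deployment):

    # /etc/neutron/plugins/ml2/ml2_conf.ini  (assumed path)
    [ovn]
    # True  -> floating IP (DVR) traffic is handled directly on the compute node
    # False -> floating IP traffic is centralized on the gateway chassis
    # SNAT-only traffic without floating IPs stays on the centralized gateway
    # nodes either way, as the reference architecture document cited in this
    # thread states.
    enable_distributed_floating_ip = True
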
> > [1] https://docs.openstack.org/networking- > ovn/latest/admin/refarch/refarch.html > > Thanks! > Tony > From juliaashleykreger at gmail.com Thu Sep 24 04:20:04 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 23 Sep 2020 21:20:04 -0700 Subject: [ironic] Recovering IPMI-type baremetal nodes in 'error' state In-Reply-To: References: Message-ID: Greetings Paul, Obviously, deleting and re-enrolling would be an action of last resort. The only way that I can think you could have gotten the machines into the provision state of ERROR is if they were somehow requested to be un-provisioned. The state machine diagram[0], refers to the provision state verb as "deleted", but the command line tool command this is undeploy[1]. [0]: https://docs.openstack.org/ironic/latest/_images/states.svg [1]: https://docs.openstack.org/python-ironicclient/latest/cli/osc/v1/index.html#baremetal-node-undeploy On Wed, Sep 23, 2020 at 4:58 PM Paul Browne wrote: > > Hello all, > > I have a handful of baremetal nodes enrolled in Ironic that use the IPMI hardware type, whose motherboards were recently replaced in a hardware recall by the vendor. > > After the replacement, the BMC IPMI-over-LAN feature was accidentally left disabled on the nodes, and future attempts to control them with Ironic has put these nodes into the ERROR provisioning state. > > The IPMI-over-LAN feature on the boards has been enabled again as expected, but is there now any easy way to get the BM nodes back out of that ERROR state, without first deleting and re-enrolling them? > > -- > ******************* > Paul Browne > Research Computing Platforms > University Information Services > Roger Needham Building > JJ Thompson Avenue > University of Cambridge > Cambridge > United Kingdom > E-Mail: pfb29 at cam.ac.uk > Tel: 0044-1223-746548 > ******************* From juliaashleykreger at gmail.com Thu Sep 24 04:29:11 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 23 Sep 2020 21:29:11 -0700 Subject: [ironic] Recovering IPMI-type baremetal nodes in 'error' state In-Reply-To: References: Message-ID: Well, somehow I accidentally clicked send! \o/ If you can confirm that the provision_state is ERROR, and if you can identify how the machines got there, it would be helpful. If the machines are still in working order in the database, you may need to actually edit the database because we offer no explicit means to force override the state, mainly to help prevent issues sort of exactly like this. I suspect you may be encountering issues if the node is marked in maintenance state. If the power state is None, maintenance is also set automatically. Newer versions of ironic _do_ periodically check nodes and reset that state, but again it is something to check and if there are continued connectivity issues to the BMC then that may not be happening. So: to recap: 1) Verify the node's provision_state is ERROR. If ERROR is coming from Nova, that is a different situation. 2) Ensure the node is not set in maintenance mode[3] 3) You may also need to ensure the ipmi_address/ipmi_username/ipmi_password is also correct for the node that matches what can be accessed on the motherboard. Additionally, you may also want to externally verify that you actually query the IPMI BMCs. If this somehow started down this path due to power management being lost due to the BMC, some BMCs can have some weirdness around IP networking so it is always good just to manually check using ipmitool. One last thing, is target_provision_state set for these nodes? 
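A minimal sketch of the checks recapped above, assuming the python-ironicclient OSC plugin and ipmitool are available; the node name, BMC address and credentials are placeholders:

    # 1) confirm provision_state, maintenance and target_provision_state
    openstack baremetal node show <node> -f value \
        -c provision_state -c maintenance -c target_provision_state

    # 2) clear maintenance mode if it was set automatically
    openstack baremetal node maintenance unset <node>

    # 3) verify the BMC answers IPMI-over-LAN again, outside of ironic
    ipmitool -I lanplus -H <bmc-address> -U <user> -P <password> power status
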
[3]: https://docs.openstack.org/python-ironicclient/latest/cli/osc/v1/index.html#baremetal-node-maintenance-unset On Wed, Sep 23, 2020 at 9:20 PM Julia Kreger wrote: > > Greetings Paul, > > Obviously, deleting and re-enrolling would be an action of last > resort. The only way that I can think you could have gotten the > machines into the provision state of ERROR is if they were somehow > requested to be un-provisioned. > > The state machine diagram[0], refers to the provision state verb as > "deleted", but the command line tool command this is undeploy[1]. > > > [0]: https://docs.openstack.org/ironic/latest/_images/states.svg > [1]: https://docs.openstack.org/python-ironicclient/latest/cli/osc/v1/index.html#baremetal-node-undeploy > > > > On Wed, Sep 23, 2020 at 4:58 PM Paul Browne wrote: > > > > Hello all, > > > > I have a handful of baremetal nodes enrolled in Ironic that use the IPMI hardware type, whose motherboards were recently replaced in a hardware recall by the vendor. > > > > After the replacement, the BMC IPMI-over-LAN feature was accidentally left disabled on the nodes, and future attempts to control them with Ironic has put these nodes into the ERROR provisioning state. > > > > The IPMI-over-LAN feature on the boards has been enabled again as expected, but is there now any easy way to get the BM nodes back out of that ERROR state, without first deleting and re-enrolling them? > > > > -- > > ******************* > > Paul Browne > > Research Computing Platforms > > University Information Services > > Roger Needham Building > > JJ Thompson Avenue > > University of Cambridge > > Cambridge > > United Kingdom > > E-Mail: pfb29 at cam.ac.uk > > Tel: 0044-1223-746548 > > ******************* From keiko.kuriu.wa at hco.ntt.co.jp Thu Sep 24 05:52:27 2020 From: keiko.kuriu.wa at hco.ntt.co.jp (=?UTF-8?Q?=E6=A0=97=E7=94=9F_=E6=95=AC=E5=AD=90?=) Date: Thu, 24 Sep 2020 14:52:27 +0900 Subject: [tacker] Propose Toshiaki Takahashi for tacker core In-Reply-To: <5f43ced2-c6e3-e763-1f1e-b1c0107a2941@hco.ntt.co.jp_1> References: <5f43ced2-c6e3-e763-1f1e-b1c0107a2941@hco.ntt.co.jp_1> Message-ID: <723e5cd1426567b8a9564b95574627a4@hco.ntt.co.jp_1> > On 2020/09/17 4:06, yasufum wrote: >> Toshiaki Takahashi (takahashi-tsc) has been so active for reviewing, >> fixing bugs and answering questions in the recent releases [1][2] and >> had several sessions on summits for Tacker. In addition, he is now >> well distinguished as one of the responsibility from ETSI-NFV standard >> community as a contributor between the standard and implementation for >> the recent contributions for both of OpenStack and ETSI. >> >> I'd appreciate if we add Toshiaki to the core team. >> >> [1] https://www.stackalytics.com/?company=nec&module=tacker >> [2] >> https://www.stackalytics.com/?user_id=t-takahashi%40ig.jp.nec.com&metric=marks >> Regards, >> Yasufumi >> >> +1 From hiroo.kitamura at ntt-at.co.jp Thu Sep 24 05:59:56 2020 From: hiroo.kitamura at ntt-at.co.jp (=?iso-2022-jp?B?GyRCS0xCPDkoQmcbKEI=?=) Date: Thu, 24 Sep 2020 05:59:56 +0000 Subject: [tacker] Propose Toshiaki Takahashi for tacker core In-Reply-To: <723e5cd1426567b8a9564b95574627a4@hco.ntt.co.jp_1> References: <5f43ced2-c6e3-e763-1f1e-b1c0107a2941@hco.ntt.co.jp_1> <723e5cd1426567b8a9564b95574627a4@hco.ntt.co.jp_1> Message-ID: > On 2020/09/17 4:06, yasufum wrote: >> Toshiaki Takahashi (takahashi-tsc) has been so active for reviewing, >> fixing bugs and answering questions in the recent releases [1][2] and >> had several sessions on summits for Tacker. 
In addition, he is now >> well distinguished as one of the responsibility from ETSI-NFV >> standard community as a contributor between the standard and >> implementation for the recent contributions for both of OpenStack and ETSI. >> >> I'd appreciate if we add Toshiaki to the core team. >> >> [1] https://www.stackalytics.com/?company=nec&module=tacker >> [2] >> https://www.stackalytics.com/?user_id=t-takahashi%40ig.jp.nec.com&met >> ric=marks >> Regards, >> Yasufumi >> >> +1 From ueha.ayumu at fujitsu.com Thu Sep 24 07:19:45 2020 From: ueha.ayumu at fujitsu.com (ueha.ayumu at fujitsu.com) Date: Thu, 24 Sep 2020 07:19:45 +0000 Subject: [tacker] Propose Toshiaki Takahashi for tacker core In-Reply-To: References: <5f43ced2-c6e3-e763-1f1e-b1c0107a2941@hco.ntt.co.jp_1> <723e5cd1426567b8a9564b95574627a4@hco.ntt.co.jp_1> Message-ID: +1 Best regards, Ayumu Ueha > On 2020/09/17 4:06, yasufum wrote: >> Toshiaki Takahashi (takahashi-tsc) has been so active for reviewing, >> fixing bugs and answering questions in the recent releases [1][2] and >> had several sessions on summits for Tacker. In addition, he is now >> well distinguished as one of the responsibility from ETSI-NFV >> standard community as a contributor between the standard and >> implementation for the recent contributions for both of OpenStack and ETSI. >> >> I'd appreciate if we add Toshiaki to the core team. >> >> [1] https://www.stackalytics.com/?company=nec&module=tacker >> [2] >> https://www.stackalytics.com/?user_id=t-takahashi%40ig.jp.nec.com&met >> ric=marks >> Regards, >> Yasufumi >> From araragi222 at gmail.com Thu Sep 24 07:39:58 2020 From: araragi222 at gmail.com (=?UTF-8?B?5ZGC6Imv?=) Date: Thu, 24 Sep 2020 16:39:58 +0900 Subject: [tacker] Propose Toshiaki Takahashi for tacker core In-Reply-To: References: Message-ID: +1, Thanks and Regards 2020年9月17日(木) 4:09 yasufum : > Toshiaki Takahashi (takahashi-tsc) has been so active for reviewing, > fixing bugs and answering questions in the recent releases [1][2] and > had several sessions on summits for Tacker. In addition, he is now well > distinguished as one of the responsibility from ETSI-NFV standard > community as a contributor between the standard and implementation for > the recent contributions for both of OpenStack and ETSI. > > I'd appreciate if we add Toshiaki to the core team. > > [1] https://www.stackalytics.com/?company=nec&module=tacker > [2] > > https://www.stackalytics.com/?user_id=t-takahashi%40ig.jp.nec.com&metric=marks > > Regards, > Yasufumi > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Thu Sep 24 08:45:38 2020 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 24 Sep 2020 10:45:38 +0200 Subject: [tc][all] Wallaby Cycle Community Goals In-Reply-To: <40b8e159-826c-cc07-d0f8-a9b21532d544@ham.ie> References: <40b8e159-826c-cc07-d0f8-a9b21532d544@ham.ie> Message-ID: Hi, I would like to selectively respond to the number of goals per cycle question. A possible direction could be to forget the "one cycle goal" thing and allow to finish the goals in a longer time frame. From "management" perspective the important is to have a fix number of goals per cycle to avoid overallocation of people. Another approach could be to attach a number, a difficulty feeling or similar to the proposed goals to make it easier to select them, and avoid to choose 2 hard-to-do goal for one cycle. This numbering can be done by project teams/PTLs whoever has the insight for the projects. 
Example: zuulv3 migration can be a hard-to-do goal, as it affects the whole gating of a project with hard coordination between projects. Adding the healthcheck API is much simpler, as it can be done without affecting the life of a whole project, or the community. Regards Lajos Katona (lajoskatona) Graham Hayes wrote (on Mon, 21 Sep 2020, 19:54): > Hi All > > It is that time of year / release again - and we need to choose the > community goals for Wallaby. > > Myself and Nate looked over the list of goals [1][2][3], and we are > suggesting one of the following: > > > - Finish moving legacy python-*client CLIs to python-openstackclient > - Move from oslo.rootwrap to oslo.privsep > - Implement the API reference guide changes > - All API to provide a /healthcheck URL like Keystone (and others) > provide > > Some of these goals have champions signed up already, but we need to > make sure they are still available to do them. If you are interested in > helping drive any of the goals, please speak up! > > We need to select goals in time for the new release cycle - so please > reply if there are goals you think should be included in this list, or > not included. > > Next steps after this will be helping people write a proposed goal > and then the TC selecting the ones we will pursue during Wallaby. > > Additionally, we have traditionally selected 2 goals per cycle - > however with the people available to do the work across projects > Nate and I briefly discussed reducing that to one for this cycle. > > What does the community think about this? > > Thanks, > > Graham > > 1 - https://etherpad.opendev.org/p/community-goals > 2 - https://governance.openstack.org/tc/goals/proposed/index.html > 3 - https://etherpad.opendev.org/p/community-w-series-goals > 4 - > > https://governance.openstack.org/tc/goals/index.html#goal-selection-schedule > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yasufum.o at gmail.com Thu Sep 24 09:43:03 2020 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Thu, 24 Sep 2020 18:43:03 +0900 Subject: [tacker] Propose Toshiaki Takahashi for tacker core In-Reply-To: References: Message-ID: <0d140c84-5f46-3eec-6be3-7001ed052371@gmail.com> On 2020/09/17 4:06, yasufum wrote: > Toshiaki Takahashi (takahashi-tsc) has been so active for reviewing, > fixing bugs and answering questions in the recent releases [1][2] and > had several sessions on summits for Tacker. In addition, he is now well > distinguished as one of the responsibility from ETSI-NFV standard > community as a contributor between the standard and implementation for > the recent contributions for both of OpenStack and ETSI. > > I'd appreciate if we add Toshiaki to the core team. > > [1] https://www.stackalytics.com/?company=nec&module=tacker > [2] > https://www.stackalytics.com/?user_id=t-takahashi%40ig.jp.nec.com&metric=marks Thank you everyone! I've seen only positive responses, so I've added Toshiaki as a new core of the tacker team. Congratulations, Toshiaki! > > > Regards, > Yasufumi > From ts-takahashi at nec.com Thu Sep 24 10:14:11 2020 From: ts-takahashi at nec.com (TAKAHASHI TOSHIAKI(高橋 敏明)) Date: Thu, 24 Sep 2020 10:14:11 +0000 Subject: [tacker] Propose Toshiaki Takahashi for tacker core Message-ID: Hi Yasufumi and Team, Thank you so much! I'm glad to join the core team. I'll continue to contribute for Tacker community activation!
Toshiaki > -----Original Message----- > From: Yasufumi Ogawa > Sent: Thursday, September 24, 2020 6:43 PM > To: openstack-discuss at lists.openstack.org; TAKAHASHI TOSHIAKI(高橋 敏明) > > Subject: ##freemail## Re: [tacker] Propose Toshiaki Takahashi for tacker core > > On 2020/09/17 4:06, yasufum wrote: > > Toshiaki Takahashi (takahashi-tsc) has been so active for reviewing, > > fixing bugs and answering questions in the recent releases [1][2] and > > had several sessions on summits for Tacker. In addition, he is now > > well distinguished as one of the responsibility from ETSI-NFV standard > > community as a contributor between the standard and implementation for > > the recent contributions for both of OpenStack and ETSI. > > > > I'd appreciate if we add Toshiaki to the core team. > > > > [1] https://www.stackalytics.com/?company=nec&module=tacker > > [2] > > https://www.stackalytics.com/?user_id=t-takahashi%40ig.jp.nec.com&metr > > ic=marks > > Thank you everyone! I've seen only positive responses, so I've added Toshiaki as a > new core of tacker team. > > Congratulations, Toshiaki! > > > > > > > Regards, > > Yasufumi > > From whayutin at redhat.com Thu Sep 24 12:15:12 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 24 Sep 2020 06:15:12 -0600 Subject: [tripleo] rdo infrastructure maintenance Message-ID: Greetings, There is some maintenance on the rdo infra that is causing check and gate jobs to fail upstream. The situation is actively being worked on at this time. You may find errors in your check / gate jobs.. like the following. TASK [oooci-build-images : Download TripleO source image] 2020-09-24 10:26:35.615996 | primary | ERROR 2020-09-24 10:26:35.617685 | primary | { 2020-09-24 10:26:35.617774 | primary | "attempts": 60, 2020-09-24 10:26:35.617850 | primary | "msg": "Failed to connect to images.rdoproject.org at port 443: [Errno 113] No route to host", 2020-09-24 10:26:35.617929 | primary | "status": -1, 2020-09-24 10:26:35.618001 | primary | "url": "https://images.rdoproject.org/CentOS-8-x86_64-GenericCloud.qcow2" 2020-09-24 10:26:35.618072 | primary | } https://5134c188955ee39fd51b-cd04c057bb6c1703504e41a1dbc31642.ssl.cf5.rackcdn.com/750812/10/gate/tripleo-buildimage-overcloud-full-centos-8/3479dc9/job-output.txt Sorry for the trouble, we'll update this email when we're clear. 0/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Thu Sep 24 13:19:10 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Thu, 24 Sep 2020 15:19:10 +0200 Subject: [all][elections][tc] Stepping down from the TC In-Reply-To: <951e58cd-d87a-4063-845b-db47198b4fda@www.fastmail.com> References: <951e58cd-d87a-4063-845b-db47198b4fda@www.fastmail.com> Message-ID: <5f453409-6dbc-43e5-9813-2ed5ff658bb3@www.fastmail.com> Thanks for all the kind words everyone. You are once again proving OpenStack is a great community! I wish you all the best, to all of you. 
Regards, Jean-Philippe Evrard (evrardjp) From jean-philippe at evrard.me Thu Sep 24 13:40:27 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Thu, 24 Sep 2020 15:40:27 +0200 Subject: [oslo] PTL Non-candidacy: I mean it this time :-P In-Reply-To: <8a380a4a-a6f1-f67b-9262-9f6f5f3761f8@nemebean.com> References: <8a380a4a-a6f1-f67b-9262-9f6f5f3761f8@nemebean.com> Message-ID: <67a902da-ea23-42d9-bcb6-305f4d6d09fc@www.fastmail.com> On Tue, Sep 22, 2020, at 21:26, Ben Nemec wrote: > I'll still be around and try to help out when I can, but it's past time > for me to turn the leadership of Oslo over to someone else. We've got a > great team and I'm sure any one of them will do a great job! Thanks for all the work done, Ben. Regards, JP From arunkumar.palanisamy at tcs.com Wed Sep 23 15:21:20 2020 From: arunkumar.palanisamy at tcs.com (ARUNKUMAR PALANISAMY) Date: Wed, 23 Sep 2020 15:21:20 +0000 Subject: Trove images for Cluster testing. In-Reply-To: References: Message-ID: Hi Lingxian, Sorry for the late reply, I couldn’t reply to your mail due to a health issue. We have teams in the Europe and US time zones and can have a call between 8 am and 10 am UTC+12. Please let us know your availability and a suitable time. Based on that, we will share a meeting invite. Regards, Arunkumar Palanisamy From: Lingxian Kong Sent: Wednesday, September 2, 2020 12:06 PM To: ARUNKUMAR PALANISAMY Cc: openstack-discuss at lists.openstack.org; Pravin Mohan ; Murugan N Subject: Re: Trove images for Cluster testing. "External email. Open with Caution" Hi Arunkumar, For how to join the IRC channel, please see https://docs.opendev.org/opendev/infra-manual/latest/developers.html#irc-account Currently there is no trove team meeting because we don't have any other people interested (for some historical reasons), but if needed, we can schedule a time suitable for both. I'm in UTC+12 btw. --- Lingxian Kong Senior Software Engineer Catalyst Cloud www.catalystcloud.nz On Wed, Sep 2, 2020 at 2:56 AM ARUNKUMAR PALANISAMY > wrote: Hi Lingxian, Hope you are doing well. Thank you for your mail and detailed information. We would like to join the #openstack-trove IRC channel for discussions. Could you please advise us on the process to join the IRC channel? We came to know that currently there is no IRC channel meeting happening for Trove; if there is any meeting scheduled, we would like to join to understand the work and progress towards Trove and contribute further. Regards, Arunkumar Palanisamy From: Lingxian Kong > Sent: Friday, August 28, 2020 12:09 AM To: ARUNKUMAR PALANISAMY > Cc: openstack-discuss at lists.openstack.org; Pravin Mohan > Subject: Re: Trove images for Cluster testing. "External email. Open with Caution" Hi Arunkumar, Unfortunately, for now Trove only supports MySQL and MariaDB; I'm working on adding PostgreSQL support. All other datastores are unmaintained right now. Since this (Victoria) dev cycle, a docker container has been introduced in the Trove guest agent in order to remove the maintenance overhead for multiple Trove guest images. We only need to maintain one single guest image but could support different datastores. We have to do that as such a small Trove team in the community. If supporting Redis, Cassandra, MongoDB or Couchbase is in your feature request, you are welcome to contribute to Trove. Please let me know if you have any other questions. You are also welcome to join the #openstack-trove IRC channel for discussion.
--- Lingxian Kong Senior Software Engineer Catalyst Cloud www.catalystcloud.nz On Fri, Aug 28, 2020 at 6:45 AM ARUNKUMAR PALANISAMY > wrote: Hello Team, My name is ARUNKUMAR PALANISAMY. As part of our project requirements, we are evaluating Trove components and need your support with an experimental datastore image for testing clusters (Redis, Cassandra, MongoDB, Couchbase). 1.) We are running a devstack environment with the Victoria OpenStack release, and with this image (trove-master-guest-ubuntu-bionic-dev.qcow2) we are able to deploy a mysql instance and are getting the below error while creating mongoDB instances. “ModuleNotFoundError: No module named 'trove.guestagent.datastore.experimental' “ 2.) While trying to create a mongoDB image with the diskimage-builder tool, we are getting a “Block device ” element error. Regards, Arunkumar Palanisamy Cell: +49 172 6972490 =====-----=====-----===== Notice: The information contained in this e-mail message and/or attachments to it may contain confidential or privileged information. If you are not the intended recipient, any dissemination, use, review, distribution, printing or copying of the information contained in this e-mail message and/or attachments to it are strictly prohibited. If you have received this communication in error, please notify us by reply e-mail or telephone and immediately and permanently delete the message and any attachments. Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From tsuncp at gmail.com Thu Sep 24 09:34:02 2020 From: tsuncp at gmail.com (Issac Chan) Date: Thu, 24 Sep 2020 17:34:02 +0800 Subject: Error log Message-ID: Hi all, Could you have a look at this log file? I cannot start the trove guest instance database normally. Best wishes, Issac -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- root at test:/var/log/trove# cat logfile.txt 2020-09-24 09:04:57.216 991 INFO trove.cmd.guest [-] Creating user and group for database service 2020-09-24 09:04:57.217 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo groupadd --gid 1001 database execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:04:57.271 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo groupadd --gid 1001 database" returned: 0 in 0.054s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:04:57.273 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo useradd --uid 1001 --gid 1001 -M database execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:04:57.487 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo useradd --uid 1001 --gid 1001 -M database" returned: 0 in 0.214s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:04:58.539 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo mkdir -p /etc/mysql execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:04:58.560 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo mkdir -p /etc/mysql" returned: 0 in 0.021s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:04:58.561 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo chown -R 1001:1001 /etc/mysql execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:04:58.573 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo chown -R 1001:1001 /etc/mysql" returned: 0 in 0.012s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:04:59.005 991 INFO trove.guestagent.module.driver_manager [-] Initializing module driver manager. 
2020-09-24 09:04:59.009 991 INFO trove.guestagent.module.driver_manager [-] Loading Module driver: new_relic_license 2020-09-24 09:04:59.009 991 DEBUG trove.guestagent.module.driver_manager [-] description: New Relic License Module Driver _check_extension /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/module/driver_manager.py:60 2020-09-24 09:04:59.010 991 DEBUG trove.guestagent.module.driver_manager [-] updated : 2016-04-12 _check_extension /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/module/driver_manager.py:61 2020-09-24 09:04:59.011 991 INFO trove.guestagent.module.driver_manager [-] Loading Module driver: ping 2020-09-24 09:04:59.012 991 DEBUG trove.guestagent.module.driver_manager [-] description: Ping Module Driver _check_extension /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/module/driver_manager.py:60 2020-09-24 09:04:59.012 991 DEBUG trove.guestagent.module.driver_manager [-] updated : 2016-03-04 _check_extension /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/module/driver_manager.py:61 2020-09-24 09:04:59.012 991 INFO trove.guestagent.module.driver_manager [-] Loaded module driver: new_relic_license 2020-09-24 09:04:59.012 991 INFO trove.guestagent.module.driver_manager [-] Loaded module driver: ping 2020-09-24 09:04:59.013 991 DEBUG trove.common.strategies.strategy [-] Looking for strategy MysqlGTIDReplication in trove.guestagent.strategies.replication.mysql_gtid get_strategy /opt/guest-agent-venv/lib/python3.6/site-packages/trove/common/strategies/strategy.py:59 2020-09-24 09:04:59.016 991 DEBUG trove.common.strategies.strategy [-] Loaded strategy replication:None __init__ /opt/guest-agent-venv/lib/python3.6/site-packages/trove/common/strategies/strategy.py:39 2020-09-24 09:04:59.017 991 DEBUG trove.guestagent.strategies.replication [-] Got replication instance from: trove.guestagent.strategies.replication.mysql_gtid.MysqlGTIDReplication get_instance /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/strategies/replication/__init__.py:46 2020-09-24 09:04:59.018 991 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:266 2020-09-24 09:04:59.018 991 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:282 2020-09-24 09:04:59.019 991 DEBUG oslo_service.service [-] Full set of CONF: _wait_for_exit_or_signal /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/service.py:363 2020-09-24 09:04:59.019 991 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2567 2020-09-24 09:04:59.019 991 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2568 2020-09-24 09:04:59.019 991 DEBUG oslo_service.service [-] command line args: ['--config-dir=/etc/trove/conf.d'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2569 2020-09-24 09:04:59.020 991 DEBUG oslo_service.service [-] config files: [] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2571 2020-09-24 09:04:59.020 991 DEBUG oslo_service.service [-] 
================================================================================ log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2572 2020-09-24 09:04:59.020 991 DEBUG oslo_service.service [-] admin_roles = ['admin'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.020 991 DEBUG oslo_service.service [-] agent_call_high_timeout = 1200 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.020 991 DEBUG oslo_service.service [-] agent_call_low_timeout = 15 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.020 991 DEBUG oslo_service.service [-] agent_heartbeat_expiry = 60 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.020 991 DEBUG oslo_service.service [-] agent_heartbeat_time = 10 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.021 991 DEBUG oslo_service.service [-] agent_replication_snapshot_timeout = 1800 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.021 991 DEBUG oslo_service.service [-] api_paste_config = api-paste.ini log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.021 991 DEBUG oslo_service.service [-] backdoor_port = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.021 991 DEBUG oslo_service.service [-] backdoor_socket = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.021 991 DEBUG oslo_service.service [-] backup_aes_cbc_key = default_aes_cbc_key log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.021 991 DEBUG oslo_service.service [-] backup_chunk_size = 65536 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.021 991 DEBUG oslo_service.service [-] backup_docker_image = openstacktrove/db-backup:1.0.0 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.021 991 DEBUG oslo_service.service [-] backup_runner = trove.guestagent.backup.backup_types.InnoBackupEx log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.021 991 DEBUG oslo_service.service [-] backup_runner_options = {} log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.022 991 DEBUG oslo_service.service [-] backup_segment_max_size = 2147483648 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.022 991 DEBUG oslo_service.service [-] backup_swift_container = database_backups log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.022 991 DEBUG oslo_service.service [-] backup_use_gzip_compression = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.022 991 DEBUG oslo_service.service [-] backup_use_openssl_encryption = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.022 991 DEBUG oslo_service.service [-] backup_use_snet = False log_opt_values 
/opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.022 991 DEBUG oslo_service.service [-] backups_page_size = 20 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.022 991 DEBUG oslo_service.service [-] bind_host = 0.0.0.0 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.022 991 DEBUG oslo_service.service [-] bind_port = 8779 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.023 991 DEBUG oslo_service.service [-] black_list_regex = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.023 991 DEBUG oslo_service.service [-] block_device_mapping = vdb log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.023 991 DEBUG oslo_service.service [-] cinder_api_insecure = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.023 991 DEBUG oslo_service.service [-] cinder_endpoint_type = publicURL log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.023 991 DEBUG oslo_service.service [-] cinder_service_type = volumev2 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.023 991 DEBUG oslo_service.service [-] cinder_url = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.023 991 DEBUG oslo_service.service [-] cinder_volume_type = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.023 991 DEBUG oslo_service.service [-] cloudinit_location = /etc/trove/cloudinit log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.024 991 DEBUG oslo_service.service [-] cluster_delete_time_out = 180 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.024 991 DEBUG oslo_service.service [-] cluster_usage_timeout = 36000 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.024 991 DEBUG oslo_service.service [-] clusters_page_size = 20 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.024 991 DEBUG oslo_service.service [-] command_process_timeout = 30 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.024 991 DEBUG oslo_service.service [-] conductor_manager = trove.conductor.manager.Manager log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.024 991 DEBUG oslo_service.service [-] conductor_queue = trove-conductor log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.024 991 DEBUG oslo_service.service [-] config_dir = ['/etc/trove/conf.d'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.025 991 DEBUG oslo_service.service [-] config_file = [] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.025 991 DEBUG oslo_service.service [-] config_source = [] log_opt_values 
/opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.025 991 DEBUG oslo_service.service [-] configurations_page_size = 20 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.025 991 DEBUG oslo_service.service [-] control_exchange = trove log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.025 991 DEBUG oslo_service.service [-] database_service_uid = 1001 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.025 991 DEBUG oslo_service.service [-] databases_page_size = 20 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.025 991 DEBUG oslo_service.service [-] datastore_manager = mysql log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.025 991 DEBUG oslo_service.service [-] datastore_registry_ext = {} log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.025 991 DEBUG oslo_service.service [-] db_api_implementation = trove.db.sqlalchemy.api log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.026 991 DEBUG oslo_service.service [-] debug = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.026 991 DEBUG oslo_service.service [-] default_datastore = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.026 991 DEBUG oslo_service.service [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'docker=WARN'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.026 991 DEBUG oslo_service.service [-] device_path = /dev/vdb log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.026 991 DEBUG oslo_service.service [-] dns_account_id = log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.026 991 DEBUG oslo_service.service [-] dns_auth_url = http://0.0.0.0 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.027 991 DEBUG oslo_service.service [-] dns_domain_id = log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.027 991 DEBUG oslo_service.service [-] dns_domain_name = log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.027 991 DEBUG oslo_service.service [-] dns_driver = trove.dns.driver.DnsDriver log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.027 991 DEBUG oslo_service.service [-] dns_endpoint_url = http://0.0.0.0 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 
2020-09-24 09:04:59.027 991 DEBUG oslo_service.service [-] dns_hostname = localhost log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.028 991 DEBUG oslo_service.service [-] dns_instance_entry_factory = trove.dns.driver.DnsInstanceEntryFactory log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.028 991 DEBUG oslo_service.service [-] dns_management_base_url = http://0.0.0.0 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.028 991 DEBUG oslo_service.service [-] dns_passkey = **** log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.028 991 DEBUG oslo_service.service [-] dns_project_domain_id = default log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.028 991 DEBUG oslo_service.service [-] dns_region = log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.028 991 DEBUG oslo_service.service [-] dns_service_type = log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.028 991 DEBUG oslo_service.service [-] dns_time_out = 120 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.029 991 DEBUG oslo_service.service [-] dns_ttl = 300 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.029 991 DEBUG oslo_service.service [-] dns_user_domain_id = default log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.029 991 DEBUG oslo_service.service [-] dns_username = **** log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.029 991 DEBUG oslo_service.service [-] enable_secure_rpc_messaging = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.029 991 DEBUG oslo_service.service [-] exists_notification_interval = 3600 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.029 991 DEBUG oslo_service.service [-] exists_notification_transformer = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.029 991 DEBUG oslo_service.service [-] expected_filetype_suffixes = ['json'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.029 991 DEBUG oslo_service.service [-] format_options = -m 5 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.029 991 DEBUG oslo_service.service [-] glance_client_version = 2 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.030 991 DEBUG oslo_service.service [-] glance_endpoint_type = publicURL log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.030 991 DEBUG oslo_service.service [-] glance_service_type = image log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.030 991 DEBUG oslo_service.service [-] glance_url = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 
09:04:59.030 991 DEBUG oslo_service.service [-] graceful_shutdown_timeout = 60 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.030 991 DEBUG oslo_service.service [-] guest_config = /etc/trove/trove-guestagent.conf log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.030 991 DEBUG oslo_service.service [-] guest_id = ea8f8fcb-2732-488f-87f2-510be37fa73b log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.030 991 DEBUG oslo_service.service [-] guest_info = guest_info.conf log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.030 991 DEBUG oslo_service.service [-] guest_log_container_name = database_logs log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.030 991 DEBUG oslo_service.service [-] guest_log_expiry = 2592000 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.031 991 DEBUG oslo_service.service [-] guest_log_limit = 1000000 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.031 991 DEBUG oslo_service.service [-] host = 0.0.0.0 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.031 991 DEBUG oslo_service.service [-] hostname_require_valid_ip = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.031 991 DEBUG oslo_service.service [-] http_delete_rate = 200 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.031 991 DEBUG oslo_service.service [-] http_get_rate = 200 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.031 991 DEBUG oslo_service.service [-] http_mgmt_post_rate = 200 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.031 991 DEBUG oslo_service.service [-] http_post_rate = 200 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.031 991 DEBUG oslo_service.service [-] http_put_rate = 200 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.032 991 DEBUG oslo_service.service [-] injected_config_location = /etc/trove/conf.d log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.032 991 DEBUG oslo_service.service [-] inst_rpc_key_encr_key = emYjgHFqfXNB1NGehAFIUeoyw4V4XwWHEaKP log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.032 991 DEBUG oslo_service.service [-] instance_format = [instance: %(uuid)s] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.032 991 DEBUG oslo_service.service [-] instance_rpc_encr_key = D8fN6z9TwJn2zZSOqAsG7mNkVhRA5Tvv log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.032 991 DEBUG oslo_service.service [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.032 991 DEBUG oslo_service.service [-] instances_page_size = 20 log_opt_values 
/opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.032 991 DEBUG oslo_service.service [-] ip_regex = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.032 991 DEBUG oslo_service.service [-] log_config_append = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.033 991 DEBUG oslo_service.service [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.033 991 DEBUG oslo_service.service [-] log_dir = /var/log/trove/ log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.033 991 DEBUG oslo_service.service [-] log_file = logfile.txt log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.033 991 DEBUG oslo_service.service [-] log_options = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.033 991 DEBUG oslo_service.service [-] log_rotate_interval = 1 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.033 991 DEBUG oslo_service.service [-] log_rotate_interval_type = days log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.033 991 DEBUG oslo_service.service [-] log_rotation_type = none log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.033 991 DEBUG oslo_service.service [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.033 991 DEBUG oslo_service.service [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.034 991 DEBUG oslo_service.service [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.034 991 DEBUG oslo_service.service [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.034 991 DEBUG oslo_service.service [-] logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.034 991 DEBUG oslo_service.service [-] management_networks = [] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.034 991 DEBUG oslo_service.service [-] management_security_groups = [] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.034 991 DEBUG oslo_service.service [-] max_accepted_volume_size = 10 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.034 991 DEBUG oslo_service.service [-] max_backups_per_tenant = 50 log_opt_values 
/opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.034 991 DEBUG oslo_service.service [-] max_header_line = 16384 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.034 991 DEBUG oslo_service.service [-] max_instances_per_tenant = 10 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.035 991 DEBUG oslo_service.service [-] max_logfile_count = 30 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.035 991 DEBUG oslo_service.service [-] max_logfile_size_mb = 200 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.035 991 DEBUG oslo_service.service [-] max_volumes_per_tenant = 40 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.035 991 DEBUG oslo_service.service [-] module_aes_cbc_key = module_aes_cbc_key log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.035 991 DEBUG oslo_service.service [-] module_reapply_max_batch_size = 50 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.035 991 DEBUG oslo_service.service [-] module_reapply_min_batch_delay = 2 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.035 991 DEBUG oslo_service.service [-] module_types = ['ping', 'new_relic_license'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.036 991 DEBUG oslo_service.service [-] modules_page_size = 20 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.036 991 DEBUG oslo_service.service [-] mount_options = defaults,noatime log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.036 991 DEBUG oslo_service.service [-] network_driver = trove.network.nova.NovaNetwork log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.036 991 DEBUG oslo_service.service [-] network_label_regex = ^private$ log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.036 991 DEBUG oslo_service.service [-] neutron_api_insecure = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.036 991 DEBUG oslo_service.service [-] neutron_endpoint_type = publicURL log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.036 991 DEBUG oslo_service.service [-] neutron_service_type = network log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.037 991 DEBUG oslo_service.service [-] neutron_url = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.037 991 DEBUG oslo_service.service [-] notification_service_id = {'mysql': '2f3ff068-2bfb-4f70-9a9d-a6bb65bc084b', 'percona': 'fd1723f5-68d2-409c-994f-a4a197892a17', 'pxc': '75a628c3-f81b-4ffb-b10a-4087c26bc854', 'redis': 'b216ffc5-1947-456c-a4cf-70f94c05f7d0', 'cassandra': '459a230d-4e97-4344-9067-2a54a310b0ed', 'couchbase': 'fa62fe68-74d9-4779-a24e-36f19602c415', 'mongodb': 
'c8c907af-7375-456f-b929-b637ff9209ee', 'postgresql': 'ac277e0d-4f21-40aa-b347-1ea31e571720', 'couchdb': 'f0a9ab7b-66f7-4352-93d7-071521d44c7c', 'vertica': 'a8d805ae-a3b2-c4fd-gb23-b62cee5201ae', 'db2': 'e040cd37-263d-4869-aaa6-c62aa97523b5', 'mariadb': '7a4f82cc-10d2-4bc6-aadc-d9aacc2a3cb5'} log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.037 991 DEBUG oslo_service.service [-] nova_api_insecure = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.037 991 DEBUG oslo_service.service [-] nova_client_version = 2.12 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.037 991 DEBUG oslo_service.service [-] nova_compute_endpoint_type = publicURL log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.037 991 DEBUG oslo_service.service [-] nova_compute_service_type = compute log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.037 991 DEBUG oslo_service.service [-] nova_compute_url = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.037 991 DEBUG oslo_service.service [-] nova_keypair = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.038 991 DEBUG oslo_service.service [-] num_tries = 3 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.038 991 DEBUG oslo_service.service [-] public_endpoint = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.038 991 DEBUG oslo_service.service [-] publish_errors = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.038 991 DEBUG oslo_service.service [-] pybasedir = /opt/guest-agent-venv/lib/python3.6/site-packages/trove log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.038 991 DEBUG oslo_service.service [-] pydev_debug = disabled log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.038 991 DEBUG oslo_service.service [-] pydev_debug_host = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.038 991 DEBUG oslo_service.service [-] pydev_debug_port = 5678 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.038 991 DEBUG oslo_service.service [-] pydev_path = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.038 991 DEBUG oslo_service.service [-] quota_driver = trove.quota.quota.DbQuotaDriver log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.039 991 DEBUG oslo_service.service [-] quota_notification_interval = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.039 991 DEBUG oslo_service.service [-] rate_limit_burst = 0 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.039 991 DEBUG oslo_service.service [-] rate_limit_except_level = CRITICAL log_opt_values 
/opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.039 991 DEBUG oslo_service.service [-] rate_limit_interval = 0 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.039 991 DEBUG oslo_service.service [-] reboot_time_out = 300 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.039 991 DEBUG oslo_service.service [-] region = LOCAL_DEV log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.039 991 DEBUG oslo_service.service [-] remote_cinder_client = trove.common.clients_admin.cinder_client_trove_admin log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.039 991 DEBUG oslo_service.service [-] remote_dns_client = trove.common.clients.dns_client log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.040 991 DEBUG oslo_service.service [-] remote_glance_client = trove.common.clients_admin.glance_client_trove_admin log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.040 991 DEBUG oslo_service.service [-] remote_guest_client = trove.common.clients.guest_client log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.040 991 DEBUG oslo_service.service [-] remote_neutron_client = trove.common.clients_admin.neutron_client_trove_admin log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.040 991 DEBUG oslo_service.service [-] remote_nova_client = trove.common.clients_admin.nova_client_trove_admin log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.040 991 DEBUG oslo_service.service [-] remote_swift_client = trove.common.clients.swift_client log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.040 991 DEBUG oslo_service.service [-] remote_trove_client = trove.common.trove_remote.trove_client log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.040 991 DEBUG oslo_service.service [-] report_interval = 30 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.040 991 DEBUG oslo_service.service [-] reserved_network_cidrs = [] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.041 991 DEBUG oslo_service.service [-] resize_time_out = 900 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.041 991 DEBUG oslo_service.service [-] restore_usage_timeout = 3600 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.041 991 DEBUG oslo_service.service [-] revert_time_out = 600 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.041 991 DEBUG oslo_service.service [-] root_grant = ['ALL'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.041 991 DEBUG oslo_service.service [-] root_grant_option = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.041 991 DEBUG oslo_service.service 
[-] run_external_periodic_tasks = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.041 991 DEBUG oslo_service.service [-] server_delete_time_out = 60 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.041 991 DEBUG oslo_service.service [-] sql_query_logging = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.041 991 DEBUG oslo_service.service [-] state_change_poll_time = 3 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.042 991 DEBUG oslo_service.service [-] state_change_wait_time = 180 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.042 991 DEBUG oslo_service.service [-] storage_namespace = trove.common.strategies.storage.swift log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.042 991 DEBUG oslo_service.service [-] storage_strategy = swift log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.042 991 DEBUG oslo_service.service [-] swift_api_insecure = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.042 991 DEBUG oslo_service.service [-] swift_endpoint_type = publicURL log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.042 991 DEBUG oslo_service.service [-] swift_service_type = object-store log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.042 991 DEBUG oslo_service.service [-] swift_url = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.042 991 DEBUG oslo_service.service [-] syslog_log_facility = LOG_USER log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.042 991 DEBUG oslo_service.service [-] taskmanager_queue = taskmanager log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.043 991 DEBUG oslo_service.service [-] taskmanager_rpc_encr_key = bzH6y0SGmjuoY0FNSTptrhgieGXNDX6PIhvz log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.043 991 DEBUG oslo_service.service [-] template_path = /etc/trove/templates/ log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.043 991 DEBUG oslo_service.service [-] transport_url = **** log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.043 991 DEBUG oslo_service.service [-] trove_api_workers = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.043 991 DEBUG oslo_service.service [-] trove_conductor_workers = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.043 991 DEBUG oslo_service.service [-] trove_dns_support = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.043 991 DEBUG oslo_service.service [-] trove_endpoint_type = publicURL log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 
09:04:59.043 991 DEBUG oslo_service.service [-] trove_security_group_name_prefix = trove_sg log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.044 991 DEBUG oslo_service.service [-] trove_security_group_rule_cidr = 0.0.0.0/0 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.044 991 DEBUG oslo_service.service [-] trove_security_groups_support = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.044 991 DEBUG oslo_service.service [-] trove_service_type = database log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.044 991 DEBUG oslo_service.service [-] trove_url = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.044 991 DEBUG oslo_service.service [-] trove_volume_support = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.044 991 DEBUG oslo_service.service [-] update_status_on_fail = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.044 991 DEBUG oslo_service.service [-] usage_sleep_time = 5 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.044 991 DEBUG oslo_service.service [-] usage_timeout = 900 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.045 991 DEBUG oslo_service.service [-] use_eventlog = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.045 991 DEBUG oslo_service.service [-] use_journal = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.045 991 DEBUG oslo_service.service [-] use_json = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.045 991 DEBUG oslo_service.service [-] use_nova_server_config_drive = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.045 991 DEBUG oslo_service.service [-] use_stderr = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.045 991 DEBUG oslo_service.service [-] use_syslog = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.045 991 DEBUG oslo_service.service [-] users_page_size = 20 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.045 991 DEBUG oslo_service.service [-] verify_replica_volume_size = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.045 991 DEBUG oslo_service.service [-] verify_swift_checksum_on_restore = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.046 991 DEBUG oslo_service.service [-] volume_format_timeout = 120 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.046 991 DEBUG oslo_service.service [-] volume_fstype = ext3 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.046 991 DEBUG oslo_service.service [-] 
volume_time_out = 60 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.046 991 DEBUG oslo_service.service [-] watch_log_file = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2581 2020-09-24 09:04:59.046 991 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.046 991 DEBUG oslo_service.service [-] oslo_concurrency.lock_path = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.046 991 DEBUG oslo_service.service [-] keystone_authtoken.admin_password = **** log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.046 991 DEBUG oslo_service.service [-] keystone_authtoken.admin_tenant_name = admin log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.047 991 DEBUG oslo_service.service [-] keystone_authtoken.admin_token = **** log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.047 991 DEBUG oslo_service.service [-] keystone_authtoken.admin_user = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.047 991 DEBUG oslo_service.service [-] keystone_authtoken.auth_admin_prefix = log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.047 991 DEBUG oslo_service.service [-] keystone_authtoken.auth_host = 127.0.0.1 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.047 991 DEBUG oslo_service.service [-] keystone_authtoken.auth_port = 35357 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.047 991 DEBUG oslo_service.service [-] keystone_authtoken.auth_protocol = https log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.047 991 DEBUG oslo_service.service [-] keystone_authtoken.auth_section = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.047 991 DEBUG oslo_service.service [-] keystone_authtoken.auth_type = password log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.048 991 DEBUG oslo_service.service [-] keystone_authtoken.auth_uri = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.048 991 DEBUG oslo_service.service [-] keystone_authtoken.auth_version = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.048 991 DEBUG oslo_service.service [-] keystone_authtoken.cache = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.048 991 DEBUG oslo_service.service [-] keystone_authtoken.cafile = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.048 991 DEBUG oslo_service.service [-] keystone_authtoken.certfile = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.048 991 DEBUG oslo_service.service [-] keystone_authtoken.delay_auth_decision = 
False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.048 991 DEBUG oslo_service.service [-] keystone_authtoken.enforce_token_bind = permissive log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.048 991 DEBUG oslo_service.service [-] keystone_authtoken.http_connect_timeout = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.049 991 DEBUG oslo_service.service [-] keystone_authtoken.http_request_max_retries = 3 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.049 991 DEBUG oslo_service.service [-] keystone_authtoken.identity_uri = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.049 991 DEBUG oslo_service.service [-] keystone_authtoken.include_service_catalog = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.049 991 DEBUG oslo_service.service [-] keystone_authtoken.insecure = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.049 991 DEBUG oslo_service.service [-] keystone_authtoken.interface = admin log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.049 991 DEBUG oslo_service.service [-] keystone_authtoken.keyfile = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.049 991 DEBUG oslo_service.service [-] keystone_authtoken.memcache_pool_conn_get_timeout = 10 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.049 991 DEBUG oslo_service.service [-] keystone_authtoken.memcache_pool_dead_retry = 300 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.049 991 DEBUG oslo_service.service [-] keystone_authtoken.memcache_pool_maxsize = 10 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.050 991 DEBUG oslo_service.service [-] keystone_authtoken.memcache_pool_socket_timeout = 3 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.050 991 DEBUG oslo_service.service [-] keystone_authtoken.memcache_pool_unused_timeout = 60 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.050 991 DEBUG oslo_service.service [-] keystone_authtoken.memcache_secret_key = **** log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.050 991 DEBUG oslo_service.service [-] keystone_authtoken.memcache_security_strategy = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.050 991 DEBUG oslo_service.service [-] keystone_authtoken.memcache_use_advanced_pool = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.050 991 DEBUG oslo_service.service [-] keystone_authtoken.memcached_servers = ['192.168.1.14:11211'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.050 991 DEBUG oslo_service.service [-] keystone_authtoken.region_name = None log_opt_values 
/opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.050 991 DEBUG oslo_service.service [-] keystone_authtoken.service_token_roles = ['service'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.051 991 DEBUG oslo_service.service [-] keystone_authtoken.service_token_roles_required = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.051 991 DEBUG oslo_service.service [-] keystone_authtoken.service_type = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.051 991 DEBUG oslo_service.service [-] keystone_authtoken.token_cache_time = 300 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.051 991 DEBUG oslo_service.service [-] keystone_authtoken.www_authenticate_uri = http://192.168.1.14:5000 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.051 991 DEBUG oslo_service.service [-] cache.backend = dogpile.cache.null log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.051 991 DEBUG oslo_service.service [-] cache.backend_argument = **** log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.051 991 DEBUG oslo_service.service [-] cache.config_prefix = cache.oslo log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.051 991 DEBUG oslo_service.service [-] cache.debug_cache_backend = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.052 991 DEBUG oslo_service.service [-] cache.enabled = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.052 991 DEBUG oslo_service.service [-] cache.expiration_time = 600 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.052 991 DEBUG oslo_service.service [-] cache.memcache_dead_retry = 300 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.052 991 DEBUG oslo_service.service [-] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.052 991 DEBUG oslo_service.service [-] cache.memcache_pool_maxsize = 10 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.052 991 DEBUG oslo_service.service [-] cache.memcache_pool_unused_timeout = 60 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.052 991 DEBUG oslo_service.service [-] cache.memcache_servers = ['localhost:11211'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.052 991 DEBUG oslo_service.service [-] cache.memcache_socket_timeout = 1.0 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.053 991 DEBUG oslo_service.service [-] cache.proxies = [] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.053 991 DEBUG oslo_service.service [-] profiler.connection_string = messaging:// log_opt_values 
/opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.053 991 DEBUG oslo_service.service [-] profiler.enabled = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.053 991 DEBUG oslo_service.service [-] profiler.es_doc_type = notification log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.053 991 DEBUG oslo_service.service [-] profiler.es_scroll_size = 10000 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.053 991 DEBUG oslo_service.service [-] profiler.es_scroll_time = 2m log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.053 991 DEBUG oslo_service.service [-] profiler.filter_error_trace = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.053 991 DEBUG oslo_service.service [-] profiler.hmac_keys = SECRET_KEY log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.054 991 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.054 991 DEBUG oslo_service.service [-] profiler.socket_timeout = 0.1 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.054 991 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.054 991 DEBUG oslo_service.service [-] database.connection = **** log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.054 991 DEBUG oslo_service.service [-] database.connection_debug = 0 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.054 991 DEBUG oslo_service.service [-] database.connection_trace = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.054 991 DEBUG oslo_service.service [-] database.idle_timeout = 3600 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.054 991 DEBUG oslo_service.service [-] database.max_overflow = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.054 991 DEBUG oslo_service.service [-] database.max_pool_size = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.055 991 DEBUG oslo_service.service [-] database.max_retries = 10 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.055 991 DEBUG oslo_service.service [-] database.mysql_sql_mode = TRADITIONAL log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.055 991 DEBUG oslo_service.service [-] database.pool_timeout = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.055 991 DEBUG oslo_service.service [-] database.query_log = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.055 991 DEBUG oslo_service.service [-] 
database.retry_interval = 10 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.055 991 DEBUG oslo_service.service [-] database.slave_connection = **** log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.055 991 DEBUG oslo_service.service [-] database.sqlite_synchronous = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.056 991 DEBUG oslo_service.service [-] mysql.backup_strategy = innobackupex log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.056 991 DEBUG oslo_service.service [-] mysql.default_password_length = 36 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.056 991 DEBUG oslo_service.service [-] mysql.device_path = /dev/vdb log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.056 991 DEBUG oslo_service.service [-] mysql.docker_image = mysql log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.056 991 DEBUG oslo_service.service [-] mysql.guest_log_exposed_logs = general,slow_query log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.056 991 DEBUG oslo_service.service [-] mysql.guest_log_long_query_time = 1000 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.056 991 DEBUG oslo_service.service [-] mysql.icmp = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.056 991 DEBUG oslo_service.service [-] mysql.ignore_dbs = ['mysql', 'information_schema', 'performance_schema', 'sys'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.057 991 DEBUG oslo_service.service [-] mysql.ignore_users = ['os_admin', 'root'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.057 991 DEBUG oslo_service.service [-] mysql.mount_point = /var/lib/mysql log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.057 991 DEBUG oslo_service.service [-] mysql.replication_namespace = trove.guestagent.strategies.replication.mysql_gtid log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.057 991 DEBUG oslo_service.service [-] mysql.replication_strategy = MysqlGTIDReplication log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.057 991 DEBUG oslo_service.service [-] mysql.root_controller = trove.extensions.common.service.DefaultRootController log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.057 991 DEBUG oslo_service.service [-] mysql.root_on_create = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.057 991 DEBUG oslo_service.service [-] mysql.tcp_ports = [range(3306, 3305, -1)] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.057 991 DEBUG oslo_service.service [-] mysql.udp_ports = [] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 
09:04:59.058 991 DEBUG oslo_service.service [-] mysql.usage_timeout = 400 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.058 991 DEBUG oslo_service.service [-] mysql.volume_support = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.058 991 DEBUG oslo_service.service [-] percona.backup_strategy = InnoBackupEx log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.058 991 DEBUG oslo_service.service [-] percona.default_password_length = 36 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.058 991 DEBUG oslo_service.service [-] percona.device_path = /dev/vdb log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.058 991 DEBUG oslo_service.service [-] percona.guest_log_exposed_logs = general,slow_query log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.058 991 DEBUG oslo_service.service [-] percona.guest_log_long_query_time = 1000 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.059 991 DEBUG oslo_service.service [-] percona.icmp = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.059 991 DEBUG oslo_service.service [-] percona.ignore_dbs = ['mysql', 'information_schema', 'performance_schema'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.059 991 DEBUG oslo_service.service [-] percona.ignore_users = ['os_admin', 'root'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.059 991 DEBUG oslo_service.service [-] percona.mount_point = /var/lib/mysql log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.059 991 DEBUG oslo_service.service [-] percona.replication_namespace = trove.guestagent.strategies.replication.mysql_gtid log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.059 991 DEBUG oslo_service.service [-] percona.replication_password = NETOU7897NNLOU log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.059 991 DEBUG oslo_service.service [-] percona.replication_strategy = MysqlGTIDReplication log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.060 991 DEBUG oslo_service.service [-] percona.replication_user = slave_user log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.060 991 DEBUG oslo_service.service [-] percona.root_controller = trove.extensions.common.service.DefaultRootController log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.060 991 DEBUG oslo_service.service [-] percona.root_on_create = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.060 991 DEBUG oslo_service.service [-] percona.tcp_ports = [range(3306, 3305, -1)] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.060 991 DEBUG oslo_service.service [-] percona.udp_ports = [] log_opt_values 
/opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.060 991 DEBUG oslo_service.service [-] percona.usage_timeout = 450 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.060 991 DEBUG oslo_service.service [-] percona.volume_support = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.061 991 DEBUG oslo_service.service [-] pxc.api_strategy = trove.common.strategies.cluster.experimental.galera_common.api.GaleraCommonAPIStrategy log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.061 991 DEBUG oslo_service.service [-] pxc.backup_strategy = InnoBackupEx log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.061 991 DEBUG oslo_service.service [-] pxc.cluster_support = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.061 991 DEBUG oslo_service.service [-] pxc.default_password_length = 36 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.061 991 DEBUG oslo_service.service [-] pxc.device_path = /dev/vdb log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.061 991 DEBUG oslo_service.service [-] pxc.guest_log_exposed_logs = general,slow_query log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.061 991 DEBUG oslo_service.service [-] pxc.guest_log_long_query_time = 1000 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.062 991 DEBUG oslo_service.service [-] pxc.guestagent_strategy = trove.common.strategies.cluster.experimental.galera_common.guestagent.GaleraCommonGuestAgentStrategy log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.062 991 DEBUG oslo_service.service [-] pxc.icmp = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.062 991 DEBUG oslo_service.service [-] pxc.ignore_dbs = ['mysql', 'information_schema', 'performance_schema'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.062 991 DEBUG oslo_service.service [-] pxc.ignore_users = ['os_admin', 'root', 'clusterrepuser'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.062 991 DEBUG oslo_service.service [-] pxc.min_cluster_member_count = 3 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.062 991 DEBUG oslo_service.service [-] pxc.mount_point = /var/lib/mysql log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.062 991 DEBUG oslo_service.service [-] pxc.replication_namespace = trove.guestagent.strategies.replication.mysql_gtid log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.062 991 DEBUG oslo_service.service [-] pxc.replication_strategy = MysqlGTIDReplication log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.063 991 DEBUG oslo_service.service [-] pxc.replication_user = slave_user log_opt_values 
/opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.063 991 DEBUG oslo_service.service [-] pxc.root_controller = trove.extensions.pxc.service.PxcRootController log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.063 991 DEBUG oslo_service.service [-] pxc.root_on_create = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.063 991 DEBUG oslo_service.service [-] pxc.taskmanager_strategy = trove.common.strategies.cluster.experimental.galera_common.taskmanager.GaleraCommonTaskManagerStrategy log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.063 991 DEBUG oslo_service.service [-] pxc.tcp_ports = [range(3306, 3305, -1), range(4444, 4443, -1), range(4567, 4566, -1), range(4568, 4567, -1)] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.063 991 DEBUG oslo_service.service [-] pxc.udp_ports = [] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.063 991 DEBUG oslo_service.service [-] pxc.usage_timeout = 450 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.064 991 DEBUG oslo_service.service [-] pxc.volume_support = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.064 991 DEBUG oslo_service.service [-] redis.api_strategy = trove.common.strategies.cluster.experimental.redis.api.RedisAPIStrategy log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.064 991 DEBUG oslo_service.service [-] redis.backup_strategy = RedisBackup log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.064 991 DEBUG oslo_service.service [-] redis.cluster_support = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.064 991 DEBUG oslo_service.service [-] redis.default_password_length = 36 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.064 991 DEBUG oslo_service.service [-] redis.device_path = /dev/vdb log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.064 991 DEBUG oslo_service.service [-] redis.guest_log_exposed_logs = log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.065 991 DEBUG oslo_service.service [-] redis.guestagent_strategy = trove.common.strategies.cluster.experimental.redis.guestagent.RedisGuestAgentStrategy log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.065 991 DEBUG oslo_service.service [-] redis.icmp = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.065 991 DEBUG oslo_service.service [-] redis.mount_point = /var/lib/redis log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.065 991 DEBUG oslo_service.service [-] redis.replication_namespace = trove.guestagent.strategies.replication.experimental.redis_sync log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.065 991 DEBUG 
oslo_service.service [-] redis.replication_strategy = RedisSyncReplication log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.065 991 DEBUG oslo_service.service [-] redis.root_controller = trove.extensions.redis.service.RedisRootController log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.065 991 DEBUG oslo_service.service [-] redis.taskmanager_strategy = trove.common.strategies.cluster.experimental.redis.taskmanager.RedisTaskManagerStrategy log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.065 991 DEBUG oslo_service.service [-] redis.tcp_ports = [range(6379, 6378, -1), range(16379, 16378, -1)] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.066 991 DEBUG oslo_service.service [-] redis.udp_ports = [] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.066 991 DEBUG oslo_service.service [-] redis.volume_support = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.066 991 DEBUG oslo_service.service [-] cassandra.api_strategy = trove.common.strategies.cluster.experimental.cassandra.api.CassandraAPIStrategy log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.066 991 DEBUG oslo_service.service [-] cassandra.backup_strategy = NodetoolSnapshot log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.066 991 DEBUG oslo_service.service [-] cassandra.cluster_support = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.066 991 DEBUG oslo_service.service [-] cassandra.database_controller = trove.extensions.cassandra.service.CassandraDatabaseController log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.066 991 DEBUG oslo_service.service [-] cassandra.default_password_length = 36 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.066 991 DEBUG oslo_service.service [-] cassandra.device_path = /dev/vdb log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.067 991 DEBUG oslo_service.service [-] cassandra.enable_cluster_instance_backup = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.067 991 DEBUG oslo_service.service [-] cassandra.enable_saslauthd = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.067 991 DEBUG oslo_service.service [-] cassandra.guest_log_exposed_logs = system log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.067 991 DEBUG oslo_service.service [-] cassandra.guestagent_strategy = trove.common.strategies.cluster.experimental.cassandra.guestagent.CassandraGuestAgentStrategy log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.067 991 DEBUG oslo_service.service [-] cassandra.icmp = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.067 991 DEBUG oslo_service.service [-] cassandra.ignore_dbs = 
['system', 'system_auth', 'system_traces'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.067 991 DEBUG oslo_service.service [-] cassandra.ignore_users = ['os_admin'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.067 991 DEBUG oslo_service.service [-] cassandra.mount_point = /var/lib/cassandra log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.068 991 DEBUG oslo_service.service [-] cassandra.node_sync_time = 60 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.068 991 DEBUG oslo_service.service [-] cassandra.replication_strategy = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.068 991 DEBUG oslo_service.service [-] cassandra.root_controller = trove.extensions.common.service.DefaultRootController log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.068 991 DEBUG oslo_service.service [-] cassandra.system_log_level = INFO log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.068 991 DEBUG oslo_service.service [-] cassandra.taskmanager_strategy = trove.common.strategies.cluster.experimental.cassandra.taskmanager.CassandraTaskManagerStrategy log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.068 991 DEBUG oslo_service.service [-] cassandra.tcp_ports = [range(7000, 6999, -1), range(7001, 7000, -1), range(7199, 7198, -1), range(9042, 9041, -1), range(9160, 9159, -1)] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.068 991 DEBUG oslo_service.service [-] cassandra.udp_ports = [] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.069 991 DEBUG oslo_service.service [-] cassandra.user_access_controller = trove.extensions.cassandra.service.CassandraUserAccessController log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.069 991 DEBUG oslo_service.service [-] cassandra.user_controller = trove.extensions.cassandra.service.CassandraUserController log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.069 991 DEBUG oslo_service.service [-] cassandra.volume_support = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.069 991 DEBUG oslo_service.service [-] couchbase.backup_strategy = CbBackup log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.069 991 DEBUG oslo_service.service [-] couchbase.default_password_length = 24 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.069 991 DEBUG oslo_service.service [-] couchbase.device_path = /dev/vdb log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.069 991 DEBUG oslo_service.service [-] couchbase.guest_log_exposed_logs = log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.069 991 DEBUG oslo_service.service [-] couchbase.icmp = False log_opt_values 
/opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.070 991 DEBUG oslo_service.service [-] couchbase.mount_point = /var/lib/couchbase log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.070 991 DEBUG oslo_service.service [-] couchbase.replication_strategy = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.070 991 DEBUG oslo_service.service [-] couchbase.root_controller = trove.extensions.common.service.DefaultRootController log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.070 991 DEBUG oslo_service.service [-] couchbase.root_on_create = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.070 991 DEBUG oslo_service.service [-] couchbase.tcp_ports = [range(8091, 8090, -1), range(8092, 8091, -1), range(4369, 4368, -1), range(11209, 11212), range(21100, 21200)] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.070 991 DEBUG oslo_service.service [-] couchbase.udp_ports = [] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.070 991 DEBUG oslo_service.service [-] couchbase.volume_support = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.071 991 DEBUG oslo_service.service [-] mongodb.add_members_timeout = 300 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.071 991 DEBUG oslo_service.service [-] mongodb.api_strategy = trove.common.strategies.cluster.experimental.mongodb.api.MongoDbAPIStrategy log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.071 991 DEBUG oslo_service.service [-] mongodb.backup_strategy = MongoDump log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.071 991 DEBUG oslo_service.service [-] mongodb.cluster_secure = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.071 991 DEBUG oslo_service.service [-] mongodb.cluster_support = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.071 991 DEBUG oslo_service.service [-] mongodb.config_servers_volume_size = 10 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.071 991 DEBUG oslo_service.service [-] mongodb.configsvr_port = 27019 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.071 991 DEBUG oslo_service.service [-] mongodb.default_password_length = 36 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.072 991 DEBUG oslo_service.service [-] mongodb.device_path = /dev/vdb log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.072 991 DEBUG oslo_service.service [-] mongodb.guest_log_exposed_logs = log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.072 991 DEBUG oslo_service.service [-] mongodb.guestagent_strategy = 
trove.common.strategies.cluster.experimental.mongodb.guestagent.MongoDbGuestAgentStrategy log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.072 991 DEBUG oslo_service.service [-] mongodb.icmp = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.072 991 DEBUG oslo_service.service [-] mongodb.ignore_dbs = ['admin', 'local', 'config'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.072 991 DEBUG oslo_service.service [-] mongodb.ignore_users = ['admin.os_admin', 'admin.root'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.072 991 DEBUG oslo_service.service [-] mongodb.mongodb_port = 27017 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.073 991 DEBUG oslo_service.service [-] mongodb.mount_point = /var/lib/mongodb log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.073 991 DEBUG oslo_service.service [-] mongodb.num_config_servers_per_cluster = 3 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.073 991 DEBUG oslo_service.service [-] mongodb.num_query_routers_per_cluster = 1 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.073 991 DEBUG oslo_service.service [-] mongodb.query_routers_volume_size = 10 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.073 991 DEBUG oslo_service.service [-] mongodb.replication_strategy = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.073 991 DEBUG oslo_service.service [-] mongodb.root_controller = trove.extensions.mongodb.service.MongoDBRootController log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.073 991 DEBUG oslo_service.service [-] mongodb.taskmanager_strategy = trove.common.strategies.cluster.experimental.mongodb.taskmanager.MongoDbTaskManagerStrategy log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.073 991 DEBUG oslo_service.service [-] mongodb.tcp_ports = [range(2500, 2499, -1), range(27017, 27016, -1), range(27019, 27018, -1)] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.074 991 DEBUG oslo_service.service [-] mongodb.udp_ports = [] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.074 991 DEBUG oslo_service.service [-] mongodb.volume_support = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.074 991 DEBUG oslo_service.service [-] postgresql.backup_strategy = PgBaseBackup log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.074 991 DEBUG oslo_service.service [-] postgresql.default_password_length = 36 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.074 991 DEBUG oslo_service.service [-] postgresql.device_path = /dev/vdb log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.074 991 DEBUG 
oslo_service.service [-] postgresql.guest_log_exposed_logs = general log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.074 991 DEBUG oslo_service.service [-] postgresql.guest_log_long_query_time = 0 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.074 991 DEBUG oslo_service.service [-] postgresql.icmp = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.075 991 DEBUG oslo_service.service [-] postgresql.ignore_dbs = ['os_admin', 'postgres'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.075 991 DEBUG oslo_service.service [-] postgresql.ignore_users = ['os_admin', 'postgres', 'root'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.075 991 DEBUG oslo_service.service [-] postgresql.mount_point = /var/lib/postgresql log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.075 991 DEBUG oslo_service.service [-] postgresql.postgresql_port = 5432 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.075 991 DEBUG oslo_service.service [-] postgresql.replication_namespace = trove.guestagent.strategies.replication.experimental.postgresql_impl log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.075 991 DEBUG oslo_service.service [-] postgresql.replication_strategy = PostgresqlReplicationStreaming log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.075 991 DEBUG oslo_service.service [-] postgresql.root_controller = trove.extensions.common.service.DefaultRootController log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.075 991 DEBUG oslo_service.service [-] postgresql.root_on_create = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.076 991 DEBUG oslo_service.service [-] postgresql.tcp_ports = [range(5432, 5431, -1)] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.076 991 DEBUG oslo_service.service [-] postgresql.udp_ports = [] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.076 991 DEBUG oslo_service.service [-] postgresql.volume_support = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.076 991 DEBUG oslo_service.service [-] postgresql.wal_archive_location = /mnt/wal_archive log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.076 991 DEBUG oslo_service.service [-] couchdb.backup_strategy = CouchDBBackup log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.076 991 DEBUG oslo_service.service [-] couchdb.default_password_length = 36 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.076 991 DEBUG oslo_service.service [-] couchdb.device_path = /dev/vdb log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.077 991 DEBUG oslo_service.service [-] 
couchdb.guest_log_exposed_logs = log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.077 991 DEBUG oslo_service.service [-] couchdb.icmp = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.077 991 DEBUG oslo_service.service [-] couchdb.ignore_dbs = ['_users', '_replicator'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.077 991 DEBUG oslo_service.service [-] couchdb.ignore_users = ['os_admin', 'root'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.077 991 DEBUG oslo_service.service [-] couchdb.mount_point = /var/lib/couchdb log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.077 991 DEBUG oslo_service.service [-] couchdb.replication_strategy = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.077 991 DEBUG oslo_service.service [-] couchdb.root_controller = trove.extensions.common.service.DefaultRootController log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.077 991 DEBUG oslo_service.service [-] couchdb.root_on_create = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.078 991 DEBUG oslo_service.service [-] couchdb.tcp_ports = [range(5984, 5983, -1)] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.078 991 DEBUG oslo_service.service [-] couchdb.udp_ports = [] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.078 991 DEBUG oslo_service.service [-] couchdb.volume_support = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.078 991 DEBUG oslo_service.service [-] vertica.api_strategy = trove.common.strategies.cluster.experimental.vertica.api.VerticaAPIStrategy log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.078 991 DEBUG oslo_service.service [-] vertica.backup_namespace = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.078 991 DEBUG oslo_service.service [-] vertica.backup_strategy = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.078 991 DEBUG oslo_service.service [-] vertica.cluster_member_count = 3 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.078 991 DEBUG oslo_service.service [-] vertica.cluster_support = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.079 991 DEBUG oslo_service.service [-] vertica.default_password_length = 36 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.079 991 DEBUG oslo_service.service [-] vertica.device_path = /dev/vdb log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.079 991 DEBUG oslo_service.service [-] vertica.guest_log_exposed_logs = log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.079 991 DEBUG 
oslo_service.service [-] vertica.guestagent_strategy = trove.common.strategies.cluster.experimental.vertica.guestagent.VerticaGuestAgentStrategy log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.079 991 DEBUG oslo_service.service [-] vertica.icmp = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.079 991 DEBUG oslo_service.service [-] vertica.min_ksafety = 0 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.079 991 DEBUG oslo_service.service [-] vertica.mount_point = /var/lib/vertica log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.079 991 DEBUG oslo_service.service [-] vertica.readahead_size = 2048 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.080 991 DEBUG oslo_service.service [-] vertica.replication_strategy = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.080 991 DEBUG oslo_service.service [-] vertica.restore_namespace = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.080 991 DEBUG oslo_service.service [-] vertica.root_controller = trove.extensions.vertica.service.VerticaRootController log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.080 991 DEBUG oslo_service.service [-] vertica.taskmanager_strategy = trove.common.strategies.cluster.experimental.vertica.taskmanager.VerticaTaskManagerStrategy log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.080 991 DEBUG oslo_service.service [-] vertica.tcp_ports = [range(5433, 5432, -1), range(5434, 5433, -1), range(5444, 5443, -1), range(5450, 5449, -1), range(4803, 4802, -1)] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.080 991 DEBUG oslo_service.service [-] vertica.udp_ports = [range(5433, 5432, -1), range(4803, 4802, -1), range(4804, 4803, -1), range(6453, 6452, -1)] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.080 991 DEBUG oslo_service.service [-] vertica.volume_support = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.081 991 DEBUG oslo_service.service [-] db2.backup_strategy = DB2OfflineBackup log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.081 991 DEBUG oslo_service.service [-] db2.default_password_length = 36 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.081 991 DEBUG oslo_service.service [-] db2.device_path = /dev/vdb log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.081 991 DEBUG oslo_service.service [-] db2.guest_log_exposed_logs = log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.081 991 DEBUG oslo_service.service [-] db2.icmp = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.081 991 DEBUG oslo_service.service [-] db2.ignore_users = ['PUBLIC', 'DB2INST1'] log_opt_values 
/opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.081 991 DEBUG oslo_service.service [-] db2.mount_point = /home/db2inst1/db2inst1 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.082 991 DEBUG oslo_service.service [-] db2.replication_strategy = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.082 991 DEBUG oslo_service.service [-] db2.root_controller = trove.extensions.common.service.DefaultRootController log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.082 991 DEBUG oslo_service.service [-] db2.root_on_create = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.082 991 DEBUG oslo_service.service [-] db2.tcp_ports = [range(50000, 49999, -1)] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.082 991 DEBUG oslo_service.service [-] db2.udp_ports = [] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.082 991 DEBUG oslo_service.service [-] db2.volume_support = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.082 991 DEBUG oslo_service.service [-] mariadb.api_strategy = trove.common.strategies.cluster.experimental.galera_common.api.GaleraCommonAPIStrategy log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.082 991 DEBUG oslo_service.service [-] mariadb.backup_strategy = mariabackup log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.083 991 DEBUG oslo_service.service [-] mariadb.cluster_support = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.083 991 DEBUG oslo_service.service [-] mariadb.default_password_length = 36 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.083 991 DEBUG oslo_service.service [-] mariadb.device_path = /dev/vdb log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.083 991 DEBUG oslo_service.service [-] mariadb.docker_image = mariadb log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.083 991 DEBUG oslo_service.service [-] mariadb.guest_log_exposed_logs = general,slow_query log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.083 991 DEBUG oslo_service.service [-] mariadb.guest_log_long_query_time = 1000 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.083 991 DEBUG oslo_service.service [-] mariadb.guestagent_strategy = trove.common.strategies.cluster.experimental.galera_common.guestagent.GaleraCommonGuestAgentStrategy log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.084 991 DEBUG oslo_service.service [-] mariadb.icmp = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.084 991 DEBUG oslo_service.service [-] mariadb.ignore_dbs = ['mysql', 'information_schema', 'performance_schema'] log_opt_values 
/opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.084 991 DEBUG oslo_service.service [-] mariadb.ignore_users = ['os_admin', 'root'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.084 991 DEBUG oslo_service.service [-] mariadb.min_cluster_member_count = 3 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.084 991 DEBUG oslo_service.service [-] mariadb.mount_point = /var/lib/mysql log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.084 991 DEBUG oslo_service.service [-] mariadb.replication_namespace = trove.guestagent.strategies.replication.mariadb_gtid log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.084 991 DEBUG oslo_service.service [-] mariadb.replication_strategy = MariaDBGTIDReplication log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.084 991 DEBUG oslo_service.service [-] mariadb.root_controller = trove.extensions.common.service.DefaultRootController log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.085 991 DEBUG oslo_service.service [-] mariadb.root_on_create = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.085 991 DEBUG oslo_service.service [-] mariadb.taskmanager_strategy = trove.common.strategies.cluster.experimental.galera_common.taskmanager.GaleraCommonTaskManagerStrategy log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.085 991 DEBUG oslo_service.service [-] mariadb.tcp_ports = [range(3306, 3305, -1), range(4444, 4443, -1), range(4567, 4566, -1), range(4568, 4567, -1)] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.085 991 DEBUG oslo_service.service [-] mariadb.udp_ports = [] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.085 991 DEBUG oslo_service.service [-] mariadb.usage_timeout = 400 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.085 991 DEBUG oslo_service.service [-] mariadb.volume_support = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.085 991 DEBUG oslo_service.service [-] network.public_network_id = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.086 991 DEBUG oslo_service.service [-] service_credentials.auth_url = http://192.168.1.14:5000 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.086 991 DEBUG oslo_service.service [-] service_credentials.password = **** log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.086 991 DEBUG oslo_service.service [-] service_credentials.project_domain_name = Default log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.086 991 DEBUG oslo_service.service [-] service_credentials.project_id = log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.086 991 DEBUG oslo_service.service [-] 
service_credentials.project_name = service log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.086 991 DEBUG oslo_service.service [-] service_credentials.region_name = RegionOne log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.086 991 DEBUG oslo_service.service [-] service_credentials.user_domain_name = Default log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.087 991 DEBUG oslo_service.service [-] service_credentials.username = trove log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.087 991 DEBUG oslo_service.service [-] upgrade_levels.conductor = latest log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.087 991 DEBUG oslo_service.service [-] upgrade_levels.guestagent = latest log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.087 991 DEBUG oslo_service.service [-] upgrade_levels.taskmanager = latest log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.087 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.087 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.087 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.088 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.088 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.088 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.088 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.088 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.088 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.088 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.088 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.089 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 
log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.089 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.089 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.089 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.089 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.089 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.089 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.089 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.090 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.090 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl = False log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.090 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.090 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.090 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.090 991 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version = log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.090 991 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.090 991 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.091 991 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.091 991 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2589 2020-09-24 09:04:59.091 991 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values 
/opt/guest-agent-venv/lib/python3.6/site-packages/oslo_config/cfg.py:2591 2020-09-24 09:04:59.091 991 DEBUG trove.common.rpc.service [-] Creating RPC server for service guestagent.ea8f8fcb-2732-488f-87f2-510be37fa73b start /opt/guest-agent-venv/lib/python3.6/site-packages/trove/common/rpc/service.py:56 2020-09-24 09:04:59.170 991 INFO trove.guestagent.datastore.manager [-] Starting datastore prepare for 'mysql:None'. 2020-09-24 09:04:59.180 991 DEBUG trove.guestagent.datastore.service [-] Casting set_status message to conductor (status is 'building'). set_status /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:137 2020-09-24 09:04:59.181 991 DEBUG trove.conductor.api [-] Making async call to cast heartbeat for instance: ea8f8fcb-2732-488f-87f2-510be37fa73b heartbeat /opt/guest-agent-venv/lib/python3.6/site-packages/trove/conductor/api.py:73 2020-09-24 09:04:59.201 991 DEBUG trove.guestagent.datastore.service [-] Successfully cast set_status. set_status /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:144 2020-09-24 09:04:59.202 991 INFO trove.guestagent.datastore.mysql_common.manager [-] Preparing the storage for /dev/vdb, mount path /var/lib/mysql 2020-09-24 09:04:59.202 991 INFO trove.guestagent.datastore.mysql_common.service [-] Stopping MySQL. 2020-09-24 09:04:59.217 991 WARNING trove.guestagent.utils.docker [-] Failed to get container database: docker.errors.NotFound: 404 Client Error: Not Found ("No such container: database") 2020-09-24 09:04:59.225 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep '^/dev/vdb ' /etc/mtab execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:04:59.237 991 DEBUG oslo_concurrency.processutils [-] CMD "grep '^/dev/vdb ' /etc/mtab" returned: 1 in 0.012s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:04:59.238 991 DEBUG trove.guestagent.volume [-] Checking if /dev/vdb exists. _check_device_exists /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/volume.py:218 2020-09-24 09:04:59.238 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo blockdev --getsize64 /dev/vdb execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:04:59.257 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo blockdev --getsize64 /dev/vdb" returned: 0 in 0.019s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:04:59.258 991 DEBUG trove.guestagent.volume [-] Formatting '/dev/vdb'. _format /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/volume.py:235 2020-09-24 09:04:59.258 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo mkfs --type ext3 -m 5 /dev/vdb execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.389 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo mkfs --type ext3 -m 5 /dev/vdb" returned: 0 in 3.130s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.389 991 DEBUG trove.guestagent.volume [-] Checking whether '/dev/vdb' is formatted. 
_check_format /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/volume.py:230 2020-09-24 09:05:02.390 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo dumpe2fs /dev/vdb execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.428 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo dumpe2fs /dev/vdb" returned: 0 in 0.038s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.429 991 DEBUG trove.guestagent.volume [-] Will mount /dev/vdb at /var/lib/mysql. mount /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/volume.py:247 2020-09-24 09:05:02.430 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo test -d /var/lib/mysql && echo 1 || echo 0 execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.456 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo test -d /var/lib/mysql && echo 1 || echo 0" returned: 0 in 0.026s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.459 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo mkdir -p /var/lib/mysql execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.481 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo mkdir -p /var/lib/mysql" returned: 0 in 0.022s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.482 991 DEBUG trove.guestagent.volume [-] Mounting volume. Device path:/dev/vdb, mount_point:/var/lib/mysql, volume_type:ext3, mount options:defaults,noatime mount /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/volume.py:342 2020-09-24 09:05:02.483 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo mount -t ext3 -o defaults,noatime /dev/vdb /var/lib/mysql execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.511 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo mount -t ext3 -o defaults,noatime /dev/vdb /var/lib/mysql" returned: 0 in 0.028s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.514 991 DEBUG trove.guestagent.volume [-] Writing new line to fstab:/dev/vdb /var/lib/mysql ext3 defaults,noatime 0 0 write_to_fstab /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/volume.py:357 2020-09-24 09:05:02.516 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo install -o root -g root -m 644 /tmp/tmphf3qlald /etc/fstab execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.578 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo install -o root -g root -m 644 /tmp/tmphf3qlald /etc/fstab" returned: 0 in 0.061s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.580 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo chown -R 1001:1001 /var/lib/mysql execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.606 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo chown -R 1001:1001 /var/lib/mysql" returned: 0 in 0.026s execute 
/opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.607 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo mkdir -p /var/lib/mysql/data execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.620 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo mkdir -p /var/lib/mysql/data" returned: 0 in 0.013s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.621 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo chown -R 1001:1001 /var/lib/mysql/data execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.632 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo chown -R 1001:1001 /var/lib/mysql/data" returned: 0 in 0.010s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.633 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo mkdir -p /etc/mysql/conf.d execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.642 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo mkdir -p /etc/mysql/conf.d" returned: 0 in 0.010s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.643 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo chown -R 1001:1001 /etc/mysql/conf.d execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.653 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo chown -R 1001:1001 /etc/mysql/conf.d" returned: 0 in 0.010s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.654 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo find /etc/mysql/conf.d -noleaf -type f -regextype posix-extended -regex .*/50-system-([0-9]+)-common\.cnf$$ execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.663 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo find /etc/mysql/conf.d -noleaf -type f -regextype posix-extended -regex .*/50-system-([0-9]+)-common\.cnf$$" returned: 0 in 0.010s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.664 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo find /etc/mysql/conf.d -noleaf -type f -regextype posix-extended -regex .*/50-system-([0-9]+)-.+\.cnf$$ execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.674 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo find /etc/mysql/conf.d -noleaf -type f -regextype posix-extended -regex .*/50-system-([0-9]+)-.+\.cnf$$" returned: 0 in 0.010s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.675 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo cp -f -R /tmp/tmpsr2hv2od /etc/mysql/conf.d/50-system-001-common.cnf execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.806 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo cp -f -R /tmp/tmpsr2hv2od /etc/mysql/conf.d/50-system-001-common.cnf" returned: 0 in 0.130s 
execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.808 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo chown -R 1001:1001 /etc/mysql/conf.d/50-system-001-common.cnf execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.831 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo chown -R 1001:1001 /etc/mysql/conf.d/50-system-001-common.cnf" returned: 0 in 0.023s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.833 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo chmod -R +444 /etc/mysql/conf.d/50-system-001-common.cnf execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.844 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo chmod -R +444 /etc/mysql/conf.d/50-system-001-common.cnf" returned: 0 in 0.011s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.845 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo test -f /etc/mysql/my.cnf && echo 1 || echo 0 execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.856 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo test -f /etc/mysql/my.cnf && echo 1 || echo 0" returned: 0 in 0.011s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.857 991 WARNING trove.guestagent.common.configuration [-] File /etc/mysql/my.cnf not found: trove.common.exception.UnprocessableEntity: File does not exist: /etc/mysql/my.cnf 2020-09-24 09:05:02.857 991 INFO trove.guestagent.datastore.mysql_common.manager [-] Preparing database configuration 2020-09-24 09:05:02.858 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo find /etc/mysql/conf.d -noleaf -type f -regextype posix-extended -regex .*/20-user-([0-9]+)-.+\.cnf$$ execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.868 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo find /etc/mysql/conf.d -noleaf -type f -regextype posix-extended -regex .*/20-user-([0-9]+)-.+\.cnf$$" returned: 0 in 0.010s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.868 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo find /etc/mysql/conf.d -noleaf -type f -regextype posix-extended -regex .*/10-system-([0-9]+)-.+\.cnf$$ execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.878 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo find /etc/mysql/conf.d -noleaf -type f -regextype posix-extended -regex .*/10-system-([0-9]+)-.+\.cnf$$" returned: 0 in 0.010s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.879 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo find /etc/mysql/conf.d -noleaf -type f -regextype posix-extended -regex .*/50-system-([0-9]+)-.+\.cnf$$ execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.889 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo find /etc/mysql/conf.d -noleaf -type f -regextype posix-extended -regex 
.*/50-system-([0-9]+)-.+\.cnf$$" returned: 0 in 0.010s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.889 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo rm -f -R /etc/mysql/conf.d/50-system-001-common.cnf execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.910 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo rm -f -R /etc/mysql/conf.d/50-system-001-common.cnf" returned: 0 in 0.020s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.913 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo cp -f -R /tmp/tmp2sew91sj /etc/mysql/my.cnf execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.936 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo cp -f -R /tmp/tmp2sew91sj /etc/mysql/my.cnf" returned: 0 in 0.023s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.938 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo chown -R 1001:1001 /etc/mysql/my.cnf execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.950 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo chown -R 1001:1001 /etc/mysql/my.cnf" returned: 0 in 0.012s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.951 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo chmod -R +444 /etc/mysql/my.cnf execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.961 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo chmod -R +444 /etc/mysql/my.cnf" returned: 0 in 0.010s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.962 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo cp -f -R -L /etc/mysql/my.cnf /tmp/tmpchhtutpw execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.972 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo cp -f -R -L /etc/mysql/my.cnf /tmp/tmpchhtutpw" returned: 0 in 0.011s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:02.973 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo chmod -R +444 /tmp/tmpchhtutpw execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:02.994 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo chmod -R +444 /tmp/tmpchhtutpw" returned: 0 in 0.021s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408 2020-09-24 09:05:03.004 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo find /etc/mysql/conf.d -noleaf -type f -regextype posix-extended -regex .*/.+-([0-9]+)-.+\.cnf$$ execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371 2020-09-24 09:05:03.025 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo find /etc/mysql/conf.d -noleaf -type f -regextype posix-extended -regex .*/.+-([0-9]+)-.+\.cnf$$" returned: 0 in 0.022s execute 
/opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408
2020-09-24 09:05:03.027 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo mkdir -p /etc/mysql execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371
2020-09-24 09:05:03.038 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo mkdir -p /etc/mysql" returned: 0 in 0.011s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408
2020-09-24 09:05:03.039 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo chown -R 1001:1001 /etc/mysql execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371
2020-09-24 09:05:03.048 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo chown -R 1001:1001 /etc/mysql" returned: 0 in 0.010s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408
2020-09-24 09:05:03.049 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo mkdir -p /var/run/mysqld execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371
2020-09-24 09:05:03.059 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo mkdir -p /var/run/mysqld" returned: 0 in 0.010s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408
2020-09-24 09:05:03.060 991 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo chown -R 1001:1001 /var/run/mysqld execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:371
2020-09-24 09:05:03.071 991 DEBUG oslo_concurrency.processutils [-] CMD "sudo chown -R 1001:1001 /var/run/mysqld" returned: 0 in 0.010s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:408
2020-09-24 09:05:03.071 991 INFO trove.guestagent.datastore.mysql_common.service [-] Starting docker container, image: mysql:latest
2020-09-24 09:05:03.074 991 WARNING trove.guestagent.utils.docker [-] Failed to get container database: docker.errors.NotFound: 404 Client Error: Not Found ("No such container: database")
2020-09-24 09:05:22.218 991 DEBUG trove.guestagent.datastore.mysql_common.service [-] Saving root credentials to local host. start_db /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/mysql_common/service.py:634
2020-09-24 09:05:22.538 991 WARNING trove.guestagent.datastore.mysql_common.service [-] Failed to run docker command, error: Running command error: b"mysql: [Warning] Using a password on the command line interface can be insecure.\nERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)\n": Exception: Running command error: b"mysql: [Warning] Using a password on the command line interface can be insecure.\nERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)\n"
2020-09-24 09:05:22.548 991 DEBUG trove.guestagent.datastore.mysql_common.service [-] container log:
2020-09-24 09:05:22+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.21-1debian10 started.
2020-09-24 09:05:22+00:00 [Note] [Entrypoint]: Initializing database files 2020-09-24T09:05:22.435439Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.21) initializing of server in progress as process 29 2020-09-24T09:05:22.437368Z 0 [Warning] [MY-010122] [Server] One can only use the --user switch if running as root 2020-09-24T09:05:22.480109Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. get_actual_db_status /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/mysql_common/service.py:86 2020-09-24 09:05:22.549 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from running to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:05:25.847 991 WARNING trove.guestagent.datastore.mysql_common.service [-] Failed to run docker command, error: Running command error: b"mysql: [Warning] Using a password on the command line interface can be insecure.\nERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)\n": Exception: Running command error: b"mysql: [Warning] Using a password on the command line interface can be insecure.\nERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)\n" 2020-09-24 09:05:25.853 991 DEBUG trove.guestagent.datastore.mysql_common.service [-] container log: 2020-09-24 09:05:22+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.21-1debian10 started. 2020-09-24 09:05:22+00:00 [Note] [Entrypoint]: Initializing database files 2020-09-24T09:05:22.435439Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.21) initializing of server in progress as process 29 2020-09-24T09:05:22.437368Z 0 [Warning] [MY-010122] [Server] One can only use the --user switch if running as root 2020-09-24T09:05:22.480109Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. get_actual_db_status /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/mysql_common/service.py:86 2020-09-24 09:05:25.853 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from running to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:05:29.178 991 WARNING trove.guestagent.datastore.mysql_common.service [-] Failed to run docker command, error: Running command error: b"mysql: [Warning] Using a password on the command line interface can be insecure.\nERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)\n": Exception: Running command error: b"mysql: [Warning] Using a password on the command line interface can be insecure.\nERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)\n" 2020-09-24 09:05:29.184 991 DEBUG trove.guestagent.datastore.mysql_common.service [-] container log: 2020-09-24 09:05:22+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.21-1debian10 started. 
2020-09-24 09:05:22+00:00 [Note] [Entrypoint]: Initializing database files 2020-09-24T09:05:22.435439Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.21) initializing of server in progress as process 29 2020-09-24T09:05:22.437368Z 0 [Warning] [MY-010122] [Server] One can only use the --user switch if running as root 2020-09-24T09:05:22.480109Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. get_actual_db_status /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/mysql_common/service.py:86 2020-09-24 09:05:29.185 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from running to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:05:32.492 991 WARNING trove.guestagent.datastore.mysql_common.service [-] Failed to run docker command, error: Running command error: b"mysql: [Warning] Using a password on the command line interface can be insecure.\nERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)\n": Exception: Running command error: b"mysql: [Warning] Using a password on the command line interface can be insecure.\nERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)\n" 2020-09-24 09:05:32.498 991 DEBUG trove.guestagent.datastore.mysql_common.service [-] container log: 2020-09-24 09:05:22+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.21-1debian10 started. 2020-09-24 09:05:22+00:00 [Note] [Entrypoint]: Initializing database files 2020-09-24T09:05:22.435439Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.21) initializing of server in progress as process 29 2020-09-24T09:05:22.437368Z 0 [Warning] [MY-010122] [Server] One can only use the --user switch if running as root 2020-09-24T09:05:22.480109Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. get_actual_db_status /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/mysql_common/service.py:86 2020-09-24 09:05:32.498 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from running to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:05:35.810 991 WARNING trove.guestagent.datastore.mysql_common.service [-] Failed to run docker command, error: Running command error: b"mysql: [Warning] Using a password on the command line interface can be insecure.\nERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)\n": Exception: Running command error: b"mysql: [Warning] Using a password on the command line interface can be insecure.\nERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)\n" 2020-09-24 09:05:35.816 991 DEBUG trove.guestagent.datastore.mysql_common.service [-] container log: 2020-09-24 09:05:22+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.21-1debian10 started. 
2020-09-24 09:05:22+00:00 [Note] [Entrypoint]: Initializing database files 2020-09-24T09:05:22.435439Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.21) initializing of server in progress as process 29 2020-09-24T09:05:22.437368Z 0 [Warning] [MY-010122] [Server] One can only use the --user switch if running as root 2020-09-24T09:05:22.480109Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. 2020-09-24T09:05:33.360279Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended. get_actual_db_status /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/mysql_common/service.py:86 2020-09-24 09:05:35.816 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from running to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:05:39.120 991 WARNING trove.guestagent.datastore.mysql_common.service [-] Failed to run docker command, error: Running command error: b"mysql: [Warning] Using a password on the command line interface can be insecure.\nERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)\n": Exception: Running command error: b"mysql: [Warning] Using a password on the command line interface can be insecure.\nERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)\n" 2020-09-24 09:05:39.126 991 DEBUG trove.guestagent.datastore.mysql_common.service [-] container log: 2020-09-24 09:05:22+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.21-1debian10 started. 2020-09-24 09:05:22+00:00 [Note] [Entrypoint]: Initializing database files 2020-09-24T09:05:22.435439Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.21) initializing of server in progress as process 29 2020-09-24T09:05:22.437368Z 0 [Warning] [MY-010122] [Server] One can only use the --user switch if running as root 2020-09-24T09:05:22.480109Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. 2020-09-24T09:05:33.360279Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended. get_actual_db_status /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/mysql_common/service.py:86 2020-09-24 09:05:39.126 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from running to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:05:42.404 991 WARNING trove.guestagent.datastore.mysql_common.service [-] Failed to run docker command, error: Running command error: b"mysql: [Warning] Using a password on the command line interface can be insecure.\nERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)\n": Exception: Running command error: b"mysql: [Warning] Using a password on the command line interface can be insecure.\nERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)\n" 2020-09-24 09:05:42.414 991 DEBUG trove.guestagent.datastore.mysql_common.service [-] container log: 2020-09-24 09:05:22+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.21-1debian10 started. 
2020-09-24 09:05:22+00:00 [Note] [Entrypoint]: Initializing database files
2020-09-24T09:05:22.435439Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.21) initializing of server in progress as process 29
2020-09-24T09:05:22.437368Z 0 [Warning] [MY-010122] [Server] One can only use the --user switch if running as root
2020-09-24T09:05:22.480109Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2020-09-24T09:05:33.360279Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2020-09-24T09:05:41.784706Z 0 [ERROR] [MY-000067] [Server] unknown variable 'query_cache_type=1'.
2020-09-24T09:05:41.785244Z 0 [ERROR] [MY-013236] [Server] The designated data directory /var/lib/mysql/data/ is unusable. You can remove all files that the server added to it.
2020-09-24T09:05:41.785982Z 0 [ERROR] [MY-010119] [Server] Aborting
get_actual_db_status /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/mysql_common/service.py:86
2020-09-24 09:05:42.414 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from running to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312
2020-09-24 09:05:49.150 991 WARNING trove.guestagent.datastore.mysql_common.service [-] Failed to run docker command, error: Running command error: b"mysql: [Warning] Using a password on the command line interface can be insecure.\nERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)\n": Exception: Running command error: b"mysql: [Warning] Using a password on the command line interface can be insecure.\nERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)\n"
2020-09-24 09:05:49.157 991 DEBUG trove.guestagent.datastore.mysql_common.service [-] container log:
2020-09-24 09:05:22+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.21-1debian10 started.
2020-09-24 09:05:22+00:00 [Note] [Entrypoint]: Initializing database files
2020-09-24T09:05:22.435439Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.21) initializing of server in progress as process 29
2020-09-24T09:05:22.437368Z 0 [Warning] [MY-010122] [Server] One can only use the --user switch if running as root
2020-09-24T09:05:22.480109Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2020-09-24T09:05:33.360279Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2020-09-24T09:05:41.784706Z 0 [ERROR] [MY-000067] [Server] unknown variable 'query_cache_type=1'.
2020-09-24T09:05:41.785244Z 0 [ERROR] [MY-013236] [Server] The designated data directory /var/lib/mysql/data/ is unusable. You can remove all files that the server added to it.
2020-09-24T09:05:41.785982Z 0 [ERROR] [MY-010119] [Server] Aborting
2020-09-24T09:05:45.358188Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.21) MySQL Community Server - GPL.
2020-09-24 09:05:46+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.21-1debian10 started.
mysqld: Error on realpath() on '/var/lib/mysql-files' (Error 2 - No such file or directory)
2020-09-24T09:05:46.631524Z 0 [ERROR] [MY-010095] [Server] Failed to access directory for --secure-file-priv. Please make sure that directory exists and is accessible by MySQL Server.
Supplied value : /var/lib/mysql-files 2020-09-24T09:05:46.631543Z 0 [ERROR] [MY-010119] [Server] Aborting 2020-09-24 09:05:47+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.21-1debian10 started. mysqld: Error on realpath() on '/var/lib/mysql-files' (Error 2 - No such file or directory) 2020-09-24T09:05:47.911897Z 0 [ERROR] [MY-010095] [Server] Failed to access directory for --secure-file-priv. Please make sure that directory exists and is accessible by MySQL Server. Supplied value : /var/lib/mysql-files 2020-09-24T09:05:47.911915Z 0 [ERROR] [MY-010119] [Server] Aborting 2020-09-24 09:05:48+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.21-1debian10 started. get_actual_db_status /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/mysql_common/service.py:86 2020-09-24 09:05:49.157 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from running to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:05:52.165 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:05:55.176 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:05:58.257 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:05:59.165 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:05:59.166 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip status check 2020-09-24 09:06:01.265 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:06:04.273 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:06:07.283 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:06:10.292 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:06:13.301 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. 
wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:06:16.307 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:06:22.441 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:06:25.452 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:06:28.461 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:06:31.468 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:06:34.478 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:06:37.486 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:06:40.495 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:06:43.504 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:06:46.512 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:06:49.520 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:06:52.527 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:06:55.535 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. 
wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:06:58.544 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:07:01.551 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:07:04.561 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:07:07.568 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:07:10.577 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:07:13.585 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:07:16.595 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:07:19.602 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:07:22.610 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:07:25.616 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:07:28.623 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:07:29.170 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:07:29.170 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip status check 2020-09-24 09:07:31.630 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. 
wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:07:34.636 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:07:38.139 991 WARNING trove.guestagent.datastore.mysql_common.service [-] Failed to run docker command, error: Running command error: b"mysql: [Warning] Using a password on the command line interface can be insecure.\nERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)\n": Exception: Running command error: b"mysql: [Warning] Using a password on the command line interface can be insecure.\nERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)\n" 2020-09-24 09:07:38.145 991 DEBUG trove.guestagent.datastore.mysql_common.service [-] container log: 2020-09-24 09:05:22+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.21-1debian10 started. 2020-09-24 09:05:22+00:00 [Note] [Entrypoint]: Initializing database files 2020-09-24T09:05:22.435439Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.21) initializing of server in progress as process 29 2020-09-24T09:05:22.437368Z 0 [Warning] [MY-010122] [Server] One can only use the --user switch if running as root 2020-09-24T09:05:22.480109Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. 2020-09-24T09:05:33.360279Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended. 2020-09-24T09:05:41.784706Z 0 [ERROR] [MY-000067] [Server] unknown variable 'query_cache_type=1'. 2020-09-24T09:05:41.785244Z 0 [ERROR] [MY-013236] [Server] The designated data directory /var/lib/mysql/data/ is unusable. You can remove all files that the server added to it. 2020-09-24T09:05:41.785982Z 0 [ERROR] [MY-010119] [Server] Aborting 2020-09-24T09:05:45.358188Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.21) MySQL Community Server - GPL. 2020-09-24 09:05:46+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.21-1debian10 started. mysqld: Error on realpath() on '/var/lib/mysql-files' (Error 2 - No such file or directory) 2020-09-24T09:05:46.631524Z 0 [ERROR] [MY-010095] [Server] Failed to access directory for --secure-file-priv. Please make sure that directory exists and is accessible by MySQL Server. Supplied value : /var/lib/mysql-files 2020-09-24T09:05:46.631543Z 0 [ERROR] [MY-010119] [Server] Aborting 2020-09-24 09:05:47+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.21-1debian10 started. mysqld: Error on realpath() on '/var/lib/mysql-files' (Error 2 - No such file or directory) 2020-09-24T09:05:47.911897Z 0 [ERROR] [MY-010095] [Server] Failed to access directory for --secure-file-priv. Please make sure that directory exists and is accessible by MySQL Server. Supplied value : /var/lib/mysql-files 2020-09-24T09:05:47.911915Z 0 [ERROR] [MY-010119] [Server] Aborting 2020-09-24 09:05:48+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.21-1debian10 started. mysqld: Error on realpath() on '/var/lib/mysql-files' (Error 2 - No such file or directory) 2020-09-24T09:05:49.298044Z 0 [ERROR] [MY-010095] [Server] Failed to access directory for --secure-file-priv. Please make sure that directory exists and is accessible by MySQL Server. 
Supplied value : /var/lib/mysql-files 2020-09-24T09:05:49.298062Z 0 [ERROR] [MY-010119] [Server] Aborting 2020-09-24 09:05:50+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.21-1debian10 started. mysqld: Error on realpath() on '/var/lib/mysql-files' (Error 2 - No such file or directory) 2020-09-24T09:05:51.107414Z 0 [ERROR] [MY-010095] [Server] Failed to access directory for --secure-file-priv. Please make sure that directory exists and is accessible by MySQL Server. Supplied value : /var/lib/mysql-files 2020-09-24T09:05:51.107462Z 0 [ERROR] [MY-010119] [Server] Aborting 2020-09-24 09:05:53+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.21-1debian10 started. mysqld: Error on realpath() on '/var/lib/mysql-files' (Error 2 - No such file or directory) 2020-09-24T09:05:53.748440Z 0 [ERROR] [MY-010095] [Server] Failed to access directory for --secure-file-priv. Please make sure that directory exists and is accessible by MySQL Server. Supplied value : /var/lib/mysql-files 2020-09-24T09:05:53.748510Z 0 [ERROR] [MY-010119] [Server] Aborting 2020-09-24 09:05:57+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.21-1debian10 started. mysqld: Error on realpath() on '/var/lib/mysql-files' (Error 2 - No such file or directory) 2020-09-24T09:05:58.046983Z 0 [ERROR] [MY-010095] [Server] Failed to access directory for --secure-file-priv. Please make sure that directory exists and is accessible by MySQL Server. Supplied value : /var/lib/mysql-files 2020-09-24T09:05:58.046998Z 0 [ERROR] [MY-010119] [Server] Aborting 2020-09-24 09:06:05+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.21-1debian10 started. mysqld: Error on realpath() on '/var/lib/mysql-files' (Error 2 - No such file or directory) 2020-09-24T09:06:05.509339Z 0 [ERROR] [MY-010095] [Server] Failed to access directory for --secure-file-priv. Please make sure that directory exists and is accessible by MySQL Server. Supplied value : /var/lib/mysql-files 2020-09-24T09:06:05.509358Z 0 [ERROR] [MY-010119] [Server] Aborting 2020-09-24 09:06:18+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.21-1debian10 started. mysqld: Error on realpath() on '/var/lib/mysql-files' (Error 2 - No such file or directory) 2020-09-24T09:06:19.351132Z 0 [ERROR] [MY-010095] [Server] Failed to access directory for --secure-file-priv. Please make sure that directory exists and is accessible by MySQL Server. Supplied value : /var/lib/mysql-files 2020-09-24T09:06:19.351147Z 0 [ERROR] [MY-010119] [Server] Aborting 2020-09-24 09:06:45+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.21-1debian10 started. mysqld: Error on realpath() on '/var/lib/mysql-files' (Error 2 - No such file or directory) 2020-09-24T09:06:46.036074Z 0 [ERROR] [MY-010095] [Server] Failed to access directory for --secure-file-priv. Please make sure that directory exists and is accessible by MySQL Server. Supplied value : /var/lib/mysql-files 2020-09-24T09:06:46.036167Z 0 [ERROR] [MY-010119] [Server] Aborting 2020-09-24 09:07:37+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.21-1debian10 started. get_actual_db_status /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/mysql_common/service.py:86 2020-09-24 09:07:38.146 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from running to healthy. 
wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:07:41.153 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:07:44.160 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:07:47.168 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:07:50.176 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:07:53.184 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:07:56.191 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:07:59.199 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:08:02.207 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:08:05.214 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:08:08.222 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:08:11.229 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:08:14.237 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:08:17.244 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. 
wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:08:20.252 991 DEBUG trove.guestagent.datastore.service [-] Waiting for DB status to change from unknown to healthy. wait_for_real_status_to_change_to /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:312 2020-09-24 09:08:23.260 991 ERROR trove.guestagent.datastore.service [-] Timeout while waiting for database status to change.Expected state healthy, current state is unknown 2020-09-24 09:08:23.261 991 ERROR trove.guestagent.datastore.manager [-] Failed to prepare datastore: Failed to start mysql: trove.common.exception.TroveError: Failed to start mysql 2020-09-24 09:08:23.261 991 ERROR trove.guestagent.datastore.manager Traceback (most recent call last): 2020-09-24 09:08:23.261 991 ERROR trove.guestagent.datastore.manager File "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/manager.py", line 199, in _prepare 2020-09-24 09:08:23.261 991 ERROR trove.guestagent.datastore.manager cluster_config, snapshot, ds_version=ds_version) 2020-09-24 09:08:23.261 991 ERROR trove.guestagent.datastore.manager File "/opt/guest-agent-venv/lib/python3.6/site-packages/osprofiler/profiler.py", line 160, in wrapper 2020-09-24 09:08:23.261 991 ERROR trove.guestagent.datastore.manager result = f(*args, **kwargs) 2020-09-24 09:08:23.261 991 ERROR trove.guestagent.datastore.manager File "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/mysql_common/manager.py", line 181, in do_prepare 2020-09-24 09:08:23.261 991 ERROR trove.guestagent.datastore.manager self.app.start_db(ds_version=ds_version, command=command) 2020-09-24 09:08:23.261 991 ERROR trove.guestagent.datastore.manager File "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/mysql_common/service.py", line 644, in start_db 2020-09-24 09:08:23.261 991 ERROR trove.guestagent.datastore.manager raise exception.TroveError(_("Failed to start mysql")) 2020-09-24 09:08:23.261 991 ERROR trove.guestagent.datastore.manager trove.common.exception.TroveError: Failed to start mysql 2020-09-24 09:08:23.261 991 ERROR trove.guestagent.datastore.manager 2020-09-24 09:08:23.280 991 INFO trove.guestagent.datastore.manager [-] Ending datastore prepare for 'mysql'. 2020-09-24 09:08:23.281 991 INFO trove.guestagent.datastore.service [-] Set final status to failed to spawn. 2020-09-24 09:08:23.282 991 DEBUG trove.guestagent.datastore.service [-] Casting set_status message to conductor (status is 'failed to spawn'). set_status /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:137 2020-09-24 09:08:23.283 991 DEBUG trove.conductor.api [-] Making async call to cast heartbeat for instance: ea8f8fcb-2732-488f-87f2-510be37fa73b heartbeat /opt/guest-agent-venv/lib/python3.6/site-packages/trove/conductor/api.py:73 2020-09-24 09:08:23.292 991 DEBUG trove.guestagent.datastore.service [-] Successfully cast set_status. 
set_status /opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/service.py:144 2020-09-24 09:08:23.293 991 DEBUG trove.conductor.api [-] Making async call to cast error notification notify_exc_info /opt/guest-agent-venv/lib/python3.6/site-packages/trove/conductor/api.py:115 2020-09-24 09:08:23.303 991 ERROR oslo_messaging.rpc.server [-] Exception during message handling: trove.common.exception.TroveError: Failed to start mysql 2020-09-24 09:08:23.303 991 ERROR oslo_messaging.rpc.server Traceback (most recent call last): 2020-09-24 09:08:23.303 991 ERROR oslo_messaging.rpc.server File "/opt/guest-agent-venv/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming 2020-09-24 09:08:23.303 991 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) 2020-09-24 09:08:23.303 991 ERROR oslo_messaging.rpc.server File "/opt/guest-agent-venv/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 273, in dispatch 2020-09-24 09:08:23.303 991 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) 2020-09-24 09:08:23.303 991 ERROR oslo_messaging.rpc.server File "/opt/guest-agent-venv/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 193, in _do_dispatch 2020-09-24 09:08:23.303 991 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) 2020-09-24 09:08:23.303 991 ERROR oslo_messaging.rpc.server File "/opt/guest-agent-venv/lib/python3.6/site-packages/osprofiler/profiler.py", line 160, in wrapper 2020-09-24 09:08:23.303 991 ERROR oslo_messaging.rpc.server result = f(*args, **kwargs) 2020-09-24 09:08:23.303 991 ERROR oslo_messaging.rpc.server File "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/manager.py", line 183, in prepare 2020-09-24 09:08:23.303 991 ERROR oslo_messaging.rpc.server ds_version=ds_version) 2020-09-24 09:08:23.303 991 ERROR oslo_messaging.rpc.server File "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/manager.py", line 199, in _prepare 2020-09-24 09:08:23.303 991 ERROR oslo_messaging.rpc.server cluster_config, snapshot, ds_version=ds_version) 2020-09-24 09:08:23.303 991 ERROR oslo_messaging.rpc.server File "/opt/guest-agent-venv/lib/python3.6/site-packages/osprofiler/profiler.py", line 160, in wrapper 2020-09-24 09:08:23.303 991 ERROR oslo_messaging.rpc.server result = f(*args, **kwargs) 2020-09-24 09:08:23.303 991 ERROR oslo_messaging.rpc.server File "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/mysql_common/manager.py", line 181, in do_prepare 2020-09-24 09:08:23.303 991 ERROR oslo_messaging.rpc.server self.app.start_db(ds_version=ds_version, command=command) 2020-09-24 09:08:23.303 991 ERROR oslo_messaging.rpc.server File "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/guestagent/datastore/mysql_common/service.py", line 644, in start_db 2020-09-24 09:08:23.303 991 ERROR oslo_messaging.rpc.server raise exception.TroveError(_("Failed to start mysql")) 2020-09-24 09:08:23.303 991 ERROR oslo_messaging.rpc.server trove.common.exception.TroveError: Failed to start mysql 2020-09-24 09:08:23.303 991 ERROR oslo_messaging.rpc.server 2020-09-24 09:08:29.173 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:08:29.174 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip 
status check 2020-09-24 09:09:29.175 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:09:29.176 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip status check 2020-09-24 09:10:29.179 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:10:29.180 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip status check 2020-09-24 09:11:29.183 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:11:29.183 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip status check 2020-09-24 09:12:29.187 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:12:29.188 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip status check 2020-09-24 09:13:29.190 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:13:29.191 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip status check 2020-09-24 09:14:29.193 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:14:29.194 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip status check 2020-09-24 09:15:29.197 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:15:29.198 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip status check 2020-09-24 09:16:29.200 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:16:29.201 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip status check 2020-09-24 09:17:29.204 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:17:29.205 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip status check 2020-09-24 09:18:29.207 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:18:29.208 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip status check 2020-09-24 09:19:29.210 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks 
/opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:19:29.211 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip status check 2020-09-24 09:20:29.214 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:20:29.215 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip status check 2020-09-24 09:21:29.217 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:21:29.218 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip status check 2020-09-24 09:22:29.220 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:22:29.221 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip status check 2020-09-24 09:23:29.224 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:23:29.224 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip status check 2020-09-24 09:24:29.228 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:24:29.228 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip status check 2020-09-24 09:25:29.231 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:25:29.232 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip status check 2020-09-24 09:26:29.235 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:26:29.236 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip status check 2020-09-24 09:27:29.239 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:27:29.240 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip status check 2020-09-24 09:28:29.243 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:28:29.244 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip status check 2020-09-24 09:29:29.248 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:29:29.248 991 INFO trove.guestagent.datastore.manager [-] Database service is not 
installed, skip status check 2020-09-24 09:30:29.251 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:30:29.252 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip status check 2020-09-24 09:31:29.255 991 DEBUG oslo_service.periodic_task [-] Running periodic task Manager.update_status run_periodic_tasks /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_service/periodic_task.py:211 2020-09-24 09:31:29.256 991 INFO trove.guestagent.datastore.manager [-] Database service is not installed, skip status check From victoria at vmartinezdelacruz.com Thu Sep 24 14:27:09 2020 From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=) Date: Thu, 24 Sep 2020 11:27:09 -0300 Subject: [manila] Doc-a-thon event coming up next Thursday (Aug 6th) In-Reply-To: References: Message-ID: A quick update on this with some results. Ethercalc with doc bugs that we worked on is available in https://ethercalc.openstack.org/ur17jprbprxx Some metrics - Total number of doc bugs triaged: 26 (25 uniques, 1 duplicate) - Total number of doc bugs fixed and released: 11 - Total number of doc bugs in progress: 3 - Total number of doc bugs not started: 11 (5 third party vendors, from whom we need input; 6 pushed for next cycle) Other docs enhancements work has been done as part of the doc-a-thon that might not be represented in these metrics. Cheers, V On Wed, Aug 5, 2020 at 3:56 PM Goutham Pacha Ravi wrote: > Thank you so much for putting this together Victoria, and Vida! > As a reminder, we will not be meeting on IRC tomorrow (6th August 2020), > but instead will be in https://meetpad.opendev.org/ManilaV-ReleaseDocAThon > > You can get to the etherpad link for the meeting by going to > etherpad.opendev.org instead of meetpad.opendev.org: > https://etherpad.opendev.org/p/ManilaV-ReleaseDocAThon > > Please bring any documentation issues to that meeting > Hoping to see you all there! > > > On Mon, Aug 3, 2020 at 12:20 PM Victoria Martínez de la Cruz < > victoria at vmartinezdelacruz.com> wrote: > >> Hi everybody, >> >> An update on this. We decided to take over the upstream meeting directly >> and start *at* the slot of the Manila weekly meeting. We will join the >> Jitsi bridge [0] at 3pm UTC time and start going through the list of bugs >> we have in [1]. There is no finish time, you can join and leave the bridge >> freely. We will also use IRC Freenode channel #openstack-manila if needed. >> >> If the time slot doesn't work for you (we are aware this is not a >> friendly slot for EMEA/APAC), you can still go through the bug list in [1], >> claim a bug and work on it. >> >> If things go well, we plan to do this again in a different slot so >> everybody that wants to collaborate can do it. >> >> Looking forward to see you there, >> >> Cheers, >> >> V >> >> [0] https://meetpad.opendev.org/ManilaV-ReleaseDocAThon >> [1] https://ethercalc.openstack.org/ur17jprbprxx >> >> On Fri, Jul 31, 2020 at 2:05 PM Victoria Martínez de la Cruz < >> victoria at vmartinezdelacruz.com> wrote: >> >>> Hi folks, >>> >>> We will be organizing a doc-a-thon next Thursday, August 6th, with the >>> main goal of improving our docs for the next release. 
We will be gathering >>> on our Freenode channel #openstack-manila after our weekly meeting (3pm >>> UTC) and also using a videoconference tool (exact details TBC) to go over a >>> curated list of opened doc bugs we have here [0]. >>> >>> *Your* participation is truly valued, being you an already Manila >>> contributor or if you are interested in contributing and you didn't know >>> how, so looking forward to seeing you there :) >>> >>> Cheers, >>> >>> Victoria >>> >>> [0] https://ethercalc.openstack.org/ur17jprbprxx >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Thu Sep 24 14:48:29 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 24 Sep 2020 08:48:29 -0600 Subject: [tripleo] rdo infrastructure maintenance In-Reply-To: References: Message-ID: On Thu, Sep 24, 2020 at 6:15 AM Wesley Hayutin wrote: > Greetings, > > There is some maintenance on the rdo infra that is causing check and gate > jobs to fail upstream. > The situation is actively being worked on at this time. > > You may find errors in your check / gate jobs.. like the following. > > TASK [oooci-build-images : Download TripleO source image] > 2020-09-24 10:26:35.615996 | primary | ERROR > 2020-09-24 10:26:35.617685 | primary | { > 2020-09-24 10:26:35.617774 | primary | "attempts": 60, > 2020-09-24 10:26:35.617850 | primary | "msg": "Failed to connect to images.rdoproject.org at port 443: [Errno 113] No route to host", > 2020-09-24 10:26:35.617929 | primary | "status": -1, > 2020-09-24 10:26:35.618001 | primary | "url": "https://images.rdoproject.org/CentOS-8-x86_64-GenericCloud.qcow2" > 2020-09-24 10:26:35.618072 | primary | } > > https://5134c188955ee39fd51b-cd04c057bb6c1703504e41a1dbc31642.ssl.cf5.rackcdn.com/750812/10/gate/tripleo-buildimage-overcloud-full-centos-8/3479dc9/job-output.txt > > > Sorry for the trouble, we'll update this email when we're clear. > > All clear.. recheck your heart out.. I know you already have :) > 0/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Sep 24 14:55:02 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 24 Sep 2020 09:55:02 -0500 Subject: [tc][all] Wallaby Cycle Community Goals In-Reply-To: References: <40b8e159-826c-cc07-d0f8-a9b21532d544@ham.ie> Message-ID: <174c09c4b6b.c00f55be81965.3837064844428484620@ghanshyammann.com> ---- On Thu, 24 Sep 2020 03:45:38 -0500 Lajos Katona wrote ---- > Hi,I would like to selectively respond to the number of goals per cycle question. > A possible direction could be to forget the "one cycle goal" thing and allow to finish the goals in a longer time frame. From "management" perspective the important is to have a fix number of goals per cycle to avoid overallocation of people. > Another approach could be to attach a number, a difficulty feeling or similar to the proposed goals to make it easier to select them, and avoid to choose 2 hard-to-do goal for one cycle.This numbering can be done by project teams/PTLs whoever has the insight for the projects.Example: zuulv3 migration can be a hard to do goal as affects the whole gating of a project with hard coordination between projects.Add healthcheck API is much simpler as can be done without affecting the life of a whole project, or the community. +1 on these feedback. 
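(As a point of reference for the /healthcheck goal mentioned above: for most services it amounts to wiring the oslo.middleware healthcheck application into the paste pipeline. A minimal sketch, modelled on how Keystone exposes it; the file name, section names and paths below are illustrative and vary per service:

    [app:healthcheck]
    use = egg:oslo.middleware#healthcheck
    backends = disable_by_file
    disable_by_file_path = /etc/<service>/healthcheck_disable

    [composite:main]
    use = egg:Paste#urlmap
    /healthcheck = healthcheck
    /v3 = api_v3

A load balancer can then poll http://<api-host>:<port>/healthcheck, and an operator can take a node out of rotation by creating the disable file, without stopping the service.)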
> A possible direction could be to forget the "one cycle goal" thing This is much needed for complex goal or need more work like OSC one, we have not done yet but we discussed the possibility of multi-cycle goal in last cycle. We can add the framwork/guidance for multi-cycle goal like it has to be devided by work to be done or by projects(like target set of projects per cycle). -gmann > RegardsLajos Katona (lajoskatona) > > Graham Hayes ezt írta (időpont: 2020. szept. 21., H, 19:54): > Hi All > > It is that time of year / release again - and we need to choose the > community goals for Wallaby. > > Myself and Nate looked over the list of goals [1][2][3], and we are > suggesting one of the following: > > > - Finish moving legacy python-*client CLIs to python-openstackclient > - Move from oslo.rootwrap to oslo.privsep > - Implement the API reference guide changes > - All API to provide a /healthcheck URL like Keystone (and others) provide > > Some of these goals have champions signed up already, but we need to > make sure they are still available to do them. If you are interested in > helping drive any of the goals, please speak up! > > We need to select goals in time for the new release cycle - so please > reply if there is goals you think should be included in this list, or > not included. > > Next steps after this will be helping people write a proposed goal > and then the TC selecting the ones we will pursue during Wallaby. > > Additionally, we have traditionally selected 2 goals per cycle - > however with the people available to do the work across projects > Nate and I briefly discussed reducing that to one for this cycle. > > What does the community think about this? > > Thanks, > > Graham > > 1 - https://etherpad.opendev.org/p/community-goals > 2 - https://governance.openstack.org/tc/goals/proposed/index.html > 3 - https://etherpad.opendev.org/p/community-w-series-goals > 4 - > https://governance.openstack.org/tc/goals/index.html#goal-selection-schedule > > From gmann at ghanshyammann.com Thu Sep 24 15:03:31 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 24 Sep 2020 10:03:31 -0500 Subject: [tc][all] Wallaby Cycle Community Goals In-Reply-To: <40b8e159-826c-cc07-d0f8-a9b21532d544@ham.ie> References: <40b8e159-826c-cc07-d0f8-a9b21532d544@ham.ie> Message-ID: <174c0a40e62.10a3aebf382501.4488410530424253723@ghanshyammann.com> ---- On Mon, 21 Sep 2020 12:53:17 -0500 Graham Hayes wrote ---- > Hi All > > It is that time of year / release again - and we need to choose the > community goals for Wallaby. > > Myself and Nate looked over the list of goals [1][2][3], and we are > suggesting one of the following: > > Thanks Graham, Nate for starting this. > - Finish moving legacy python-*client CLIs to python-openstackclient Are not we going with popup team first for osc work? I am fine with goal also but we should do this as multi-cycle goal with no other goal in parallel so that we actually finish this on time. > - Move from oslo.rootwrap to oslo.privsep +1, this is already proposed goal since last cycle. -gmann > - Implement the API reference guide changes > - All API to provide a /healthcheck URL like Keystone (and others) provide > > Some of these goals have champions signed up already, but we need to > make sure they are still available to do them. If you are interested in > helping drive any of the goals, please speak up! 
> > We need to select goals in time for the new release cycle - so please > reply if there is goals you think should be included in this list, or > not included. > > Next steps after this will be helping people write a proposed goal > and then the TC selecting the ones we will pursue during Wallaby. > > Additionally, we have traditionally selected 2 goals per cycle - > however with the people available to do the work across projects > Nate and I briefly discussed reducing that to one for this cycle. > > What does the community think about this? > > Thanks, > > Graham > > 1 - https://etherpad.opendev.org/p/community-goals > 2 - https://governance.openstack.org/tc/goals/proposed/index.html > 3 - https://etherpad.opendev.org/p/community-w-series-goals > 4 - > https://governance.openstack.org/tc/goals/index.html#goal-selection-schedule > > From stephenfin at redhat.com Thu Sep 24 15:09:05 2020 From: stephenfin at redhat.com (Stephen Finucane) Date: Thu, 24 Sep 2020 16:09:05 +0100 Subject: [elections][placement][blazar] Placement PTL Non-candidacy: Stepping down In-Reply-To: References: Message-ID: <31fc048e7216adc479b18f55b7bb9f8212ab213c.camel@redhat.com> On Wed, 2020-09-23 at 11:05 +0900, Tetsuro Nakamura wrote: > Hello everyone, > > Due to my current responsibilities, > I'm not able to keep up with my duties > either as a Placement PTL, core reviewer, > or as a Blazar core reviewer in Wallaby cycle. > > Thank you so much to everyone that has supported. > > I won't be able to checking ML or IRC, > but I'll still be checking my emails. > Please ping me via email if you need help. > > Thanks. > > - Tetsuro Thank you for your work on Placement over the past few years. It has been much appreciated :) Stephen From stephenfin at redhat.com Thu Sep 24 15:23:36 2020 From: stephenfin at redhat.com (Stephen Finucane) Date: Thu, 24 Sep 2020 16:23:36 +0100 Subject: [placement][nova][cinder][neutron][blazar] Placement governance switch(back) Message-ID: Placement has been a separate project with its own governance since Stein [1]. Since then, the main drivers behind the separation have moved onto pastures new and with Tetsuro sadly declaring his non- candidacy for the PTL position for Wallaby [2], we're left in the unenviable position of potentially not having a PTL for the Wallaby cycle. As such, it's probably time to discuss the future of Placement governance. Assuming no one steps forward for the Placement PTL role, it would appear to me that we have two options. Either we look at transitioning Placement to a PTL-less project, or we move it back under nova governance. To be honest, given how important placement is to nova and other projects now, I'm uncomfortable with the idea of not having a point person who is ultimately responsible for things like cutting a release (yes, delegation is encouraged but someone needs to herd the cats). At the same time, I do realize that placement is used by more that nova now so nova cores and what's left of the separate placement core team shouldn't be the only ones making this decision. So, assuming the worst happens and placement is left without a PTL for Victoria, what do we want to do? Stephen PS: Apologies if I missed other projects with an interest in placement. 
I did try to catch them all /o\ [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002575.html [2] http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017494.html From fungi at yuggoth.org Thu Sep 24 15:33:25 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 24 Sep 2020 15:33:25 +0000 Subject: [placement][nova][cinder][neutron][blazar] Placement governance switch(back) In-Reply-To: References: Message-ID: <20200924153323.5dyyv3u7hw5rqnb4@yuggoth.org> On 2020-09-24 16:23:36 +0100 (+0100), Stephen Finucane wrote: [...] > Either we look at transitioning Placement to a PTL-less project, > or we move it back under nova governance. To be honest, given how > important placement is to nova and other projects now, I'm > uncomfortable with the idea of not having a point person who is > ultimately responsible for things like cutting a release (yes, > delegation is encouraged but someone needs to herd the cats). [...] The officially sanctioned PTL-less option (distributed project leadership) still lists "Release Liaison" as the very first required role: I don't think I would discount the DPL option purely over a concern for lack of point person responsible for Placement's release activity. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From gmann at ghanshyammann.com Thu Sep 24 15:53:26 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 24 Sep 2020 10:53:26 -0500 Subject: [all][tc][goals] Migrate CI/CD jobs to new Ubuntu LTS Focal: Week R-4 Update In-Reply-To: <174b2d0795a.b733bbbf111764.6640064513583537393@ghanshyammann.com> References: <17468f8f785.1123a07e6307801.3844845549431858647@ghanshyammann.com> <1746feef403.c41c848411058.2421030943223377212@ghanshyammann.com> <1749990229b.bfd8d46673892.9090899423267334607@ghanshyammann.com> <174b2d0795a.b733bbbf111764.6640064513583537393@ghanshyammann.com> Message-ID: <174c0d1c44c.106545efe85314.8862328551933808703@ghanshyammann.com> ---- On Mon, 21 Sep 2020 17:37:21 -0500 Ghanshyam Mann wrote ---- > Updates: > > * Ceilometer fix is ready to merge with +A - https://review.opendev.org/#/c/752294/ > > * Barbican fix is not yet merged but I think we should move on and switch the integration testing to Focal > and keeping the barbican based job keep running on Bionic. > > * If you have any of the job failing on Focal and can not be fixed quickly then set the bionic nodeset for those to avoid > gate block. Examples are below: > - https://review.opendev.org/#/c/743079/4/.zuul.yaml at 31 > - https://review.opendev.org/#/c/743124/3/.zuul.d/base.yaml > > * I am planning to switch the devstck and tempest base job by tomorrow or the day after tomorrow, please take appropriate > action in advance if your project is not tested or failing. Devstack patch is merged which run devstack base jobs on Focal now. But Tempest patch is still in gate, so you might see tempest(-based) jobs failing on your gate, hold the recheck until this is merged - https://review.opendev.org/#/c/734700/ -gmann > > Testing Status: > =========== > * ~270 repos gate have been tested green or fixed till now. > ** https://review.opendev.org/#/q/topic:migrate-to-focal+(status:abandoned+OR+status:merged) > > * ~28 repos are failing. Need immediate action mentioned above. 
> ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open > > * ~18repos fixes ready to merge: > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open+label%3AVerified%3E%3D1%2Czuul+NOT+label%3AWorkflow%3C%3D-1 > > > -gmann > > > > > -gmann > > > > > > > > > > > -gmann > > > > > > > > > ---- On Mon, 07 Sep 2020 09:29:40 -0500 Ghanshyam Mann wrote ---- > > > > Hello Everyone, > > > > > > > > Please find the week R-4 updates on 'Ubuntu Focal migration' community goal. Its time to force the base jobs migration which can > > > > break the projects gate if not yet taken care of. Read below for the plan. > > > > > > > > Tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > > > > > Progress: > > > > ======= > > > > * We are close to V-3 release and this is time we have to complete this migration otherwise doing it in RC period can add > > > > unnecessary and last min delay. I am going to plan this migration in two-part. This will surely break some projects gate > > > > which is not yet finished the migration but we have to do at some time. Please let me know if any objection to the below > > > > plan. > > > > > > > > * Part1: Migrating tox base job tomorrow (8th Sept): > > > > > > > > ** I am going to open tox base jobs migration (doc, unit, functional, lower-constraints etc) to merge by tomorrow. which is this > > > > series (all base patches of this): https://review.opendev.org/#/c/738328/ . > > > > > > > > **There are few repos still failing on requirements lower-constraints job specifically which I tried my best to fix as many as possible. > > > > Many are ready to merge also. Please merge or work on your projects repo testing before that or fix on priority if failing. > > > > > > > > * Part2: Migrating devstack/tempest base job on 10th sept: > > > > > > > > * We have few open bugs for this which are not yet resolved, we will see how it goes but the current plan is to migrate by 10th Sept. > > > > > > > > ** Bug#1882521 > > > > ** DB migration issues, > > > > *** alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 > > > > > > > > > > > > Testing Till now: > > > > ============ > > > > * ~200 repos gate have been tested or fixed till now. > > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+(status:abandoned+OR+status:merged) > > > > > > > > * ~100 repos are under test and failing. Debugging and fixing are in progress (If you would like to help, please check your > > > > project repos if I am late to fix them): > > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open > > > > > > > > * ~30repos fixes ready to merge: > > > > ** https://review.opendev.org/#/q/topic:migrate-to-focal+status:open+label%3AVerified%3E%3D1%2Czuul+NOT+label%3AWorkflow%3C%3D-1 > > > > > > > > > > > > Bugs Report: > > > > ========== > > > > > > > > 1. Bug#1882521. (IN-PROGRESS) > > > > There is open bug for nova/cinder where three tempest tests are failing for > > > > volume detach operation. There is no clear root cause found yet > > > > -https://bugs.launchpad.net/cinder/+bug/1882521 > > > > We have skipped the tests in tempest base patch to proceed with the other > > > > projects testing but this is blocking things for the migration. > > > > > > > > 2. DB migration issues (IN-PROGRESS) > > > > * alembic and few on telemetry/gnocchi side https://github.com/sqlalchemy/alembic/issues/699, https://storyboard.openstack.org/#!/story/2008003 > > > > > > > > 3. 
We encountered the nodeset name conflict with x/tobiko. (FIXED) > > > > nodeset conflict is resolved now and devstack provides all focal nodes now. > > > > > > > > 4. Bug#1886296. (IN-PROGRESS) > > > > pyflakes till 2.1.0 is not compatible with python 3.8 which is the default python version > > > > on ubuntu focal[1]. With pep8 job running on focal faces the issue and fail. We need to bump > > > > the pyflakes to 2.1.1 as min version to run pep8 jobs on py3.8. > > > > As of now, many projects are using old hacking version so I am explicitly adding pyflakes>=2.1.1 > > > > on the project side[2] but for the long term easy maintenance, I am doing it in 'hacking' requirements.txt[3] > > > > nd will release a new hacking version. After that project can move to new hacking and do not need > > > > to maintain pyflakes version compatibility. > > > > > > > > 5. Bug#1886298. (IN-PROGRESS) > > > > 'Markupsafe' 1.0 is not compatible with the latest version of setuptools[4], > > > > We need to bump the lower-constraint for Markupsafe to 1.1.1 to make it work. > > > > There are a few more issues[5] with lower-constraint jobs which I am debugging. > > > > > > > > > > > > What work to be done on the project side: > > > > ================================ > > > > This goal is more of testing the jobs on focal and fixing bugs if any otherwise > > > > migrate jobs by switching the nodeset to focal node sets defined in devstack. > > > > > > > > 1. Start a patch in your repo by making depends-on on either of below: > > > > devstack base patch if you are using only devstack base jobs not tempest: > > > > > > > > Depends-on: https://review.opendev.org/#/c/731207/ > > > > OR > > > > tempest base patch if you are using the tempest base job (like devstack-tempest): > > > > Depends-on: https://review.opendev.org/#/c/734700/ > > > > > > > > Both have depends-on on the series where I am moving unit/functional/doc/cover/nodejs tox jobs to focal. So > > > > you can test the complete gate jobs(unit/functional/doc/integration) together. > > > > This and its base patches - https://review.opendev.org/#/c/738328/ > > > > > > > > Example: https://review.opendev.org/#/c/738126/ > > > > > > > > 2. If none of your project jobs override the nodeset then above patch will be > > > > testing patch(do not merge) otherwise change the nodeset to focal. > > > > Example: https://review.opendev.org/#/c/737370/ > > > > > > > > 3. If the jobs are defined in branchless repo and override the nodeset then you need to override the branches > > > > variant to adjust the nodeset so that those jobs run on Focal on victoria onwards only. If no nodeset > > > > is overridden then devstack being branched and stable base job using bionic/xenial will take care of > > > > this. > > > > Example: https://review.opendev.org/#/c/744056/2 > > > > > > > > 4. If no updates need you can abandon the testing patch (https://review.opendev.org/#/c/744341/). If it need > > > > updates then modify the same patch with proper commit msg, once it pass the gate then remove the Depends-On > > > > so that you can merge your patch before base jobs are switched to focal. This way we make sure no gate downtime in > > > > this migration. > > > > Example: https://review.opendev.org/#/c/744056/1..2//COMMIT_MSG > > > > > > > > Once we finish the testing on projects side and no failure then we will merge the devstack and tempest > > > > base patches. 
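(To make steps 2 and 3 above more concrete, a nodeset override in a project's .zuul.yaml roughly looks like the sketch below. The job names are invented for illustration; the nodeset names are the ones devstack defines for this goal, so double-check them against your branch before copying anything:

    - job:
        name: my-project-functional
        parent: devstack-tempest
        nodeset: openstack-single-node-focal

    # branchless repos: keep older branches on Bionic so only victoria onwards moves to Focal
    - job:
        name: my-project-functional
        nodeset: openstack-single-node-bionic
        branches:
          - stable/train
          - stable/ussuri

Jobs that simply inherit their parent's nodeset need no change once the devstack/tempest base jobs are switched.)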
> > > > > > > > > > > > Important things to note: > > > > =================== > > > > * Do not forgot to add the story and task link to your patch so that we can track it smoothly. > > > > * Use gerrit topic 'migrate-to-focal' > > > > * Do not backport any of the patches. > > > > > > > > > > > > References: > > > > ========= > > > > Goal doc: https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal.html > > > > Storyboard tracking: https://storyboard.openstack.org/#!/story/2007865 > > > > > > > > [1] https://github.com/PyCQA/pyflakes/issues/367 > > > > [2] https://review.opendev.org/#/c/739315/ > > > > [3] https://review.opendev.org/#/c/739334/ > > > > [4] https://github.com/pallets/markupsafe/issues/116 > > > > [5] https://zuul.opendev.org/t/openstack/build/7ecd9cf100194bc99b3b70fa1e6de032 > > > > > > > > -gmann > > > > > > > > > > > > > > > > > From radoslaw.piliszek at gmail.com Thu Sep 24 16:00:10 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 24 Sep 2020 18:00:10 +0200 Subject: [placement][nova][cinder][neutron][blazar] Placement governance switch(back) In-Reply-To: References: Message-ID: On Thu, Sep 24, 2020 at 5:24 PM Stephen Finucane wrote: > > Placement has been a separate project with its own governance since > Stein [1]. Since then, the main drivers behind the separation have > moved onto pastures new and with Tetsuro sadly declaring his non- > candidacy for the PTL position for Wallaby [2], we're left in the > unenviable position of potentially not having a PTL for the Wallaby > cycle. As such, it's probably time to discuss the future of Placement > governance. > > Assuming no one steps forward for the Placement PTL role, it would > appear to me that we have two options. Either we look at transitioning > Placement to a PTL-less project, or we move it back under nova > governance. To be honest, given how important placement is to nova and > other projects now, I'm uncomfortable with the idea of not having a > point person who is ultimately responsible for things like cutting a > release (yes, delegation is encouraged but someone needs to herd the > cats). At the same time, I do realize that placement is used by more > that nova now so nova cores and what's left of the separate placement > core team shouldn't be the only ones making this decision. > > So, assuming the worst happens and placement is left without a PTL for > Victoria, what do we want to do? Run DPL with liaisons from the interested projects perhaps? :-) > Stephen > > PS: Apologies if I missed other projects with an interest in placement. > I did try to catch them all /o\ I know of Zun. -yoctozepto From akekane at redhat.com Thu Sep 24 16:04:25 2020 From: akekane at redhat.com (Abhishek Kekane) Date: Thu, 24 Sep 2020 21:34:25 +0530 Subject: [Glance] PTL on Vacation Message-ID: Hi All, I'm starting my vacations from 28th September and will be back on October 5th. Please direct any issues to the rest of the core team. Also there will be no glance weekly meeting on 01 October [1]. [1] https://etherpad.opendev.org/p/glance-team-meeting-agenda Thanks, Abhishek -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Thu Sep 24 16:11:53 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 24 Sep 2020 11:11:53 -0500 Subject: [placement][nova][cinder][neutron][blazar] Placement governance switch(back) In-Reply-To: References: Message-ID: <174c0e2a858.ea442d3c86287.2448021907888276285@ghanshyammann.com> ---- On Thu, 24 Sep 2020 11:00:10 -0500 Radosław Piliszek wrote ---- > On Thu, Sep 24, 2020 at 5:24 PM Stephen Finucane wrote: > > > > Placement has been a separate project with its own governance since > > Stein [1]. Since then, the main drivers behind the separation have > > moved onto pastures new and with Tetsuro sadly declaring his non- > > candidacy for the PTL position for Wallaby [2], we're left in the > > unenviable position of potentially not having a PTL for the Wallaby > > cycle. As such, it's probably time to discuss the future of Placement > > governance. > > > > Assuming no one steps forward for the Placement PTL role, it would > > appear to me that we have two options. Either we look at transitioning > > Placement to a PTL-less project, or we move it back under nova > > governance. To be honest, given how important placement is to nova and > > other projects now, I'm uncomfortable with the idea of not having a > > point person who is ultimately responsible for things like cutting a > > release (yes, delegation is encouraged but someone needs to herd the > > cats). At the same time, I do realize that placement is used by more > > that nova now so nova cores and what's left of the separate placement > > core team shouldn't be the only ones making this decision. > > > > So, assuming the worst happens and placement is left without a PTL for > > Victoria, what do we want to do? > > Run DPL with liaisons from the interested projects perhaps? :-) +1, this is good way to share the responsibilty. -gmann > > > Stephen > > > > PS: Apologies if I missed other projects with an interest in placement. > > I did try to catch them all /o\ > > I know of Zun. > > -yoctozepto > > From ltoscano at redhat.com Thu Sep 24 16:12:06 2020 From: ltoscano at redhat.com (Luigi Toscano) Date: Thu, 24 Sep 2020 18:12:06 +0200 Subject: [placement][nova][cinder][neutron][blazar] Placement governance switch(back) In-Reply-To: References: Message-ID: <3063992.oiGErgHkdL@whitebase.usersys.redhat.com> On Thursday, 24 September 2020 17:23:36 CEST Stephen Finucane wrote: > Assuming no one steps forward for the Placement PTL role, it would > appear to me that we have two options. Either we look at transitioning > Placement to a PTL-less project, or we move it back under nova > governance. To be honest, given how important placement is to nova and > other projects now, I'm uncomfortable with the idea of not having a > point person who is ultimately responsible for things like cutting a > release (yes, delegation is encouraged but someone needs to herd the > cats). At the same time, I do realize that placement is used by more > that nova now so nova cores and what's left of the separate placement > core team shouldn't be the only ones making this decision. > > So, assuming the worst happens and placement is left without a PTL for > Victoria, what do we want to do? I mentioned this on IRC, but just for completeness, there is another option: have the Nova candidate PTL (I assume there is just one) also apply for Placement PTL, and handle the 2 realms in a personal union. 
Ciao -- Luigi From gr at ham.ie Thu Sep 24 16:34:36 2020 From: gr at ham.ie (Graham Hayes) Date: Thu, 24 Sep 2020 17:34:36 +0100 Subject: [tc][all] Wallaby Cycle Community Goals In-Reply-To: <174c0a40e62.10a3aebf382501.4488410530424253723@ghanshyammann.com> References: <40b8e159-826c-cc07-d0f8-a9b21532d544@ham.ie> <174c0a40e62.10a3aebf382501.4488410530424253723@ghanshyammann.com> Message-ID: On 24/09/2020 16:03, Ghanshyam Mann wrote: > ---- On Mon, 21 Sep 2020 12:53:17 -0500 Graham Hayes wrote ---- > > Hi All > > > > It is that time of year / release again - and we need to choose the > > community goals for Wallaby. > > > > Myself and Nate looked over the list of goals [1][2][3], and we are > > suggesting one of the following: > > > > > > Thanks Graham, Nate for starting this. > > > - Finish moving legacy python-*client CLIs to python-openstackclient > > Are not we going with popup team first for osc work? I am fine with goal also but > we should do this as multi-cycle goal with no other goal in parallel so that we actually > finish this on time. Yeah - this was just one of the goals we thought might have some discussion, and we didn't know where the popup team was in their work. If that work is still on going, we should leave the goal for another cycle or two. > > - Move from oslo.rootwrap to oslo.privsep > > +1, this is already proposed goal since last cycle. > > -gmann > > > - Implement the API reference guide changes > > - All API to provide a /healthcheck URL like Keystone (and others) provide > > > > Some of these goals have champions signed up already, but we need to > > make sure they are still available to do them. If you are interested in > > helping drive any of the goals, please speak up! > > > > We need to select goals in time for the new release cycle - so please > > reply if there is goals you think should be included in this list, or > > not included. > > > > Next steps after this will be helping people write a proposed goal > > and then the TC selecting the ones we will pursue during Wallaby. > > > > Additionally, we have traditionally selected 2 goals per cycle - > > however with the people available to do the work across projects > > Nate and I briefly discussed reducing that to one for this cycle. > > > > What does the community think about this? > > > > Thanks, > > > > Graham > > > > 1 - https://etherpad.opendev.org/p/community-goals > > 2 - https://governance.openstack.org/tc/goals/proposed/index.html > > 3 - https://etherpad.opendev.org/p/community-w-series-goals > > 4 - > > https://governance.openstack.org/tc/goals/index.html#goal-selection-schedule > > > > > From alan.davis at apogee-research.com Thu Sep 24 18:07:24 2020 From: alan.davis at apogee-research.com (Alan Davis) Date: Thu, 24 Sep 2020 14:07:24 -0400 Subject: LVM misconfiguration after openstack stackpack server hang and reboot Message-ID: This morning my CentOS 7.7 RDO packstack installation of Rocky hung. On reboot some of the VMs won't start. This is a primary system and I need to find the most expedient way to recover without losing data. I'm not using LVM thin volumes. Any help is appreciated. Looking at nova-compute.log I see errors trying to find LUN 0 during the sysfs stage. Several machines won't boot because their root disk entries in LVM are seen as PV and booting them doesn't see them in the DM subsystem. Other machines boot but there attached disks throw LVM errors about duplicate PV and preferring the cinder-volumes VG version. 
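(A quick diagnostic aside on the duplicate-PV warnings described here; the device names below are taken from the lvs output later in this thread and are only examples:

    # show each PV with its UUID and VG; with duplicates present LVM also prints which device it picked
    sudo pvs -o pv_name,pv_uuid,vg_name
    # confirm that the iSCSI-attached disk and the cinder LV carry the same PV signature
    sudo blkid /dev/sdu /dev/cinder-volumes/volume-c8da1abf-7143-422c-9ee5-b2724a71c8ff

Both paths presumably resolve to the same data because an all-in-one packstack host both serves the cinder LVM backend and attaches those volumes back to itself over iSCSI.)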
LVM is showing LVs that have both "bare" entries as well as entries in cinder-volumes and it's complaining about duplicate PVs, not using lvmetad and preferring some entries because they are in the dm subsystem. I've verified that, so far, I haven't lost any data. The "bare" LV not being used as part of the DM subsystem because it's server won't boot can be mounted on the openstack host and all data on it is accessible. This host has rebooted cleanly multiple times in the past. This is the first time it's shown any problems. Am I missing an LVM filter? (unlikely since it wasn't neede before) How can I reset the LVM configuration and convince it that it's not seeing duplicate PV? How do I ensure that openstack sees the right UUID and volume ID? Excerpts from error log and output of lvs : --- nova-compute.log --- during VM start 2020-09-24 11:15:27.091 13953 INFO os_brick.initiator.connectors.iscsi [req-8d15fb6a-6324-471e-9497-587885eef8f6 396aeda6552f44fdac5f878b90325ee1 54af92f2bb494355b96024076184d1c8 - default default] Trying to connect to iSCSI portal 172.10.0.40:3260 2020-09-24 11:15:29.721 13953 WARNING nova.compute.manager [req-fd32e16f-c879-402f-a32c-6be45a943c34 48af9a366301467d9fec912fd1c072c6 f9fc7b412a8446d083da1356aa370eb4 - default d efault] [instance: de7d740c-786a-4aa2-aa09-d447ae7e14b6] Received unexpected event network-vif-unplugged-79aff403-d2e4-4266-bd88-d7bd19d501a9 for instance with vm_state stopped a nd task_state powering-on. 2020-09-24 11:16:21.361 13953 WARNING os_brick.initiator.connectors.iscsi [req-8d15fb6a-6324-471e-9497-587885eef8f6 396aeda6552f44fdac5f878b90325ee1 54af92f2bb494355b96024076184d 1c8 - default default] LUN 0 on iSCSI portal 172.10.0.40:3260 not found on sysfs after logging in. 2020-09-24 11:16:23.482 13953 INFO os_brick.initiator.connectors.iscsi [req-8d15fb6a-6324-471e-9497-587885eef8f6 396aeda6552f44fdac5f878b90325ee1 54af92f2bb494355b96024076184d1c8 - default default] Trying to connect to iSCSI portal 172.10.0.40:3260 2020-09-24 11:17:17.741 13953 WARNING os_brick.initiator.connectors.iscsi [req-8d15fb6a-6324-471e-9497-587885eef8f6 396aeda6552f44fdac5f878b90325ee1 54af92f2bb494355b96024076184d 1c8 - default default] LUN 0 on iSCSI portal 172.10.0.40:3260 not found on sysfs after logging in.: VolumeDeviceNotFound: Volume device not found at . 2020-09-24 11:17:21.864 13953 INFO os_brick.initiator.connectors.iscsi [req-8d15fb6a-6324-471e-9497-587885eef8f6 396aeda6552f44fdac5f878b90325ee1 54af92f2bb494355b96024076184d1c8 - default default] Trying to connect to iSCSI portal 172.10.0.40:3260 2020-09-24 11:18:16.113 13953 WARNING os_brick.initiator.connectors.iscsi [req-8d15fb6a-6324-471e-9497-587885eef8f6 396aeda6552f44fdac5f878b90325ee1 54af92f2bb494355b96024076184d 1c8 - default default] LUN 0 on iSCSI portal 172.10.0.40:3260 not found on sysfs after logging in.: VolumeDeviceNotFound: Volume device not found at . 2020-09-24 11:18:17.252 13953 INFO nova.compute.manager [req-8d15fb6a-6324-471e-9497-587885eef8f6 396aeda6552f44fdac5f878b90325ee1 54af92f2bb494355b96024076184d1c8 - default defa ult] [instance: de7d740c-786a-4aa2-aa09-d447ae7e14b6] Successfully reverted task state from powering-on on failure for instance. 2020-09-24 11:18:17.279 13953 ERROR oslo_messaging.rpc.server [req-8d15fb6a-6324-471e-9497-587885eef8f6 396aeda6552f44fdac5f878b90325ee1 54af92f2bb494355b96024076184d1c8 - defaul t default] Exception during message handling: VolumeDeviceNotFound: Volume device not found at . 
2020-09-24 11:18:17.279 13953 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2020-09-24 11:18:17.279 13953 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 163, in _process_incoming
2020-09-24 11:18:17.279 13953 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)

--- lvs output ---
I've annotated 1 machine's disks to illustrate the relationship between the volume-*** cinder-volumes vg entries and the "bare" lv seen as directly accessible from the host. There are 3 servers that won't boot; they are the ones whose home/vg_home and encrypted_home/encrypted_vg entries are shown.

WARNING: Not using lvmetad because duplicate PVs were found.
WARNING: Use multipath or vgimportclone to resolve duplicate PVs?
WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.
WARNING: Not using device /dev/sdu for PV yZy8Xk-foKT-ovjV-0EZv-VxEM-GqiP-WH7k53. == backup_lv/encrypted_vg
WARNING: Not using device /dev/sdv for PV tHA9ui-eSIO-MDmI-RM3u-3Bf4-Dznb-Ha3XfP. == varoptgitlab/encrypted_vg
WARNING: Not using device /dev/sdm for PV 5eoyCa-sMO4-b7O4-jIfh-byZE-L5pS-3lOu0D.
WARNING: Not using device /dev/sdp for PV 3BI0nV-TP0k-rgPC-PrjH-FT7z-reMe-ec1spj.
WARNING: Not using device /dev/sdt for PV ILdbcY-VFCm-fnH6-Y3jc-pdWZ-fnl8-PH3TPe. == storage_lv/encrypted_vg
WARNING: Not using device /dev/sdr for PV zowU2N-oaBh-r4cO-cxgX-YYiq-Kf3q-mqlHfK.
WARNING: PV yZy8Xk-foKT-ovjV-0EZv-VxEM-GqiP-WH7k53 prefers device /dev/cinder-volumes/volume-c8da1abf-7143-422c-9ee5-b2724a71c8ff because device is in dm subsystem.
WARNING: PV tHA9ui-eSIO-MDmI-RM3u-3Bf4-Dznb-Ha3XfP prefers device /dev/cinder-volumes/volume-0a12012f-8c2e-41fb-aa0c-a7ae99c62487 because device is in dm subsystem.
WARNING: PV 5eoyCa-sMO4-b7O4-jIfh-byZE-L5pS-3lOu0D prefers device /dev/cinder-volumes/volume-990a057c-46cc-4a81-ba02-28b72c34791d because device is in dm subsystem.
WARNING: PV 3BI0nV-TP0k-rgPC-PrjH-FT7z-reMe-ec1spj prefers device /dev/cinder-volumes/volume-b6a9da6e-1958-46ea-90b4-ac1aebed8c04 because device is in dm subsystem.
WARNING: PV ILdbcY-VFCm-fnH6-Y3jc-pdWZ-fnl8-PH3TPe prefers device /dev/cinder-volumes/volume-302dd53b-7d05-4f6d-9ada-8f2ed6e1d4c6 because device is in dm subsystem.
WARNING: PV zowU2N-oaBh-r4cO-cxgX-YYiq-Kf3q-mqlHfK prefers device /dev/cinder-volumes/volume-df006472-be7a-4957-972a-1db4463f5d67 because device is in dm subsystem.

LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
home centos_stack3 -wi-ao---- 4.00g
root centos_stack3 -wi-ao---- 50.00g
swap centos_stack3 -wi-ao---- 4.00g
_snapshot-05b1e46b-1ae3-4cd0-9117-3fb53a6d94b0 cinder-volumes swi-a-s--- 20.00g volume-1d0ff5d5-93a3-44e8-8bfa-a9290765c8c6 0.00
lv_filestore cinder-volumes -wi-ao---- 1.00t
...
volume-c8da1abf-7143-422c-9ee5-b2724a71c8ff cinder-volumes -wi-ao---- 100.00g
volume-0a12012f-8c2e-41fb-aa0c-a7ae99c62487 cinder-volumes -wi-ao---- 60.00g
volume-990a057c-46cc-4a81-ba02-28b72c34791d cinder-volumes -wi-ao---- 200.00g
volume-b6a9da6e-1958-46ea-90b4-ac1aebed8c04 cinder-volumes -wi-ao---- 30.00g
volume-302dd53b-7d05-4f6d-9ada-8f2ed6e1d4c6 cinder-volumes -wi-ao---- 60.00g
volume-df006472-be7a-4957-972a-1db4463f5d67 cinder-volumes -wi-ao---- 250.00g
...
volume-f3250e15-bb9c-43d1-989d-8a8f6635a416 cinder-volumes -wi-ao---- 20.00g
volume-fc1d5fcb-fda1-456b-a89d-582b7f94fb04 cinder-volumes -wi-ao---- 300.00g
volume-fc50a717-0857-4da3-93cb-a55292f7ed6d cinder-volumes -wi-ao---- 20.00g
volume-ff94e2d6-449b-495d-82e6-0debd694c1dd cinder-volumes -wi-ao---- 20.00g
data2 data2_vg -wi-a----- <300.00g
data data_vg -wi-a----- 1.79t
backup_lv encrypted_vg -wi------- <100.00g == ...WH7k53
storage_lv encrypted_vg -wi------- <60.00g == ...PH3TPe
varoptgitlab_lv encrypted_vg -wi------- <200.00g
varoptgitlab_lv encrypted_vg -wi------- <30.00g
varoptgitlab_lv encrypted_vg -wi------- <60.00g == ...Ha3XfP
encrypted_home home_vg -wi-a----- <40.00g
encrypted_home home_vg -wi------- <60.00g
pub pub_vg -wi-a----- <40.00g
pub_lv pub_vg -wi------- <250.00g
rpms repo -wi-a----- 499.99g
home vg_home -wi-a----- <40.00g
gtri_pub vg_pub -wi-a----- 20.00g
pub vg_pub -wi-a----- <40.00g

--
Alan Davis
Principal System Administrator
Apogee Research LLC
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From alan.davis at apogee-research.com  Thu Sep 24 19:17:50 2020
From: alan.davis at apogee-research.com (Alan Davis)
Date: Thu, 24 Sep 2020 15:17:50 -0400
Subject: LVM misconfiguration after openstack stackpack server hang and reboot
In-Reply-To: 
References: 
Message-ID: 

More info: the server is actually running CentOS 7.6 (one of the few that didn't recently get updated).

The system has 5 disks configured in an md RAID5 set as md126:
md126 : active raid5 sdf[4] sdb[0] sde[3] sdc[1] sdd[2] 11720536064 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU] bitmap: 6/22 pages [24KB], 65536KB chunk

The LVM filter excludes the sd devices: filter = [ "r|^/dev/sd[bcdef]|" ]

boot.log has complaints about 5 dm disks:
[FAILED] Failed to start LVM2 PV scan on device 253:55.
[FAILED] Failed to start LVM2 PV scan on device 253:47.
[FAILED] Failed to start LVM2 PV scan on device 253:50.
[FAILED] Failed to start LVM2 PV scan on device 253:56.
[FAILED] Failed to start LVM2 PV scan on device 253:34.

Typical message:
[FAILED] Failed to start LVM2 PV scan on device 253:47. See 'systemctl status lvm2-pvscan at 253:47.service' for details.

output of systemctl status:
systemctl status lvm2-pvscan at 253:55.service
● lvm2-pvscan at 253:55.service - LVM2 PV scan on device 253:55
   Loaded: loaded (/usr/lib/systemd/system/lvm2-pvscan at .service; static; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2020-09-24 09:26:58 EDT; 5h 44min ago
     Docs: man:pvscan(8)
  Process: 17395 ExecStart=/usr/sbin/lvm pvscan --cache --activate ay %i (code=exited, status=5)
 Main PID: 17395 (code=exited, status=5)
Sep 24 09:26:58 stack3 systemd[1]: Starting LVM2 PV scan on device 253:55...
Sep 24 09:26:58 stack3 lvm[17395]: Multiple VGs found with the same name: skipping encrypted_vg
Sep 24 09:26:58 stack3 lvm[17395]: Use --select vg_uuid= in place of the VG name.
Sep 24 09:26:58 stack3 systemd[1]: lvm2-pvscan at 253:55.service: main process exited, code=exited, status=5/NOTINSTALLED
Sep 24 09:26:58 stack3 systemd[1]: Failed to start LVM2 PV scan on device 253:55.
Sep 24 09:26:58 stack3 systemd[1]: Unit lvm2-pvscan at 253:55.service entered failed state.
Sep 24 09:26:58 stack3 systemd[1]: lvm2-pvscan at 253:55.service failed.

On Thu, Sep 24, 2020 at 2:07 PM Alan Davis wrote:
> This morning my CentOS 7.7 RDO packstack installation of Rocky hung. On
> reboot some of the VMs won't start. This is a primary system and I need to
> find the most expedient way to recover without losing data.
I'm not using > LVM thin volumes. > > Any help is appreciated. > > Looking at nova-compute.log I see errors trying to find LUN 0 during the > sysfs stage. > > Several machines won't boot because their root disk entries in LVM are > seen as PV and booting them doesn't see them in the DM subsystem. > Other machines boot but there attached disks throw LVM errors about > duplicate PV and preferring the cinder-volumes VG version. > > LVM is showing LVs that have both "bare" entries as well as entries in > cinder-volumes and it's complaining about duplicate PVs, not using lvmetad > and preferring some entries because they are in the dm subsystem. > I've verified that, so far, I haven't lost any data. The "bare" LV not > being used as part of the DM subsystem because it's server won't boot can > be mounted on the openstack host and all data on it is accessible. > > This host has rebooted cleanly multiple times in the past. This is the > first time it's shown any problems. > > Am I missing an LVM filter? (unlikely since it wasn't neede before) > How can I reset the LVM configuration and convince it that it's not seeing > duplicate PV? > How do I ensure that openstack sees the right UUID and volume ID? > > Excerpts from error log and output of lvs : > --- nova-compute.log --- during VM start > 2020-09-24 11:15:27.091 13953 INFO os_brick.initiator.connectors.iscsi > [req-8d15fb6a-6324-471e-9497-587885eef8f6 396aeda6552f44fdac5f878b90325ee1 > 54af92f2bb494355b96024076184d1c8 > - default default] Trying to connect to iSCSI portal 172.10.0.40:3260 > 2020-09-24 11:15:29.721 13953 WARNING nova.compute.manager > [req-fd32e16f-c879-402f-a32c-6be45a943c34 48af9a366301467d9fec912fd1c072c6 > f9fc7b412a8446d083da1356aa370eb4 - default d > efault] [instance: de7d740c-786a-4aa2-aa09-d447ae7e14b6] Received > unexpected event network-vif-unplugged-79aff403-d2e4-4266-bd88-d7bd19d501a9 > for instance with vm_state stopped a > nd task_state powering-on. > 2020-09-24 11:16:21.361 13953 WARNING os_brick.initiator.connectors.iscsi > [req-8d15fb6a-6324-471e-9497-587885eef8f6 396aeda6552f44fdac5f878b90325ee1 > 54af92f2bb494355b96024076184d > 1c8 - default default] LUN 0 on iSCSI portal 172.10.0.40:3260 not found > on sysfs after logging in. > 2020-09-24 11:16:23.482 13953 INFO os_brick.initiator.connectors.iscsi > [req-8d15fb6a-6324-471e-9497-587885eef8f6 396aeda6552f44fdac5f878b90325ee1 > 54af92f2bb494355b96024076184d1c8 > - default default] Trying to connect to iSCSI portal 172.10.0.40:3260 > 2020-09-24 11:17:17.741 13953 WARNING os_brick.initiator.connectors.iscsi > [req-8d15fb6a-6324-471e-9497-587885eef8f6 396aeda6552f44fdac5f878b90325ee1 > 54af92f2bb494355b96024076184d > 1c8 - default default] LUN 0 on iSCSI portal 172.10.0.40:3260 not found > on sysfs after logging in.: VolumeDeviceNotFound: Volume device not found > at . > 2020-09-24 11:17:21.864 13953 INFO os_brick.initiator.connectors.iscsi > [req-8d15fb6a-6324-471e-9497-587885eef8f6 396aeda6552f44fdac5f878b90325ee1 > 54af92f2bb494355b96024076184d1c8 > - default default] Trying to connect to iSCSI portal 172.10.0.40:3260 > 2020-09-24 11:18:16.113 13953 WARNING os_brick.initiator.connectors.iscsi > [req-8d15fb6a-6324-471e-9497-587885eef8f6 396aeda6552f44fdac5f878b90325ee1 > 54af92f2bb494355b96024076184d > 1c8 - default default] LUN 0 on iSCSI portal 172.10.0.40:3260 not found > on sysfs after logging in.: VolumeDeviceNotFound: Volume device not found > at . 
> 2020-09-24 11:18:17.252 13953 INFO nova.compute.manager > [req-8d15fb6a-6324-471e-9497-587885eef8f6 396aeda6552f44fdac5f878b90325ee1 > 54af92f2bb494355b96024076184d1c8 - default defa > ult] [instance: de7d740c-786a-4aa2-aa09-d447ae7e14b6] Successfully > reverted task state from powering-on on failure for instance. > 2020-09-24 11:18:17.279 13953 ERROR oslo_messaging.rpc.server > [req-8d15fb6a-6324-471e-9497-587885eef8f6 396aeda6552f44fdac5f878b90325ee1 > 54af92f2bb494355b96024076184d1c8 - defaul > t default] Exception during message handling: VolumeDeviceNotFound: Volume > device not found at . > 2020-09-24 11:18:17.279 13953 ERROR oslo_messaging.rpc.server Traceback > (most recent call last): > 2020-09-24 11:18:17.279 13953 ERROR oslo_messaging.rpc.server File > "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 163, > in _process_incoming > 2020-09-24 11:18:17.279 13953 ERROR oslo_messaging.rpc.server res = > self.dispatcher.dispatch(message) > > > --- lvs output --- > I've annotated 1 machine's disks to illustrate the relationship between > the volume-*** cinder-volumes vg entries and the "bare" lv seen as directly > accessible from the host. > There are 3 servers that won't boot, they are the one's who's home/vg_home > and encrypted_home/encrypted_vg entries are shown. > > WARNING: Not using lvmetad because duplicate PVs were found. > WARNING: Use multipath or vgimportclone to resolve duplicate PVs? > WARNING: After duplicates are resolved, run "pvscan --cache" to enable > lvmetad. > WARNING: Not using device /dev/sdu for PV > yZy8Xk-foKT-ovjV-0EZv-VxEM-GqiP-WH7k53. == backup_lv/encrypted_vg > WARNING: Not using device /dev/sdv for PV > tHA9ui-eSIO-MDmI-RM3u-3Bf4-Dznb-Ha3XfP. == varoptgitlab/encrypted_vg > WARNING: Not using device /dev/sdm for PV > 5eoyCa-sMO4-b7O4-jIfh-byZE-L5pS-3lOu0D. > WARNING: Not using device /dev/sdp for PV > 3BI0nV-TP0k-rgPC-PrjH-FT7z-reMe-ec1spj. > WARNING: Not using device /dev/sdt for PV > ILdbcY-VFCm-fnH6-Y3jc-pdWZ-fnl8-PH3TPe. == storage_lv/encrypted_vg > WARNING: Not using device /dev/sdr for PV > zowU2N-oaBh-r4cO-cxgX-YYiq-Kf3q-mqlHfK. > WARNING: PV yZy8Xk-foKT-ovjV-0EZv-VxEM-GqiP-WH7k53 prefers device > /dev/cinder-volumes/volume-c8da1abf-7143-422c-9ee5-b2724a71c8ff because > device is in dm subsystem. > WARNING: PV tHA9ui-eSIO-MDmI-RM3u-3Bf4-Dznb-Ha3XfP prefers device > /dev/cinder-volumes/volume-0a12012f-8c2e-41fb-aa0c-a7ae99c62487 because > device is in dm subsystem. > WARNING: PV 5eoyCa-sMO4-b7O4-jIfh-byZE-L5pS-3lOu0D prefers device > /dev/cinder-volumes/volume-990a057c-46cc-4a81-ba02-28b72c34791d because > device is in dm subsystem. > WARNING: PV 3BI0nV-TP0k-rgPC-PrjH-FT7z-reMe-ec1spj prefers device > /dev/cinder-volumes/volume-b6a9da6e-1958-46ea-90b4-ac1aebed8c04 because > device is in dm subsystem. > WARNING: PV ILdbcY-VFCm-fnH6-Y3jc-pdWZ-fnl8-PH3TPe prefers device > /dev/cinder-volumes/volume-302dd53b-7d05-4f6d-9ada-8f2ed6e1d4c6 because > device is in dm subsystem. > WARNING: PV zowU2N-oaBh-r4cO-cxgX-YYiq-Kf3q-mqlHfK prefers device > /dev/cinder-volumes/volume-df006472-be7a-4957-972a-1db4463f5d67 because > device is in dm subsystem. > LV VG Attr > LSize Pool Origin Data% Meta% > Move Log Cpy%Sync Convert > home centos_stack3 -wi-ao---- > 4.00g > > root centos_stack3 -wi-ao---- > 50.00g > > swap centos_stack3 -wi-ao---- > 4.00g > > _snapshot-05b1e46b-1ae3-4cd0-9117-3fb53a6d94b0 cinder-volumes swi-a-s--- > 20.00g volume-1d0ff5d5-93a3-44e8-8bfa-a9290765c8c6 0.00 > > lv_filestore cinder-volumes -wi-ao---- > 1.00t > > ... 
> volume-c8da1abf-7143-422c-9ee5-b2724a71c8ff cinder-volumes -wi-ao---- > 100.00g > > volume-0a12012f-8c2e-41fb-aa0c-a7ae99c62487 cinder-volumes -wi-ao---- > 60.00g > > volume-990a057c-46cc-4a81-ba02-28b72c34791d cinder-volumes -wi-ao---- > 200.00g > > volume-b6a9da6e-1958-46ea-90b4-ac1aebed8c04 cinder-volumes -wi-ao---- > 30.00g > > volume-302dd53b-7d05-4f6d-9ada-8f2ed6e1d4c6 cinder-volumes -wi-ao---- > 60.00g > > volume-df006472-be7a-4957-972a-1db4463f5d67 cinder-volumes -wi-ao---- > 250.00g > > ... > volume-f3250e15-bb9c-43d1-989d-8a8f6635a416 cinder-volumes -wi-ao---- > 20.00g > > volume-fc1d5fcb-fda1-456b-a89d-582b7f94fb04 cinder-volumes -wi-ao---- > 300.00g > > volume-fc50a717-0857-4da3-93cb-a55292f7ed6d cinder-volumes -wi-ao---- > 20.00g > > volume-ff94e2d6-449b-495d-82e6-0debd694c1dd cinder-volumes -wi-ao---- > 20.00g > > data2 data2_vg -wi-a----- > <300.00g > > data data_vg -wi-a----- > 1.79t > > backup_lv encrypted_vg -wi------- > <100.00g == ...WH7k53 > > storage_lv encrypted_vg -wi------- > <60.00g == ...PH3TPe > > varoptgitlab_lv encrypted_vg -wi------- > <200.00g > > varoptgitlab_lv encrypted_vg -wi------- > <30.00g > > varoptgitlab_lv encrypted_vg -wi------- > <60.00g == ...Ha3XfP > encrypted_home home_vg -wi-a----- > <40.00g > > encrypted_home home_vg -wi------- > <60.00g > > pub pub_vg -wi-a----- > <40.00g > > pub_lv pub_vg -wi------- > <250.00g > > rpms repo -wi-a----- > 499.99g > > home vg_home -wi-a----- > <40.00g > > gtri_pub vg_pub -wi-a----- > 20.00g > > pub vg_pub -wi-a----- > <40.00g > -- > Alan Davis > Principal System Administrator > Apogee Research LLC > > -- Alan Davis Principal System Administrator Apogee Research LLC Office : 571.384.8941 x26 Cell : 410.701.0518 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pfb29 at cam.ac.uk Thu Sep 24 19:45:07 2020 From: pfb29 at cam.ac.uk (Paul Browne) Date: Thu, 24 Sep 2020 20:45:07 +0100 Subject: [ironic] Recovering IPMI-type baremetal nodes in 'error' state In-Reply-To: References: Message-ID: Hi Julia, Thanks very much for the detailed answer and pointers, I've done some digging along the lines you suggested, results here Digging into the Ironic DB, I do see the last_error field for all 3 is " Failed to tear down. Error: IPMI call failed: power status." That makes sense, as IPMI-over-LAN was accidentally disabled on those nodes, so these calls would fail. I think that the order of operation was that the failing calls were part of an instance teardown and node cleaning. IPMI-over-LAN's been fixed so now manual ipmitool power status calls will succeed, and these correct credentials are in the node ipmi_* driver_info fields. The nodes are also out of maintenance mode. We're running Train Ironic, as an extra piece of info if that's relevant at all to how Ironic may periodically check and/or correct node states. Perhaps the next thing to try might be manual DB edit of provision_state of these 3 nodes back to 'available' On Thu, 24 Sep 2020 at 05:29, Julia Kreger wrote: > Well, somehow I accidentally clicked send! \o/ > > If you can confirm that the provision_state is ERROR, and if you can > identify how the machines got there, it would be helpful. If the > machines are still in working order in the database, you may need to > actually edit the database because we offer no explicit means to force > override the state, mainly to help prevent issues sort of exactly like > this. I suspect you may be encountering issues if the node is marked > in maintenance state. 
If the power state is None, maintenance is also > set automatically. Newer versions of ironic _do_ periodically check > nodes and reset that state, but again it is something to check and if > there are continued connectivity issues to the BMC then that may not > be happening. > > So: to recap: > > 1) Verify the node's provision_state is ERROR. If ERROR is coming from > Nova, that is a different situation. > 2) Ensure the node is not set in maintenance mode[3] > 3) You may also need to ensure the > ipmi_address/ipmi_username/ipmi_password is also correct for the node > that matches what can be accessed on the motherboard. > > Additionally, you may also want to externally verify that you actually > query the IPMI BMCs. If this somehow started down this path due to > power management being lost due to the BMC, some BMCs can have some > weirdness around IP networking so it is always good just to manually > check using ipmitool. > > One last thing, is target_provision_state set for these nodes? > > [3]: > https://docs.openstack.org/python-ironicclient/latest/cli/osc/v1/index.html#baremetal-node-maintenance-unset > > On Wed, Sep 23, 2020 at 9:20 PM Julia Kreger > wrote: > > > > Greetings Paul, > > > > Obviously, deleting and re-enrolling would be an action of last > > resort. The only way that I can think you could have gotten the > > machines into the provision state of ERROR is if they were somehow > > requested to be un-provisioned. > > > > The state machine diagram[0], refers to the provision state verb as > > "deleted", but the command line tool command this is undeploy[1]. > > > > > > [0]: https://docs.openstack.org/ironic/latest/_images/states.svg > > [1]: > https://docs.openstack.org/python-ironicclient/latest/cli/osc/v1/index.html#baremetal-node-undeploy > > > > > > > > On Wed, Sep 23, 2020 at 4:58 PM Paul Browne wrote: > > > > > > Hello all, > > > > > > I have a handful of baremetal nodes enrolled in Ironic that use the > IPMI hardware type, whose motherboards were recently replaced in a hardware > recall by the vendor. > > > > > > After the replacement, the BMC IPMI-over-LAN feature was accidentally > left disabled on the nodes, and future attempts to control them with Ironic > has put these nodes into the ERROR provisioning state. > > > > > > The IPMI-over-LAN feature on the boards has been enabled again as > expected, but is there now any easy way to get the BM nodes back out of > that ERROR state, without first deleting and re-enrolling them? > > > > > > -- > > > ******************* > > > Paul Browne > > > Research Computing Platforms > > > University Information Services > > > Roger Needham Building > > > JJ Thompson Avenue > > > University of Cambridge > > > Cambridge > > > United Kingdom > > > E-Mail: pfb29 at cam.ac.uk > > > Tel: 0044-1223-746548 > > > ******************* > -- ******************* Paul Browne Research Computing Platforms University Information Services Roger Needham Building JJ Thompson Avenue University of Cambridge Cambridge United Kingdom E-Mail: pfb29 at cam.ac.uk Tel: 0044-1223-746548 ******************* -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Sep 24 19:47:42 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 24 Sep 2020 12:47:42 -0700 Subject: [all][election][TC] Candidacy Message-ID: Hello Everyone! I am once again declaring my candidacy for the Technical Committee. This last year I have learned so much about OpenStack and our amazing community. 
There is never a shortage of work to be done and I feel there is still a lot I can
do to help the community as a part of the TC. I served as TC Vice Chair during the
second half of my term and I felt like I was able to do a lot to help support
Mohammed, the chair, and the other members of the TC. My work in coordinating the
new office hours and our time at the upcoming PTG will enable the future TC to be
more accessible and productive. I also led the community goal to improve the
onboarding documentation, further lowering the barrier for new contributors.

If elected, I will continue bridging our community with the Kubernetes community.
I am actively facilitating the conversations between the TC and the k8s Technical
Steering Committee at the upcoming PTG meetings to share insight between both of
our communities and generally share knowledge about keeping open source
communities healthy.

I would also like to take a more active role in pushing the unification of the
community around a single CLI to simplify openstack operations, to better support
our users and lessen the load on developers. I think a good first step would be to
work with the CLI/SDK team to build a list of items that are blocking project
teams from fully implementing all their features in the openstack client. From
there we can figure out what is missing from the list, which teams are affected by
which blocking items, and prioritize what needs to be worked on first.

Thank you for your time and consideration. As I said in my platform last year, I
love this community and all the awesome people in it; serving on the TC is a great
experience and an honor.

-Kendall Nelson (diablo_rojo)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From skaplons at redhat.com Thu Sep 24 19:48:39 2020
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Thu, 24 Sep 2020 21:48:39 +0200
Subject: [neutron] Drivers meeting agenda - 25.09.2020
Message-ID: <20200924194839.GC870720@p1>

Hi,

For tomorrow's drivers meeting we again don't have any new RFEs to discuss, but I
wanted to use this meeting to quickly discuss one bug:
https://bugs.launchpad.net/neutron/+bug/1895933

So see You at tomorrow's meeting :)

--
Slawek Kaplonski
Principal Software Engineer
Red Hat

From skaplons at redhat.com Thu Sep 24 20:36:52 2020
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Thu, 24 Sep 2020 22:36:52 +0200
Subject: [election][neutron] PTL candidacy for Wallaby
Message-ID: <20200924203652.GD870720@p1>

Hi,

I want to propose my candidacy and continue serving as Neutron PTL in the Wallaby
cycle.

In the Victoria cycle I proposed a couple of major goals. We achieved most of
them, for example:

* add support for metadata over IPv6 in Neutron,
* continue adoption of the OVN driver as one of the in-tree Neutron drivers,
* find new maintainers for the networking-midonet project so we keep it in the
  Neutron stadium.

In the upcoming cycle I would like to continue work which we started already and
focus mostly on:

* implement old, unfinished Blueprints like adoption of the new engine facade,
* continue adoption of the OVN backend in Neutron, closing more feature parity
  gaps,
* switch the OVN driver to be the default backend in Devstack.

In addition to the goals mentioned above, I want to continue work to improve our
CI stability and coverage. It is my continuing desire to do my best to help our
outstanding team to deliver better software and to grow.
--
Slawek Kaplonski
Principal Software Engineer
Red Hat

From zigo at debian.org Thu Sep 24 20:39:50 2020
From: zigo at debian.org (Thomas Goirand)
Date: Thu, 24 Sep 2020 22:39:50 +0200
Subject: [tc][all] Wallaby Cycle Community Goals
In-Reply-To: <40b8e159-826c-cc07-d0f8-a9b21532d544@ham.ie>
References: <40b8e159-826c-cc07-d0f8-a9b21532d544@ham.ie>
Message-ID: 

On 9/21/20 7:53 PM, Graham Hayes wrote:
> Hi All
>
> It is that time of year / release again - and we need to choose the
> community goals for Wallaby.
>
> Myself and Nate looked over the list of goals [1][2][3], and we are
> suggesting one of the following:
>
>
>  - Finish moving legacy python-*client CLIs to python-openstackclient

Go go go !!! :)

>  - Move from oslo.rootwrap to oslo.privsep

Ditto. Rootwrap is painfully slow (because it takes too long to spawn a Python
process...).

>  - Implement the API reference guide changes
>  - All API to provide a /healthcheck URL like Keystone (and others) provide

What about an "openstack purge " that would call all projects? We once had a
"/purge" goal, I'm not sure how far it went... What I know, is that purging all
resources of a project is currently still a big painpoint.

> Some of these goals have champions signed up already, but we need to
> make sure they are still available to do them. If you are interested in
> helping drive any of the goals, please speak up!

I'm still available to attempt the /healthcheck thingy; I kind of succeeded in
all major projects but ... nova. Unfortunately, it was decided in that project
that we should put this on hold until the /healthcheck can implement more checks
than just knowing whether the API is alive. Five months on, I believe my original
patch [1] should have been approved as a first approach.

Nova team: any reaction? Any progress on your super-nice-health-check? Can this
be implemented elsewhere using what you've done? Maybe that work should go in
oslo.middleware too?

Cheers,

Thomas Goirand (zigo)

[1] https://review.opendev.org/#/c/724684/

> Additionally, we have traditionally selected 2 goals per cycle -
> however with the people available to do the work across projects
> Nate and I briefly discussed reducing that to one for this cycle.
>
> What does the community think about this?

The /healthcheck is super-easy to implement for any project using
oslo.middleware, so please select that one (and others). It's also mostly done...

Cheers,

Thomas Goirand (zigo)

From tonyliu0592 at hotmail.com Thu Sep 24 20:59:10 2020
From: tonyliu0592 at hotmail.com (Tony Liu)
Date: Thu, 24 Sep 2020 20:59:10 +0000
Subject: [Neutron] Not create .2 port
In-Reply-To: 
References: <20200918074904.GB701072@p1>
Message-ID: 

Any comments?

Thanks!
Tony
> -----Original Message-----
> From: Tony Liu
> Sent: Tuesday, September 22, 2020 11:58 AM
> To: Slawek Kaplonski
> Cc: openstack-discuss at lists.openstack.org
> Subject: RE: [Neutron] Not create .2 port
>
> I create a subnet with --no-dhcp, the .2 address is not allocated, but a
> port is still created without any address. Is this expected?
> Since DHCP is disabled, what's this port for?
>
> Thanks!
> Tony > > -----Original Message----- > > From: Slawek Kaplonski > > Sent: Friday, September 18, 2020 12:49 AM > > To: Tony Liu > > Cc: openstack-discuss at lists.openstack.org > > Subject: Re: [Neutron] Not create .2 port > > > > Hi, > > > > On Fri, Sep 18, 2020 at 03:40:54AM +0000, Tony Liu wrote: > > > Hi, > > > > > > When create a subnet, by default, the first address is the gateway > > > and Neutron also allocates an address for serving DHCP and DNS. Is > > > there any way to NOT create such port when creating subnet? > > > > You can specify "--gateway None" if You don't want to have gateway > > configured in Your subnet. > > And for dhcp ports, You can set "--no-dhcp" for subnet so it will not > > create dhcp ports in such subnet also. > > > > > > > > > > > Thanks! > > > Tony > > > > > > > > > > -- > > Slawek Kaplonski > > Senior software engineer > > Red Hat From whayutin at redhat.com Thu Sep 24 22:11:21 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 24 Sep 2020 16:11:21 -0600 Subject: [tripleo][ci] proposing Yatin as tripleo-ci core Message-ID: Greetings, I really thought someone else had already sent this out. However I don't see it so here we go. I'd like to propose Yatin Karel as tripleo-ci core. You know you want to say +2 :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From johfulto at redhat.com Thu Sep 24 22:22:01 2020 From: johfulto at redhat.com (John Fulton) Date: Thu, 24 Sep 2020 18:22:01 -0400 Subject: [tripleo][ci] proposing Yatin as tripleo-ci core In-Reply-To: References: Message-ID: +1 On Thu, Sep 24, 2020 at 6:14 PM Wesley Hayutin wrote: > > Greetings, > > I really thought someone else had already sent this out. However I don't see it so here we go. > I'd like to propose Yatin Karel as tripleo-ci core. > > You know you want to say +2 :) From emilien at redhat.com Thu Sep 24 22:37:51 2020 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 24 Sep 2020 18:37:51 -0400 Subject: [tripleo][ci] proposing Yatin as tripleo-ci core In-Reply-To: References: Message-ID: wait, he wasn't core before? Huge +2 On Thu, Sep 24, 2020 at 6:27 PM John Fulton wrote: > +1 > > On Thu, Sep 24, 2020 at 6:14 PM Wesley Hayutin > wrote: > > > > Greetings, > > > > I really thought someone else had already sent this out. However I don't > see it so here we go. > > I'd like to propose Yatin Karel as tripleo-ci core. > > > > You know you want to say +2 :) > > > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From beagles at redhat.com Thu Sep 24 23:23:07 2020 From: beagles at redhat.com (Brent Eagles) Date: Thu, 24 Sep 2020 20:53:07 -0230 Subject: [tripleo][ci] proposing Yatin as tripleo-ci core In-Reply-To: References: Message-ID: +1 On Thu, Sep 24, 2020 at 8:10 PM Emilien Macchi wrote: > wait, he wasn't core before? > Huge +2 > > On Thu, Sep 24, 2020 at 6:27 PM John Fulton wrote: > >> +1 >> >> On Thu, Sep 24, 2020 at 6:14 PM Wesley Hayutin >> wrote: >> > >> > Greetings, >> > >> > I really thought someone else had already sent this out. However I >> don't see it so here we go. >> > I'd like to propose Yatin Karel as tripleo-ci core. >> > >> > You know you want to say +2 :) >> >> >> > > -- > Emilien Macchi > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 
From sshnaidm at redhat.com Thu Sep 24 23:25:59 2020
From: sshnaidm at redhat.com (Sagi Shnaidman)
Date: Fri, 25 Sep 2020 02:25:59 +0300
Subject: [tripleo][ci] proposing Yatin as tripleo-ci core
In-Reply-To: 
References: 
Message-ID: 

+3 :)

On Fri, Sep 25, 2020 at 1:16 AM Wesley Hayutin wrote:

> Greetings,
>
> I really thought someone else had already sent this out. However I don't
> see it so here we go.
> I'd like to propose Yatin Karel as tripleo-ci core.
>
> You know you want to say +2 :)
>

--
Best regards
Sagi Shnaidman
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From amy at demarco.com Thu Sep 24 23:47:38 2020
From: amy at demarco.com (Amy Marrich)
Date: Thu, 24 Sep 2020 18:47:38 -0500
Subject: [tripleo][ci] proposing Yatin as tripleo-ci core
In-Reply-To: 
References: 
Message-ID: 

An unofficial well deserved! :)

Amy (spotz)

On Thu, Sep 24, 2020 at 5:14 PM Wesley Hayutin wrote:

> Greetings,
>
> I really thought someone else had already sent this out. However I don't
> see it so here we go.
> I'd like to propose Yatin Karel as tripleo-ci core.
>
> You know you want to say +2 :)
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From its-openstack at zohocorp.com Fri Sep 25 04:58:23 2020
From: its-openstack at zohocorp.com (its-openstack at zohocorp.com)
Date: Fri, 25 Sep 2020 10:28:23 +0530
Subject: user read-only role not working
Message-ID: <174c3a06897.bdfa9ca56831.6510718612076837121@zohocorp.com>

Dear Openstack,

We have deployed openstack train branch.

This mail is in regards to the default role in OpenStack. We are trying to create
a read-only user, i.e. a user who can only view resources in the web portal
(Horizon) or via CLI commands. The user should not be able to create or delete an
instance, and the same goes for any other resource.

We created a user in the project "test" with the reader role, but in Horizon/CLI
that user is still able to create and delete instances, and similarly has other
access as well. If you could kindly help us fix this issue we would be grateful.

The commands used for creation:

$ openstack user create --domain default --password-prompt test-reader at test.com
$ openstack role add --project test --user test-reader at test.com reader

Thanks and Regards
sysadmin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From marios at redhat.com Fri Sep 25 05:13:45 2020
From: marios at redhat.com (Marios Andreou)
Date: Fri, 25 Sep 2020 08:13:45 +0300
Subject: [tripleo][ci] proposing Yatin as tripleo-ci core
In-Reply-To: 
References: 
Message-ID: 

+1 with my eyes closed!

On Fri, Sep 25, 2020 at 1:13 AM Wesley Hayutin wrote:

> Greetings,
>
> I really thought someone else had already sent this out. However I don't
> see it so here we go.
> I'd like to propose Yatin Karel as tripleo-ci core.
>
> You know you want to say +2 :)
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From ramishra at redhat.com Fri Sep 25 05:29:05 2020
From: ramishra at redhat.com (Rabi Mishra)
Date: Fri, 25 Sep 2020 10:59:05 +0530
Subject: [tripleo][ci] proposing Yatin as tripleo-ci core
In-Reply-To: 
References: 
Message-ID: 

On Fri, Sep 25, 2020 at 3:48 AM Wesley Hayutin wrote:

> Greetings,
>
> I really thought someone else had already sent this out. However I don't
> see it so here we go.
> I'd like to propose Yatin Karel as tripleo-ci core.
>
> You know you want to say +2 :)
>

+2:)

--
Regards,
Rabi Mishra
-------------- next part --------------
An HTML attachment was scrubbed...
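A couple of hedged checks that may help narrow down the read-only question above
(the user and project names are the ones from that mail; the policy namespaces are
just examples). As Ben notes further down the thread, most services in Train still
ship policy defaults that do not treat "reader" differently from "member", so these
commands mainly confirm what the deployment's effective policy really is:

    # Confirm the role assignment is what you expect.
    openstack role assignment list --user test-reader at test.com \
        --project test --names

    # Dump the effective policy (defaults merged with any local overrides)
    # for a given service, then look for rules that mention role:reader.
    oslopolicy-policy-generator --namespace nova
    oslopolicy-policy-generator --namespace keystone

If the dumped rules never reference the reader role, the behaviour described above
is the expected one for that release rather than a misconfiguration.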
URL: From chkumar246 at gmail.com Fri Sep 25 05:52:51 2020 From: chkumar246 at gmail.com (Chandan kumar) Date: Fri, 25 Sep 2020 11:22:51 +0530 Subject: [tripleo][ci] proposing Yatin as tripleo-ci core In-Reply-To: References: Message-ID: +1 :-) Thanks, Chandan Kumar On Fri, Sep 25, 2020 at 11:04 AM Rabi Mishra wrote: > > > On Fri, Sep 25, 2020 at 3:48 AM Wesley Hayutin wrote: >> >> Greetings, >> >> I really thought someone else had already sent this out. However I don't see it so here we go. >> I'd like to propose Yatin Karel as tripleo-ci core. >> >> You know you want to say +2 :) > > > +2:) > > -- > Regards, > Rabi Mishra > From gfidente at redhat.com Fri Sep 25 06:53:35 2020 From: gfidente at redhat.com (Giulio Fidente) Date: Fri, 25 Sep 2020 08:53:35 +0200 Subject: [tripleo][ci] proposing Yatin as tripleo-ci core In-Reply-To: References: Message-ID: <233f0a88-3155-9833-d385-9c14c412c6c4@redhat.com> On 9/25/20 12:11 AM, Wesley Hayutin wrote: > Greetings, > > I really thought someone else had already sent this out. However I don't > see it so here we go. > I'd like to propose Yatin Karel as tripleo-ci core.  > > You know you want to say +2 :) If I could vote with two hands I would! +2 Thanks Yatin -- Giulio Fidente GPG KEY: 08D733BA From bdobreli at redhat.com Fri Sep 25 08:14:19 2020 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Fri, 25 Sep 2020 10:14:19 +0200 Subject: [tripleo][ci] proposing Yatin as tripleo-ci core In-Reply-To: References: Message-ID: On 9/25/20 12:37 AM, Emilien Macchi wrote: > wait, he wasn't core before? > Huge +2 +1 on that > > On Thu, Sep 24, 2020 at 6:27 PM John Fulton > wrote: > > +1 > > On Thu, Sep 24, 2020 at 6:14 PM Wesley Hayutin > wrote: > > > > Greetings, > > > > I really thought someone else had already sent this out. However > I don't see it so here we go. > > I'd like to propose Yatin Karel as tripleo-ci core. > > > > You know you want to say +2 :) > > > > > -- > Emilien Macchi -- Best regards, Bogdan Dobrelya, Irc #bogdando From marios at redhat.com Fri Sep 25 09:54:03 2020 From: marios at redhat.com (Marios Andreou) Date: Fri, 25 Sep 2020 12:54:03 +0300 Subject: [tripleo] PTL candidacy for Wallaby Message-ID: I would like to nominate myself for the TripleO PTL role for the Wallaby cycle. I think it is obvious that the role of the PTL has changed significantly compared to the early days of TripleO. Most importantly we are (and have been for a while) now organised into self-contained topic squads (deployment, validations, upgrades, networking, storage, ci, etc). Typically technical decisions are driven exclusively within those squads (even if they are done 'in the open' on gerrit and launchpad and opendev-discuss). The role of the PTL then is more about coordination across the squads, ensuring we deliver release targets, keeping bug backlogs in order and facilitating the resolution of any conflicts that arise. I have worked on TripleO since before the Kilo release and have collaborated with many current and past members of the community including engineers and also users/operators. My contributions thus far have been in code/docs/bugs, originally working on upgrades and more recently working on CI. I haven't had the opportunity to work on the 'admin' side of TripleO (besides in the past cutting the occasional release). I am thus especially excited by the prospect of contributing in this new way as PTL, if TripleO will have me! 
I sincerely promise to give it my best and I am sure that together with the kind help of the current PTL Wes Hayutin we will have a smooth Wallaby cycle, meeting our responsibilities to the foundation and the community. There is (as ever!) a lot of exciting work coming in the Victoria release and continuing in Wallaby including the continued exploration of the removal of Heat as our configuration engine, the new validations repos and framework, the new container builds process, Fast Forward Upgrades II (back with a vengeance), ci component and dependency pipelines, ci parent/child jobs, and I'm sure many more that we will discuss at the coming Wallaby PTG. Two areas I would like to focus on as PTL should you give me the opportunity are: - improve our current documentation/deployment guides - at minimum they must be updated to recent releases and there are many items missing. I am hoping to rally support from the various squads for this. - improve visibility into the various TripleO squads - this may mean re-instating a 'new' weekly/bi-weekly (or other frequency) tripleo meeting or by some new way of collaborating. Again this is really dependent on support from the tripleo community. thank you for your consideration! -------------- next part -------------- An HTML attachment was scrubbed... URL: From artem.goncharov at gmail.com Fri Sep 25 09:55:21 2020 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Fri, 25 Sep 2020 11:55:21 +0200 Subject: [tc][all] Wallaby Cycle Community Goals In-Reply-To: References: <40b8e159-826c-cc07-d0f8-a9b21532d544@ham.ie> Message-ID: Hi > On 24. Sep 2020, at 22:39, Thomas Goirand wrote: > >> >> - Finish moving legacy python-*client CLIs to python-openstackclient > > Go go go !!! :) Cool, that somebody needs that. What we actually miss here is the people doing that. Amount of work is really big (while in the long run it will definitely help reducing overall efforts). My suggestion is to focus for this cycle only on nova (targeting more is simply useless). I have started switching some parts in OSC to use SDK (https://review.opendev.org/#/q/topic:osc-first ). Any help (even simply reviews) is welcome. I don’t like summer dev cycle, since the work is really close to 0 due to vacations season. But in autumn/winter I will try do more if there is acceptance. Honestly speaking I do not believe in a community goal for that any more. It was tried, but it feels there is a split in the community. My personal opinion is that we should simply start doing that from SDK/CLI side closely working with one service at a time (since nova team expressed desire I started supporting them. Hope it will bring its fruits). Teams willing that will be able to achieve the target, teams not willing are free to stay. TC might think differently and act correspondingly, this is just my personal opinion. Other approaches seem a bit of utopia to me. > > What about an "openstack purge " that would call all > projects? We once had a "/purge" goal, I'm not sure how far it went... > What I know, is that purging all resources of a project is currently > still a big painpoint. Please have a look at https://review.opendev.org/#/c/734485/ It implements an “alternative” from the OSC point of view by invoking relatively newly introduced project cleanup functionality in SDK. This supports more that original purge (except identity parts), tries to do in parallel as much as possible and even now supports filtering (i.e. drop all resources created or updated before DD.MM.YYYY). 
I am not sure here whether it should replace purge or come as alternative so far. Here as well - reviews and comments are welcome. For reference, back in Denver it was agreed to go this way, since no other approach seemed to be really achievable. Regards, Artem -------------- next part -------------- An HTML attachment was scrubbed... URL: From fpantano at redhat.com Fri Sep 25 09:57:55 2020 From: fpantano at redhat.com (Francesco Pantano) Date: Fri, 25 Sep 2020 11:57:55 +0200 Subject: [tripleo][ci] proposing Yatin as tripleo-ci core In-Reply-To: References: Message-ID: Big +1 On Fri, Sep 25, 2020 at 10:21 AM Bogdan Dobrelya wrote: > On 9/25/20 12:37 AM, Emilien Macchi wrote: > > wait, he wasn't core before? > > Huge +2 > > +1 on that > > > > > On Thu, Sep 24, 2020 at 6:27 PM John Fulton > > wrote: > > > > +1 > > > > On Thu, Sep 24, 2020 at 6:14 PM Wesley Hayutin > > wrote: > > > > > > Greetings, > > > > > > I really thought someone else had already sent this out. However > > I don't see it so here we go. > > > I'd like to propose Yatin Karel as tripleo-ci core. > > > > > > You know you want to say +2 :) > > > > > > > > > > -- > > Emilien Macchi > > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > > > -- Francesco Pantano GPG KEY: F41BD75C -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpena at redhat.com Fri Sep 25 10:16:21 2020 From: jpena at redhat.com (Javier Pena) Date: Fri, 25 Sep 2020 06:16:21 -0400 (EDT) Subject: =?utf-8?Q?Re:_[rpm-packaging]_Proposing_H?= =?utf-8?Q?erv=C3=A9_Beraud_as_new_core_reviewer?= In-Reply-To: References: <1110488948.50802218.1600423119788.JavaMail.zimbra@redhat.com> <545442464.50802244.1600423155901.JavaMail.zimbra@redhat.com> Message-ID: <419917530.51649213.1601028981838.JavaMail.zimbra@redhat.com> > Hi Javier, > > > Hervé has been providing consistently good reviews over the last few > > months, and I think he would be a great addition to the core reviewer > > team. > > +1 happy to have him on board! > > Greetings, > Dirk > > Hi all, Hervé is now a core reviewer. Congrats! Regards, Javier From mnaser at vexxhost.com Fri Sep 25 10:59:50 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 25 Sep 2020 06:59:50 -0400 Subject: [tc] monthly meeting Message-ID: Hi everyone, Our monthly TC meeting is scheduled for next Thursday, October 1st, at 1400 UTC. If you would like to add topics for discussion, please go to https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting and fill out your suggestions by Wednesday, September 30th, at 1900 UTC. Thank you, Regards, Mohammed -- Mohammed Naser VEXXHOST, Inc. From ignaziocassano at gmail.com Fri Sep 25 11:57:05 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 25 Sep 2020 13:57:05 +0200 Subject: [openstack][manila] GenericShareDriver on stein Message-ID: Hello Stackers, I am testing manila manila.share.drivers.generic.GenericShareDriver with manila.share.drivers.generic.GenericShareDriver. I created in my project an instance with nfs server and it is up and running. 
In my manila.conf I have specified the instance id, use and password: [locale] share_driver = manila.share.drivers.generic.GenericShareDriver driver_handles_share_servers = false share_backend_name= locale service_instance_user = manila service_instance_passwod = manila service_instance_name_or_id = 5b0fa246-e94b-4a84-b730-7066dfb31fb0 path_to_private_key=~/.ssh/id_rsa path_to_public_key=~/.ssh/id_rsa.pub I have also other backend with netapp drivers and they works fine. The generic drivers reports: 2020-09-25 13:56:13.955 4097174 WARNING manila.share.drivers.generic [req-f0a5294a-3ef2-4226-9fc9-f3b4f35c53f5 - - - - -] Waiting for the common service VM to become available. Driver is currently uninitialized. Share server: None Retry interval: 5 2020-09-25 13:56:14.143 4097174 ERROR manila.share.drivers.generic [req-f0a5294a-3ef2-4226-9fc9-f3b4f35c53f5 - - - - -] string indices must be integers 2020-09-25 13:56:19.144 4097174 WARNING manila.share.drivers.generic [req-f0a5294a-3ef2-4226-9fc9-f3b4f35c53f5 - - - - -] Waiting for the common service VM to become available. Driver is currently uninitialized. Share server: None Retry interval: 5 2020-09-25 13:56:19.340 4097174 ERROR manila.share.drivers.generic [req-f0a5294a-3ef2-4226-9fc9-f3b4f35c53f5 - - - - -] string indices must be integers Seems the instance I created is not available. Please, any help? Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Fri Sep 25 12:31:40 2020 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 25 Sep 2020 14:31:40 +0200 Subject: =?UTF-8?Q?Re=3A_=5Brpm=2Dpackaging=5D_Proposing_Herv=C3=A9_Beraud_as_new_c?= =?UTF-8?Q?ore_reviewer?= In-Reply-To: <419917530.51649213.1601028981838.JavaMail.zimbra@redhat.com> References: <1110488948.50802218.1600423119788.JavaMail.zimbra@redhat.com> <545442464.50802244.1600423155901.JavaMail.zimbra@redhat.com> <419917530.51649213.1601028981838.JavaMail.zimbra@redhat.com> Message-ID: Thanks guys! Le ven. 25 sept. 2020 à 12:19, Javier Pena a écrit : > > > Hi Javier, > > > > > Hervé has been providing consistently good reviews over the last few > > > months, and I think he would be a great addition to the core reviewer > > > team. > > > > +1 happy to have him on board! > > > > Greetings, > > Dirk > > > > > > Hi all, > > Hervé is now a core reviewer. Congrats! > > Regards, > Javier > > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hberaud at redhat.com Fri Sep 25 12:38:40 2020 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 25 Sep 2020 14:38:40 +0200 Subject: =?UTF-8?Q?=5Belection=5D=5Bplt=5D_Herv=C3=A9_Beraud_candidacy_for_Release_?= =?UTF-8?Q?Management_PTL?= Message-ID: Hello everyone, I am submitting my candidacy for Release Management team PTL in the Wallaby cycle. Indeed, I am a core Release Management team member for a few cycles now and I want to involve myself more in this team by proposing me as PTL. During this cycle the main efforts will be around the team building. Indeed, we now have a lot of great tools and automations but we need to recruit new members and mentor them to keep a strong and resilient core team. By recruiting more people we could easily spread the workload on more core members. I plan to dedicate myself on this topic during this cycle. Thanks for your consideration! Hervé -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From tpb at dyncloud.net Fri Sep 25 12:47:47 2020 From: tpb at dyncloud.net (Tom Barron) Date: Fri, 25 Sep 2020 08:47:47 -0400 Subject: [openstack][manila] GenericShareDriver on stein In-Reply-To: References: Message-ID: <20200925124747.3d2vas53jmh6o6cc@barron.net> On 25/09/20 13:57 +0200, Ignazio Cassano wrote: >Hello Stackers, >I am testing manila manila.share.drivers.generic.GenericShareDriver with >manila.share.drivers.generic.GenericShareDriver. >I created in my project an instance with nfs server and it is up and >running. >In my manila.conf I have specified the instance id, use and password: > >[locale] >share_driver = manila.share.drivers.generic.GenericShareDriver >driver_handles_share_servers = false >share_backend_name= locale >service_instance_user = manila >service_instance_passwod = manila >service_instance_name_or_id = 5b0fa246-e94b-4a84-b730-7066dfb31fb0 >path_to_private_key=~/.ssh/id_rsa >path_to_public_key=~/.ssh/id_rsa.pub > >I have also other backend with netapp drivers and they works fine. >The generic drivers reports: >2020-09-25 13:56:13.955 4097174 WARNING manila.share.drivers.generic >[req-f0a5294a-3ef2-4226-9fc9-f3b4f35c53f5 - - - - -] Waiting for the common >service VM to become available. Driver is currently uninitialized. Share >server: None Retry interval: 5 >2020-09-25 13:56:14.143 4097174 ERROR manila.share.drivers.generic >[req-f0a5294a-3ef2-4226-9fc9-f3b4f35c53f5 - - - - -] string indices must be >integers >2020-09-25 13:56:19.144 4097174 WARNING manila.share.drivers.generic >[req-f0a5294a-3ef2-4226-9fc9-f3b4f35c53f5 - - - - -] Waiting for the common >service VM to become available. Driver is currently uninitialized. 
Share >server: None Retry interval: 5 >2020-09-25 13:56:19.340 4097174 ERROR manila.share.drivers.generic >[req-f0a5294a-3ef2-4226-9fc9-f3b4f35c53f5 - - - - -] string indices must be >integers > >Seems the instance I created is not available. >Please, any help? >Ignazio To debug this, start where the 'Waiting for the common service VM ...' message is emitted [1]. There you can see that the _is_share_server_active() method is returning False. Tracing the flow from that method definition [2] takes you to the ensure_service_instance [3] and _check_server_availability [4] methods. You can see what checks are being made along the way and try them yourself. If the share server VM is up and on the right network make sure it is reachable via SSH from the node where the manila-share service is running. Cheers, -- Tom Barron [1] https://github.com/openstack/manila/blob/stable/stein/manila/share/drivers/generic.py#L186 [2] https://github.com/openstack/manila/blob/stable/stein/manila/share/drivers/generic.py#L744 [3] https://github.com/openstack/manila/blob/stable/stein/manila/share/drivers/service_instance.py#L385 [4] https://github.com/openstack/manila/blob/stable/stein/manila/share/drivers/service_instance.py#L641 From mdemaced at redhat.com Fri Sep 25 13:01:58 2020 From: mdemaced at redhat.com (Maysa De Macedo Souza) Date: Fri, 25 Sep 2020 15:01:58 +0200 Subject: [election][Kuryr] PTL candidacy for Wallaby Message-ID: Hello, I would like to propose my candidacy to be the PTL of Kuryr for the Wallaby cycle. I have been a contributor to Kuryr since the Queens release and was appointed to the Kuryr core team in the Ussuri cycle. I see the following goals for the Kuryr project during the Wallaby cycle: * Improve stability and extend CI: we already included great improvements in our CI, but I believe we must continuously work on it to provide better and quicker feedback when developing new features. As part of that, we should start graduating experimental jobs like the ones using OVN-Octavia driver and Network Policy, and handle issues with jobs requiring more swap. * Address testing gaps for new and existing features: this would avoid regressions and find bugs. Also, improves stability. * Continue extending and improving Kuryr functionalities, such as dual stack, SCTP support and CRDs usage. * Grow the contributor base. I would like to contribute back to the community as a PTL while following great examples of previous PTLs. Thank you, Maysa. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hjensas at redhat.com Fri Sep 25 14:13:01 2020 From: hjensas at redhat.com (Harald Jensas) Date: Fri, 25 Sep 2020 16:13:01 +0200 Subject: [tripleo][ci] proposing Yatin as tripleo-ci core In-Reply-To: References: Message-ID: <04e0a57b-17fc-b651-10b0-000c578d64fb@redhat.com> On 9/25/20 12:11 AM, Wesley Hayutin wrote: > Greetings, > > I really thought someone else had already sent this out. However I don't > see it so here we go. > I'd like to propose Yatin Karel as tripleo-ci core. > > You know you want to say +2 :) Yes! +2 From ignaziocassano at gmail.com Fri Sep 25 14:16:01 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 25 Sep 2020 16:16:01 +0200 Subject: [openstack][manila] GenericShareDriver on stein In-Reply-To: <20200925124747.3d2vas53jmh6o6cc@barron.net> References: <20200925124747.3d2vas53jmh6o6cc@barron.net> Message-ID: Thanks, I'll check it out. 
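For anyone following along, a rough sketch of the manual checks Tom describes, run
from the host where manila-share is running. The instance UUID, user and key path
are the ones from the backend config earlier in the thread; the service VM IP and
the log path are assumptions and will differ per deployment:

    # Is the service VM actually ACTIVE and on a reachable network?
    openstack server show 5b0fa246-e94b-4a84-b730-7066dfb31fb0 -c status -c addresses

    # Can the manila-share host log in with the configured user and key?
    ssh -i ~/.ssh/id_rsa manila@203.0.113.10 hostname

    # Watch the driver while it retries.
    tail -f /var/log/manila/manila-share.log | grep -i 'service VM'

If the SSH step fails, that is consistent with the driver looping on "Waiting for
the common service VM to become available."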
Il giorno ven 25 set 2020 alle ore 14:47 Tom Barron ha scritto: > On 25/09/20 13:57 +0200, Ignazio Cassano wrote: > >Hello Stackers, > >I am testing manila manila.share.drivers.generic.GenericShareDriver with > >manila.share.drivers.generic.GenericShareDriver. > >I created in my project an instance with nfs server and it is up and > >running. > >In my manila.conf I have specified the instance id, use and password: > > > >[locale] > >share_driver = manila.share.drivers.generic.GenericShareDriver > >driver_handles_share_servers = false > >share_backend_name= locale > >service_instance_user = manila > >service_instance_passwod = manila > >service_instance_name_or_id = 5b0fa246-e94b-4a84-b730-7066dfb31fb0 > >path_to_private_key=~/.ssh/id_rsa > >path_to_public_key=~/.ssh/id_rsa.pub > > > >I have also other backend with netapp drivers and they works fine. > >The generic drivers reports: > >2020-09-25 13:56:13.955 4097174 WARNING manila.share.drivers.generic > >[req-f0a5294a-3ef2-4226-9fc9-f3b4f35c53f5 - - - - -] Waiting for the > common > >service VM to become available. Driver is currently uninitialized. Share > >server: None Retry interval: 5 > >2020-09-25 13:56:14.143 4097174 ERROR manila.share.drivers.generic > >[req-f0a5294a-3ef2-4226-9fc9-f3b4f35c53f5 - - - - -] string indices must > be > >integers > >2020-09-25 13:56:19.144 4097174 WARNING manila.share.drivers.generic > >[req-f0a5294a-3ef2-4226-9fc9-f3b4f35c53f5 - - - - -] Waiting for the > common > >service VM to become available. Driver is currently uninitialized. Share > >server: None Retry interval: 5 > >2020-09-25 13:56:19.340 4097174 ERROR manila.share.drivers.generic > >[req-f0a5294a-3ef2-4226-9fc9-f3b4f35c53f5 - - - - -] string indices must > be > >integers > > > >Seems the instance I created is not available. > >Please, any help? > >Ignazio > > To debug this, start where the 'Waiting for the common service VM ...' > message is emitted [1]. There you can see that the > _is_share_server_active() method is returning False. Tracing the flow > from that method definition [2] takes you to the > ensure_service_instance [3] and _check_server_availability [4] > methods. > > You can see what checks are being made along the way and try them > yourself. If the share server VM is up and on the right network make > sure it is reachable via SSH from the node where the manila-share > service is running. > > Cheers, > > -- Tom Barron > > [1] > https://github.com/openstack/manila/blob/stable/stein/manila/share/drivers/generic.py#L186 > > [2] > https://github.com/openstack/manila/blob/stable/stein/manila/share/drivers/generic.py#L744 > > [3] > https://github.com/openstack/manila/blob/stable/stein/manila/share/drivers/service_instance.py#L385 > > [4] > > https://github.com/openstack/manila/blob/stable/stein/manila/share/drivers/service_instance.py#L641 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Fri Sep 25 14:25:26 2020 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 25 Sep 2020 09:25:26 -0500 Subject: [keystone][policy] user read-only role not working In-Reply-To: <174c3a06897.bdfa9ca56831.6510718612076837121@zohocorp.com> References: <174c3a06897.bdfa9ca56831.6510718612076837121@zohocorp.com> Message-ID: I don't believe that the reader role was respected by most projects in Train. Moving every project to support it is still a work in progress. On 9/24/20 11:58 PM, its-openstack at zohocorp.com wrote: > Dear Openstack, > > We have deployed openstack train branch. 
>
> This mail is in regards to the default role in OpenStack. We are trying to create
> a read-only user, i.e. a user who can only view resources in the web portal
> (Horizon) or via CLI commands. The user should not be able to create or delete an
> instance, and the same goes for any other resource.
>
> We created a user in the project "test" with the reader role, but in Horizon/CLI
> that user is still able to create and delete instances, and similarly has other
> access as well. If you could kindly help us fix this issue we would be grateful.
>
> The commands used for creation:
>
> $ openstack user create --domain default --password-prompt test-reader at test.com
> $ openstack role add --project test --user test-reader at test.com reader
>
> Thanks and Regards
> sysadmin
>

From aschultz at redhat.com Fri Sep 25 14:34:30 2020
From: aschultz at redhat.com (Alex Schultz)
Date: Fri, 25 Sep 2020 08:34:30 -0600
Subject: [tripleo][ci] proposing Yatin as tripleo-ci core
In-Reply-To: 
References: 
Message-ID: 

+1

On Thu, Sep 24, 2020 at 4:21 PM Wesley Hayutin wrote:
>
> Greetings,
>
> I really thought someone else had already sent this out. However I don't see it so here we go.
> I'd like to propose Yatin Karel as tripleo-ci core.
>
> You know you want to say +2 :)

From sgolovat at redhat.com Fri Sep 25 15:06:54 2020
From: sgolovat at redhat.com (Sergii Golovatiuk)
Date: Fri, 25 Sep 2020 17:06:54 +0200
Subject: [tripleo][ci] proposing Yatin as tripleo-ci core
In-Reply-To: 
References: 
Message-ID: 

+1

On Fri, 25 Sep 2020 at 00:13, Wesley Hayutin wrote:

> Greetings,
>
> I really thought someone else had already sent this out. However I don't
> see it so here we go.
> I'd like to propose Yatin Karel as tripleo-ci core.
>
> You know you want to say +2 :)
>

--
Sergii Golovatiuk
Senior Software Developer
Red Hat
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From 373056786 at qq.com Fri Sep 25 03:47:19 2020
From: 373056786 at qq.com (Yule Sun)
Date: Fri, 25 Sep 2020 03:47:19 +0000
Subject: The question about OpenStack instance fails to boot after unexpected compute node shutdown
Message-ID: 

I saw your message on the website
http://lists.openstack.org/pipermail/openstack-discuss/2019-April/005635.html
and I have been troubled by the same problem for a long time. I tried to change
the parameters resume_guests_state_on_host_boot=True and instance_usage_audit=True
in nova.conf, but it didn't work. I wonder if you have solved this problem;
looking forward to your reply. Thank you and best regards. My instance error looks
like this:
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 2020-09-25_114323.png
Type: image/png
Size: 16879 bytes
Desc: 2020-09-25_114323.png
URL: 
From marios at redhat.com Fri Sep 25 16:27:28 2020
From: marios at redhat.com (Marios Andreou)
Date: Fri, 25 Sep 2020 19:27:28 +0300
Subject: [tripleo] tripleo victoria rc1
Message-ID: 

hello TripleO folks o/

we don't normally/aren't expected to cut release candidates. However, after
discussion with and guidance from Wes, I decided to propose an rc1 as practice,
since I haven't used the new releases tool before [4].

The proposal is up at [1] but I've set -1 there on request from reviewers; we
can't merge that until [2] merges first. If you are interested please check [1]
and add any comments.
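(For anyone who also hasn't used the releases tooling: the flow behind a proposal
like [1] is roughly the sketch below, based on the documentation linked at [4].
The deliverable name and release type are only examples, and the exact sub-command
options may differ between versions of the releases repository.)

    # In a checkout of the openstack/releases repository:
    tox -e venv -- new-release victoria tripleo-common rc
    git diff deliverables/victoria/   # review the generated version bump
    git review                        # then propose it for review, as in [1]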
In particular let me know if you want to include a release for one of the independent repos [3] (though for validations-common/libs we need to reach out to the validations team they are managing those AFAIK). thanks, hope you enjoy your weekend! [1] https://review.opendev.org/#/c/754346/ [2] https://review.opendev.org/754347 [3] https://releases.openstack.org/teams/tripleo.html#independent [4] https://releases.openstack.org/reference/using.html#using-new-release-command -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Fri Sep 25 18:22:30 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 25 Sep 2020 11:22:30 -0700 Subject: [tc][all] Wallaby Cycle Community Goals In-Reply-To: References: <40b8e159-826c-cc07-d0f8-a9b21532d544@ham.ie> <174c0a40e62.10a3aebf382501.4488410530424253723@ghanshyammann.com> Message-ID: On Thu, Sep 24, 2020 at 9:36 AM Graham Hayes wrote: > On 24/09/2020 16:03, Ghanshyam Mann wrote: > > ---- On Mon, 21 Sep 2020 12:53:17 -0500 Graham Hayes > wrote ---- > > > Hi All > > > > > > It is that time of year / release again - and we need to choose the > > > community goals for Wallaby. > > > > > > Myself and Nate looked over the list of goals [1][2][3], and we are > > > suggesting one of the following: > > > > > > > > > > Thanks Graham, Nate for starting this. > > > > > - Finish moving legacy python-*client CLIs to > python-openstackclient > > > > Are not we going with popup team first for osc work? I am fine with goal > also but > > we should do this as multi-cycle goal with no other goal in parallel so > that we actually > > finish this on time. > > Yeah - this was just one of the goals we thought might have some > discussion, and we didn't know where the popup team was in their > work. > > If that work is still on going, we should leave the goal for > another cycle or two. > I don't think a *ton* of progress was made this last release, but I could be wrong. I am guessing we will want to wait one more cycle before making it a goal. I am 100% behind this being a goal at some point though. > > > > - Move from oslo.rootwrap to oslo.privsep > > > > +1, this is already proposed goal since last cycle. > > > > -gmann > > > > > - Implement the API reference guide changes > > > - All API to provide a /healthcheck URL like Keystone (and others) > provide > > > > > > Some of these goals have champions signed up already, but we need to > > > make sure they are still available to do them. If you are interested > in > > > helping drive any of the goals, please speak up! > > > > > > We need to select goals in time for the new release cycle - so please > > > reply if there is goals you think should be included in this list, or > > > not included. > > > > > > Next steps after this will be helping people write a proposed goal > > > and then the TC selecting the ones we will pursue during Wallaby. > > > > > > Additionally, we have traditionally selected 2 goals per cycle - > > > however with the people available to do the work across projects > > > Nate and I briefly discussed reducing that to one for this cycle. > > > > > > What does the community think about this? 
> > > > > > Thanks, > > > > > > Graham > > > > > > 1 - https://etherpad.opendev.org/p/community-goals > > > 2 - https://governance.openstack.org/tc/goals/proposed/index.html > > > 3 - https://etherpad.opendev.org/p/community-w-series-goals > > > 4 - > > > > https://governance.openstack.org/tc/goals/index.html#goal-selection-schedule > > > > > > > > > > > -Kendall (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Fri Sep 25 18:25:07 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 25 Sep 2020 11:25:07 -0700 Subject: [tc][all] Wallaby Cycle Community Goals In-Reply-To: References: <40b8e159-826c-cc07-d0f8-a9b21532d544@ham.ie> Message-ID: Hey Artem! On Fri, Sep 25, 2020 at 2:56 AM Artem Goncharov wrote: > Hi > > On 24. Sep 2020, at 22:39, Thomas Goirand wrote: > > > - Finish moving legacy python-*client CLIs to python-openstackclient > > > Go go go !!! :) > > > Cool, that somebody needs that. What we actually miss here is the people > doing that. Amount of work is really big (while in the long run it will > definitely help reducing overall efforts). My suggestion is to focus for > this cycle only on nova (targeting more is simply useless). I have started > switching some parts in OSC to use SDK ( > https://review.opendev.org/#/q/topic:osc-first). Any help (even simply > reviews) is welcome. I don’t like summer dev cycle, since the work is > really close to 0 due to vacations season. But in autumn/winter I will try > do more if there is acceptance. > > I'm resolving now to be better at reviews over the upcoming months! I have been hovering around on the peripherie and I think this release I will be able to get more involved. I plan on attending your PTG sessions and am trying to rally more help for you as well. > Honestly speaking I do not believe in a community goal for that any more. > It was tried, but it feels there is a split in the community. My personal > opinion is that we should simply start doing that from SDK/CLI side closely > working with one service at a time (since nova team expressed desire I > started supporting them. Hope it will bring its fruits). Teams willing that > will be able to achieve the target, teams not willing are free to stay. TC > might think differently and act correspondingly, this is just my personal > opinion. Other approaches seem a bit of utopia to me. > > > > What about an "openstack purge " that would call all > projects? We once had a "/purge" goal, I'm not sure how far it went... > What I know, is that purging all resources of a project is currently > still a big painpoint. > > > Please have a look at https://review.opendev.org/#/c/734485/ It > implements an “alternative” from the OSC point of view by invoking > relatively newly introduced project cleanup functionality in SDK. This > supports more that original purge (except identity parts), tries to do in > parallel as much as possible and even now supports filtering (i.e. drop all > resources created or updated before DD.MM.YYYY). I am not sure here whether > it should replace purge or come as alternative so far. Here as well - > reviews and comments are welcome. > > For reference, back in Denver it was agreed to go this way, since no other > approach seemed to be really achievable. > > > > Regards, > Artem > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From artem.goncharov at gmail.com Fri Sep 25 18:27:24 2020 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Fri, 25 Sep 2020 20:27:24 +0200 Subject: [tc][all] Wallaby Cycle Community Goals In-Reply-To: References: <40b8e159-826c-cc07-d0f8-a9b21532d544@ham.ie> Message-ID: Sounds cool, Kendall ---- typed from mobile, auto-correct typos assumed ---- On Fri, 25 Sep 2020, 20:25 Kendall Nelson, wrote: > Hey Artem! > > On Fri, Sep 25, 2020 at 2:56 AM Artem Goncharov > wrote: > >> Hi >> >> On 24. Sep 2020, at 22:39, Thomas Goirand wrote: >> >> >> - Finish moving legacy python-*client CLIs to python-openstackclient >> >> >> Go go go !!! :) >> >> >> Cool, that somebody needs that. What we actually miss here is the people >> doing that. Amount of work is really big (while in the long run it will >> definitely help reducing overall efforts). My suggestion is to focus for >> this cycle only on nova (targeting more is simply useless). I have started >> switching some parts in OSC to use SDK ( >> https://review.opendev.org/#/q/topic:osc-first). Any help (even simply >> reviews) is welcome. I don’t like summer dev cycle, since the work is >> really close to 0 due to vacations season. But in autumn/winter I will try >> do more if there is acceptance. >> >> > I'm resolving now to be better at reviews over the upcoming months! I have > been hovering around on the peripherie and I think this release I will be > able to get more involved. I plan on attending your PTG sessions and am > trying to rally more help for you as well. > > >> Honestly speaking I do not believe in a community goal for that any more. >> It was tried, but it feels there is a split in the community. My personal >> opinion is that we should simply start doing that from SDK/CLI side closely >> working with one service at a time (since nova team expressed desire I >> started supporting them. Hope it will bring its fruits). Teams willing that >> will be able to achieve the target, teams not willing are free to stay. TC >> might think differently and act correspondingly, this is just my personal >> opinion. Other approaches seem a bit of utopia to me. >> >> >> >> What about an "openstack purge " that would call all >> projects? We once had a "/purge" goal, I'm not sure how far it went... >> What I know, is that purging all resources of a project is currently >> still a big painpoint. >> >> >> Please have a look at https://review.opendev.org/#/c/734485/ It >> implements an “alternative” from the OSC point of view by invoking >> relatively newly introduced project cleanup functionality in SDK. This >> supports more that original purge (except identity parts), tries to do in >> parallel as much as possible and even now supports filtering (i.e. drop all >> resources created or updated before DD.MM.YYYY). I am not sure here whether >> it should replace purge or come as alternative so far. Here as well - >> reviews and comments are welcome. >> >> For reference, back in Denver it was agreed to go this way, since no >> other approach seemed to be really achievable. >> >> >> >> Regards, >> Artem >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlandy at redhat.com Fri Sep 25 18:58:11 2020 From: rlandy at redhat.com (Ronelle Landy) Date: Fri, 25 Sep 2020 14:58:11 -0400 Subject: [tripleo][ci] proposing Yatin as tripleo-ci core In-Reply-To: References: Message-ID: Absolutely +2 On Fri, Sep 25, 2020 at 11:09 AM Sergii Golovatiuk wrote: > +1 > > пт, 25 сент. 2020 г. 
в 00:13, Wesley Hayutin : > >> Greetings, >> >> I really thought someone else had already sent this out. However I don't >> see it so here we go. >> I'd like to propose Yatin Karel as tripleo-ci core. >> >> You know you want to say +2 :) >> > > > -- > Sergii Golovatiuk > > Senior Software Developer > > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pramchan at yahoo.com Fri Sep 25 19:01:02 2020 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Fri, 25 Sep 2020 19:01:02 +0000 (UTC) Subject: [InteropWG] Looking for k8s Ready OpenStack / Open Infra and BMaaS Add-on Conformance Candidates References: <1867608362.872670.1601060462287.ref@mail.yahoo.com> Message-ID: <1867608362.872670.1601060462287@mail.yahoo.com> Hi all, Here is summary of Friday alternate week calls of Interop(next Oct 9th) join us on meetpad - refer https://etherpad.opendev.org/p/interop We are looking forward to prep work for PTG on October23 Monday & October 30 Friday Interop slots ( https://ethercalc.openstack.org/7xp2pcbh1ncb ) 1. Seeking Candidate for "Kubernetes Ready OpenStack" Branding - need Ideation from different Project teams with variations you seek (see eg.) eg. Distros  RHSOP with Cinder CSIVIO with Swift CSI eg. Hosted version Canonical OpenStack Distro with Manila CSISUSE OpenStack with Octavia Load Balances Ingress Controller Add your ideas to - https://etherpad.opendev.org/p/2020-Wallaby-interop-brainstorming 2. Seeking Candidates for OpenStack ready Bare Metal as a Service  Add-ons IronicMaaS Add your ideas to - https://etherpad.opendev.org/p/2020-Wallaby-interop-brainstorming All Vendors and PTLs interested in pursuing ideas for their next releases OpenStack or Open Infra release please add your ideas to etherpads above.Reply to this chain and we will try consolidate Interop in Bare metal & K8s era. ThanksPrakash / MarkFor InteropWG -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Fri Sep 25 19:02:26 2020 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 25 Sep 2020 20:02:26 +0100 Subject: [election][kolla] PTL candidacy for Wallaby Message-ID: Hi, I'd like to nominate myself to serve as the Kolla PTL for the Wallaby cycle. I have been PTL for the last 3 cycles, and would like the opportunity to continue to lead the team. I am pleased with the current state of the project. We have an active community, produce interesting and useful features, and regularly receive positive feedback. As with many contributors in a less certain world, I have felt the squeeze on upstream this cycle. I'd like to thank those who have stepped up to help when downstream (or holiday) means I'm unavailable, and those who generally help out with project admin tasks. This cycle we added two new meetings to the calendar. Kolla Klub started well, with fantastic attendance and some great discussions. Over time engagement has been more variable. I think we need to assess whether to continue regular meetings, and if so, how to keep it interesting. The Kolla Kall is more development oriented, and has been useful for focussing on some specific issues. For this meeting I think we need to be sure that it is the best use of people's time, and be a bit more dynamic about content and attendance. One area where I feel we could improve as a project is in integrating contributors in non-EU/US time zones. Language, culture and time zones can make this a challenge, but everyone benefits from a connected global community. 
We made a start on improving our documentation this cycle, let's keep pushing on that. Thanks for reading, Mark Goddard (mgoddard) From cboylan at sapwetik.org Fri Sep 25 19:18:11 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 25 Sep 2020 12:18:11 -0700 Subject: [magnum][infra] Can We Delete the OpenDev Fedora Atomic Images Mirror Message-ID: <88fa84f0-4284-4af5-9a84-6d5b546cba62@www.fastmail.com> I'm in the process of removing our fedora-30 images and noticed that we are still mirroring Fedora Atomic 28 and 29 images here: https://mirror.dfw.rax.opendev.org/fedora/atomic/stable/. Are these images in use anywhere? I would like to clean them up as they are even more stale than the fedora-30 nodepool images. Using http://codesearch.openstack.org I don't see anything using them but that only searches the master branches of our git repos. Clark From pierre at stackhpc.com Fri Sep 25 19:42:45 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Fri, 25 Sep 2020 21:42:45 +0200 Subject: [magnum][infra] Can We Delete the OpenDev Fedora Atomic Images Mirror In-Reply-To: <88fa84f0-4284-4af5-9a84-6d5b546cba62@www.fastmail.com> References: <88fa84f0-4284-4af5-9a84-6d5b546cba62@www.fastmail.com> Message-ID: Magnum is using Fedora-AtomicHost-29-20190820.0.x86_64.qcow2 in stable/ussuri and earlier branches: https://opendev.org/openstack/magnum/src/branch/stable/ussuri/magnum/tests/contrib/gate_hook.sh#L91 On Fri, 25 Sep 2020 at 21:25, Clark Boylan wrote: > > I'm in the process of removing our fedora-30 images and noticed that we are still mirroring Fedora Atomic 28 and 29 images here: https://mirror.dfw.rax.opendev.org/fedora/atomic/stable/. Are these images in use anywhere? I would like to clean them up as they are even more stale than the fedora-30 nodepool images. > > Using http://codesearch.openstack.org I don't see anything using them but that only searches the master branches of our git repos. > > Clark > From cboylan at sapwetik.org Fri Sep 25 20:21:27 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 25 Sep 2020 13:21:27 -0700 Subject: =?UTF-8?Q?Re:_[magnum][infra]_Can_We_Delete_the_OpenDev_Fedora_Atomic_Im?= =?UTF-8?Q?ages_Mirror?= In-Reply-To: References: <88fa84f0-4284-4af5-9a84-6d5b546cba62@www.fastmail.com> Message-ID: <38d6d9c1-8648-4a96-8264-a7086ed58f28@www.fastmail.com> On Fri, Sep 25, 2020, at 12:42 PM, Pierre Riteau wrote: > Magnum is using Fedora-AtomicHost-29-20190820.0.x86_64.qcow2 in > stable/ussuri and earlier branches: > https://opendev.org/openstack/magnum/src/branch/stable/ussuri/magnum/tests/contrib/gate_hook.sh#L91 Maybe we can delete all of the Fedora Atomic 28 images and the 29 images older than the one in use. Given Fedora 29's support period is long over, is there any plan to test using some other resource on the stable branches? We intentionally choose Ubuntu LTS and CentOS as platforms for jobs we intend to run long term as they don't go away shortly after we make openstack releases. > > On Fri, 25 Sep 2020 at 21:25, Clark Boylan wrote: > > > > I'm in the process of removing our fedora-30 images and noticed that we are still mirroring Fedora Atomic 28 and 29 images here: https://mirror.dfw.rax.opendev.org/fedora/atomic/stable/. Are these images in use anywhere? I would like to clean them up as they are even more stale than the fedora-30 nodepool images. > > > > Using http://codesearch.openstack.org I don't see anything using them but that only searches the master branches of our git repos. 
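One way to double-check the stable branches locally, given that codesearch only indexes master, is sketched below; the repository and search pattern are just examples.

# Sketch: grep all stable branches of a repo locally, since codesearch only
# indexes master (the repository and pattern here are only examples).
git clone https://opendev.org/openstack/magnum && cd magnum
for branch in $(git branch -r | grep 'origin/stable/'); do
    echo "== $branch"
    git grep -n 'Fedora-AtomicHost' "$branch"
done
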
> > > > Clark From fungi at yuggoth.org Fri Sep 25 21:12:47 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 25 Sep 2020 21:12:47 +0000 Subject: [magnum][infra] Can We Delete the OpenDev Fedora Atomic Images Mirror In-Reply-To: <38d6d9c1-8648-4a96-8264-a7086ed58f28@www.fastmail.com> References: <88fa84f0-4284-4af5-9a84-6d5b546cba62@www.fastmail.com> <38d6d9c1-8648-4a96-8264-a7086ed58f28@www.fastmail.com> Message-ID: <20200925211246.5t3cejamsvhku7pl@yuggoth.org> On 2020-09-25 13:21:27 -0700 (-0700), Clark Boylan wrote: > On Fri, Sep 25, 2020, at 12:42 PM, Pierre Riteau wrote: > > Magnum is using Fedora-AtomicHost-29-20190820.0.x86_64.qcow2 in > > stable/ussuri and earlier branches: > > https://opendev.org/openstack/magnum/src/branch/stable/ussuri/magnum/tests/contrib/gate_hook.sh#L91 > > Maybe we can delete all of the Fedora Atomic 28 images and the 29 > images older than the one in use. > > Given Fedora 29's support period is long over, is there any plan > to test using some other resource on the stable branches? We > intentionally choose Ubuntu LTS and CentOS as platforms for jobs > we intend to run long term as they don't go away shortly after we > make openstack releases. [...] Also with my paranoid VMT hat on for a moment, Atomic 29 is almost certainly riddled with known and unfixed security vulnerabilities at this point. I seriously hope nobody's encouraging our users to actually run it anywhere. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From psk.9009 at gmail.com Fri Sep 25 21:47:47 2020 From: psk.9009 at gmail.com (prem shivakumar) Date: Fri, 25 Sep 2020 17:47:47 -0400 Subject: OpenstackSwift - rclone Copy/Sync Large Objects set (150Mil objects - 80TB) from Tenant PROD1 to PROD2 Message-ID: Hi Open-stack users, Our current setup: PROD1 Container1 Dir1/Dir2/YYYYMMDD.tar Want to move to: PROD2 Container2_YYYYMM Dir1/Dir2/YYYYMMDD.tar I got all the objects metadata(Object names) using “swift list Container1” I am currently using rclone where I am passing metadata(Object names) info of “Container1”. All the Auth variables exist in rclone.conf. Example command: rclone copy PROD1: Dir1/Dir2/YYYYMMDD.tar PROD2:Container2_YYYYMM/Dir1/Dir2/ YYYYMMDD.tar Currently, I am facing slowness in copying files from PROD1 to PROD2. On an average, It takes about 5s to copy the object from PROD1 to PROD2. If I count the time against 150M its more than 10000Days. I have also tried “rclone sync with —fast-list” it is failing as it is unable to keep the listing in the memory. In this case, is there any easier and faster solution to copy data from Container1 to Container2? Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From feilong at catalyst.net.nz Fri Sep 25 23:25:23 2020 From: feilong at catalyst.net.nz (feilong at catalyst.net.nz) Date: Sat, 26 Sep 2020 11:25:23 +1200 Subject: [magnum][infra] Can We Delete the OpenDev Fedora Atomic Images Mirror In-Reply-To: Message-ID: An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Sat Sep 26 06:22:49 2020 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Sat, 26 Sep 2020 09:22:49 +0300 Subject: [openstack-ansible][election] Candidacy Message-ID: <62101601101259@mail.yandex.ru> Hey everyone! This cycle I've decided to take part in elections and run for PTL of the OpenStack-Ansible project. 
I feel that OSA nowadays is a solid and stable project, and we're doing a great job to get all features in time and deploy only modern software with our deployments, as well as providing deployers with a high range of different supported options and flexibility. It's rather important to keep up with the progress. And there are main things I'd love to focus on during the upcoming release: # Integration tests coverage for all roles Despite migrating tests for most of the projects to the integrated repo, we still have several projects not covered with proper testing (like Zun). I also would love to pay attention to passing appropriate tempest scenarios for all projects instead of running some default ones. # Cleaning up bugtracker I think one of our highest priorities should be project stability. And taking care of bugs might help us heavily with that. We haven't been paying enough attention to our Launchpad for a while and I think we should change that by cleaning up stale bugs, so we could focus on real ones. Returning bug triage to the meeting might be also an efficient way to handle them. # Roles cleanup We have a lot of things left in code, like workarounds for upgrades, upstream bugs, support for obsolete operating systems, etc. Removal/replacement of deprecated OpenStack options should be also included. # Usage of the new pip resolver Our venv build process relies on provided constraints heavily, which isn't valid in case of the new pip resolver usage, so we will need to make appropriate adjustments into the process to keep the ability to define specific package requirements and use modern pip. Thank you for taking into consideration my nomination. From sxmatch1986 at gmail.com Sat Sep 26 11:41:19 2020 From: sxmatch1986 at gmail.com (hao wang) Date: Sat, 26 Sep 2020 19:41:19 +0800 Subject: [election][zaqar] PTL candidacy for Wallaby Message-ID: Hi, I want to propose my candidacy and continue serving as Zaqar PTL in the Wallaby cycle. For the Victoria release, the Zaqar team continues to keep the Zaqar stable and reliable, and also some great works have been done. We designed and implemented the Encrypted message in the queue to enhance the security of Zaqar. We also fixed some bugs to meet the python3 requirements. In the Wallaby cycle, I would like to continue serving this project and bring more contributors and the user requires. There are some major works I want to do in W cycle: 1.Finish the Topic resource work, to support all backends. 2.Consider to support more backends that have widely used. 3.Continue to support more security algorithms for encrypted messages. 4.Implement some new requirements from users. 5.keep Zaqar more reliable as usual. Thanks to everyone to support my work and keep contributing to Zaqar. I will do my best in the new cycle. 
From romain.chanu at univ-lyon1.fr Sat Sep 26 12:48:24 2020 From: romain.chanu at univ-lyon1.fr (CHANU ROMAIN) Date: Sat, 26 Sep 2020 12:48:24 +0000 Subject: The question about OpenStack instance fail to boot after unexpected compute node shutdown In-Reply-To: References: Message-ID: <1601124512325.20684@univ-lyon1.fr> Hello, You have to remove the ceph lock: $1 = pool $2 = volume name read -r -a vars <<< $(rbd -p $1 lock list $2 | tail -1) rbd -p $1 lock remove $2 "${vars[1]} ${vars[2]}" ${vars[0]} Best regards, Romain ________________________________ From: Yule Sun <373056786 at qq.com> Sent: Friday, September 25, 2020 5:47 AM To: rony.khan at brilliant.com.bd; openstack-discuss at lists.openstack.org Subject: The question about OpenStack instance fail to boot after unexpected compute node shutdown I saw your message on the website http://lists.openstack.org/pipermail/openstack-discuss/2019-April/005635.html . And i have been troubled by the same problem for a long time . I tried to change the parameters resume_guests_state_on_host_boot=True and instance_usage_audit=True in the nova.conf ,but it didn't work . I wonder if you have solved this problem, looking forward your reply, Thank you and best regards. And my instance error looks like this . [cid:e8770762-d2b2-4630-9ce6-4d357dc92cee] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 2020-09-25_114323.png Type: image/png Size: 16879 bytes Desc: 2020-09-25_114323.png URL: From tburke at nvidia.com Sat Sep 26 23:06:14 2020 From: tburke at nvidia.com (Tim Burke) Date: Sat, 26 Sep 2020 16:06:14 -0700 Subject: [election][swift] PTL candidacy for Wallaby Message-ID: I'd like to announce my candidacy for Swift PTL for the Wallaby cycle. This past year, Swift celebrated ten years running in production. Much has changed in that time: new features have been developed and polished, versions of Python have come and gone, and clusters have grown to staggering capacities. However, our commitment to operational excellence remains the same. Recently (particularly in the last several months), I've noticed our contributors increasingly have an operator's mindset. We look for more and better ways to measure Swift. We seek to reduce client impacts from config reloads and upgrades. We take greater ownership over the health and performance of our clusters. To a large extent, we're all operators now. The benefits have been enormous. We've improved performance; we've upgraded without disrupting any client requests; we've migrated clusters to Python 3 to position them well for the next ten years. Through it all, clients put ever more data into Swift. The increases in demand bring almost incomprehensible scales. We now see individual clusters sustaining tens of thousands of requests every second. We see containers with a billion objects. We see expansions that are as large as many whole clusters were just a few years ago. This is our next great challenge: how do we move away from a world where expansions are a rarity that may require a bit of a scramble and into a world of constant expansion? How can we effectively manage clusters with thousands of nodes? How do we shift from thinking in terms of petabytes to exabytes? I can't wait to see how we rise to meet this challenge. 
Tim Burke From changzhi at cn.ibm.com Sun Sep 27 01:32:52 2020 From: changzhi at cn.ibm.com (Zhi CZ Chang) Date: Sun, 27 Sep 2020 01:32:52 +0000 Subject: API short reponse time between the S version and the U version Message-ID: An HTML attachment was scrubbed... URL: From ssbarnea at redhat.com Sun Sep 27 07:06:55 2020 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Sun, 27 Sep 2020 08:06:55 +0100 Subject: [tripleo][ci] proposing Yatin as tripleo-ci core In-Reply-To: References: Message-ID: Absolutely +1 — I always appreciated Yatin feedback on reviews, very good eye for bugs! On Thu, 24 Sep 2020 at 23:13, Wesley Hayutin wrote: > Greetings, > > I really thought someone else had already sent this out. However I don't > see it so here we go. > I'd like to propose Yatin Karel as tripleo-ci core. > > You know you want to say +2 :) > > > -- -- /sorin -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Sun Sep 27 07:14:52 2020 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Sun, 27 Sep 2020 15:14:52 +0800 Subject: [election][heat] PTL candidacy for Wallaby Message-ID: Hi all, I would like to announce my candidacy for Heat PTL for Wallaby cycle. Over the past couple of cycles, we running Heat dynamically according to requirements. We never stop encouraging people to join us and provide consistent reviews or fixes as much as we can based on priorities. And if those priorities not working for you, that means you must join the community to help. I believe the goals for the next cycle should be community CI stability, cross-project/community scenario tests to ensure what we deliver to users will get higher quality, also to get rid of legacy mode and move everything to convergence mode, and finally container-native support to make sure we provide usable Orchestration services for the container environment. I believe in new features, so we will try to encourage more features and review them as many as possible, but will not plan for team feature goals unless we can trigger more discussion and hands-on.Thank you for taking my self-nomination into consideration. -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From pwm2012 at gmail.com Sun Sep 27 14:22:31 2020 From: pwm2012 at gmail.com (pwm) Date: Sun, 27 Sep 2020 22:22:31 +0800 Subject: Ussuri Octavia load balancer on OVS Message-ID: Hi, I using the following setup for Octavia load balancer on OVS Ansible openstack_user_config.yml - network: container_bridge: "br-lbaas" container_type: "veth" container_interface: "eth14" host_bind_override: "eth14" ip_from_q: "octavia" type: "flat" net_name: "octavia" group_binds: - neutron_openvswitch_agent - octavia-worker - octavia-housekeeping - octavia-health-manager user_variables.yml octavia_provider_network_name: "octavia" octavia_provider_network_type: flat octavia_neutron_management_network_name: lbaas-mgmt /etc/netplan/50-cloud-init.yaml br-lbaas: dhcp4: no interfaces: [ bond10 ] addresses: [] parameters: stp: false forward-delay: 0 bond10: dhcp4: no addresses: [] interfaces: [ens16] parameters: mode: balance-tlb brctl show bridge name bridge id STP enabled interfaces br-lbaas 8000.d60e4e80f672 no 2ea34552_eth14 bond10 However, I am getting the following error when creating the load balance octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. 
Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.29.233.47', port=9443 The Octavia api container unable to connect to the amphora instance. Any missing configuration, cause I need to manually add in the eth14 interface to the br-lbaas bridge in order to fix the connection issue brctl addif br-lbaas eth14 Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From yasufum.o at gmail.com Sun Sep 27 19:30:49 2020 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Mon, 28 Sep 2020 04:30:49 +0900 Subject: [election][tacker] PTL candidacy for Wallaby Message-ID: Hi, I'd like to propose my candidacy for Tacker PTL in Wallaby cycle. In Victoria release, we have released great features required for the latest ETSI NFV standard and made several improvements for Tacker as and as planed in the last vPTG. We also fix issues for migrating to python3 and Ubuntu focal. As a PTL, I've led the team by proposing not only new features, but also things for driving the team such as documentation or bug tracking. It was something hard, but is running well. In Wallaby cycle, I would like to continue to make Tacker be more useful product from not only telco, but also all users interested in NFV. I believe Tacker will be a good reference implementation for NFV standard. We have planed to make Tacker more feasible not only for VM environment, but also container to meet requirements from industries. - Introduce the latest container technology with ETSI NFV standard. - Reinforce test environment for keeping the quality of the product, not only unit tests and functional tests, but also introduce more sophisticated scheme such as robot framework. - Revise wiki and documentation for covering users more. Regards, Yasufumi Ogawa From fungi at yuggoth.org Mon Sep 28 00:39:05 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 28 Sep 2020 00:39:05 +0000 Subject: [all][elections][ptl][tc] Combined PTL/TC Nominations ending soon Message-ID: <20200928003905.5ciqs5zkehsnz7hw@yuggoth.org> A quick reminder that we are in the last hours for declaring PTL and TC candidacies. Nominations are open until Sep 29, 2020 23:45 UTC. If you want to stand for election, don't delay, follow the instructions at [1] to make sure the community knows your intentions. Make sure your nomination has been submitted to the openstack/election repository and approved by election officials. Election statistics[2]: Nominations started @ 2020-09-22 23:45:00 UTC Nominations end @ 2020-09-29 23:45:00 UTC Nominations duration : 7 days, 0:00:00 Nominations remaining : 1 day, 23:12:19 Nominations progress : 71.90% --------------------------------------------------- Projects[1] : 55 Projects with candidates : 18 ( 32.73%) Projects with election : 0 ( 0.00%) --------------------------------------------------- Need election : 0 () Need appointment : 37 (Adjutant Barbican Blazar Cinder Cloudkitty Cyborg Designate Ec2_Api Freezer Horizon Ironic Karbor Magnum Manila Masakari Mistral Monasca Murano Octavia OpenStack_Charms OpenStack_Helm Oslo Packaging_Rpm Placement Qinling Quality_Assurance Rally Requirements Searchlight Senlin Solum Storlets Tacker Telemetry Trove Watcher Zun) =================================================== Stats gathered @ 2020-09-28 00:32:41 UTC This means that with approximately 2 days left, 37 projects will be deemed leaderless. In this case the TC will oversee PTL selection as described by [3]. 
Thank you, [1] https://governance.openstack.org/election/#how-to-submit-a-candidacy [2] Any open reviews at https://review.openstack.org/#/q/is:open+project:openstack/election have not been factored into these stats. [3] https://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html -- Jeremy Stanley on behalf of the OpenStack Technical Election Officials -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Mon Sep 28 00:46:50 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 28 Sep 2020 00:46:50 +0000 Subject: [all][elections][ptl][tc] Combined PTL/TC Nominations ending soon In-Reply-To: <20200928003905.5ciqs5zkehsnz7hw@yuggoth.org> References: <20200928003905.5ciqs5zkehsnz7hw@yuggoth.org> Message-ID: <20200928004650.7tt3yrtgyuwlufio@yuggoth.org> On 2020-09-28 00:39:05 +0000 (+0000), Jeremy Stanley wrote: > A quick reminder that we are in the last hours for declaring PTL and > TC candidacies. Nominations are open until Sep 29, 2020 23:45 UTC. [...] I meant to include in my previous message, so far we have two confirmed candidates for the four open Technical Committee seats. If you're considering participating as a member of the TC, please don't hesitate to submit your nomination. -- Jeremy Stanley on behalf of the OpenStack Technical Election Officials -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From adriant at catalystcloud.nz Mon Sep 28 02:41:55 2020 From: adriant at catalystcloud.nz (Adrian Turjak) Date: Mon, 28 Sep 2020 15:41:55 +1300 Subject: [adjutant][election] PTL candidacy for Wallaby Message-ID: <7aeda29d-065c-011a-0dc5-0dd31d79a57a@catalystcloud.nz> Hello All, I'm submitting myself as the PTL for Adjutant during the W cycle. The focus is as always to continue maintaining the project to ensure it is operating as expected and as bug free as possible. Wallaby will mostly be about cleaning up some internal systems, and minor features to help with custom plugin support. The hope is to also clean up policy/role in this release, and include some elements of that into the plugin layer, and to start working on more useful user management features. Depending on time and resources, we may start work on async worker support to allow some parts of tasks to be processed asynchronously. Cheers, Adrian Turjak From walsh277072 at gmail.com Mon Sep 28 07:11:56 2020 From: walsh277072 at gmail.com (WALSH CHANG) Date: Mon, 28 Sep 2020 17:11:56 +1000 Subject: Swift Installation Error Message-ID: To whom it may concern, I am very new to OpenStack. I had an error when I install the Swift service. Error trying to load config from /etc/swift/proxy-server.conf: Entry point 'proxy\naccount_autocreate = True' not found in egg 'swift' (dir: /usr/lib/python2.7/dist-packages; protocols: paste.app_factory, paste.composite_factory, paste.composit_factory; entry_points: ) Just wondering if it's ok to have storage node and controller node on the same device. I use the controller node as the storage node, and use the flash drive as the storage space. Not sure if this is the reason for the error. It will be very appreciated if someone can provide some suggestions. Kind regards, Walsh -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marios at redhat.com Mon Sep 28 07:38:31 2020 From: marios at redhat.com (Marios Andreou) Date: Mon, 28 Sep 2020 10:38:31 +0300 Subject: [tripleo]] tripleo victoria rc1 In-Reply-To: References: Message-ID: On Mon, Sep 28, 2020 at 10:16 AM Yatin Karel wrote: > Hi Marios, > > << we don't normally/aren't expected to cut release candidates. > However after discussion with and guidance from Wes I decided to > propose an rc1 as practice since I << haven't used the new releases > tool before [4]. > << The proposal is up at [1] but I've set -1 there on request from > reviewers; we can't merge that until [2] merges first. > > Is this just about releasing RC Tags or changing the release-model > that tripleo used to follow:- cycle-with-intermediary(cycle-trailing > earlier) vs cycle-with-rc. Maybe you can share more context with the > this is just about bumping the git tags on the projects and not about changing the release model. > discussion points as to what's different this time and reasoning for > the same to have a clear picture. > nothing different wrt the release model - see the diff it is just bumping the hash and version https://review.opendev.org/gitweb?p=openstack%2Freleases.git;a=commitdiff;h=43cca15e8525992f6c3bfba68d3421c64c960bf9 . We are just making a release to coincide with the wider openstack Victoria RC1 even though we aren't expected to follow the rc model with our 'cycle with intermediary' on these projects. > And if possible can you also share what's the plan for GA release for > TripleO like how it will be this time. > So weshay is still PTL, and we haven't discussed this in any great detail, except that we may have to delay a little bit cutting branch and releasing victoria final. Generally with rc tags stable branch is also cut but i don't see that > in https://review.opendev.org/#/c/754346/. And i hope this will not be > right I guess the confusion is that this isn't a proper RC since we don't have those 'officially' .It is just a version bump that coincides with the victoria rc1 milestone end of last week. hope it helps, also, did you forget to reply-all? should i send to the list (just confirmed with ykarel and re-adding list to addresses) thanks, marios > merged until victoria tripleo jobs are ready if cutting > stable/victoria branch. > > On Fri, Sep 25, 2020 at 10:05 PM Marios Andreou wrote: > > > > hello TripleO folks o/ > > > > we don't normally/aren't expected to cut release candidates. However > after discussion with and guidance from Wes I decided to propose an rc1 as > practice since I haven't used the new releases tool before [4]. > > > > The proposal is up at [1] but I've set -1 there on request from > reviewers; we can't merge that until [2] merges first. > > > > If you are interested please check [1] and add any comments. In > particular let me know if you want to include a release for one of the > independent repos [3] (though for validations-common/libs we need to reach > out to the validations team they are managing those AFAIK). > > > > thanks, hope you enjoy your weekend! > > > > [1] https://review.opendev.org/#/c/754346/ > > [2] https://review.opendev.org/754347 > > [3] https://releases.openstack.org/teams/tripleo.html#independent > > [4] > https://releases.openstack.org/reference/using.html#using-new-release-command > > Thanks and Regards > Yatin Karel > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ykarel at redhat.com Mon Sep 28 07:50:45 2020 From: ykarel at redhat.com (Yatin Karel) Date: Mon, 28 Sep 2020 13:20:45 +0530 Subject: [tripleo][ci] proposing Yatin as tripleo-ci core In-Reply-To: References: Message-ID: Thanks Wesley and Others for the kind words and feedback. On Fri, Sep 25, 2020 at 3:52 AM Wesley Hayutin wrote: > > Greetings, > > I really thought someone else had already sent this out. However I don't see it so here we go. > I'd like to propose Yatin Karel as tripleo-ci core. > > You know you want to say +2 :) Thanks and Regards Yatin Karel From mbultel at redhat.com Mon Sep 28 08:04:55 2020 From: mbultel at redhat.com (Mathieu Bultel) Date: Mon, 28 Sep 2020 10:04:55 +0200 Subject: [tripleo][ci] proposing Yatin as tripleo-ci core In-Reply-To: References: Message-ID: a late +2 but indeed yes. On Mon, Sep 28, 2020 at 9:54 AM Yatin Karel wrote: > Thanks Wesley and Others for the kind words and feedback. > > On Fri, Sep 25, 2020 at 3:52 AM Wesley Hayutin > wrote: > > > > Greetings, > > > > I really thought someone else had already sent this out. However I don't > see it so here we go. > > I'd like to propose Yatin Karel as tripleo-ci core. > > > > You know you want to say +2 :) > > > Thanks and Regards > Yatin Karel > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From renat.akhmerov at gmail.com Mon Sep 28 08:34:23 2020 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Mon, 28 Sep 2020 15:34:23 +0700 Subject: [ptl][election][mistral] PTL candidacy In-Reply-To: <1488e425-7151-4b72-aef3-6193f96e2e35@Spark> References: <1488e425-7151-4b72-aef3-6193f96e2e35@Spark> Message-ID: <2e54bb66-dc6b-4824-97a2-e370af875efd@Spark> Hi, I'm Renat Akhmerov. I'd like to announce my PTL candidacy for Mistral in Wallaby cycle. In Victoria, we made a huge change related to how we manage actions in Mistral. Previously, all actions were stored in the database and when any Mistral subsystem needed to get info about an action (action definition) it sent a query to DB. So, there wasn't any abstraction responsible for action management. This approach is not flexible and makes refactoring incredibly hard. It also means that it's nearly impossible to deliver actions to the system or alter them in runtime w/o having to reboot a cluster node. In Victoria we introduced the new abstraction called Action Provider. Action providers are fully responsible for delivering actions to Mistral. It is possible to. register many providers in the entry point "mistral.action.providers" in setup.cfg of any Python project (installed within the same Python env as Mistral) and Mistral will be using them all to find actions. So action management is now decoupled from the rest of the system and it's now possible to move away from storing action definitions only in the DB. It's fully up to a particular action provider implentation. Actions can even be dynamically generated, for example, as wrappers around a subset of operating system commands. Another option is. requesting info about actions via some communication protocols like HTTP, AMQP etc. There's still work to polish this all and document properly but the main infrastructure is already available and everyone can implement their own action providers. For W cycle I'd like to proceed with improving Mistral usability (toolset for developing Mistral actions, docs etc.) and address several known scalability issues. As always, anyone is very welcome to join our project. It's a lot of fun to work on it. 
The best way to get in touch with us is IRC channel #openstack-mistral or the openstack-discuss mailing list (with [mistral] tag in email subject). [1] https://review.opendev.org/#/c/754646/ Cheers Renat Akhmerov @Nokia -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliver.wenz at dhbw-mannheim.de Mon Sep 28 08:55:50 2020 From: oliver.wenz at dhbw-mannheim.de (Oliver Wenz) Date: Mon, 28 Sep 2020 10:55:50 +0200 Subject: [openstack-ansible] OpenStack Ansible deployment fails, due to lxc containers not having network connection In-Reply-To: References: Message-ID: > I would recommend always building an All-In-One deployment in a virtual > machine > so that you have a reference to compare against when moving away from > the 'stock config'. > Documentation for the AIO can be found here > https://docs.openstack.org/openstack-ansible/ussuri/user/aio/quickstart.html I gradually decreased the complexity of my test system towards the 'stock config' to find out what caused the error but it kept showing up. Finally, it vanished when I rebooted the server and my original configuration also completed successfully afterwards. 🤦‍♂️ Thank you for your help and the insights to OSA! Kind regards, Oliver From mdulko at redhat.com Mon Sep 28 09:02:28 2020 From: mdulko at redhat.com (=?UTF-8?Q?Micha=C5=82?= Dulko) Date: Mon, 28 Sep 2020 11:02:28 +0200 Subject: [Kuryr] Proposing Roman Dobosz for kuryr-kubernetes core In-Reply-To: <2b71e3bd55187b6df7794dbc03d73d1604eeaba5.camel@redhat.com> References: <2b71e3bd55187b6df7794dbc03d73d1604eeaba5.camel@redhat.com> Message-ID: <56bc5dc645ddc393de42f7ce8abe5c1c6db01d3d.camel@redhat.com> Given the positive feedback I added Roman to kuryr-kubernetes-core. Congrats! On Tue, 2020-09-22 at 16:45 +0200, Michał Dulko wrote: > Hello, > > I'd like to propose Roman for the core reviewer role in kuryr- > kubernetes. Roman was leading several successful development activities > during Train and Ussuri cycles: > > * Switch to use openstacksdk as our OpenStack API client. > * Support for IPv6. > * Moving VIF data from pod annotations to a KuryrPort CR. > > He also demonstrated code reviewing skills of an experienced Python > developer. > > In the absence of objections, I'll proceed with adding Roman to the > core team next week. > > Thanks, > Michał > From gryf73 at gmail.com Mon Sep 28 09:18:50 2020 From: gryf73 at gmail.com (Roman Dobosz) Date: Mon, 28 Sep 2020 11:18:50 +0200 Subject: [Kuryr] Proposing Roman Dobosz for kuryr-kubernetes core In-Reply-To: <56bc5dc645ddc393de42f7ce8abe5c1c6db01d3d.camel@redhat.com> References: <2b71e3bd55187b6df7794dbc03d73d1604eeaba5.camel@redhat.com> <56bc5dc645ddc393de42f7ce8abe5c1c6db01d3d.camel@redhat.com> Message-ID: <20200928111850.d501d63677a01a7857f9b1ab@gmail.com> On Mon, 28 Sep 2020 11:02:28 +0200 Michał Dulko wrote: > Given the positive feedback I added Roman to kuryr-kubernetes-core. > Congrats! Thanks! I'll do my best! -- Cheers, Roman Dobosz From balazs.gibizer at est.tech Mon Sep 28 10:54:14 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 28 Sep 2020 12:54:14 +0200 Subject: [placement][nova][cinder][neutron][blazar] Placement governance switch(back) In-Reply-To: References: Message-ID: On Thu, Sep 24, 2020 at 18:00, Radosław Piliszek wrote: > On Thu, Sep 24, 2020 at 5:24 PM Stephen Finucane > wrote: >> >> Placement has been a separate project with its own governance since >> Stein [1]. 
Since then, the main drivers behind the separation have >> moved onto pastures new and with Tetsuro sadly declaring his non- >> candidacy for the PTL position for Wallaby [2], we're left in the >> unenviable position of potentially not having a PTL for the Wallaby >> cycle. As such, it's probably time to discuss the future of >> Placement >> governance. >> >> Assuming no one steps forward for the Placement PTL role, it would >> appear to me that we have two options. Either we look at >> transitioning >> Placement to a PTL-less project, or we move it back under nova >> governance. To be honest, given how important placement is to nova >> and >> other projects now, I'm uncomfortable with the idea of not having a >> point person who is ultimately responsible for things like cutting a >> release (yes, delegation is encouraged but someone needs to herd the >> cats). At the same time, I do realize that placement is used by more >> that nova now so nova cores and what's left of the separate >> placement >> core team shouldn't be the only ones making this decision. >> >> So, assuming the worst happens and placement is left without a PTL >> for >> Victoria, what do we want to do? > Thanks Stephen for raising this. > > Run DPL with liaisons from the interested projects perhaps? :-) > I think that would be a nice exercise for the DPL concept. Cheers, gibi From balazs.gibizer at est.tech Mon Sep 28 11:04:48 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 28 Sep 2020 13:04:48 +0200 Subject: [placement][nova][cinder][neutron][blazar] Placement governance switch(back) In-Reply-To: <3063992.oiGErgHkdL@whitebase.usersys.redhat.com> References: <3063992.oiGErgHkdL@whitebase.usersys.redhat.com> Message-ID: <048DHQ.GTG67G0IXWTI1@est.tech> On Thu, Sep 24, 2020 at 18:12, Luigi Toscano wrote: > On Thursday, 24 September 2020 17:23:36 CEST Stephen Finucane wrote: > >> Assuming no one steps forward for the Placement PTL role, it would >> appear to me that we have two options. Either we look at >> transitioning >> Placement to a PTL-less project, or we move it back under nova >> governance. To be honest, given how important placement is to nova >> and >> other projects now, I'm uncomfortable with the idea of not having a >> point person who is ultimately responsible for things like cutting a >> release (yes, delegation is encouraged but someone needs to herd the >> cats). At the same time, I do realize that placement is used by more >> that nova now so nova cores and what's left of the separate >> placement >> core team shouldn't be the only ones making this decision. >> >> So, assuming the worst happens and placement is left without a PTL >> for >> Victoria, what do we want to do? > > I mentioned this on IRC, but just for completeness, there is another > option: > have the Nova candidate PTL (I assume there is just one) also apply > for > Placement PTL, and handle the 2 realms in a personal union. As far as I know I'm the only nova PTL candidate so basically you asking me to take the Placement PTL role as well. This is a valid option. Still, first, I would like to give a chance to the DPL concept in Placement in a way yoctozepto suggested. 
Cheers, gibi > > Ciao > -- > Luigi > > > > > From balazs.gibizer at est.tech Mon Sep 28 11:16:42 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 28 Sep 2020 13:16:42 +0200 Subject: [elections][placement][blazar] Placement PTL Non-candidacy: Stepping down In-Reply-To: References: Message-ID: On Wed, Sep 23, 2020 at 11:05, Tetsuro Nakamura wrote: > Hello everyone, > > Due to my current responsibilities, > I'm not able to keep up with my duties > either as a Placement PTL, core reviewer, > or as a Blazar core reviewer in Wallaby cycle. > > Thank you so much to everyone that has supported. Thank you for your work! Cheers, gibi > > I won't be able to checking ML or IRC, > but I'll still be checking my emails. > Please ping me via email if you need help. > > Thanks. > > - Tetsuro > > -- > Tetsuro Nakamura > NTT Network Service Systems Laboratories > TEL:0422 59 6914(National)/+81 422 59 6914(International) > 3-9-11, Midori-Cho Musashino-Shi, Tokyo 180-8585 Japan > > From juliaashleykreger at gmail.com Mon Sep 28 13:44:11 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 28 Sep 2020 06:44:11 -0700 Subject: [ironic] Proposing returning Jay Faulkner to ironic-core Message-ID: Greetings ironic contributors, I'm sure many of you have noticed JayF has been more active on IRC over the past year. Recently he started reviewing and providing feedback on changes in ironic-python-agent as well as some work to add new features and fix some very not fun bugs. Given that he was ironic-core when he departed the community, I believe it is only fair that we return those rights to him. Any objections? -Julia From iurygregory at gmail.com Mon Sep 28 13:49:03 2020 From: iurygregory at gmail.com (Iury Gregory) Date: Mon, 28 Sep 2020 15:49:03 +0200 Subject: [ironic] Proposing returning Jay Faulkner to ironic-core In-Reply-To: References: Message-ID: +2 On Mon, Sep 28, 2020, 15:46 Julia Kreger wrote: > Greetings ironic contributors, > > I'm sure many of you have noticed JayF has been more active on IRC > over the past year. Recently he started reviewing and providing > feedback on changes in ironic-python-agent as well as some work to add > new features and fix some very not fun bugs. > > Given that he was ironic-core when he departed the community, I > believe it is only fair that we return those rights to him. > > Any objections? > > -Julia > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Mon Sep 28 14:00:50 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 28 Sep 2020 09:00:50 -0500 Subject: [release] Release countdown for week R-2 Sept 28 - Oct 2 Message-ID: <20200928140050.GA721498@sm-workstation> Development Focus ----------------- At this point we should have release candidates (RC1 or recent intermediary release) for all the victoria deliverables. Teams should be working on any release-critical bugs that would require another RC or intermediary release before the final release. Actions ------- If you have deliverables on a cycle-with-rc model but haven't had a RC1 released yet, please check the proposed RC1 releases at [1]. Without a clear answer, the release management team will be approving those on Monday morning! [1] https://review.opendev.org/#/q/topic:victoria-rc1+status:open Early in the week, the release team will be proposing stable/victoria branch creation for all deliverables that have not branched yet, using the latest available victoria release as the branch point. 
If your team is ready to go for creating that branch, please let us know by leaving a +1 on these patches. If you would like to wait for another release before branching, you can -1 the patch and update it later in the week with the new release you would like to use. By the end of the week the release team will merge those patches though, unless an exception is granted. Once stable/victoria branches are created, if a release-critical bug is detected, you will need to fix the issue in the master branch first, then backport the fix to the stable/victoria branch before releasing out of the stable/victoria branch. After all of the cycle-with-rc projects have branched we will branch devstack, grenade, and the requirements repos. This will effectively open them up for wallaby development, though the focus should still be on finishing up victoria until the final release. For projects with translations, watch for any translation patches coming through and merge them quickly. A new release should be produced so that translations are included in the final victoria release. Finally, now is a good time to finalize release notes. In particular, consider adding any relevant "prelude" content. Release notes are targetted for the downstream consumers of your project, so it would be great to include any useful information for those that are going to pick up and use or deploy the victoria version of your project. Upcoming Deadlines & Dates -------------------------- Final Victoria release: October 14 Open Infra Summit: October 19-23 Wallaby PTG: October 26-30 From ruslanas at lpic.lt Mon Sep 28 14:06:38 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Mon, 28 Sep 2020 17:06:38 +0300 Subject: [tripleo][ironic] fails to introspect: my fsm encountered an exception Message-ID: Hi all, I have a clean undercloud deployment. but when I add a node, it fails to introspect. in ironic logs I see interesting lines about FSM (what is it?) [1] and in the same paste, I have provided/pasted, how openstack overcloud node import instack.json looks like. [1] http://paste.openstack.org/show/uPwWVYlO3UQbrF0WzDFH/ -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Mon Sep 28 14:14:50 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 28 Sep 2020 16:14:50 +0200 Subject: [ironic] Proposing returning Jay Faulkner to ironic-core In-Reply-To: References: Message-ID: Not a single objection here, welcome back Jay! On Mon, Sep 28, 2020 at 3:47 PM Julia Kreger wrote: > Greetings ironic contributors, > > I'm sure many of you have noticed JayF has been more active on IRC > over the past year. Recently he started reviewing and providing > feedback on changes in ironic-python-agent as well as some work to add > new features and fix some very not fun bugs. > > Given that he was ironic-core when he departed the community, I > believe it is only fair that we return those rights to him. > > Any objections? > > -Julia > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ruslanas at lpic.lt Mon Sep 28 14:39:28 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Mon, 28 Sep 2020 17:39:28 +0300 Subject: [tripleo][ironic] fails to introspect: my fsm encountered an exception In-Reply-To: References: Message-ID: Hi all, Just in case. I have executed it with --debug [1]. [1] http://paste.openstack.org/show/zIHDZ4PS8d0Oi3fmB5AD/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From strigazi at gmail.com Mon Sep 28 14:59:28 2020 From: strigazi at gmail.com (Spyros Trigazis) Date: Mon, 28 Sep 2020 16:59:28 +0200 Subject: [election][magnum] Add strigazi candidancy for magnum Message-ID: Hello stackers, I would like to nominate [0] myself for Magnum PTL. In the recent releases the focus of the team was around our kubernetes driver. I would like to continue that direction for wallaby while trying to improve cloud administrators' management. More specifically: * simplifying addon deployment for our kubernetes driver (the current proposal relies on helm3) * allow control plane resizing * optionally run the clusters' control plane on the cloud operators' tenant * investigate addon upgrades The goal of the project is clearly to be very lean and rely as much as possible on the dependent communities like: kubernetes, containerd, calico, helm and so on. Thank you for considering me, Spyros Trigazis [0] https://review.opendev.org/754727 -------------- next part -------------- An HTML attachment was scrubbed... URL: From opensrloo at gmail.com Mon Sep 28 15:04:04 2020 From: opensrloo at gmail.com (Ruby Loo) Date: Mon, 28 Sep 2020 11:04:04 -0400 Subject: [ironic] Proposing returning Jay Faulkner to ironic-core In-Reply-To: References: Message-ID: ++ Welcome back!!! --ruby On Mon, Sep 28, 2020 at 10:18 AM Dmitry Tantsur wrote: > Not a single objection here, welcome back Jay! > > On Mon, Sep 28, 2020 at 3:47 PM Julia Kreger > wrote: > >> Greetings ironic contributors, >> >> I'm sure many of you have noticed JayF has been more active on IRC >> over the past year. Recently he started reviewing and providing >> feedback on changes in ironic-python-agent as well as some work to add >> new features and fix some very not fun bugs. >> >> Given that he was ironic-core when he departed the community, I >> believe it is only fair that we return those rights to him. >> >> Any objections? >> >> -Julia >> >> > > -- > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > O'Neill > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmendiza at redhat.com Mon Sep 28 15:17:42 2020 From: dmendiza at redhat.com (Douglas Mendizabal) Date: Mon, 28 Sep 2020 10:17:42 -0500 Subject: =?UTF-8?Q?=5belection=5d=5bbarbican=5d_Add_Douglas_Mendiz=c3=a1bal_?= =?UTF-8?Q?candidacy_for_Barbican_PTL?= Message-ID: Hello OpenStack, I would like to nominate myself for Barbican PTL. [1] My focus for the new cycle will be to continue the maintenance and development of the project. We will continue to improve usability as well as continue to enable cross-project encryption features. We hope to deliver our first microversion revision early in the Wallaby cycle, so tha we can finally release the Secret Consumers feature. Thanks, Douglas Mendizábal [1] https://review.opendev.org/#/c/754735/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From zigo at debian.org Mon Sep 28 15:23:54 2020 From: zigo at debian.org (Thomas Goirand) Date: Mon, 28 Sep 2020 17:23:54 +0200 Subject: Help with eventlet 0.26.1 and dnspython >= 2 Message-ID: <56ab7d1e-c8d6-8505-dd81-fb4a20534fdf@debian.org> Hi, As you may know, eventlet is incompatible with dnspython >= 2.0.0.0rc1. See [1] for the details. However, Debian unstable has 2.0.0. Would there be some good soul willing to help me fix this situation? I would need a patch to fix this, but I'm really not sure how to start. Cheers, Thomas Goirand (zigo) From pierre at stackhpc.com Mon Sep 28 16:02:52 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Mon, 28 Sep 2020 18:02:52 +0200 Subject: [blazar][election] PTL candidacy for Wallaby Message-ID: Hi, I would like to self-nominate for the role of PTL of Blazar for the Wallaby release cycle. I have been PTL since the Stein cycle and I am willing to continue. Unfortunately, Blazar has been suffering from low participation in the project. I will keep engaging with current users and potential new ones, in order to get more contributors involved in the project. Thank you for your support, Pierre Riteau (priteau) From radoslaw.piliszek at gmail.com Mon Sep 28 16:10:01 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 28 Sep 2020 18:10:01 +0200 Subject: Help with eventlet 0.26.1 and dnspython >= 2 In-Reply-To: <56ab7d1e-c8d6-8505-dd81-fb4a20534fdf@debian.org> References: <56ab7d1e-c8d6-8505-dd81-fb4a20534fdf@debian.org> Message-ID: The [1] reference is missing. -yoctozepto On Mon, Sep 28, 2020 at 5:30 PM Thomas Goirand wrote: > > Hi, > > As you may know, eventlet is incompatible with dnspython >= 2.0.0.0rc1. > See [1] for the details. However, Debian unstable has 2.0.0. > > Would there be some good soul willing to help me fix this situation? I > would need a patch to fix this, but I'm really not sure how to start. > > Cheers, > > Thomas Goirand (zigo) > From mthode at mthode.org Mon Sep 28 16:21:21 2020 From: mthode at mthode.org (Matthew Thode) Date: Mon, 28 Sep 2020 11:21:21 -0500 Subject: [requirements][election] PTL candidacy for Wallaby Message-ID: <20200928162121.tvln2ma42cienfyg@mthode.org> I would like to announce my candidacy for PTL of the Requirements project for the Wallaby cycle. The following will be my goals for the cycle, in order of importance: 1. The primary goal is to keep a tight rein on global-requirements and upper-constraints updates. (Keep things working well) 2. Un-cap requirements where possible (stuff like botocore). 3. Audit global-requirements and upper-constraints for redundancies. One of the rules we have for new entrants to global-requirements and/or upper-constraints is that they be non-redundant. Keeping that rule in mind, audit the list of requirements for possible redundancies and if possible, reduce the number of requirements we manage. Json libs are on the short list this go around. I look forward to continue working with you in this cycle, as your PTL or not. Thanks for your time, -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From haleyb.dev at gmail.com Mon Sep 28 16:29:53 2020 From: haleyb.dev at gmail.com (Brian Haley) Date: Mon, 28 Sep 2020 12:29:53 -0400 Subject: [neutron] Bug deputy report for week of September 21st Message-ID: <7f3c4c11-d42b-24dd-9a41-372aab2bc404@gmail.com> Hi, I was Neutron bug deputy last week. Below is a short summary about the 21 (!) reported bugs. -Brian Critical bugs ------------- * https://bugs.launchpad.net/neutron/+bug/1896603 - ovn-octavia-provider: Cannot create listener due to alowed_cidrs validation - https://review.opendev.org/#/c/753302/ * https://bugs.launchpad.net/bugs/1896678 - [OVN Octavia Provider] test_port_forwarding failing in gate - https://review.opendev.org/#/c/753419/ * https://bugs.launchpad.net/neutron/+bug/1896766 - OVN jobs failing due to failed OVS compilation - https://review.opendev.org/#/c/751882/ * https://bugs.launchpad.net/neutron/+bug/1897326 - scenario test test_floating_ip_update is failing often on Ubuntu 20.04 - Possibly related to https://bugs.launchpad.net/neutron/+bug/1896735 - Slawek took ownership High bugs --------- * https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1896506 - keepalived_use_no_track default=True breaks bionic deploys - Needs owner * https://bugs.launchpad.net/neutron/+bug/1896677 - [OVN Octavia Provider] OVN provider fails creating pool without subnet ID - Assigned to haleyb * https://bugs.launchpad.net/neutron/+bug/1896735 - Scenario tests from neutron_tempest_plugin.scenario.test_port_forwardings.PortForwardingTestJSON failing due to ssh failure - https://review.opendev.org/#/c/753552/ proposed * https://bugs.launchpad.net/neutron/+bug/1896827 - port update api should not lose foreign external_ids - https://review.opendev.org/#/c/753833/ proposed * https://bugs.launchpad.net/neutron/+bug/1896945 - dnsmasq >= 2.81 not responding to DHCP requests with current q-dhcp configs - Looks similar to https://bugs.launchpad.net/neutron/+bug/1876094 but that fix is merged - Needs owner * https://bugs.launchpad.net/neutron/+bug/1897095 - [OVN] ARP/MAC handling for routers connected to external network is scaling poorly - https://review.opendev.org/#/c/752678/ proposed Medium bugs ----------- * https://bugs.launchpad.net/neutron/+bug/1896470 - OVN migration infrared plugin does not remove ML2OVS-specific templates - https://review.opendev.org/#/c/752918/ proposed * https://bugs.launchpad.net/neutron/+bug/1896587 - iptables firewall driver don't drops invalid packets which match some SG rule - Fix for https://bugs.launchpad.net/neutron/+bug/1460741 moved this iptables chain lower, so more investigation required. 
* https://bugs.launchpad.net/tempest/+bug/1896592 - [neutron-tempest-plugin] test_dhcpv6_stateless_* clashing when creating a IPv6 subnet - Actually a bug in the tempest repo, added component - Fixed in neutron-tempest-plugin with https://review.opendev.org/#/c/560465/ - Marked Incomplete by Rodolfo as he noticed server issues (500's) in the logs * https://bugs.launchpad.net/neutron/+bug/1896733 - [FT] Error while executing "remove_nodes_from_host" method - Needs owner * https://bugs.launchpad.net/neutron/+bug/1896850 - dhcp release fails when client_id is specified - https://review.opendev.org/#/c/753865/ proposed * https://bugs.launchpad.net/neutron/+bug/1896933 - Exception when plugin creates a network without specifying the MTU - https://review.opendev.org/#/c/753717/ proposed Low bugs -------- * https://bugs.launchpad.net/neutron/+bug/1896588 - [Security Groups] When using neutron CLI, if non-existing project is given when listing the SGs, a default SG is created - Rodolfo working on it * https://bugs.launchpad.net/neutron/+bug/1897100 - Improve port listing command - https://review.opendev.org/#/c/754117/ proposed * https://bugs.launchpad.net/neutron/+bug/1897423 - [L3] Let agent extension do delete router first - https://review.opendev.org/#/c/736258/ proposed * https://bugs.launchpad.net/neutron/+bug/1895876 - When accounts.yaml is used neutron_tempest_plugin fails with "Invalid input for tenant_id. Reason: 'None' is not a valid string." - https://review.opendev.org/754782 proposed Wishlist bugs ------------- * https://bugs.launchpad.net/neutron/+bug/1896920 - Unnecessary error log when checking if a device is ready - https://review.opendev.org/#/c/754005/ proposed Further triage required ----------------------- * None From amy at demarco.com Mon Sep 28 17:23:56 2020 From: amy at demarco.com (Amy Marrich) Date: Mon, 28 Sep 2020 12:23:56 -0500 Subject: TC Candidacy for Wallaby Message-ID: Hey all, I am declaring my candidacy for the Technical Committee. I have served in various leadership positions within the community over the years and feel I have developed a well rounded view of what goes on within OpenStack from the project level up to the OSF Board level. I feel I can represent all members of the community from the folks running small private clouds to the teams running large public clouds. I also feel my experiences helping with OpenStack Upstream Institute, the OpenStack mentoring program, and the Git and Gerrit workshops help me to connect with contributers from those just starting out to those who have been here from the beginning. While I'm admittedly not a developer by profession, I have code and documentation contributions in multiple projects which I think is important in seeing the big picture as to how pieces fit together in the larger OpenStack codebase. Thanks, Amy Marrich (spotz) -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Mon Sep 28 17:44:10 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 28 Sep 2020 13:44:10 -0400 Subject: [tc] weekly update Message-ID: Hi everyone,Here's an update for what happened in the OpenStack TC this week. You can get more information by checking for changes in openstack/governance repository. We've also included a few references to some important mailing list threads that you should check out. 
# Patches ## Open Reviews - Add assert:supports-standalone https://review.opendev.org/722399 - Add election schedule exceptions in charter https://review.opendev.org/751941 - Remove tc:approved-release tag https://review.opendev.org/749363 - Clarify impact on releases for SIGs https://review.opendev.org/752699 - Migrate rpm-packaging to a SIG https://review.opendev.org/752661 - Retire openstack/os-loganalyze https://review.opendev.org/753834 - Retire devstack-plugin-pika project https://review.opendev.org/748730 - Reorder repos alphabetically https://review.opendev.org/754097 - Add Ironic charms to OpenStack charms https://review.opendev.org/754099 - Define TC-approved release in a resolution https://review.opendev.org/752256 # Other Reminders - PTG Brainstorming: https://etherpad.opendev.org/p/tc-wallaby-ptg - PTG Registration: https://october2020ptg.eventbrite.com Thanks for reading! Mohammed & Kendall -- Mohammed Naser VEXXHOST, Inc. From dpeacock at redhat.com Mon Sep 28 18:47:24 2020 From: dpeacock at redhat.com (David Peacock) Date: Mon, 28 Sep 2020 14:47:24 -0400 Subject: [tripleo][validations] new Validation Framework demos In-Reply-To: References: Message-ID: Hey Mathieu, others, On Wed, Jul 1, 2020 at 11:07 AM Mathieu Bultel wrote: > So it shows in this demo how the deploy steps playbook can be logged, > parsed and shown with the VF CLI. This can be improve, modify & so on of > course... it's basic usage. > https://asciinema.org/a/344484 > https://asciinema.org/a/344509 > Thank you for running with this. Based on your work, I've put together a Vagrantfile to allow developers to get up to speed very quickly with a working validations environment. This is a minimal environment and is not TripleO dependent, meaning it's very light, and ready to go in seconds. I can see that this might grow as we think of other tooling required to be productive, so does anyone have any interest in me pushing it ot a repo somewhere, and if so, where would be most suitable? Cheers, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Mon Sep 28 19:36:16 2020 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 28 Sep 2020 12:36:16 -0700 Subject: [election][designate] PTL candidacy for Wallaby Message-ID: My fellow OpenStack community, I would like to announce my candidacy for PTL of Designate for the Wallaby cycle. As you probably know I have been the Octavia PTL five times since the Pike release. I will continue to be involved in the Octavia project, but it is time for a change! I feel that allowing for leadership change on projects is healthy and gives others the opportunity to lead. In my downstream day job I am becoming more involved in supporting Designate as part of our OpenStack offering. This means I will be contributing more to Designate and supporting the Designate community. One of my Designate goals for the Wallaby release will be expanding the documentation available. For example, I would like to create some user cookbooks, similar to those in the Octavia documentation, that would guide users through scenarios using Designate. I am also excited to see a proposed patch for DNS “views” in Designate, so I will be supporting that development effort in any way I can help. 
Thank you for your support and your consideration for Wallaby, Michael Johnson (johnsom) From rosmaita.fossdev at gmail.com Mon Sep 28 19:37:11 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 28 Sep 2020 15:37:11 -0400 Subject: [cinder] this week's meeting in video+IRC Message-ID: <5bfd6087-2800-50e7-fd81-a4e6cc27d7af@gmail.com> I forgot to mention at last week's Cinder meeting that the meeting for Wednesday 30 September, being the last meeting of the month, will be held in both videoconference and IRC at the regularly scheduled time of 1400 UTC. Here's a quick reminder of the video meeting rules we've agreed to: * Everyone will keep IRC open during the meeting. * We'll take notes in IRC to leave a record similar to what we have for our regular IRC meetings. * Some people are more comfortable communicating in written English. So at any point, any attendee may request that the discussion of the current topic be conducted entirely in IRC. connection info: https://bluejeans.com/3228528973 cheers, brian From satish.txt at gmail.com Mon Sep 28 19:43:25 2020 From: satish.txt at gmail.com (Satish Patel) Date: Mon, 28 Sep 2020 15:43:25 -0400 Subject: Ussuri Octavia load balancer on OVS In-Reply-To: References: Message-ID: Hi, I was dealing with the same issue a few weeks back so curious are you having this problem on AIO or 3 node controllers? ~S On Sun, Sep 27, 2020 at 10:27 AM pwm wrote: > > Hi, > I using the following setup for Octavia load balancer on OVS > Ansible openstack_user_config.yml > - network: > container_bridge: "br-lbaas" > container_type: "veth" > container_interface: "eth14" > host_bind_override: "eth14" > ip_from_q: "octavia" > type: "flat" > net_name: "octavia" > group_binds: > - neutron_openvswitch_agent > - octavia-worker > - octavia-housekeeping > - octavia-health-manager > > user_variables.yml > octavia_provider_network_name: "octavia" > octavia_provider_network_type: flat > octavia_neutron_management_network_name: lbaas-mgmt > > /etc/netplan/50-cloud-init.yaml > br-lbaas: > dhcp4: no > interfaces: [ bond10 ] > addresses: [] > parameters: > stp: false > forward-delay: 0 > bond10: > dhcp4: no > addresses: [] > interfaces: [ens16] > parameters: > mode: balance-tlb > > brctl show > bridge name bridge id STP enabled interfaces > br-lbaas 8000.d60e4e80f672 no 2ea34552_eth14 > bond10 > > However, I am getting the following error when creating the load balance > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.29.233.47', port=9443 > > The Octavia api container unable to connect to the amphora instance. > Any missing configuration, cause I need to manually add in the eth14 interface to the br-lbaas bridge in order to fix the connection issue > brctl addif br-lbaas eth14 > > Thanks From jungleboyj at gmail.com Mon Sep 28 20:03:54 2020 From: jungleboyj at gmail.com (Jay Bryant) Date: Mon, 28 Sep 2020 15:03:54 -0500 Subject: [all][election][tc] Candidacy Message-ID: Dear OpenStack Community, This note is to officially announce my intention to be re-elected to the OpenStack TC in the Wallaby cycle. At this point, I think that many of you know me, but for those who don't I will briefly introduce myself. I have been a part of the OpenStack Community since early in 2013. At that time I was working for IBM and acted as the liaison between the storage teams developing Cinder drivers for IBM's storage solutions and the Community. 
I am currently working at Lenovo as the Principal Cloud Architect, leading a team that is developing automated infrastructure deployment and cloud deploy solutions for several different cloud platforms. During my time in the OpenStack Community I have been a Cinder core team member and served as PTL for Cinder for two years. I have been an active part of on-boarding for new Community members, helping and supporting the Upstream Institute sessions since the 2016 session in Barcelona. I also have been active in projects outside of Cinder, acting as Oslo liaison for Cinder and helping the documentation team back in the days that that was a thing. Most recently, I have served a year on the OpenStack TC. I will admit that the experience has come with a learning curve. My time has also been during a time of change for the TC and for OpenStack in general. During my time on the TC I was proud to organize the first review of Technical Committee questions in the annual user survey. The exercise was interesting and I feel it brought good information to the TC with regards to what the community needs to be doing to best serve our users. I also have been active in the discussions around how to handle leaderless projects. I feel that my experience as a PTL for a notable period of time and core team member for over 6 years gives me perspective to talk about how we continue to develop OpenStack project structure. During my last year, my focus at work has changed from pure development to being more of a designer/consumer of cloud solutions. The solutions are not exclusively OpenStack, but now include OpenShift and VMware based clouds. The change in focus has given me new perspective on how OpenStack fits into the cloud ecosystem and I think it is perspective that will be helpful in the coming year as we now have a combined Technical Committee and User Committee. If re-elected I will support getting the OpenStack community connected with other cloud communities, especially the k8s community. I will continue to support efforts to adapt OpenStack's processes and projects to make things easier for projects both large and small. I also hope to help to set goals for the community that will help it continue to be a vibrant and relevant cloud solution for years to come! Working with the OpenStack Community has always been something that I have been very proud to do in my career. I thank you all for the opportunity to serve as a TC member the last year and hope that I can continue to serve the community in this capacity. Sincerely, Jay Bryant IRC (Freenode): jungleboyj -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Mon Sep 28 20:08:02 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 28 Sep 2020 16:08:02 -0400 Subject: [election][cinder] PTL candidacy for Wallaby Message-ID: Hello everyone, I hereby announce my candidacy for Cinder PTL for the Wallaby cycle. I've been PTL for two cycles, so you have a pretty good idea by now of what it's like to work on the Cinder project with me in that role, and whether or not it would be a good idea for me to continue. There are a few things I'd like to see the project emphasize during the Wallaby development cycle; hopefully the team will self-organize around these themes. * Continued development of the cinder-tempest-plugin. 
We need more automated tests for more complicated scenarios, partly to prevent regressions for fixed bugs, but also to detect some problems before they are reported by users. * Better understanding of why some of the gate jobs are intermittently failing, particularly the backup-related tests in the tempest-storage suite. * Better review bandwidth. The core team we carried over from Ussuri to Victoria is still active, but as their careers have progressed, they have taken on more responsibilities in their day jobs, and their review counts have declined a bit. We added Lucio as a new core during Victoria; it would be good to add another person or two during the Wallaby cycle. Anyone working on the cinder project who's interested in working to get themselves into a position where they could be nominated as a cinder core, please contact me (or any of the current cores) to discuss what the expectations are. Those are what's been on my mind lately. As far as specific features, etc., those will emerge from our PTG discussions, to which I encourage you to contribute: https://etherpad.opendev.org/p/wallaby-ptg-cinder-planning We've had productive virtual mid-cycle meetings for two cycles now, and the cinder-weekly-meeting-once-a-month-in-videoconference seems to help keep the team connected, so I'd like to continue that. The team adapted well to the virtual PTG format for Victoria, so I'm confident we'll have a productive virtual Wallaby PTG, though I sincerely hope we'll once again have the opportunity to meet face-to-face for the 'X' PTG. As far as external interest in the Cinder project goes, we've added some new drivers in Victoria and already have one new driver proposed for Wallaby, with at least one more on the way, which is nice. Thanks for reading this far, and thank you for your consideration. Brian Rosmaita (rosmaita) From tburke at nvidia.com Mon Sep 28 22:35:06 2020 From: tburke at nvidia.com (Tim Burke) Date: Mon, 28 Sep 2020 15:35:06 -0700 Subject: Swift Installation Error In-Reply-To: References: Message-ID: <1cfcf73b-958f-48d7-1ffc-4012cf2a3fc7@nvidia.com> On 9/28/20 12:11 AM, WALSH CHANG wrote: > *External email: Use caution opening links or attachments* > > > To whom it may concern, > > I am very new to OpenStack. > I had an error when I install the Swift service. > Error trying to load config from /etc/swift/proxy-server.conf: Entry > point 'proxy\naccount_autocreate = True' not found in egg 'swift' (dir: > /usr/lib/python2.7/dist-packages; protocols: paste.app_factory, > paste.composite_factory, paste.composit_factory; entry_points: ) > > Just wondering if it's ok to have storage node and controller node on > the same device. > I use the controller node as the storage node, and use the flash drive > as the storage space. > > Not sure if this is the reason for the error. > It will be very appreciated if someone can provide some suggestions. > > Kind regards, > Walsh > Any chance there's an extra space at the start of the line? Like [app:proxy-server] use = egg:swift#proxy allow_account_management = True instead of [app:proxy-server] use = egg:swift#proxy allow_account_management = True ? Looks like the leading space will invoke some line folding in the config parser. As far as running storage and proxy services on the same box (or VM, or whatever), that should work fine -- in fact, it's a fairly common setup even for large clusters. 
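
If you want to see the folding behaviour on its own, here is a quick sketch using only the standard-library ConfigParser (this is not Swift or PasteDeploy code, just an illustration of the continuation-line rule the config loader relies on):

import configparser

BROKEN = """
[app:proxy-server]
use = egg:swift#proxy
 account_autocreate = True
"""

FIXED = """
[app:proxy-server]
use = egg:swift#proxy
account_autocreate = True
"""

for label, text in (("broken", BROKEN), ("fixed", FIXED)):
    parser = configparser.ConfigParser()
    parser.read_string(text)
    # With the leading space, the second line is folded into the value of
    # "use"; without it, account_autocreate is parsed as its own option.
    print(label, repr(parser.get("app:proxy-server", "use")))

The broken case prints 'egg:swift#proxy\naccount_autocreate = True', which is exactly the mangled entry point name showing up in your error once the egg prefix is stripped.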
Hope it helps, Tim From masayuki.igawa at gmail.com Mon Sep 28 23:45:15 2020 From: masayuki.igawa at gmail.com (Masayuki Igawa) Date: Tue, 29 Sep 2020 08:45:15 +0900 Subject: [election][qa] PTL candidacy for Wallaby Message-ID: Hi everyone, I want to propose my candidacy and continue serving as Quality Assurance PTL for the Wallaby cycle. First off, I would like to thank you all the contributors, core reviewers, and anyone involved and makes the OpenStack better. In Victoria cycle, we've added a job for generating doc from docstrings and added many docstrings for that already. We can see the outcomes here[2]. And we've also made a workflow[3][4] to get along with the upper-constraints in tox.ini. We still have some more planned work pending for Victoria, such as making tempest scenario manager a stable interface, improving tempest cleanup, etc. I think we will accomplish them in Victoria or Wallaby cycle. Along with daily QA activities, my priorities for QA for the next cycle will be: * Guiding and motivating more contributors to QA projects, improving documentation and advertising OpenStack QA projects. * Stability of Tempest scenario manager. * Completing Victoria priority items if it's still remaining. It's hard to accomplish without our collaboration. So, let's do it together! [1] http://stackalytics.com/?user_id=igawa [2] https://docs.openstack.org/tempest/latest/tests/modules.html [3] https://docs.openstack.org/tempest/latest/requirement_upper_constraint_for_tempest.html [4] https://wiki.openstack.org/wiki/QA/releases#Project_with_release_mode:_cycle-with-intermediary Thanks for your consideration! -- Masayuki Igawa (masayukig) From tkajinam at redhat.com Mon Sep 28 23:57:54 2020 From: tkajinam at redhat.com (Takashi Kajinami) Date: Tue, 29 Sep 2020 08:57:54 +0900 Subject: [election][storlets] PTL candidacy for Wallaby Message-ID: Hi All, I'd like to announce my candidacy to run PTL role for Storlets project in the next Wallaby cycles, to continue PTL role from the previous Victoria cycle. In this cycle, I'll focus on the existing proposals to improve stability and operability of Storlets. Since we have discussion slots in coming PTG, I'll moderate the discussion there and will try to make these proposals land during this cycle. Thank you for your consideration ! Thank you, Takashi Kajinami -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Sep 29 00:17:00 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 29 Sep 2020 00:17:00 +0000 Subject: [all][elections][ptl][tc] Combined PTL/TC Nominations ending in less than 24 hours Message-ID: <20200929001700.d4i77liiamza4v5s@yuggoth.org> Another reminder that we are in the last hours for declaring PTL and TC candidacies. Nominations are open until Sep 29, 2020 23:45 UTC. If you want to stand for election, don't delay, follow the instructions at [1] to make sure the community knows your intentions. Make sure your nomination has been submitted to the openstack/election repository and approved by election officials. 
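
For anyone who has not been through it before, submitting a candidacy is just a normal Gerrit change against the openstack/election repository, roughly along these lines (the exact file name and directory layout are described in the instructions at [1], so treat the path here as illustrative only):

git clone https://opendev.org/openstack/election
cd election
# write a plain-text candidacy statement; see [1] for the expected
# candidates/<cycle>/<Project>/<file> naming
$EDITOR candidates/wallaby/Barbican/example@example.org
git add candidates/
git commit      # one-line subject describing the candidacy
git review      # needs git-review and an OpenDev/Gerrit account

Once the change is up, election officials review and confirm it, which is what the note about pending nominations in the statistics below refers to.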
Election statistics[2]: Nominations started @ 2020-09-22 23:45:00 UTC Nominations end @ 2020-09-29 23:45:00 UTC Nominations duration : 7 days, 0:00:00 Nominations remaining : 23:38:26 Nominations progress : 85.93% --------------------------------------------------- Projects[1] : 55 Projects with candidates : 25 ( 45.45%) Projects with election : 0 ( 0.00%) --------------------------------------------------- Need election : 0 () Need appointment : 30 (Barbican* Blazar* Cinder* Cloudkitty Cyborg Designate* Ec2_Api Freezer Horizon Ironic* Karbor Magnum* Manila Masakari Monasca Murano Octavia OpenStack_Charms Oslo Packaging_Rpm Placement Qinling Quality_Assurance* Requirements* Searchlight Senlin Solum Storlets* Telemetry Watcher) =================================================== Stats gathered @ 2020-09-29 00:06:34 UTC (* indicates teams with pending nominations which haven't been confirmed by more than one election official yet) This means that with approximately 2 days left, 30 projects will be deemed leaderless. In this case the TC will oversee PTL selection as described by [3]. There are also currently 2 confirmed candidates and 3 additional pending nominations for the 4 open Technical Committee seats, indicating it's likely there will be a runoff poll for the TC election. Thank you, [1] https://governance.openstack.org/election/#how-to-submit-a-candidacy [2] Any open reviews at https://review.openstack.org/#/q/is:open+project:openstack/election have not been factored into these stats. [3] https://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html -- Jeremy Stanley on behalf of the OpenStack Technical Election Officials -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From arne.wiebalck at cern.ch Tue Sep 29 06:05:21 2020 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Tue, 29 Sep 2020 08:05:21 +0200 Subject: [ironic] Proposing returning Jay Faulkner to ironic-core In-Reply-To: References: Message-ID: <505cdfae-2a10-654b-08ad-693ec5a4ec55@cern.ch> Of course no objections, welcome back, Jay! On 28.09.20 15:44, Julia Kreger wrote: > Greetings ironic contributors, > > I'm sure many of you have noticed JayF has been more active on IRC > over the past year. Recently he started reviewing and providing > feedback on changes in ironic-python-agent as well as some work to add > new features and fix some very not fun bugs. > > Given that he was ironic-core when he departed the community, I > believe it is only fair that we return those rights to him. > > Any objections? > > -Julia > From ykarel at redhat.com Tue Sep 29 07:35:17 2020 From: ykarel at redhat.com (Yatin Karel) Date: Tue, 29 Sep 2020 13:05:17 +0530 Subject: [tripleo]] tripleo victoria rc1 In-Reply-To: References: Message-ID: Hi, On Mon, Sep 28, 2020 at 1:08 PM Marios Andreou wrote: > > > > On Mon, Sep 28, 2020 at 10:16 AM Yatin Karel wrote: >> >> Hi Marios, >> >> << we don't normally/aren't expected to cut release candidates. >> However after discussion with and guidance from Wes I decided to >> propose an rc1 as practice since I << haven't used the new releases >> tool before [4]. >> << The proposal is up at [1] but I've set -1 there on request from >> reviewers; we can't merge that until [2] merges first. 
>> >> Is this just about releasing RC Tags or changing the release-model >> that tripleo used to follow:- cycle-with-intermediary(cycle-trailing >> earlier) vs cycle-with-rc. Maybe you can share more context with the > > > this is just about bumping the git tags on the projects and not about changing the release model. > >> >> discussion points as to what's different this time and reasoning for >> the same to have a clear picture. > > > nothing different wrt the release model - see the diff it is just bumping the hash and version https://review.opendev.org/gitweb?p=openstack%2Freleases.git;a=commitdiff;h=43cca15e8525992f6c3bfba68d3421c64c960bf9 . We are just making a release to coincide with the wider openstack Victoria RC1 even though we aren't expected to follow the rc model with our 'cycle with intermediary' on these projects. > >> >> And if possible can you also share what's the plan for GA release for >> TripleO like how it will be this time. > > > So weshay is still PTL, and we haven't discussed this in any great detail, except that we may have to delay a little bit cutting branch and releasing victoria final. > >> Generally with rc tags stable branch is also cut but i don't see that >> in https://review.opendev.org/#/c/754346/. And i hope this will not be > > > right I guess the confusion is that this isn't a proper RC since we don't have those 'officially' .It is just a version bump that coincides with the victoria rc1 milestone end of last week. > To me what caused the confusion is releasing an RC candidate without a stable/ branch for branched projects like Tripleo's and the missing why's on doing differently this time and how doing differently this time will help. > hope it helps, also, did you forget to reply-all? should i send to the list (just confirmed with ykarel and re-adding list to addresses) > > thanks, > > marios > >> >> merged until victoria tripleo jobs are ready if cutting >> stable/victoria branch. >> >> On Fri, Sep 25, 2020 at 10:05 PM Marios Andreou wrote: >> > >> > hello TripleO folks o/ >> > >> > we don't normally/aren't expected to cut release candidates. However after discussion with and guidance from Wes I decided to propose an rc1 as practice since I haven't used the new releases tool before [4]. >> > >> > The proposal is up at [1] but I've set -1 there on request from reviewers; we can't merge that until [2] merges first. >> > >> > If you are interested please check [1] and add any comments. In particular let me know if you want to include a release for one of the independent repos [3] (though for validations-common/libs we need to reach out to the validations team they are managing those AFAIK). >> > >> > thanks, hope you enjoy your weekend! >> > >> > [1] https://review.opendev.org/#/c/754346/ >> > [2] https://review.opendev.org/754347 >> > [3] https://releases.openstack.org/teams/tripleo.html#independent >> > [4] https://releases.openstack.org/reference/using.html#using-new-release-command >> >> Thanks and Regards >> Yatin Karel >> Thanks and Regards Yatin Karel From marios at redhat.com Tue Sep 29 07:56:55 2020 From: marios at redhat.com (Marios Andreou) Date: Tue, 29 Sep 2020 10:56:55 +0300 Subject: [tripleo]] tripleo victoria rc1 In-Reply-To: References: Message-ID: On Tue, Sep 29, 2020 at 10:36 AM Yatin Karel wrote: > Hi, > > On Mon, Sep 28, 2020 at 1:08 PM Marios Andreou wrote: > > > > > > > > On Mon, Sep 28, 2020 at 10:16 AM Yatin Karel wrote: > >> > >> Hi Marios, > >> > >> << we don't normally/aren't expected to cut release candidates. 
> >> However after discussion with and guidance from Wes I decided to > >> propose an rc1 as practice since I << haven't used the new releases > >> tool before [4]. > >> << The proposal is up at [1] but I've set -1 there on request from > >> reviewers; we can't merge that until [2] merges first. > >> > >> Is this just about releasing RC Tags or changing the release-model > >> that tripleo used to follow:- cycle-with-intermediary(cycle-trailing > >> earlier) vs cycle-with-rc. Maybe you can share more context with the > > > > > > this is just about bumping the git tags on the projects and not about > changing the release model. > > > >> > >> discussion points as to what's different this time and reasoning for > >> the same to have a clear picture. > > > > > > nothing different wrt the release model - see the diff it is just > bumping the hash and version > https://review.opendev.org/gitweb?p=openstack%2Freleases.git;a=commitdiff;h=43cca15e8525992f6c3bfba68d3421c64c960bf9 > . We are just making a release to coincide with the wider openstack > Victoria RC1 even though we aren't expected to follow the rc model with our > 'cycle with intermediary' on these projects. > > > > >> > >> And if possible can you also share what's the plan for GA release for > >> TripleO like how it will be this time. > > > > > > So weshay is still PTL, and we haven't discussed this in any great > detail, except that we may have to delay a little bit cutting branch and > releasing victoria final. > > > >> Generally with rc tags stable branch is also cut but i don't see that > >> in https://review.opendev.org/#/c/754346/. And i hope this will not be > > > > > > right I guess the confusion is that this isn't a proper RC since we > don't have those 'officially' .It is just a version bump that coincides > with the victoria rc1 milestone end of last week. > > > > To me what caused the confusion is releasing an RC candidate without a > stable/ branch for branched projects like Tripleo's and the missing > why's on doing differently this time and how doing differently this > time will help. > Sure - thanks for helping me clarify it. It is my fault for calling it an RC since it is not a proper RC. I updated the commit message https://review.opendev.org/#/c/754346/ so it is clearer now thanks, marios > > > > hope it helps, also, did you forget to reply-all? should i send to the > list (just confirmed with ykarel and re-adding list to addresses) > > > > thanks, > > > > marios > > > >> > >> merged until victoria tripleo jobs are ready if cutting > >> stable/victoria branch. > >> > >> On Fri, Sep 25, 2020 at 10:05 PM Marios Andreou > wrote: > >> > > >> > hello TripleO folks o/ > >> > > >> > we don't normally/aren't expected to cut release candidates. However > after discussion with and guidance from Wes I decided to propose an rc1 as > practice since I haven't used the new releases tool before [4]. > >> > > >> > The proposal is up at [1] but I've set -1 there on request from > reviewers; we can't merge that until [2] merges first. > >> > > >> > If you are interested please check [1] and add any comments. In > particular let me know if you want to include a release for one of the > independent repos [3] (though for validations-common/libs we need to reach > out to the validations team they are managing those AFAIK). > >> > > >> > thanks, hope you enjoy your weekend! 
> >> > > >> > [1] https://review.opendev.org/#/c/754346/ > >> > [2] https://review.opendev.org/754347 > >> > [3] https://releases.openstack.org/teams/tripleo.html#independent > >> > [4] > https://releases.openstack.org/reference/using.html#using-new-release-command > >> > >> Thanks and Regards > >> Yatin Karel > >> > > Thanks and Regards > Yatin Karel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Tue Sep 29 08:43:41 2020 From: zigo at debian.org (Thomas Goirand) Date: Tue, 29 Sep 2020 10:43:41 +0200 Subject: Help with eventlet 0.26.1 and dnspython >= 2 In-Reply-To: References: <56ab7d1e-c8d6-8505-dd81-fb4a20534fdf@debian.org> Message-ID: On 9/28/20 6:10 PM, Radosław Piliszek wrote: > On Mon, Sep 28, 2020 at 5:30 PM Thomas Goirand wrote: >> >> Hi, >> >> As you may know, eventlet is incompatible with dnspython >= 2.0.0.0rc1. >> See [1] for the details. However, Debian unstable has 2.0.0. >> >> Would there be some good soul willing to help me fix this situation? I >> would need a patch to fix this, but I'm really not sure how to start. >> >> Cheers, >> >> Thomas Goirand (zigo) > > The [1] reference is missing. > > -yoctozepto Indeed, sorry. So: [1] https://github.com/eventlet/eventlet/issues/619 I've already integrated this patch in the Debian package: https://github.com/eventlet/eventlet/commit/46fc185c8f92008c65aef2713fc1445bfc5f6fec However, there's still this failure, related to #619 (linked above): ERROR: test_noraise_dns_tcp (tests.greendns_test.TinyDNSTests) -------------------------------------------------------------- Traceback (most recent call last): File "/<>/.pybuild/cpython3_3.8_eventlet/build/tests/greendns_test.py", line 904, in test_noraise_dns_tcp self.assertEqual(response.rrset.items[0].address, expected_ip) KeyError: 0 Can anyone solve this? Cheers, Thomas Goirand (zigo) From e0ne at e0ne.info Tue Sep 29 09:02:55 2020 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Tue, 29 Sep 2020 12:02:55 +0300 Subject: [horizon][dev] Horizon.next In-Reply-To: <20200918165821.4mhoe5cpt2rlqxti@yuggoth.org> References: <20200918165821.4mhoe5cpt2rlqxti@yuggoth.org> Message-ID: Hi Chris, Thanks a lot for your help. It sounds amazing and promising. Feel free to reach me in IRC (e0ne at #openstack-horizon) channel. We've got this topic to discuss it during our Virtual PTG [1], so I hope you'll join us there. [1] https://etherpad.opendev.org/p/horizon-w-ptg Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Fri, Sep 18, 2020 at 7:59 PM Jeremy Stanley wrote: > Related, there's a suggestion in the StarlingX community at the > moment about the possibility of incorporating SAP's Elektra UI > ( https://github.com/sapcc/elektra ) as an alternative to Horizon: > > > http://lists.starlingx.io/pipermail/starlingx-discuss/2020-September/009659.html > > Looks like it's a mix of Apache and MIT/Expat (Apache-compatible) > licensed Javascript and Ruby/Rails. Given there's not really any > Ruby in OpenStack proper (except for Ruby-based configuration > management and maybe SDKs), I expect that may not be a great > starting point for a next-generation Horizon, but it could still > serve as a source of some inspiration. Also this highlights an > interest from some of the StarlingX contributors in Horizon > replacements, so maybe they'd be willing to contribute... or at > least help with requirements gathering. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mbultel at redhat.com Tue Sep 29 09:31:37 2020 From: mbultel at redhat.com (Mathieu Bultel) Date: Tue, 29 Sep 2020 11:31:37 +0200 Subject: [tripleo][validations] new Validation Framework demos In-Reply-To: References: Message-ID: On Mon, Sep 28, 2020 at 8:47 PM David Peacock wrote: > Hey Mathieu, others, > > On Wed, Jul 1, 2020 at 11:07 AM Mathieu Bultel wrote: > >> So it shows in this demo how the deploy steps playbook can be logged, >> parsed and shown with the VF CLI. This can be improve, modify & so on of >> course... it's basic usage. >> > https://asciinema.org/a/344484 >> https://asciinema.org/a/344509 >> > > Thank you for running with this. Based on your work, I've put together a > Vagrantfile to allow developers to get up to speed very quickly with a > working validations environment. This is a minimal environment and is not > TripleO dependent, meaning it's very light, and ready to go in seconds. > > I can see that this might grow as we think of other tooling required to be > productive, so does anyone have any interest in me pushing it ot a repo > somewhere, and if so, where would be most suitable? > Thank you very much David, I think you can push your work in validations-libs. I will probably push a DockerFile as well. Those kinds of files are really nice to allow people to hack the framework in a few minutes without getting a very huge and complex deployment. > > Cheers, > David > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Tue Sep 29 09:34:01 2020 From: zigo at debian.org (Thomas Goirand) Date: Tue, 29 Sep 2020 11:34:01 +0200 Subject: [all] Switch to python3-pycryptodome Message-ID: <68158e5c-419d-8fe8-7afb-c5ca385f2510@debian.org> Hi, A number of bug reports have been filled against the Debian OpenStack packages in Sid: - keystone - python-openstackclient - python-manilaclient - python-ironic-lib - python-oauth2client - python-pysaml2 The content of the bugs is: > Source: python-pysaml2 > Version: 4.5.0-8 > Severity: important > Tags: sid bullseye > Usertags: pycrypto > > Dear maintainer, > > python-pysaml2 currently Build-Depends or Depends on python3-crypto > from PyCrypto. This project is no longer maintained and PyCryptodome > (https://www.pycryptodome.org/en/latest/) provides a drop in > replacement. Please switch to python3-pycryptodome. I'd like to > remove python-crypto before the release of bullseye. > > Cheers So my question is: - Is it really a drop-in replacement, and can it be replaced in the packages (maybe, delta the imports?)? - If not, how much work is this? - Can the OpenStack project also move to pycryptodome? Cheers, Thomas Goirand (zigo) From ccamacho at redhat.com Tue Sep 29 09:43:06 2020 From: ccamacho at redhat.com (Carlos Camacho Gonzalez) Date: Tue, 29 Sep 2020 11:43:06 +0200 Subject: [tripleo][validations] new Validation Framework demos In-Reply-To: References: Message-ID: Hi Mathieu! I just checked the demo videos and I have a quick question. Is it possible to put them in somehow that we can upload them to the TripleO youtube channel? Thanks, Carlos. On Tue, Sep 29, 2020 at 11:40 AM Mathieu Bultel wrote: > > > > On Mon, Sep 28, 2020 at 8:47 PM David Peacock wrote: >> >> Hey Mathieu, others, >> >> On Wed, Jul 1, 2020 at 11:07 AM Mathieu Bultel wrote: >>> >>> So it shows in this demo how the deploy steps playbook can be logged, parsed and shown with the VF CLI. This can be improve, modify & so on of course... it's basic usage. 
>>> >>> https://asciinema.org/a/344484 >>> https://asciinema.org/a/344509 >> >> >> Thank you for running with this. Based on your work, I've put together a Vagrantfile to allow developers to get up to speed very quickly with a working validations environment. This is a minimal environment and is not TripleO dependent, meaning it's very light, and ready to go in seconds. >> >> I can see that this might grow as we think of other tooling required to be productive, so does anyone have any interest in me pushing it ot a repo somewhere, and if so, where would be most suitable? > > Thank you very much David, > I think you can push your work in validations-libs. > I will probably push a DockerFile as well. > Those kinds of files are really nice to allow people to hack the framework in a few minutes without getting a very huge and complex deployment. >> >> >> Cheers, >> David From thierry at openstack.org Tue Sep 29 10:08:08 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 29 Sep 2020 12:08:08 +0200 Subject: [all] Switch to python3-pycryptodome In-Reply-To: <68158e5c-419d-8fe8-7afb-c5ca385f2510@debian.org> References: <68158e5c-419d-8fe8-7afb-c5ca385f2510@debian.org> Message-ID: <72d39385-12b0-3ebf-b0ff-a5cbcc284da8@openstack.org> Thomas Goirand wrote: > [...] > So my question is: > - Is it really a drop-in replacement, and can it be replaced in the > packages (maybe, delta the imports?)? > - If not, how much work is this? > - Can the OpenStack project also move to pycryptodome? Hmm, I don't think we depend on python3-crypto at all... We depend on pysaml2, but I don't think it depends on python3-crypto either. Looking at sid's python3-pysaml2 4.5.0-8 (there is no python-pysaml2 in sid) it appears to have a Depends on python3-cryptography, which is not the same thing as python3-crypto, and looks fully maintained: https://pypi.org/project/cryptography/ Maybe those bug reports were filed by someone blindly grepping for python3-crypto ? -- Thierry Carrez (ttx) From dpeacock at redhat.com Tue Sep 29 11:20:41 2020 From: dpeacock at redhat.com (David Peacock) Date: Tue, 29 Sep 2020 07:20:41 -0400 Subject: [tripleo][validations] new Validation Framework demos In-Reply-To: References: Message-ID: On Tue, Sep 29, 2020 at 5:31 AM Mathieu Bultel wrote: > I can see that this might grow as we think of other tooling required to be >> productive, so does anyone have any interest in me pushing it ot a repo >> somewhere, and if so, where would be most suitable? >> > I think you can push your work in validations-libs. > I will probably push a DockerFile as well. > Those kinds of files are really nice to allow people to hack the framework > in a few minutes without getting a very huge and complex deployment. > Thanks - excelllent - done. https://review.opendev.org/#/c/754975/ I also added a section to the readme to highlight - not sure if that's appropriate there but I figure usage wouldn't hurt and that's a home for future bootstrapping items like Dockerfiles etc. Thanks > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pwm2012 at gmail.com Tue Sep 29 12:27:36 2020 From: pwm2012 at gmail.com (pwm) Date: Tue, 29 Sep 2020 20:27:36 +0800 Subject: Ussuri Octavia load balancer on OVS In-Reply-To: References: Message-ID: Hi, I'm testing on AIO before moving to a 3 nodes controller. Haven't tested on 3 nodes controller yet but I do think it will get the same issue. 
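
For reference, a rough way to check whether the Octavia containers can actually reach the amphora on the management network, before and after adding eth14 back to br-lbaas (the address, interface and container names are taken from the original message quoted below and are only an example; the amphora agent refuses the TLS handshake without the client certificate, so a certificate/handshake error still means the path is fine, while a timeout points at the bridge wiring):

# on the host
brctl show br-lbaas                 # is the container-side veth attached?
lxc-ls -f | grep octavia            # find the octavia container name
lxc-attach -n <octavia_container> -- \
    curl -kv --max-time 5 https://172.29.233.47:9443/
# TLS/certificate error -> L3 path to the amphora works
# connection timeout    -> lb-mgmt traffic is not crossing br-lbaas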
On Tue, Sep 29, 2020 at 3:43 AM Satish Patel wrote: > Hi, > > I was dealing with the same issue a few weeks back so curious are you > having this problem on AIO or 3 node controllers? > > ~S > > On Sun, Sep 27, 2020 at 10:27 AM pwm wrote: > > > > Hi, > > I using the following setup for Octavia load balancer on OVS > > Ansible openstack_user_config.yml > > - network: > > container_bridge: "br-lbaas" > > container_type: "veth" > > container_interface: "eth14" > > host_bind_override: "eth14" > > ip_from_q: "octavia" > > type: "flat" > > net_name: "octavia" > > group_binds: > > - neutron_openvswitch_agent > > - octavia-worker > > - octavia-housekeeping > > - octavia-health-manager > > > > user_variables.yml > > octavia_provider_network_name: "octavia" > > octavia_provider_network_type: flat > > octavia_neutron_management_network_name: lbaas-mgmt > > > > /etc/netplan/50-cloud-init.yaml > > br-lbaas: > > dhcp4: no > > interfaces: [ bond10 ] > > addresses: [] > > parameters: > > stp: false > > forward-delay: 0 > > bond10: > > dhcp4: no > > addresses: [] > > interfaces: [ens16] > > parameters: > > mode: balance-tlb > > > > brctl show > > bridge name bridge id STP enabled > interfaces > > br-lbaas 8000.d60e4e80f672 no > 2ea34552_eth14 > > > bond10 > > > > However, I am getting the following error when creating the load balance > > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect > to instance. Retrying.: requests.exceptions.ConnectionError: > HTTPSConnectionPool(host='172.29.233.47', port=9443 > > > > The Octavia api container unable to connect to the amphora instance. > > Any missing configuration, cause I need to manually add in the eth14 > interface to the br-lbaas bridge in order to fix the connection issue > > brctl addif br-lbaas eth14 > > > > Thanks > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Tue Sep 29 14:01:06 2020 From: satish.txt at gmail.com (Satish Patel) Date: Tue, 29 Sep 2020 10:01:06 -0400 Subject: Ussuri Octavia load balancer on OVS In-Reply-To: References: Message-ID: Perfect, Now try same way on 3 nodes and tell me how it goes? Because I was having issue in my environment to make it work so I used different method which is here on my blog https://satishdotpatel.github.io//openstack-ansible-octavia/ Sent from my iPhone > On Sep 29, 2020, at 8:27 AM, pwm wrote: > >  > Hi, > I'm testing on AIO before moving to a 3 nodes controller. Haven't tested on 3 nodes controller yet but I do think it will get the same issue. > >> On Tue, Sep 29, 2020 at 3:43 AM Satish Patel wrote: >> Hi, >> >> I was dealing with the same issue a few weeks back so curious are you >> having this problem on AIO or 3 node controllers? 
>> >> ~S >> >> On Sun, Sep 27, 2020 at 10:27 AM pwm wrote: >> > >> > Hi, >> > I using the following setup for Octavia load balancer on OVS >> > Ansible openstack_user_config.yml >> > - network: >> > container_bridge: "br-lbaas" >> > container_type: "veth" >> > container_interface: "eth14" >> > host_bind_override: "eth14" >> > ip_from_q: "octavia" >> > type: "flat" >> > net_name: "octavia" >> > group_binds: >> > - neutron_openvswitch_agent >> > - octavia-worker >> > - octavia-housekeeping >> > - octavia-health-manager >> > >> > user_variables.yml >> > octavia_provider_network_name: "octavia" >> > octavia_provider_network_type: flat >> > octavia_neutron_management_network_name: lbaas-mgmt >> > >> > /etc/netplan/50-cloud-init.yaml >> > br-lbaas: >> > dhcp4: no >> > interfaces: [ bond10 ] >> > addresses: [] >> > parameters: >> > stp: false >> > forward-delay: 0 >> > bond10: >> > dhcp4: no >> > addresses: [] >> > interfaces: [ens16] >> > parameters: >> > mode: balance-tlb >> > >> > brctl show >> > bridge name bridge id STP enabled interfaces >> > br-lbaas 8000.d60e4e80f672 no 2ea34552_eth14 >> > bond10 >> > >> > However, I am getting the following error when creating the load balance >> > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.29.233.47', port=9443 >> > >> > The Octavia api container unable to connect to the amphora instance. >> > Any missing configuration, cause I need to manually add in the eth14 interface to the br-lbaas bridge in order to fix the connection issue >> > brctl addif br-lbaas eth14 >> > >> > Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrunge at matthias-runge.de Tue Sep 29 15:26:15 2020 From: mrunge at matthias-runge.de (Matthias Runge) Date: Tue, 29 Sep 2020 17:26:15 +0200 Subject: [election][telemetry] PTL candidacy for Wallaby Message-ID: <387bc16d-facd-dd0a-620a-9359f9a7eb1f@matthias-runge.de> Hi there, I'd like to announce my candidacy to become the next PTL for OpenStack Telemetry for the Wallaby cycle. In the past cycles, we have seen a decrease in contribution for Telemetry related projects. We have also seen the lengthy discussion around gnocchi or the lack of a TSDB in OpenStack. A few months ago, I joined the project and helped with reviews and bug fixes. For the next cycle, I would like to address the a few short term but also some longer term issues such as: - finish cleaning up the python2 to python3 migration. - get a better understanding (and a solution) for the various telemetry gates being blocked - recently, heat stopped testing auto-scaling via aodh and gnocchi, because it was too unstable. I'd like to address that as well. - We haven't had a planning or a roadmap for about the last 1.5 cycles. That is something to do in the future. Thank you for your support and your consideration. Matthias Runge From ruslanas at lpic.lt Tue Sep 29 16:01:22 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Tue, 29 Sep 2020 19:01:22 +0300 Subject: [ironic][ussuri][centos8] fails to introspect: my fsm encountered an exception In-Reply-To: References: Message-ID: Hi all, I have a clean undercloud deployment. but when I add a node, it fails to introspect. in ironic logs I see interesting lines about FSM (what is it?) [1] and in the same paste, I have provided/pasted, how openstack overcloud node import instack.json looks like. Also introspection -- provide with debug [2]. 
It do not change when I change driver (idrac, ipmi, redfish) or I specify exact host for introspection. here [3] is images and containers with ironic. Here is my latest instack file [4] Any ideas? What I could do? [1] http://paste.openstack.org/show/uPwWVYlO3UQbrF0WzDFH/ [2] http://paste.openstack.org/show/zIHDZ4PS8d0Oi3fmB5AD/ [3] http://paste.openstack.org/show/87pn8i1QGJj2JQyPEJbl/ [4] http://paste.openstack.org/show/a8UqmlI6yT6sd0503kVn/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From thode at fsi.io Mon Sep 28 15:42:20 2020 From: thode at fsi.io (Matthew Thode) Date: Mon, 28 Sep 2020 10:42:20 -0500 Subject: Help with eventlet 0.26.1 and dnspython >= 2 In-Reply-To: <56ab7d1e-c8d6-8505-dd81-fb4a20534fdf@debian.org> References: <56ab7d1e-c8d6-8505-dd81-fb4a20534fdf@debian.org> Message-ID: <20200928154220.m6lv7c2xhhuoxwpt@mthode.org> On 20-09-28 17:23:54, Thomas Goirand wrote: > Hi, > > As you may know, eventlet is incompatible with dnspython >= 2.0.0.0rc1. > See [1] for the details. However, Debian unstable has 2.0.0. > > Would there be some good soul willing to help me fix this situation? I > would need a patch to fix this, but I'm really not sure how to start. > > Cheers, > > Thomas Goirand (zigo) > Missing link? That said, Gentoo has eventlet-0.26.1 and dnspython-2.0.0 and dnspython-1.16.0-r1 marked stable with a cap on dnspython within eventlet. https://github.com/eventlet/eventlet/issues/619 -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From stefan.bujack at desy.de Tue Sep 29 10:04:33 2020 From: stefan.bujack at desy.de (Bujack, Stefan) Date: Tue, 29 Sep 2020 12:04:33 +0200 (CEST) Subject: [Octavia] Please help with amphorav2 provider populate db command Message-ID: <1344937278.133047189.1601373873142.JavaMail.zimbra@desy.de> Hello, I think I need a little help again with the configuration of the amphora v2 provider. I get an error when I try to populate the database. 
It seems that the name of the localhost is used for the DB host and not what I configured in octavia.conf as DB host root at octavia04:~# octavia-db-manage --config-file /etc/octavia/octavia.conf upgrade_persistence 2020-09-29 11:45:01.911 818313 WARNING taskflow.persistence.backends.impl_sqlalchemy [-] Engine connection (validate) failed due to '(pymysql.err.OperationalError) (1045, "Access denied for user 'octavia'@'octavia04.desy.de' (using password: YES)") (Background on this error at: http://sqlalche.me/e/e3q8)' 2020-09-29 11:45:01.912 818313 CRITICAL octavia-db-manage [-] Unhandled error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1045, "Access denied for user 'octavia'@'octavia04.desy.de' (using password: YES)") (Background on this error at: http://sqlalche.me/e/e3q8) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage Traceback (most recent call last): 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2276, in _wrap_pool_connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return fn() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 303, in unique_connection 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return _ConnectionFairy._checkout(self) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 760, in _checkout 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage fairy = _ConnectionRecord.checkout(pool) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 492, in checkout 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage rec = pool._do_get() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/impl.py", line 139, in _do_get 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage self._dec_overflow() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__ 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage compat.reraise(exc_type, exc_value, exc_tb) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 153, in reraise 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise value 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/impl.py", line 136, in _do_get 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return self._create_connection() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 308, in _create_connection 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return _ConnectionRecord(self) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 437, in __init__ 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage self.__connect(first_connect_check=True) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 639, in __connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage connection = pool._invoke_creator(self) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/engine/strategies.py", line 114, 
in connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return dialect.connect(*cargs, **cparams) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 482, in connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return self.dbapi.connect(*cargs, **cparams) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/__init__.py", line 94, in Connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return Connection(*args, **kwargs) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 325, in __init__ 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage self.connect() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 599, in connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage self._request_authentication() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 861, in _request_authentication 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage auth_packet = self._read_packet() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 684, in _read_packet 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage packet.check_error() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/protocol.py", line 220, in check_error 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage err.raise_mysql_exception(self._data) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/err.py", line 109, in raise_mysql_exception 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise errorclass(errno, errval) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage pymysql.err.OperationalError: (1045, "Access denied for user 'octavia'@'octavia04.desy.de' (using password: YES)") 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage The above exception was the direct cause of the following exception: 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage Traceback (most recent call last): 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/local/bin/octavia-db-manage", line 8, in 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage sys.exit(main()) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/local/lib/python3.8/dist-packages/octavia/db/migration/cli.py", line 156, in main 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage CONF.command.func(config, CONF.command.name) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/local/lib/python3.8/dist-packages/octavia/db/migration/cli.py", line 98, in do_persistence_upgrade 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage persistence.initialize() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/local/lib/python3.8/dist-packages/octavia/controller/worker/v2/taskflow_jobboard_driver.py", line 50, in initialize 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage with contextlib.closing(backend.get_connection()) as connection: 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/taskflow/persistence/backends/impl_sqlalchemy.py", line 335, 
in get_connection 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage conn.validate(max_retries=self._max_retries) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/taskflow/persistence/backends/impl_sqlalchemy.py", line 394, in validate 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage _try_connect(self._engine) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/tenacity/__init__.py", line 311, in wrapped_f 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return self.call(f, *args, **kw) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/tenacity/__init__.py", line 391, in call 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage do = self.iter(retry_state=retry_state) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/tenacity/__init__.py", line 338, in iter 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return fut.result() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3.8/concurrent/futures/_base.py", line 432, in result 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return self.__get_result() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise self._exception 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/tenacity/__init__.py", line 394, in call 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage result = fn(*args, **kwargs) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/taskflow/persistence/backends/impl_sqlalchemy.py", line 391, in _try_connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage with contextlib.closing(engine.connect()): 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2209, in connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return self._connection_cls(self, **kwargs) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 103, in __init__ 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage else engine.raw_connection() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2306, in raw_connection 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return self._wrap_pool_connect( 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2279, in _wrap_pool_connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage Connection._handle_dbapi_exception_noconnection( 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1547, in _handle_dbapi_exception_noconnection 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage util.raise_from_cause(sqlalchemy_exception, exc_info) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 398, in raise_from_cause 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage reraise(type(exception), exception, tb=exc_tb, cause=cause) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File 
"/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 152, in reraise 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise value.with_traceback(tb) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2276, in _wrap_pool_connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return fn() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 303, in unique_connection 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return _ConnectionFairy._checkout(self) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 760, in _checkout 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage fairy = _ConnectionRecord.checkout(pool) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 492, in checkout 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage rec = pool._do_get() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/impl.py", line 139, in _do_get 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage self._dec_overflow() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__ 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage compat.reraise(exc_type, exc_value, exc_tb) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 153, in reraise 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise value 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/impl.py", line 136, in _do_get 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return self._create_connection() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 308, in _create_connection 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return _ConnectionRecord(self) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 437, in __init__ 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage self.__connect(first_connect_check=True) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/pool/base.py", line 639, in __connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage connection = pool._invoke_creator(self) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/engine/strategies.py", line 114, in connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return dialect.connect(*cargs, **cparams) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 482, in connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return self.dbapi.connect(*cargs, **cparams) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/__init__.py", line 94, in Connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage return Connection(*args, **kwargs) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 325, in __init__ 2020-09-29 
11:45:01.912 818313 ERROR octavia-db-manage self.connect() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 599, in connect 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage self._request_authentication() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 861, in _request_authentication 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage auth_packet = self._read_packet() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 684, in _read_packet 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage packet.check_error() 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/protocol.py", line 220, in check_error 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage err.raise_mysql_exception(self._data) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage File "/usr/lib/python3/dist-packages/pymysql/err.py", line 109, in raise_mysql_exception 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage raise errorclass(errno, errval) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1045, "Access denied for user 'octavia'@'octavia04.desy.de' (using password: YES)") 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage (Background on this error at: http://sqlalche.me/e/e3q8) 2020-09-29 11:45:01.912 818313 ERROR octavia-db-manage root at octavia04:~# cat /etc/octavia/octavia.conf [DEFAULT] transport_url = rabbit://openstack:password at rabbit-intern.desy.de use_journal = True [api_settings] bind_host = 0.0.0.0 bind_port = 9876 [certificates] cert_generator = local_cert_generator ca_certificate = /etc/octavia/certs/server_ca.cert.pem ca_private_key = /etc/octavia/certs/server_ca.key.pem ca_private_key_passphrase = passphrase [controller_worker] amp_image_owner_id = f89517ee676f4618bd55849477442aca amp_image_tag = amphora amp_ssh_key_name = octaviakey amp_secgroup_list = 2236e82c-13fe-42e3-9fcf-bea43917f231 amp_boot_network_list = 9f7fefc4-f262-4d8d-9465-240f94a7e87b amp_flavor_id = 200 network_driver = allowed_address_pairs_driver compute_driver = compute_nova_driver amphora_driver = amphora_haproxy_rest_driver client_ca = /etc/octavia/certs/client_ca.cert.pem [database] connection = mysql+pymysql://octavia:password at maria-intern.desy.de/octavia [haproxy_amphora] client_cert = /etc/octavia/certs/client.cert-and-key.pem server_ca = /etc/octavia/certs/server_ca.cert.pem [health_manager] bind_port = 5555 bind_ip = 172.16.0.2 controller_ip_port_list = 172.16.0.2:5555 [keystone_authtoken] www_authenticate_uri = https://keystone-intern.desy.de:5000/v3 auth_url = https://keystone-intern.desy.de:5000/v3 memcached_servers = nova-intern.desy.de:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = octavia password = password service_token_roles_required = True [oslo_messaging] topic = octavia_prov [service_auth] auth_url = https://keystone-intern.desy.de:5000/v3 memcached_servers = nova-intern.desy.de:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = octavia password = password [task_flow] persistence_connection = mysql+pymysql://octavia:woxSGH45cdZL1Sa4 at maria-intern.desy.de/octavia_persistence jobboard_backend_driver = 'redis_taskflow_driver' 
jobboard_backend_hosts = 10.254.28.113 jobboard_backend_port = 6379 jobboard_backend_password = password jobboard_backend_namespace = 'octavia_jobboard' root at octavia04:~# octavia-db-manage current 2020-09-29 12:02:23.159 819432 INFO alembic.runtime.migration [-] Context impl MySQLImpl. 2020-09-29 12:02:23.160 819432 INFO alembic.runtime.migration [-] Will assume non-transactional DDL. fbd705961c3a (head) We have an Openstack Ussuri deployment on Ubuntu 20.04. Thanks in advance, Stefan Bujack From arne.wiebalck at cern.ch Tue Sep 29 16:21:06 2020 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Tue, 29 Sep 2020 18:21:06 +0200 Subject: [baremetal-sig][ironic] Meeting Tue Oct 6, 2pm UTC In-Reply-To: References: Message-ID: Dear all, The meetings of the Bare Metal SIG will for now be on the first Tuesday of the month at 2pm-3pm UTC. The next meeting is scheduled for: Tue Oct 6, 2020 at 2pm UTC via zoom [0]. The (tentative) agenda and everything about the SIG can be found on its etherpad available from [1]. "Topic of the day" will be network setups, so come along and share how you do this, everyone is welcome! Any question or suggestion, let us know! Cheers, Arne [0] https://cern.zoom.us/j/94959408650?pwd=dW55WHNaeUd4OGhwSU5BZmR1K2FEZz09 [1] https://etherpad.opendev.org/p/bare-metal-sig From juliaashleykreger at gmail.com Tue Sep 29 16:42:24 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 29 Sep 2020 09:42:24 -0700 Subject: [ironic][ussuri][centos8] fails to introspect: my fsm encountered an exception In-Reply-To: References: Message-ID: If the code is off of master, you may want to refresh the ironic-inspector code. We (ironic) merged a patch in an attempt to fix an issue where people were using the tripleo set of tools to force reinspection, however that fix also apparently broke TripleO's ansible playbooks. For the record, fsm is short for Finite State Machine. The ironic-inspector logging seems to indicate an error is occurring in _do_inspection internally which is consistent with what some of TripleO's CI was encountering. On Tue, Sep 29, 2020 at 9:04 AM Ruslanas Gžibovskis wrote: > > Hi all, > > I have a clean undercloud deployment. > but when I add a node, it fails to introspect. in ironic logs I see interesting lines about FSM (what is it?) [1] and in the same paste, I have provided/pasted, how openstack overcloud node import instack.json looks like. Also introspection -- provide with debug [2]. > It do not change when I change driver (idrac, ipmi, redfish) or I specify exact host for introspection. > here [3] is images and containers with ironic. > Here is my latest instack file [4] > > Any ideas? What I could do? > > [1] http://paste.openstack.org/show/uPwWVYlO3UQbrF0WzDFH/ > [2] http://paste.openstack.org/show/zIHDZ4PS8d0Oi3fmB5AD/ > [3] http://paste.openstack.org/show/87pn8i1QGJj2JQyPEJbl/ > [4] http://paste.openstack.org/show/a8UqmlI6yT6sd0503kVn/ From radoslaw.piliszek at gmail.com Tue Sep 29 16:42:43 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 29 Sep 2020 18:42:43 +0200 Subject: [masakari][tc][elections] yoctozepto mode on Message-ID: Hello, Folks! Letting you know I have proposed myself for TC member [1] and Masakari PTL positions [2]. Please find the relevant letters in references. Thank you for your time. 
[1] https://opendev.org/openstack/election/raw/branch/master/candidates/wallaby/TC/radoslaw.piliszek%40gmail.com [2] https://opendev.org/openstack/election/raw/branch/master/candidates/wallaby/Masakari/radoslaw.piliszek%40gmail.com -yoctozepto From ruslanas at lpic.lt Tue Sep 29 16:46:04 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Tue, 29 Sep 2020 19:46:04 +0300 Subject: [ironic][ussuri][centos8] fails to introspect: my fsm encountered an exception In-Reply-To: References: Message-ID: Hi Julia, If you could share, should I do this in container or on podman side? Which container? And git pull? I did podman image pull and all images. And it is 24 hours old now. If you could give a file or repo or anything to pull, I would appreciate it. Thank you On Tue, 29 Sep 2020, 19:42 Julia Kreger, wrote: > If the code is off of master, you may want to refresh the > ironic-inspector code. We (ironic) merged a patch in an attempt to fix > an issue where people were using the tripleo set of tools to force > reinspection, however that fix also apparently broke TripleO's ansible > playbooks. > > For the record, fsm is short for Finite State Machine. The > ironic-inspector logging seems to indicate an error is occurring in > _do_inspection internally which is consistent with what some of > TripleO's CI was encountering. > > On Tue, Sep 29, 2020 at 9:04 AM Ruslanas Gžibovskis > wrote: > > > > Hi all, > > > > I have a clean undercloud deployment. > > but when I add a node, it fails to introspect. in ironic logs I see > interesting lines about FSM (what is it?) [1] and in the same paste, I have > provided/pasted, how openstack overcloud node import instack.json looks > like. Also introspection -- provide with debug [2]. > > It do not change when I change driver (idrac, ipmi, redfish) or I > specify exact host for introspection. > > here [3] is images and containers with ironic. > > Here is my latest instack file [4] > > > > Any ideas? What I could do? > > > > [1] http://paste.openstack.org/show/uPwWVYlO3UQbrF0WzDFH/ > > [2] http://paste.openstack.org/show/zIHDZ4PS8d0Oi3fmB5AD/ > > [3] http://paste.openstack.org/show/87pn8i1QGJj2JQyPEJbl/ > > [4] http://paste.openstack.org/show/a8UqmlI6yT6sd0503kVn/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dev.faz at gmail.com Tue Sep 29 17:05:16 2020 From: dev.faz at gmail.com (Fabian Zimmermann) Date: Tue, 29 Sep 2020 19:05:16 +0200 Subject: [masakari][tc][elections] yoctozepto mode on In-Reply-To: References: Message-ID: +1 :) Thanks a lot! Fabian Radosław Piliszek schrieb am Di., 29. Sept. 2020, 18:48: > Hello, Folks! > > Letting you know I have proposed myself for TC member [1] and Masakari > PTL positions [2]. > Please find the relevant letters in references. > Thank you for your time. > > [1] > https://opendev.org/openstack/election/raw/branch/master/candidates/wallaby/TC/radoslaw.piliszek%40gmail.com > [2] > https://opendev.org/openstack/election/raw/branch/master/candidates/wallaby/Masakari/radoslaw.piliszek%40gmail.com > > -yoctozepto > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gouthampravi at gmail.com Tue Sep 29 17:35:01 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Tue, 29 Sep 2020 10:35:01 -0700 Subject: [election][manila] PTL candidacy for Wallaby Message-ID: Hello Zorillas and all interested stackers, I'd like to submit my candidacy to be the PTL for the OpenStack Manila team through the Wallaby cycle. 
I am no longer a stranger to this role, and I've had an excellent team to work with during the past two cycles that serving as PTL has become an absolute pleasure besides being an honor. I've learned by experience that a successful project leader is one that has a great team. I seek to lead an aspirational team that goes all distances to grow, as is evidenced by every member spending a substantial amount of their time mentoring new contributors, and providing an on-boarding ramp to OpenStack - whether this is through informal introductions along the sidelines of conferences, or stray explorers finding us on IRC/email, or engaging with invaluable supporters of diversity and curiosity in the likes of Outreachy, Google Summer of Code, and the Grace Hopper Celebration. I seek to lead a committed team that wants to learn and better itself continuously. We've adopted a proactive policy to bug triage that has kept us rooted to users and their pain points; These bug triages have led to effective (and fun) community "doc-a-thon" and "bug-a-thon" events. Many a times, great effort does not result in great glory. I am proud of the work we do to enhance e2e testing. There are no press releases that reflect this, but I assure you that this work matters a lot to our users. I seek to lead an informed team. Innovation arrives here often by standing on the shoulders of the giants. The team works with technology that's been perfected over decades, as well as with novel breakthroughs of the recent years. We participate in adjacent communities, and adapt while also advocating the lessons we learned in this one. So, if you will have me, I wish to serve you through Wallaby and get things done. Apart from our commitment to OpenStack-wide goals, and the improvements we've charted out, I wish to prioritize a few more work items: - Improve fault tolerance - we've identified some gaps in resource health when certain kinds of service disruptions occur, we'll fix these up. - Achieve OpenStackCLI/SDK/UI parity - there's a lot left to do in this space, and we've actively been chipping away at this for the past two cycles. - API policy improvements - work on the "admin-everywhere" problem with the pop-up team and support fine grained policies in tune with the model of multi-tenancy we cater to. Thank you for your support, Goutham Pacha Ravi IRC: gouthamr From gaetan.trellu at incloudus.com Tue Sep 29 19:11:29 2020 From: gaetan.trellu at incloudus.com (gaetan.trellu at incloudus.com) Date: Tue, 29 Sep 2020 15:11:29 -0400 Subject: [masakari][tc][elections] yoctozepto mode on In-Reply-To: Message-ID: An HTML attachment was scrubbed... URL: From oliver.weinmann at me.com Tue Sep 29 20:49:45 2020 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Tue, 29 Sep 2020 22:49:45 +0200 Subject: Ussuri CentOS 8 add mptsas driver to introspection initramfs In-Reply-To: References: <55c5b908-3d0e-4d92-8f8f-95443fbefb9f@me.com> Message-ID: <09685398-20fb-305d-d413-ef799241c0c7@me.com> Hi again, I managed to install the kmod rpms in the overcloud-full.qcow2 image: |sudo yum -y install https:||//www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm| |sudo yum -y install --downloadonly --downloaddir=. kmod-mptsas| |sudo yum -y install --downloadonly --downloaddir=. kmod-megaraid_sas| | | |The deployment works fine, but as soon as the node reboots it is just stuck with a blinking cursor after loading the kernel.| | | |Do I have to add something else to the overcloud-full image as well? 
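One thing I have not tried yet is regenerating the initramfs inside the image after installing the kmod packages, so that the mptsas module is actually available when the node boots from its disk. Roughly something like this, perhaps (untested sketch, assuming virt-customize from libguestfs-tools is available on the undercloud and the elrepo kmod RPM is in the current directory):

virt-customize -a overcloud-full.qcow2 \
  --copy-in kmod-mptsas-3.04.20-3.el8_2.elrepo.x86_64.rpm:/tmp \
  --run-command 'rpm -ivh /tmp/kmod-mptsas-*.rpm' \
  --run-command 'echo mptsas > /etc/modules-load.d/mptsas.conf' \
  --run-command 'dracut --regenerate-all --force' \
  --selinux-relabel
openstack overcloud image upload --update-existing --image-path /home/stack/images/

No idea yet whether the missing initramfs rebuild is really the problem, though.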
| | | Am 15.09.2020 um 00:15 schrieb Donny Davis: > > > On Fri, Sep 11, 2020 at 3:25 PM Oliver Weinmann > > wrote: > > Hi, > > I already asked this question on serverfault. But I guess here is > a better place. > > I have a very ancient hardware with a MPTSAS controller. I use > this for TripleO deployment testing. With the release of Ussuri > which is running CentOS8, I can no longer provision my overcloud > nodes as the MPTSAS driver has been removed in CentOS8: > > https://www.reddit.com/r/CentOS/comments/d93unk/centos8_and_removal_mpt2sas_dell_sas_drivers/ > > > I managed to include the driver provided from ELrepo in the > introspection image but It is not loaded automatically: > > All commands are run as user "stack". > > Extract the introspection image: > > cd ~ > mkdir imagesnew > cd imagesnew > tar xvf ../ironic-python-agent.tar > mkdir ~/ipa-tmp > cd ~/ipa-tmp > /usr/lib/dracut/skipcpio ~/imagesnew/ironic-python-agent.initramfs > | zcat | cpio -ivd | pax -r > > Extract the contents of the mptsas driver rpm: > > rpm2cpio ~/kmod-mptsas-3.04.20-3.el8_2.elrepo.x86_64.rpm | pax -r > > Put the kernel module in the right places. To figure out where the > module has to reside I installed the rpm on a already deployed > node and used find to locate it. > > xz -c > ./usr/lib/modules/4.18.0-193.el8.x86_64/extra/mptsas/mptsas.ko > > ./usr/lib/modules/4.18.0-193.6.3.el8_2.x86_64/kernel/drivers/message/fusion/mptsas.ko.xz > mkdir > ./usr/lib/modules/4.18.0-193.6.3.el8_2.x86_64/weak-updates/mptsas > sudo ln -sf > /lib/modules/4.18.0-193.el8.x86_64/extra/mptsas/mptsas.ko > lib/modules/4.18.0-193.6.3.el8_2.x86_64/weak-updates/mptsas.ko > sudo chown root . -R > find . 2>/dev/null | sudo cpio --quiet -c -o | gzip -8  > > ~/images/ironic-python-agent.initramfs > > Upload the new image > > cd ~/images > openstack overcloud image upload --update-existing --image-path > /home/stack/images/ > > Now when I start the introspection and ssh into the host I see no > disks: > > [root at localhost ~]# fdisk -l > [root at localhost ~]# lsmod | grep mptsas > > Once i manually load the driver, I can see the disks: > > > [root at localhost ~]# modprobe mptsas > [root at localhost ~]# lsmod | grep mptsas > mptsas                 69632  0 > mptscsih               45056  1 mptsas > mptbase                98304  2 mptsas,mptscsih > scsi_transport_sas     45056  1 mptsas > [root at localhost ~]# fdisk -l > Disk /dev/sda: 67.1 GiB, 71999422464 bytes, 140623872 sectors > Units: sectors of 1 * 512 = 512 bytes > Sector size (logical/physical): 512 bytes / 512 bytes > I/O size (minimum/optimal): 512 bytes / 512 bytes > > But how can I make it so that it will automatically load on boot? > > Best Regards, > > Oliver > > > I guess you could try using modules-load to load the module at boot. > > > sudo ln -sf > /lib/modules/4.18.0-193.el8.x86_64/extra/mptsas/mptsas.ko > lib/modules/4.18.0-193.6.3.el8_2.x86_64/weak-updates/mptsas.ko > echo "mptsas" > ./etc/modules-load.d/mptsas.conf > > sudo chown root . -R > > Also I would have a look see at these docs to build an image using ipa > builder > https://docs.openstack.org/ironic-python-agent-builder/latest/ > > -- > ~/DonnyD > C: 805 814 6800 > "No mission too difficult. No sacrifice too great. Duty First" -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donny at fortnebula.com Tue Sep 29 21:02:21 2020 From: donny at fortnebula.com (Donny Davis) Date: Tue, 29 Sep 2020 17:02:21 -0400 Subject: Ussuri CentOS 8 add mptsas driver to introspection initramfs In-Reply-To: <09685398-20fb-305d-d413-ef799241c0c7@me.com> References: <55c5b908-3d0e-4d92-8f8f-95443fbefb9f@me.com> <09685398-20fb-305d-d413-ef799241c0c7@me.com> Message-ID: I think maybe you want to check out diskimage-builder Donny Davis c: 805 814 6800 On Tue, Sep 29, 2020, 4:49 PM Oliver Weinmann wrote: > Hi again, > > I managed to install the kmod rpms in the overcloud-full.qcow2 image: > > sudo yum -y install https:// > www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm > sudo yum -y install --downloadonly --downloaddir=. kmod-mptsas > sudo yum -y install --downloadonly --downloaddir=. kmod-megaraid_sas > > The deployment works fine, but as soon as the node reboots it is just > stuck with a blinking cursor after loading the kernel. > > Do I have to add something else to the overcloud-full image as well? > > Am 15.09.2020 um 00:15 schrieb Donny Davis: > > > > On Fri, Sep 11, 2020 at 3:25 PM Oliver Weinmann < > oliver.weinmann at icloud.com> wrote: > >> Hi, >> >> I already asked this question on serverfault. But I guess here is a >> better place. >> >> I have a very ancient hardware with a MPTSAS controller. I use this for >> TripleO deployment testing. With the release of Ussuri which is running >> CentOS8, I can no longer provision my overcloud nodes as the MPTSAS driver >> has been removed in CentOS8: >> >> >> https://www.reddit.com/r/CentOS/comments/d93unk/centos8_and_removal_mpt2sas_dell_sas_drivers/ >> >> I managed to include the driver provided from ELrepo in the introspection >> image but It is not loaded automatically: >> >> All commands are run as user "stack". >> >> Extract the introspection image: >> >> cd ~ >> mkdir imagesnew >> cd imagesnew >> tar xvf ../ironic-python-agent.tar >> mkdir ~/ipa-tmp >> cd ~/ipa-tmp >> /usr/lib/dracut/skipcpio ~/imagesnew/ironic-python-agent.initramfs | zcat >> | cpio -ivd | pax -r >> >> Extract the contents of the mptsas driver rpm: >> >> rpm2cpio ~/kmod-mptsas-3.04.20-3.el8_2.elrepo.x86_64.rpm | pax -r >> >> Put the kernel module in the right places. To figure out where the module >> has to reside I installed the rpm on a already deployed node and used find >> to locate it. >> >> xz -c ./usr/lib/modules/4.18.0-193.el8.x86_64/extra/mptsas/mptsas.ko > >> ./usr/lib/modules/4.18.0-193.6.3.el8_2.x86_64/kernel/drivers/message/fusion/mptsas.ko.xz >> mkdir ./usr/lib/modules/4.18.0-193.6.3.el8_2.x86_64/weak-updates/mptsas >> sudo ln -sf /lib/modules/4.18.0-193.el8.x86_64/extra/mptsas/mptsas.ko >> lib/modules/4.18.0-193.6.3.el8_2.x86_64/weak-updates/mptsas.ko >> sudo chown root . -R >> find . 
2>/dev/null | sudo cpio --quiet -c -o | gzip -8 > >> ~/images/ironic-python-agent.initramfs >> >> Upload the new image >> >> cd ~/images >> openstack overcloud image upload --update-existing --image-path >> /home/stack/images/ >> >> Now when I start the introspection and ssh into the host I see no disks: >> >> [root at localhost ~]# fdisk -l >> [root at localhost ~]# lsmod | grep mptsas >> >> Once i manually load the driver, I can see the disks: >> >> >> [root at localhost ~]# modprobe mptsas >> [root at localhost ~]# lsmod | grep mptsas >> mptsas 69632 0 >> mptscsih 45056 1 mptsas >> mptbase 98304 2 mptsas,mptscsih >> scsi_transport_sas 45056 1 mptsas >> [root at localhost ~]# fdisk -l >> Disk /dev/sda: 67.1 GiB, 71999422464 bytes, 140623872 sectors >> Units: sectors of 1 * 512 = 512 bytes >> Sector size (logical/physical): 512 bytes / 512 bytes >> I/O size (minimum/optimal): 512 bytes / 512 bytes >> >> But how can I make it so that it will automatically load on boot? >> >> Best Regards, >> >> Oliver >> > > I guess you could try using modules-load to load the module at boot. > > > sudo ln -sf /lib/modules/4.18.0-193.el8.x86_64/extra/mptsas/mptsas.ko > lib/modules/4.18.0-193.6.3.el8_2.x86_64/weak-updates/mptsas.ko > echo "mptsas" > ./etc/modules-load.d/mptsas.conf > > sudo chown root . -R > > Also I would have a look see at these docs to build an image using ipa > builder > https://docs.openstack.org/ironic-python-agent-builder/latest/ > -- > ~/DonnyD > C: 805 814 6800 > "No mission too difficult. No sacrifice too great. Duty First" > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliver.weinmann at me.com Tue Sep 29 21:16:34 2020 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Tue, 29 Sep 2020 23:16:34 +0200 Subject: Ussuri CentOS 8 add mptsas driver to introspection initramfs In-Reply-To: References: <55c5b908-3d0e-4d92-8f8f-95443fbefb9f@me.com> <09685398-20fb-305d-d413-ef799241c0c7@me.com> Message-ID: <63aa9f6b-ec2b-8a75-c425-25a7d8c939db@me.com> I remember I tried something similar using one of the RHOSP guides and it failed at one step to build the image. I will try it again tomorrow. Am 29.09.2020 um 23:02 schrieb Donny Davis: > I think maybe you want to check out diskimage-builder > > Donny Davis > c: 805 814 6800 > > On Tue, Sep 29, 2020, 4:49 PM Oliver Weinmann > wrote: > > Hi again, > > I managed to install the kmod rpms in the overcloud-full.qcow2 image: > > |sudo yum -y install > https:||//www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm > | > |sudo yum -y install --downloadonly --downloaddir=. kmod-mptsas| > |sudo yum -y install --downloadonly --downloaddir=. kmod-megaraid_sas| > | > | > |The deployment works fine, but as soon as the node reboots it is > just stuck with a blinking cursor after loading the kernel.| > | > | > |Do I have to add something else to the overcloud-full image as well? > | > | > | > Am 15.09.2020 um 00:15 schrieb Donny Davis: >> >> >> On Fri, Sep 11, 2020 at 3:25 PM Oliver Weinmann >> > >> wrote: >> >> Hi, >> >> I already asked this question on serverfault. But I guess >> here is a better place. >> >> I have a very ancient hardware with a MPTSAS controller. I >> use this for TripleO deployment testing. 
With the release of >> Ussuri which is running CentOS8, I can no longer provision my >> overcloud nodes as the MPTSAS driver has been removed in CentOS8: >> >> https://www.reddit.com/r/CentOS/comments/d93unk/centos8_and_removal_mpt2sas_dell_sas_drivers/ >> >> >> I managed to include the driver provided from ELrepo in the >> introspection image but It is not loaded automatically: >> >> All commands are run as user "stack". >> >> Extract the introspection image: >> >> cd ~ >> mkdir imagesnew >> cd imagesnew >> tar xvf ../ironic-python-agent.tar >> mkdir ~/ipa-tmp >> cd ~/ipa-tmp >> /usr/lib/dracut/skipcpio >> ~/imagesnew/ironic-python-agent.initramfs | zcat | cpio -ivd >> | pax -r >> >> Extract the contents of the mptsas driver rpm: >> >> rpm2cpio ~/kmod-mptsas-3.04.20-3.el8_2.elrepo.x86_64.rpm | pax -r >> >> Put the kernel module in the right places. To figure out >> where the module has to reside I installed the rpm on a >> already deployed node and used find to locate it. >> >> xz -c >> ./usr/lib/modules/4.18.0-193.el8.x86_64/extra/mptsas/mptsas.ko >> > >> ./usr/lib/modules/4.18.0-193.6.3.el8_2.x86_64/kernel/drivers/message/fusion/mptsas.ko.xz >> mkdir >> ./usr/lib/modules/4.18.0-193.6.3.el8_2.x86_64/weak-updates/mptsas >> sudo ln -sf >> /lib/modules/4.18.0-193.el8.x86_64/extra/mptsas/mptsas.ko >> lib/modules/4.18.0-193.6.3.el8_2.x86_64/weak-updates/mptsas.ko >> sudo chown root . -R >> find . 2>/dev/null | sudo cpio --quiet -c -o | gzip -8  > >> ~/images/ironic-python-agent.initramfs >> >> Upload the new image >> >> cd ~/images >> openstack overcloud image upload --update-existing >> --image-path /home/stack/images/ >> >> Now when I start the introspection and ssh into the host I >> see no disks: >> >> [root at localhost ~]# fdisk -l >> [root at localhost ~]# lsmod | grep mptsas >> >> Once i manually load the driver, I can see the disks: >> >> >> [root at localhost ~]# modprobe mptsas >> [root at localhost ~]# lsmod | grep mptsas >> mptsas                 69632  0 >> mptscsih               45056  1 mptsas >> mptbase                98304  2 mptsas,mptscsih >> scsi_transport_sas     45056  1 mptsas >> [root at localhost ~]# fdisk -l >> Disk /dev/sda: 67.1 GiB, 71999422464 bytes, 140623872 sectors >> Units: sectors of 1 * 512 = 512 bytes >> Sector size (logical/physical): 512 bytes / 512 bytes >> I/O size (minimum/optimal): 512 bytes / 512 bytes >> >> But how can I make it so that it will automatically load on boot? >> >> Best Regards, >> >> Oliver >> >> >> I guess you could try using modules-load to load the module at boot. >> >> > sudo ln -sf >> /lib/modules/4.18.0-193.el8.x86_64/extra/mptsas/mptsas.ko >> lib/modules/4.18.0-193.6.3.el8_2.x86_64/weak-updates/mptsas.ko >> echo "mptsas" > ./etc/modules-load.d/mptsas.conf >> > sudo chown root . -R >> >> Also I would have a look see at these docs to build an image >> using ipa builder >> https://docs.openstack.org/ironic-python-agent-builder/latest/ >> >> -- >> ~/DonnyD >> C: 805 814 6800 >> "No mission too difficult. No sacrifice too great. Duty First" > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Tue Sep 29 22:21:46 2020 From: amy at demarco.com (Amy Marrich) Date: Tue, 29 Sep 2020 17:21:46 -0500 Subject: [Airship-discuss] [Diversity] Community feedback on Divisive Language stance In-Reply-To: References: Message-ID: Bob, Thanks for the feedback and the link. At this time we're leaving the new wording to the individual projects vs saying all OSF projects must use Y instead of X. 
In part it's because projects have found that words have different meanings in the context of where they're located within their code, at least in the case of blacklist/whitelist. Ultimately we're hoping to have a list of suggested replacements but realize what may work for one project may not for another. Thanks again! Amy Marrich (spotz) On Tue, Sep 29, 2020 at 5:09 PM Monkman, Bob wrote: > Amy, et al, > > Thanks for sharing this and I think Draft 5 seems like a > good statement to build from. I am working on a Linux Foundation Networking > (LFN) task force looking at this issue for the projects under that > umbrella. > > As a part of that process, on our wiki page for this task > force, we have compiled some links and a table of example recommended > replacement terms that could be of interest to your OSF work as well. > > Please see this reference at > https://wiki.lfnetworking.org/display/LN/Inclusive+Language+Initiative > > If this is of interest. > > Best regards, > > Bob Monkman > > LFN Contributor, Strategic Planning Committee member, Marketing Chair > CNTT/OPNFV > > > > > > > > *From:* Amy Marrich > *Sent:* Monday, September 21, 2020 11:36 AM > *To:* starlingx-discuss at lists.starlingx.io; > airship-discuss at lists.airshipit.org; kata-dev at lists.katacontainers.io; > zuul-discuss at lists.zuul-ci.org; openstack-discuss < > openstack-discuss at lists.openstack.org>; foundation at lists.openstack.org > *Subject:* [Airship-discuss] [Diversity] Community feedback on Divisive > Language stance > > > > The OSF Diversity & Inclusion WG has been working on creating the OSF's > stance concerning divisive language. We will be holding one more meeting > before sending the stance to the OSF Board for any changes before bringing > it back to the Community. > > > > Our goal however is to get your input now to reduce any concerns in the > future! Please check out Draft 4 on the etherpad[0] and place your comments > there and join us on October 5th (meeting information will be sent out > closer to the meeting) > > > > Thanks, > > > > Amy (spotz) > > 0 - https://etherpad.opendev.org/p/divisivelanguage > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tburke at nvidia.com Tue Sep 29 22:43:54 2020 From: tburke at nvidia.com (Tim Burke) Date: Tue, 29 Sep 2020 15:43:54 -0700 Subject: Help with eventlet 0.26.1 and dnspython >= 2 In-Reply-To: References: <56ab7d1e-c8d6-8505-dd81-fb4a20534fdf@debian.org> Message-ID: On 9/29/20 1:43 AM, Thomas Goirand wrote: > > On 9/28/20 6:10 PM, Radosław Piliszek wrote: >> On Mon, Sep 28, 2020 at 5:30 PM Thomas Goirand wrote: >>> >>> Hi, >>> >>> As you may know, eventlet is incompatible with dnspython >= 2.0.0.0rc1. >>> See [1] for the details. However, Debian unstable has 2.0.0. >>> >>> Would there be some good soul willing to help me fix this situation? I >>> would need a patch to fix this, but I'm really not sure how to start. >>> >>> Cheers, >>> >>> Thomas Goirand (zigo) >> >> The [1] reference is missing. >> >> -yoctozepto > > Indeed, sorry. 
So: > > [1] https://github.com/eventlet/eventlet/issues/619 > > I've already integrated this patch in the Debian package: > https://github.com/eventlet/eventlet/commit/46fc185c8f92008c65aef2713fc1445bfc5f6fec > > However, there's still this failure, related to #619 (linked above): > > ERROR: test_noraise_dns_tcp (tests.greendns_test.TinyDNSTests) > -------------------------------------------------------------- > Traceback (most recent call last): > File > "/<>/.pybuild/cpython3_3.8_eventlet/build/tests/greendns_test.py", > line 904, in test_noraise_dns_tcp > self.assertEqual(response.rrset.items[0].address, expected_ip) > KeyError: 0 > > Can anyone solve this? > > Cheers, > > Thomas Goirand (zigo) > Swapping out the assertion for one like self.assertEqual( [rr.address for rr in response.rrset.items], [expected_ip]) should at least get tests passing, but the other issues identified are still problems: > Eventlet will need to detect which version of dnspython is running > (import dns.version) and monkey patch appropriately. Note that the > raise_on_truncation is not the only change, as the af parameter is > now gone, and you can also pass sockets to both udp() and tcp(). IDK how different udp()/tcp() should be between the pre/post-2.0.0 versions, though. Part of me is tempted to try patching out dns.query._wait_for instead... Tim From fungi at yuggoth.org Tue Sep 29 23:59:26 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 29 Sep 2020 23:59:26 +0000 Subject: [all][elections][ptl][tc] Combined PTL/TC Nominations End Message-ID: <20200929235926.3m76mer75wt2syf7@yuggoth.org> The PTL and TC Nomination period is now over. The official candidate list for PTLs is available on the election website[0] as is the candidate list for TC seats[1]. There are 10 projects without candidates, so according to this resolution[2], the TC will have to decide how the following projects will proceed: Cloudkitty, Karbor, Octavia, OpenStack_Charms, Oslo, Packaging_Rpm, Placement, Qinling, Searchlight, Senlin. There is 1 project which will have elections: Telemetry. Now begins the campaigning period where candidates and electorate may debate their statements. Polling will start Oct 06, 2020 23:45 UTC. Thank you, [0] https://governance.openstack.org/election/#wallaby-ptl-candidates [1] https://governance.openstack.org/election/#wallaby-tc-candidates [2] https://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From suzhengwei at inspur.com Wed Sep 30 01:19:29 2020 From: suzhengwei at inspur.com (=?utf-8?B?U2FtIFN1ICjoi4/mraPkvJ8p?=) Date: Wed, 30 Sep 2020 01:19:29 +0000 Subject: =?utf-8?B?562U5aSNOiBbbGlzdHMub3BlbnN0YWNrLm9yZ+S7o+WPkV1SZTogW21hc2Fr?= =?utf-8?Q?ari][tc][elections]_yoctozepto_mode_on?= In-Reply-To: References: <9af4a48805637f6d198b42ecc1fd498b@sslemail.net> Message-ID: <4f2ed8ea5e4d4d208bbdb822d8654e84@inspur.com> +1 :) 发件人: gaetan.trellu at incloudus.com [mailto:gaetan.trellu at incloudus.com] 发送时间: 2020年9月30日 3:11 收件人: Fabian Zimmermann 抄送: Radosław Piliszek ; openstack-discuss 主题: [lists.openstack.org代发]Re: [masakari][tc][elections] yoctozepto mode on +1 :) On Sep. 29, 2020 1:05 p.m., Fabian Zimmermann > wrote: +1 :) Thanks a lot! Fabian Radosław Piliszek > schrieb am Di., 29. Sept. 2020, 18:48: Hello, Folks! 
Letting you know I have proposed myself for TC member [1] and Masakari PTL positions [2]. Please find the relevant letters in references. Thank you for your time. [1] https://opendev.org/openstack/election/raw/branch/master/candidates/wallaby/TC/radoslaw.piliszek%40gmail.com [2] https://opendev.org/openstack/election/raw/branch/master/candidates/wallaby/Masakari/radoslaw.piliszek%40gmail.com -yoctozepto -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3606 bytes Desc: not available URL: From zigo at debian.org Wed Sep 30 08:33:32 2020 From: zigo at debian.org (Thomas Goirand) Date: Wed, 30 Sep 2020 10:33:32 +0200 Subject: Help with eventlet 0.26.1 and dnspython >= 2 In-Reply-To: References: <56ab7d1e-c8d6-8505-dd81-fb4a20534fdf@debian.org> Message-ID: On 9/30/20 12:43 AM, Tim Burke wrote: > > On 9/29/20 1:43 AM, Thomas Goirand wrote: >> >> On 9/28/20 6:10 PM, Radosław Piliszek wrote: >>> On Mon, Sep 28, 2020 at 5:30 PM Thomas Goirand wrote: >>>> >>>> Hi, >>>> >>>> As you may know, eventlet is incompatible with dnspython >= 2.0.0.0rc1. >>>> See [1] for the details. However, Debian unstable has 2.0.0. >>>> >>>> Would there be some good soul willing to help me fix this situation? I >>>> would need a patch to fix this, but I'm really not sure how to start. >>>> >>>> Cheers, >>>> >>>> Thomas Goirand (zigo) >>> >>> The [1] reference is missing. >>> >>> -yoctozepto >> >> Indeed, sorry. So: >> >> [1] https://github.com/eventlet/eventlet/issues/619 >> >> I've already integrated this patch in the Debian package: >> https://github.com/eventlet/eventlet/commit/46fc185c8f92008c65aef2713fc1445bfc5f6fec >> >> >> However, there's still this failure, related to #619 (linked above): >> >> ERROR: test_noraise_dns_tcp (tests.greendns_test.TinyDNSTests) >> -------------------------------------------------------------- >> Traceback (most recent call last): >>    File >> "/<>/.pybuild/cpython3_3.8_eventlet/build/tests/greendns_test.py", >> >> line 904, in test_noraise_dns_tcp >>      self.assertEqual(response.rrset.items[0].address, expected_ip) >> KeyError: 0 >> >> Can anyone solve this? >> >> Cheers, >> >> Thomas Goirand (zigo) >> > > Swapping out the assertion for one like > >   self.assertEqual( >       [rr.address for rr in response.rrset.items], >       [expected_ip]) > > should at least get tests passing, but the other issues identified are > still problems: > >> Eventlet will need to detect which version of dnspython is running >> (import dns.version) and monkey patch appropriately. Note that the >> raise_on_truncation is not the only change, as the af parameter is >> now gone, and you can also pass sockets to both udp() and tcp(). > > IDK how different udp()/tcp() should be between the pre/post-2.0.0 > versions, though. Part of me is tempted to try patching out > dns.query._wait_for instead... > > Tim Hi Tim, Thanks for your follow-up. Your advice above fixed the issue, however at least 2 tests aren't deterministic: - tests.subprocess_test.test_communicate_with_poll - tests.subprocess_test.test_communicate_with_poll They both failed at least once with as error: "BlockingIOError: [Errno 11] Resource temporarily unavailable". Any idea what's going on? I've uploaded eventlet 0.26.1 to Experimental anyways, but I really hope we can fix the remaining issues. 
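For reference, the kind of version check Tim mentions above would presumably look something like this in eventlet's greendns code (untested sketch; the compat_udp helper name is made up, only dns.version and dns.query are assumed):

import dns.query
import dns.version

DNSPYTHON_2 = dns.version.MAJOR >= 2

def compat_udp(q, where, timeout=None, port=53, af=None, **kwargs):
    # dnspython 2.x dropped the af= parameter (and grew raise_on_truncation
    # and sock=), so only forward af= when running against 1.x.
    if DNSPYTHON_2:
        return dns.query.udp(q, where, timeout=timeout, port=port, **kwargs)
    return dns.query.udp(q, where, timeout=timeout, port=port, af=af, **kwargs)

I guess dns.query.tcp() would need the same treatment.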
Cheers, Thomas Goirand (zigo) From gr at ham.ie Wed Sep 30 09:50:40 2020 From: gr at ham.ie (Graham Hayes) Date: Wed, 30 Sep 2020 10:50:40 +0100 Subject: [tc][all] Wallaby Cycle Community Goals In-Reply-To: References: <40b8e159-826c-cc07-d0f8-a9b21532d544@ham.ie> <174c0a40e62.10a3aebf382501.4488410530424253723@ghanshyammann.com> Message-ID: <278f70b4-c257-b983-f753-4b00772ef4d0@ham.ie> On 25/09/2020 19:22, Kendall Nelson wrote: > > > On Thu, Sep 24, 2020 at 9:36 AM Graham Hayes > wrote: > > On 24/09/2020 16:03, Ghanshyam Mann wrote: > >   ---- On Mon, 21 Sep 2020 12:53:17 -0500 Graham Hayes > wrote ---- > >   > Hi All > >   > > >   > It is that time of year / release again - and we need to > choose the > >   > community goals for Wallaby. > >   > > >   > Myself and Nate looked over the list of goals [1][2][3], and > we are > >   > suggesting one of the following: > >   > > >   > > > > > Thanks Graham, Nate for starting this. > > > >   >   - Finish moving legacy python-*client CLIs to > python-openstackclient > > > > Are not we going with popup team first for osc work? I am fine > with goal also but > > we should do this as multi-cycle goal with no other goal in > parallel so that we actually > > finish this on time. > > Yeah - this was just one of the goals we thought might have some > discussion, and we didn't know where the popup team was in their > work. > > If that work is still on going, we should leave the goal for > another cycle or two. > > > I don't think a *ton* of progress was made this last release, but I > could be wrong. I am guessing we will want to wait one more cycle before > making it a goal. I am 100% behind this being a goal at some point though. > Yeah, that makes sense to postpone then. > > >   >   - Move from oslo.rootwrap to oslo.privsep > > > > +1, this is already proposed goal since last cycle. > > > > -gmann > > > >   >   - Implement the API reference guide changes > >   >   - All API to provide a /healthcheck URL like Keystone (and > others) provide > >   > > >   > Some of these goals have champions signed up already, but we > need to > >   > make sure they are still available to do them. If you are > interested in > >   > helping drive any of the goals, please speak up! > >   > > >   > We need to select goals in time for the new release cycle - > so please > >   > reply if there is goals you think should be included in this > list, or > >   > not included. > >   > > >   > Next steps after this will be helping people write a proposed > goal > >   > and then the TC selecting the ones we will pursue during Wallaby. > >   > > >   > Additionally, we have traditionally selected 2 goals per cycle - > >   > however with the people available to do the work across projects > >   > Nate and I briefly discussed reducing that to one for this cycle. > >   > > >   > What does the community think about this? 
> >   > > >   > Thanks, > >   > > >   > Graham > >   > > >   > 1 - https://etherpad.opendev.org/p/community-goals > > >   > 2 - > https://governance.openstack.org/tc/goals/proposed/index.html > > >   > 3 - https://etherpad.opendev.org/p/community-w-series-goals > > >   > 4 - > >   > > https://governance.openstack.org/tc/goals/index.html#goal-selection-schedule > > >   > > >   > > > > > > > -Kendall (diablo_rojo) From smooney at redhat.com Wed Sep 30 10:29:52 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 30 Sep 2020 11:29:52 +0100 Subject: [tc][all] Wallaby Cycle Community Goals In-Reply-To: References: <40b8e159-826c-cc07-d0f8-a9b21532d544@ham.ie> Message-ID: On Thu, 2020-09-24 at 22:39 +0200, Thomas Goirand wrote: > On 9/21/20 7:53 PM, Graham Hayes wrote: > > Hi All > > > > It is that time of year / release again - and we need to choose the > > community goals for Wallaby. > > > > Myself and Nate looked over the list of goals [1][2][3], and we are > > suggesting one of the following: > > > > > > - Finish moving legacy python-*client CLIs to python-openstackclient > > Go go go !!! :) > > > - Move from oslo.rootwrap to oslo.privsep > > Dito. Rootwrap is painfully slow (because it takes too long to spawn a > python process...). > > > - Implement the API reference guide changes > > - All API to provide a /healthcheck URL like Keystone (and others) provide > > What about an "openstack purge " that would call all > projects? We once had a "/purge" goal, I'm not sure how far it went... > What I know, is that purging all resources of a project is currently > still a big painpoint. > > > Some of these goals have champions signed up already, but we need to > > make sure they are still available to do them. If you are interested in > > helping drive any of the goals, please speak up! > > I'm still available to attempt the /healthcheck thingy, I kind of > succeed in all major project but ... nova. Unfortunately, it was decided > in the project that we should put this on hold until the /healthcheck > can implement more check than just to know if the API is alive. 5 months > forward, I believe my original patch [1] should have been approved first > as a first approach. Nova team: any reaction? i actually dont think the healthcheck enpoint is userful in it current from for any porject that has distibuted process like nova, neutron or cinder. that was not the only concern raised either as the default content of the detail responce wich include package infomation was considerd a possible security vulnerablity so with out agreeing on what kindo fo info can be retruned, its format and wether this would be a admin only endpoint or a public endpoint tenant can check potentially without auth i dont think we should be procedding with this as a goal. https://github.com/openstack/oslo.middleware/blob/master/oslo_middleware/healthcheck/__init__.py#L150-L152 ^ is concerning from a security point of view. nova support configurable middelway still through the api-paste.ini file so untill we have a useful health check im not sure we should merge anything since operators can just enable it themselves if they want too. > Any progress on your > super-nice-health-check? Can this be implemented elsewhere using what > you've done? Maybe that work should go in oslo.middleware too? 
> > Cheers, > > Thomas Goirand (zigo) > > [1] https://review.opendev.org/#/c/724684/ > > > Additionally, we have traditionally selected 2 goals per cycle - > > however with the people available to do the work across projects > > Nate and I briefly discussed reducing that to one for this cycle. > > > > What does the community think about this? > > The /healthcheck is super-easy to implement for any project using > oslo.middleware, so please select that one (and others). It's also > mostly done... > > Cheers, > > Thomas Goirand (zigo) > From smooney at redhat.com Wed Sep 30 10:58:02 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 30 Sep 2020 11:58:02 +0100 Subject: Help with eventlet 0.26.1 and dnspython >= 2 In-Reply-To: References: <56ab7d1e-c8d6-8505-dd81-fb4a20534fdf@debian.org> Message-ID: <5489b743e2ee0052500a961aa99c3aa613e81caa.camel@redhat.com> On Wed, 2020-09-30 at 10:33 +0200, Thomas Goirand wrote: > On 9/30/20 12:43 AM, Tim Burke wrote: > > > > On 9/29/20 1:43 AM, Thomas Goirand wrote: > > > > > > On 9/28/20 6:10 PM, Radosław Piliszek wrote: > > > > On Mon, Sep 28, 2020 at 5:30 PM Thomas Goirand wrote: > > > > > > > > > > Hi, > > > > > > > > > > As you may know, eventlet is incompatible with dnspython >= 2.0.0.0rc1. > > > > > See [1] for the details. However, Debian unstable has 2.0.0. > > > > > > > > > > Would there be some good soul willing to help me fix this situation? I > > > > > would need a patch to fix this, but I'm really not sure how to start. > > > > > > > > > > Cheers, > > > > > > > > > > Thomas Goirand (zigo) > > > > > > > > The [1] reference is missing. > > > > > > > > -yoctozepto > > > > > > Indeed, sorry. So: > > > > > > [1] https://github.com/eventlet/eventlet/issues/619 > > > > > > I've already integrated this patch in the Debian package: > > > https://github.com/eventlet/eventlet/commit/46fc185c8f92008c65aef2713fc1445bfc5f6fec > > > > > > > > > However, there's still this failure, related to #619 (linked above): > > > > > > ERROR: test_noraise_dns_tcp (tests.greendns_test.TinyDNSTests) > > > -------------------------------------------------------------- > > > Traceback (most recent call last): > > > File > > > "/<>/.pybuild/cpython3_3.8_eventlet/build/tests/greendns_test.py", > > > > > > line 904, in test_noraise_dns_tcp > > > self.assertEqual(response.rrset.items[0].address, expected_ip) > > > KeyError: 0 > > > > > > Can anyone solve this? > > > > > > Cheers, > > > > > > Thomas Goirand (zigo) > > > > > > > Swapping out the assertion for one like > > > > self.assertEqual( > > [rr.address for rr in response.rrset.items], > > [expected_ip]) > > > > should at least get tests passing, but the other issues identified are > > still problems: > > > > > Eventlet will need to detect which version of dnspython is running > > > (import dns.version) and monkey patch appropriately. Note that the > > > raise_on_truncation is not the only change, as the af parameter is > > > now gone, and you can also pass sockets to both udp() and tcp(). > > > > IDK how different udp()/tcp() should be between the pre/post-2.0.0 > > versions, though. Part of me is tempted to try patching out > > dns.query._wait_for instead... > > > > Tim > > Hi Tim, > > Thanks for your follow-up. Your advice above fixed the issue, however at > least 2 tests aren't deterministic: > > - tests.subprocess_test.test_communicate_with_poll > - tests.subprocess_test.test_communicate_with_poll > > They both failed at least once with as error: "BlockingIOError: [Errno > 11] Resource temporarily unavailable". 
Any idea what's going on?
>
> I've uploaded eventlet 0.26.1 to Experimental anyways, but I really hope
> we can fix the remaining issues.

Just to be clear, that is not the only incompatibility with dnspython >= 2:
https://github.com/eventlet/eventlet/issues/632
so we cannot uncap this even if https://github.com/eventlet/eventlet/issues/619
is fixed. We also need to resolve the issue with SSL, which breaks the nova
console proxy: it breaks nova via websockify, which gets a
TypeError: _wrap_socket() argument 1 must be _socket.socket, not GreenSSLSocket
when you use dnspython 2, because it seems to monkey patch the SSL sockets
incorrectly. So we cannot support dnspython 2 until that is also resolved, or
the nova console won't work anymore.

>
> Cheers,
>
> Thomas Goirand (zigo)
>

From stendulker at gmail.com Wed Sep 30 11:19:44 2020
From: stendulker at gmail.com (Shivanand Tendulker)
Date: Wed, 30 Sep 2020 16:49:44 +0530
Subject: [ironic] Proposing returning Jay Faulkner to ironic-core
In-Reply-To:
References:
Message-ID:

++. Welcome back Jay !!

On Mon, Sep 28, 2020 at 7:15 PM Julia Kreger wrote:

> Greetings ironic contributors,
>
> I'm sure many of you have noticed JayF has been more active on IRC
> over the past year. Recently he started reviewing and providing
> feedback on changes in ironic-python-agent as well as some work to add
> new features and fix some very not fun bugs.
>
> Given that he was ironic-core when he departed the community, I
> believe it is only fair that we return those rights to him.
>
> Any objections?
>
> -Julia
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hrhosseini at hotmail.com Wed Sep 30 12:24:14 2020
From: hrhosseini at hotmail.com (Hamidreza Hosseini)
Date: Wed, 30 Sep 2020 12:24:14 +0000
Subject: Make Full Disks read-only in swift object storage
Message-ID:

Hi,
I have some disks on my swift object storage that have filled up, and I made
them read-only via /etc/fstab. I don't think this is the right adjustment.
Anyway, can I solve this problem (I mean the full disks) without changing the
ring, just by configuring the proxy to send PUT/GET requests to the disks that
still have space, but only GET requests to the full disks?

Best regards.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bob.monkman at intel.com Tue Sep 29 22:09:02 2020
From: bob.monkman at intel.com (Monkman, Bob)
Date: Tue, 29 Sep 2020 22:09:02 +0000
Subject: [Airship-discuss] [Diversity] Community feedback on Divisive Language stance
In-Reply-To:
References:
Message-ID:

Amy, et al,
Thanks for sharing this and I think Draft 5 seems like a good statement to
build from.
I am working on a Linux Foundation Networking (LFN) task force looking at this
issue for the projects under that umbrella. As a part of that process, on our
wiki page for this task force, we have compiled some links and a table of
example recommended replacement terms that could be of interest to your OSF
work as well.
Please see this reference at
https://wiki.lfnetworking.org/display/LN/Inclusive+Language+Initiative
If this is of interest.
Best regards, Bob Monkman LFN Contributor, Strategic Planning Committee member, Marketing Chair CNTT/OPNFV From: Amy Marrich Sent: Monday, September 21, 2020 11:36 AM To: starlingx-discuss at lists.starlingx.io; airship-discuss at lists.airshipit.org; kata-dev at lists.katacontainers.io; zuul-discuss at lists.zuul-ci.org; openstack-discuss ; foundation at lists.openstack.org Subject: [Airship-discuss] [Diversity] Community feedback on Divisive Language stance The OSF Diversity & Inclusion WG has been working on creating the OSF's stance concerning divisive language. We will be holding one more meeting before sending the stance to the OSF Board for any changes before bringing it back to the Community. Our goal however is to get your input now to reduce any concerns in the future! Please check out Draft 4 on the etherpad[0] and place your comments there and join us on October 5th (meeting information will be sent out closer to the meeting) Thanks, Amy (spotz) 0 - https://etherpad.opendev.org/p/divisivelanguage -------------- next part -------------- An HTML attachment was scrubbed... URL: From jesus-maria.aransay at unirioja.es Wed Sep 30 10:38:21 2020 From: jesus-maria.aransay at unirioja.es (Jesus Aransay) Date: Wed, 30 Sep 2020 12:38:21 +0200 Subject: Question about faafo Message-ID: Dear all, I know that faafo ( https://opendev.org/openstack/faafo ) is no longer being maintained, but I still use it to present some service deployment in my teaching about cloud systems. The application still runs nicely in Ubuntu1804, but I tried to configure the RabbitMQ queue to be created in the publisher (Service API), instead of in the consumers (Service Worker), but I failed. My intention was to show that messages could be published in the queue without any workers having been deployed yet. I thought adding a "maybe_declare" statement in the publisher "would do the trick": https://github.com/jmaransay/faafo/blob/478428f11d92cc22d1be396afec3c787136d35f7/faafo/api/service.py#L135 But still the message queue is only created when the first worker machine is deployed. Does anybody still have some clue about why the application was configured like that, or how the queue could be created in the publisher? Any hints would be highly appreciated, thank you very much in advance, Jesús -- Jesús María Aransay Azofra Universidad de La Rioja Dpto. de Matemáticas y Computación tlf.: (+34) 941299438 fax: (+34) 941299460 mail: jesus-maria.aransay at unirioja.es web: http://www.unirioja.es/cu/jearansa Edificio Científico Tecnológico-CCT, c/ Madre de Dios, 53 26006, Logroño, España -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelweingartner at gmail.com Wed Sep 30 13:19:04 2020 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Wed, 30 Sep 2020 10:19:04 -0300 Subject: [cloudkitty][election][ptl] PTL non-candidacy In-Reply-To: References: Message-ID: Hello Pierre, I am not that familiar with the PTL duties (yet), but I would like to volunteer to be the next CloudKitty PTL. On Tue, Sep 22, 2020 at 12:35 PM Pierre Riteau wrote: > Hello, > > Late in the Victoria cycle, I volunteered to help with the then > inactive CloudKitty project, which resulted in becoming its PTL. While > I plan to continue contributing to CloudKitty, I will have very > limited availability during the beginning of the Wallaby cycle. In > particular, I may not even be able to join the PTG. > > Thus it would be best if someone else ran for CloudKitty PTL this > cycle. 
If you are interested in nominating yourself but aren't sure > what is involved, don't hesitate to reach out to me by email or IRC. > > Thanks, > Pierre Riteau (priteau) > > -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Wed Sep 30 13:24:53 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Wed, 30 Sep 2020 15:24:53 +0200 Subject: [cloudkitty][election][ptl] PTL non-candidacy In-Reply-To: References: Message-ID: Hi Rafael, Thank you for volunteering! I'll be happy to help you. The window for submitting nominations for PTL has ended, so the Technical Committee will have to consider your proposal, following [1]. Best wishes, Pierre Riteau (priteau) [1] https://governance.openstack.org/tc/resolutions/20141128-elections-process-for-leaderless-programs.html On Wed, 30 Sep 2020 at 15:19, Rafael Weingärtner wrote: > > Hello Pierre, > I am not that familiar with the PTL duties (yet), but I would like to volunteer to be the next CloudKitty PTL. > > On Tue, Sep 22, 2020 at 12:35 PM Pierre Riteau wrote: >> >> Hello, >> >> Late in the Victoria cycle, I volunteered to help with the then >> inactive CloudKitty project, which resulted in becoming its PTL. While >> I plan to continue contributing to CloudKitty, I will have very >> limited availability during the beginning of the Wallaby cycle. In >> particular, I may not even be able to join the PTG. >> >> Thus it would be best if someone else ran for CloudKitty PTL this >> cycle. If you are interested in nominating yourself but aren't sure >> what is involved, don't hesitate to reach out to me by email or IRC. >> >> Thanks, >> Pierre Riteau (priteau) >> > > > -- > Rafael Weingärtner From mrunge at matthias-runge.de Wed Sep 30 13:25:17 2020 From: mrunge at matthias-runge.de (Matthias Runge) Date: Wed, 30 Sep 2020 15:25:17 +0200 Subject: [election][telemetry] PTL candidacy for Wallaby In-Reply-To: <387bc16d-facd-dd0a-620a-9359f9a7eb1f@matthias-runge.de> References: <387bc16d-facd-dd0a-620a-9359f9a7eb1f@matthias-runge.de> Message-ID: <5ea2f3b4-9d3d-18aa-2873-877dd793c60e@matthias-runge.de> On 29/09/2020 17:26, Matthias Runge wrote: > Hi there, > > I'd like to announce my candidacy to become the next > PTL for OpenStack Telemetry for the Wallaby cycle. On this note, I wouldn't want this to be seen as not trusting the current PTL Rong Zhu or stating of doing a bad job. However, Rong also stepped up to be the PTL of Solum and to be the PTL of Murano for the next cycle. I'm tipping my hat to the courage, but taking care of three projects at the same time seems to be a bit too much in my eyes. With that, I see stepping up myself to help telemetry a bit more would be in the best interest of the project. Matthias From amy at demarco.com Wed Sep 30 14:46:08 2020 From: amy at demarco.com (Amy Marrich) Date: Wed, 30 Sep 2020 09:46:08 -0500 Subject: Mentors needed Tomorrow!!!! Message-ID: Hey all, Found out over night the OpenStack session at Grace Hopper Conference's Open Source Day has 78 mentees registered and we have 6-8 stackers mentoring through the day. If you've ever wanted to help mentor the next hopeful generation of Stackers please let me know. We are working to adapt our plans for the day and do have the option of sending some to another track but that's not the OpenStack way! Thanks, Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zigo at debian.org Wed Sep 30 14:47:44 2020 From: zigo at debian.org (Thomas Goirand) Date: Wed, 30 Sep 2020 16:47:44 +0200 Subject: Help with eventlet 0.26.1 and dnspython >= 2 In-Reply-To: <5489b743e2ee0052500a961aa99c3aa613e81caa.camel@redhat.com> References: <56ab7d1e-c8d6-8505-dd81-fb4a20534fdf@debian.org> <5489b743e2ee0052500a961aa99c3aa613e81caa.camel@redhat.com> Message-ID: <4a27c961-354e-270c-5dc7-789a5770fe5c@debian.org> On 9/30/20 12:58 PM, Sean Mooney wrote: > On Wed, 2020-09-30 at 10:33 +0200, Thomas Goirand wrote: >> On 9/30/20 12:43 AM, Tim Burke wrote: >>> >>> On 9/29/20 1:43 AM, Thomas Goirand wrote: >>>> >>>> On 9/28/20 6:10 PM, Radosław Piliszek wrote: >>>>> On Mon, Sep 28, 2020 at 5:30 PM Thomas Goirand wrote: >>>>>> >>>>>> Hi, >>>>>> >>>>>> As you may know, eventlet is incompatible with dnspython >= 2.0.0.0rc1. >>>>>> See [1] for the details. However, Debian unstable has 2.0.0. >>>>>> >>>>>> Would there be some good soul willing to help me fix this situation? I >>>>>> would need a patch to fix this, but I'm really not sure how to start. >>>>>> >>>>>> Cheers, >>>>>> >>>>>> Thomas Goirand (zigo) >>>>> >>>>> The [1] reference is missing. >>>>> >>>>> -yoctozepto >>>> >>>> Indeed, sorry. So: >>>> >>>> [1] https://github.com/eventlet/eventlet/issues/619 >>>> >>>> I've already integrated this patch in the Debian package: >>>> https://github.com/eventlet/eventlet/commit/46fc185c8f92008c65aef2713fc1445bfc5f6fec >>>> >>>> >>>> However, there's still this failure, related to #619 (linked above): >>>> >>>> ERROR: test_noraise_dns_tcp (tests.greendns_test.TinyDNSTests) >>>> -------------------------------------------------------------- >>>> Traceback (most recent call last): >>>> File >>>> "/<>/.pybuild/cpython3_3.8_eventlet/build/tests/greendns_test.py", >>>> >>>> line 904, in test_noraise_dns_tcp >>>> self.assertEqual(response.rrset.items[0].address, expected_ip) >>>> KeyError: 0 >>>> >>>> Can anyone solve this? >>>> >>>> Cheers, >>>> >>>> Thomas Goirand (zigo) >>>> >>> >>> Swapping out the assertion for one like >>> >>> self.assertEqual( >>> [rr.address for rr in response.rrset.items], >>> [expected_ip]) >>> >>> should at least get tests passing, but the other issues identified are >>> still problems: >>> >>>> Eventlet will need to detect which version of dnspython is running >>>> (import dns.version) and monkey patch appropriately. Note that the >>>> raise_on_truncation is not the only change, as the af parameter is >>>> now gone, and you can also pass sockets to both udp() and tcp(). >>> >>> IDK how different udp()/tcp() should be between the pre/post-2.0.0 >>> versions, though. Part of me is tempted to try patching out >>> dns.query._wait_for instead... >>> >>> Tim >> >> Hi Tim, >> >> Thanks for your follow-up. Your advice above fixed the issue, however at >> least 2 tests aren't deterministic: >> >> - tests.subprocess_test.test_communicate_with_poll >> - tests.subprocess_test.test_communicate_with_poll >> >> They both failed at least once with as error: "BlockingIOError: [Errno >> 11] Resource temporarily unavailable". Any idea what's going on? >> >> I've uploaded eventlet 0.26.1 to Experimental anyways, but I really hope >> we can fix the remaining issues. 
> just to be clear that is not the only incompatiably with dnspython >=2 > https://github.com/eventlet/eventlet/issues/632 > so we cannot uncap this even if https://github.com/eventlet/eventlet/issues/619 > is fixed we also need to resolve teh issue wtih ssl which breaks the nova console proxy > that is breaking nova via websockify which gets a > TypeError: _wrap_socket() argument 1 must be _socket.socket, not GreenSSLSocket > > when you use dnspython it seams to monkey patch the ssl sockets incorrectly. > so we cannot support dnspython2 untill that is also resolved or nova console wont work anymore. If this cannot be resolved before February, there's the risk that Debian Bullseye will have a broken OpenStack then. I don't think it's going to be easy to convince the maintainer of dnspython in Debian to revert his upload when it's been already 3 months dnspython 2.0 was released. Do we know if there is anything apart from the Nova console that will be broken, somehow? Cheers, Thomas Goirand (zigo) From radoslaw.piliszek at gmail.com Wed Sep 30 15:19:18 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 30 Sep 2020 17:19:18 +0200 Subject: Mentors needed Tomorrow!!!! In-Reply-To: References: Message-ID: Hi Amy, How can one help? -yoctozepto On Wed, Sep 30, 2020 at 4:49 PM Amy Marrich wrote: > > Hey all, > > Found out over night the OpenStack session at Grace Hopper Conference's Open Source Day has 78 mentees registered and we have 6-8 stackers mentoring through the day. > > If you've ever wanted to help mentor the next hopeful generation of Stackers please let me know. We are working to adapt our plans for the day and do have the option of sending some to another track but that's not the OpenStack way! > > Thanks, > > Amy (spotz) From amy at demarco.com Wed Sep 30 15:24:45 2020 From: amy at demarco.com (Amy Marrich) Date: Wed, 30 Sep 2020 10:24:45 -0500 Subject: Mentors needed Tomorrow!!!! In-Reply-To: References: Message-ID: This is an all day virtual event using Zoom and Slack starting at 9:00am PST, with mentors arriving a little earlier. Tentative Agenda for tomorrow - 9:30 - 10:00 Introductions Introduce ourselves and mentors Upcoming opportunities (events, mentoring, etc) Collect Emails Go over agenda for the day (make adjustments based on feedback of any sessions attendees want to attend, breaks needed, etc) Break up into Peer Programming groups based on interest or open slots for late comers 10:00 - 10:30 Quick overview of OpenStack using a pre-installed devstack 10:30-10:45 Break 10:45 - 12:00 Peer Programming Based on the interest of your group set up for Git and Gerrit or just Peer Program 12:00 - 12:30 - lunch 12:30 - 1 - speed mentoring 1:00 - 3:00 More Peer Programming 3:00 - 3:15 Break 3:15 - 4:00 Devstack installs or Setting up Git and Gerrit? (edited) For the peer programming we were basically planning on having Stackers sharing their screens as they work through the process of looking through the bugs, 'finding' one, and working through the process of working on it. Thanks, Amy On Wed, Sep 30, 2020 at 10:19 AM Radosław Piliszek < radoslaw.piliszek at gmail.com> wrote: > Hi Amy, > > How can one help? > > -yoctozepto > > On Wed, Sep 30, 2020 at 4:49 PM Amy Marrich wrote: > > > > Hey all, > > > > Found out over night the OpenStack session at Grace Hopper Conference's > Open Source Day has 78 mentees registered and we have 6-8 stackers > mentoring through the day. 
> > > > If you've ever wanted to help mentor the next hopeful generation of > Stackers please let me know. We are working to adapt our plans for the day > and do have the option of sending some to another track but that's not the > OpenStack way! > > > > Thanks, > > > > Amy (spotz) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Wed Sep 30 15:30:26 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 30 Sep 2020 17:30:26 +0200 Subject: Mentors needed Tomorrow!!!! In-Reply-To: References: Message-ID: On Wed, Sep 30, 2020 at 5:24 PM Amy Marrich wrote: > > This is an all day virtual event using Zoom and Slack starting at 9:00am PST, with mentors arriving a little earlier. Uh-oh, the timezone is a blocker for me, sorry! :-( I hope someone else can help there. -yoctozepto From frode.nordahl at canonical.com Wed Sep 30 15:37:42 2020 From: frode.nordahl at canonical.com (Frode Nordahl) Date: Wed, 30 Sep 2020 17:37:42 +0200 Subject: [election][charms] PTL candidacy for Wallaby Message-ID: Hello all, I hereby announce my candidacy for PTL for the Charms project. There is a lot of good movement in the community and I'm really excited about what the next cycle might bring. Apologies for missing the nomination deadline. -- Frode Nordahl From smooney at redhat.com Wed Sep 30 15:50:46 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 30 Sep 2020 16:50:46 +0100 Subject: Help with eventlet 0.26.1 and dnspython >= 2 In-Reply-To: <4a27c961-354e-270c-5dc7-789a5770fe5c@debian.org> References: <56ab7d1e-c8d6-8505-dd81-fb4a20534fdf@debian.org> <5489b743e2ee0052500a961aa99c3aa613e81caa.camel@redhat.com> <4a27c961-354e-270c-5dc7-789a5770fe5c@debian.org> Message-ID: On Wed, 2020-09-30 at 16:47 +0200, Thomas Goirand wrote: > On 9/30/20 12:58 PM, Sean Mooney wrote: > > On Wed, 2020-09-30 at 10:33 +0200, Thomas Goirand wrote: > > > On 9/30/20 12:43 AM, Tim Burke wrote: > > > > > > > > On 9/29/20 1:43 AM, Thomas Goirand wrote: > > > > > > > > > > On 9/28/20 6:10 PM, Radosław Piliszek wrote: > > > > > > On Mon, Sep 28, 2020 at 5:30 PM Thomas Goirand wrote: > > > > > > > > > > > > > > Hi, > > > > > > > > > > > > > > As you may know, eventlet is incompatible with dnspython >= 2.0.0.0rc1. > > > > > > > See [1] for the details. However, Debian unstable has 2.0.0. > > > > > > > > > > > > > > Would there be some good soul willing to help me fix this situation? I > > > > > > > would need a patch to fix this, but I'm really not sure how to start. > > > > > > > > > > > > > > Cheers, > > > > > > > > > > > > > > Thomas Goirand (zigo) > > > > > > > > > > > > The [1] reference is missing. > > > > > > > > > > > > -yoctozepto > > > > > > > > > > Indeed, sorry. 
So: > > > > > > > > > > [1] https://github.com/eventlet/eventlet/issues/619 > > > > > > > > > > I've already integrated this patch in the Debian package: > > > > > https://github.com/eventlet/eventlet/commit/46fc185c8f92008c65aef2713fc1445bfc5f6fec > > > > > > > > > > > > > > > However, there's still this failure, related to #619 (linked above): > > > > > > > > > > ERROR: test_noraise_dns_tcp (tests.greendns_test.TinyDNSTests) > > > > > -------------------------------------------------------------- > > > > > Traceback (most recent call last): > > > > > File > > > > > "/<>/.pybuild/cpython3_3.8_eventlet/build/tests/greendns_test.py", > > > > > > > > > > line 904, in test_noraise_dns_tcp > > > > > self.assertEqual(response.rrset.items[0].address, expected_ip) > > > > > KeyError: 0 > > > > > > > > > > Can anyone solve this? > > > > > > > > > > Cheers, > > > > > > > > > > Thomas Goirand (zigo) > > > > > > > > > > > > > Swapping out the assertion for one like > > > > > > > > self.assertEqual( > > > > [rr.address for rr in response.rrset.items], > > > > [expected_ip]) > > > > > > > > should at least get tests passing, but the other issues identified are > > > > still problems: > > > > > > > > > Eventlet will need to detect which version of dnspython is running > > > > > (import dns.version) and monkey patch appropriately. Note that the > > > > > raise_on_truncation is not the only change, as the af parameter is > > > > > now gone, and you can also pass sockets to both udp() and tcp(). > > > > > > > > IDK how different udp()/tcp() should be between the pre/post-2.0.0 > > > > versions, though. Part of me is tempted to try patching out > > > > dns.query._wait_for instead... > > > > > > > > Tim > > > > > > Hi Tim, > > > > > > Thanks for your follow-up. Your advice above fixed the issue, however at > > > least 2 tests aren't deterministic: > > > > > > - tests.subprocess_test.test_communicate_with_poll > > > - tests.subprocess_test.test_communicate_with_poll > > > > > > They both failed at least once with as error: "BlockingIOError: [Errno > > > 11] Resource temporarily unavailable". Any idea what's going on? > > > > > > I've uploaded eventlet 0.26.1 to Experimental anyways, but I really hope > > > we can fix the remaining issues. > > > > just to be clear that is not the only incompatiably with dnspython >=2 > > https://github.com/eventlet/eventlet/issues/632 > > so we cannot uncap this even if https://github.com/eventlet/eventlet/issues/619 > > is fixed we also need to resolve teh issue wtih ssl which breaks the nova console proxy > > that is breaking nova via websockify which gets a > > TypeError: _wrap_socket() argument 1 must be _socket.socket, not GreenSSLSocket > > > > when you use dnspython it seams to monkey patch the ssl sockets incorrectly. > > so we cannot support dnspython2 untill that is also resolved or nova console wont work anymore. > > If this cannot be resolved before February, there's the risk that Debian > Bullseye will have a broken OpenStack then. I don't think it's going to > be easy to convince the maintainer of dnspython in Debian to revert his > upload when it's been already 3 months dnspython 2.0 was released. > > Do we know if there is anything apart from the Nova console that will be > broken, somehow? 
We do not know if there are other failures. Neutron has a separate issue,
which was tracked by https://github.com/eventlet/eventlet/issues/619, and nova
hit the SSL issue with websockify and eventlet, tracked by
https://github.com/eventlet/eventlet/issues/632.
So the issue is really that eventlet is not compatible with dnspython 2.0:
before OpenStack can uncap dnspython, eventlet needs to gain support for
dnspython 2.0, and that should hopefully resolve the issues that nova, neutron
and other projects are now hitting.
It is unlikely that this is something we can resolve in OpenStack alone, not
unless we are willing to monkey patch eventlet and other dependencies, so
really we need to work with eventlet and/or dnspython to resolve the
incompatibility caused by the dnspython changes in 2.0.

>
> Cheers,
>
> Thomas Goirand (zigo)
>

From gouthampravi at gmail.com Wed Sep 30 15:51:28 2020
From: gouthampravi at gmail.com (Goutham Pacha Ravi)
Date: Wed, 30 Sep 2020 08:51:28 -0700
Subject: [manila] GHC / Canceling the IRC Meeting this week (1st Oct 2020)
Message-ID:

Hello Zorillas,

A number of us are mentoring attendees of the "Open Source Day" at the Grace
Hopper Celebration tomorrow (1st Oct 2020). Since the agenda [1] had no new
items, we'll forego the meeting slot and convene next week instead. If there's
anything urgent that needs to be discussed, please share your thoughts on
#openstack-manila, or as an email here.

If you're available tomorrow, and would like to help with GHC as a mentor,
please see Amy's email to the list. [2]

Thanks,
Goutham

[1] https://wiki.openstack.org/wiki/Manila/Meetings#Next_meeting
[2] http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017681.html

From e0ne at e0ne.info Wed Sep 30 19:10:41 2020
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Wed, 30 Sep 2020 22:10:41 +0300
Subject: [election][horizon] PTL candidacy for Wallaby
Message-ID:

Hi team,

I would like to announce my candidacy for Horizon PTL for the Wallaby cycle.

During the Victoria cycle we did a good job with stability improvements and
feature development, but there are still some areas we need to pay attention
to during the Wallaby development cycle:
* More cross-project testing.
* Find a solution on how to improve UX and make the feature gap smaller.
* Involve more contributors into the community.

Thanks for reading this and helping me make Horizon better during each
release,
Ivan Kolodyazhny (e0ne)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mnaser at vexxhost.com Wed Sep 30 19:56:37 2020
From: mnaser at vexxhost.com (Mohammed Naser)
Date: Wed, 30 Sep 2020 15:56:37 -0400
Subject: [tc] monthly meeting
Message-ID:

Hi everyone,

Here's the agenda for our monthly TC meeting. It will happen tomorrow
(Thursday the 1st) at 1400 UTC in #openstack-tc and I will be your chair.
If you can't attend, please put your name in the "Apologies for Absence"
section.

https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting

* ACTIVE INITIATIVES
- Follow up on past action items
- OpenStack User-facing APIs and CLIs (belmoreira)
- W cycle goal selection start
- Completion of retirement cleanup (gmann)
  https://etherpad.opendev.org/p/tc-retirement-cleanup
- Audit and clean-up tags (gmann)
- Remove tc:approved-release tag https://review.opendev.org/#/c/749363/

Thanks,
Mohammed

--
Mohammed Naser
VEXXHOST, Inc.
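On the eventlet/dnspython point above, the kind of version detection eventlet
would need can be illustrated with a small shim. The wrapper below is purely
illustrative (the helper name and the signature-filtering approach are
assumptions, not eventlet's actual code); it only shows how dns.version can be
used to branch on the installed release, and how keyword arguments that only
one major version accepts (such as 'af' or 'raise_on_truncation') can be
dropped before delegating:

    import inspect

    import dns.query
    import dns.version

    # dnspython 2.x removed the 'af' argument and added 'raise_on_truncation'
    # (and socket passing) to dns.query.udp()/tcp().
    DNSPYTHON2 = dns.version.MAJOR >= 2


    def udp_compat(q, where, timeout=None, **kwargs):
        """Call dns.query.udp(), dropping kwargs the installed version rejects."""
        accepted = inspect.signature(dns.query.udp).parameters
        kwargs = {k: v for k, v in kwargs.items() if k in accepted}
        return dns.query.udp(q, where, timeout=timeout, **kwargs)

Note that this does nothing for the GreenSSLSocket/websockify problem
mentioned in the same thread, which is a separate monkey-patching issue.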
From zigo at debian.org Wed Sep 30 20:08:35 2020 From: zigo at debian.org (Thomas Goirand) Date: Wed, 30 Sep 2020 22:08:35 +0200 Subject: [tc][all] Wallaby Cycle Community Goals In-Reply-To: References: <40b8e159-826c-cc07-d0f8-a9b21532d544@ham.ie> Message-ID: <145a7b7a-6022-08f0-e647-255dcca4fe6e@debian.org> Hi Sean, Thanks for your reply and sharing your concerns. I still don't agree with you, and here's why. On 9/30/20 12:29 PM, Sean Mooney wrote: > On Thu, 2020-09-24 at 22:39 +0200, Thomas Goirand wrote: >> I'm still available to attempt the /healthcheck thingy, I kind of >> succeed in all major project but ... nova. Unfortunately, it was decided >> in the project that we should put this on hold until the /healthcheck >> can implement more check than just to know if the API is alive. 5 months >> forward, I believe my original patch [1] should have been approved first >> as a first approach. Nova team: any reaction? > i actually dont think the healthcheck enpoint is userful in it current from for any porject > that has distibuted process like nova, neutron or cinder. With my operator's hat on, I assure you that it is very useful to wire-up haproxy. This is btw exactly what it was designed for. The way haproxy works, is that it actually has to perform an http connection to check if the web service is alive. Without a specific URL, we can't filter-out that /healthcheck URL from the saved logs in Elastic search, which is very annoying. Not doing a real HTTP check means one falls back to a TCP check, which means that your logs are polluted with so many "client disconnected unexpectedly" (that's not the actual message, but that's what it is doing) since haproxy otherwise does a TCP connection, then closes it before what a normal HTTP query would do. I've been told to use other URLs, which isn't the way to go. I very much think the /healthcheck URL was well designed, and should be used. > that was not the only concern raised > either as the default content of the detail responce wich include package infomation was considerd > a possible security vulnerablity so with out agreeing on what kindo fo info can be retruned, its format and > wether this would be a admin only endpoint or a public endpoint tenant can check potentially without auth > i dont think we should be procedding with this as a goal. Are you aware that one could simply use "/" of most projects without auth, and get the version of the project? Example with Nova: curl -s https://api.example.com/compute/ | jq . { "versions": [ { "id": "v2.0", "status": "SUPPORTED", "version": "", "min_version": "", "updated": "2011-01-21T11:33:21Z", "links": [ { "rel": "self", "href": "https://api.example.com/v2/" } ] }, { "id": "v2.1", "status": "CURRENT", "version": "2.79", "min_version": "2.1", "updated": "2013-07-23T11:33:21Z", "links": [ { "rel": "self", "href": "https://clint1-api.cloud.infomaniak.ch/v2.1/" } ] } ] } I believe "version": "2.79" is the microversion of the Nova API, which therefore, exposes what version of Nova (here: Train). Am I correct? I believe we also must leave this, because clients must be able to discover the micro-version of the API, right? Then, if I'm right, isn't this a much more annoying problem than having a /healtcheck URL which could anyway be filtered by an HAProxy in front? (note: I don't think the above is really a problem, since the micro-version doesn't tell if Nova has been patched for security or not...) 
> https://github.com/openstack/oslo.middleware/blob/master/oslo_middleware/healthcheck/__init__.py#L150-L152 > ^ is concerning from a security point of view. This needs to be explicitly enabled in the api-paste.ini, and is otherwise not displayed. Here's what I get in my Train deployment with the query suggested in the example you gave: { "detailed": false, "reasons": [] } The code in here: https://github.com/openstack/oslo.middleware/blob/master/oslo_middleware/healthcheck/__init__.py#L387 shows I'm right. That's also well documented here: https://docs.openstack.org/oslo.middleware/latest/reference/healthcheck_plugins.html See where it says this: # set to True to enable detailed output, False is the default detailed = False in the api-paste.ini example. So here, the point you are making isn't IMO valid. > nova support configurable middelway still through the api-paste.ini file > so untill we have a useful health check As we discussed earlier, what is *not* useful, is any healthcheck that would do more than what oslo.middleware does right now without caching things, because that /healthcheck URL is typically called multiple times per seconds (in our deployment: at least 6 times per seconds), so it needs to reply fast. So I hope that the super-nice-healthcheck thingy wont fry my server CPUs ... :) What is *not* useful as well, is delaying such a trivial patch for more than 6 months, just in the hope that in a distant future, we may have something better. Sure, take your time, get something implemented that does a nice healtcheck with db access and rabbitmq connectivity checks. But that should in no way get in the path of having a configuration which works for everyone by default. Cheers, Thomas Goirand (zigo) From openstack at nemebean.com Wed Sep 30 21:35:10 2020 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 30 Sep 2020 16:35:10 -0500 Subject: [oslo] Project leadership Message-ID: <328f380e-c16a-7fd4-a1fd-154b07ede01d@nemebean.com> Hi all, As you may have noticed, we had no nominees for Oslo PTL again this cycle. I said in my non-candidacy email that I don't think it would be appropriate for me to continue in the position given the lack of attention I've been paying over the past couple of cycles. My feelings on that have not changed, so we need to figure out what we're going to do from here on. The general consensus from the people I've talked to seems to be a distributed model. To kick off that discussion, here's a list of roles that I think should be filled in some form: * Release liaison * Security point-of-contact * TC liaison * Cross-project point-of-contact * PTG/Forum coordinator * Meeting chair * Community goal liaison (almost forgot this one since I haven't actually been doing it ;-). I've probably missed a few, but those are the the ones I came up with to start. Obviously there's overlap between some of them, and we don't have that many people on the team so there's going to be some combined roles. It's also possible some roles could be handled cooperatively by the team (we already have a coresec team, for example), but in general I favor explicit assignments over vague "somebody will do it" statements. That said, given my reduced role on the team I don't know how much say I get in the matter. :-) Anyway, I wanted to get this sent out because I'm going to be out on PTO next week so I won't be available for the meeting. Feel free to discuss it in my absence though. Thanks. 
-Ben From fungi at yuggoth.org Wed Sep 30 21:48:25 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 30 Sep 2020 21:48:25 +0000 Subject: [oslo] Project leadership In-Reply-To: <328f380e-c16a-7fd4-a1fd-154b07ede01d@nemebean.com> References: <328f380e-c16a-7fd4-a1fd-154b07ede01d@nemebean.com> Message-ID: <20200930214825.v7hvjec2ejffay55@yuggoth.org> On 2020-09-30 16:35:10 -0500 (-0500), Ben Nemec wrote: [...] > The general consensus from the people I've talked to seems to be a > distributed model. To kick off that discussion, here's a list of roles that > I think should be filled in some form: > > * Release liaison > * Security point-of-contact > * TC liaison > * Cross-project point-of-contact > * PTG/Forum coordinator > * Meeting chair > * Community goal liaison (almost forgot this one since I haven't actually > been doing it ;-). > > I've probably missed a few, but those are the the ones I came up with to > start. [...] The TC resolution on distributed project leadership also includes a recommended list of liaison roles, though it's basically what you outlined above: https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From smooney at redhat.com Wed Sep 30 22:13:42 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 30 Sep 2020 23:13:42 +0100 Subject: [tc][all] Wallaby Cycle Community Goals In-Reply-To: <145a7b7a-6022-08f0-e647-255dcca4fe6e@debian.org> References: <40b8e159-826c-cc07-d0f8-a9b21532d544@ham.ie> <145a7b7a-6022-08f0-e647-255dcca4fe6e@debian.org> Message-ID: <25c8cc54dc2fae53f16420e52bca9395eaa88c79.camel@redhat.com> On Wed, 2020-09-30 at 22:08 +0200, Thomas Goirand wrote: > Hi Sean, > > Thanks for your reply and sharing your concerns. I still don't agree > with you, and here's why. > > On 9/30/20 12:29 PM, Sean Mooney wrote: > > On Thu, 2020-09-24 at 22:39 +0200, Thomas Goirand wrote: > > > I'm still available to attempt the /healthcheck thingy, I kind of > > > succeed in all major project but ... nova. Unfortunately, it was decided > > > in the project that we should put this on hold until the /healthcheck > > > can implement more check than just to know if the API is alive. 5 months > > > forward, I believe my original patch [1] should have been approved first > > > as a first approach. Nova team: any reaction? > > > > i actually dont think the healthcheck enpoint is userful in it current from for any porject > > that has distibuted process like nova, neutron or cinder. > > With my operator's hat on, I assure you that it is very useful to > wire-up haproxy. This is btw exactly what it was designed for. > > The way haproxy works, is that it actually has to perform an http > connection to check if the web service is alive. Without a specific URL, > we can't filter-out that /healthcheck URL from the saved logs in Elastic > search, which is very annoying. cant you just hit the root of the service? that is unauthenticated for microversion version discovery so haproxy could simple use / for a http check if its just bing used to test if the rest api is running. 
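For illustration, either of those URLs can be wired into haproxy as an HTTP
check; a minimal backend sketch (the backend name, server addresses and ports
are placeholders, not values from this thread) would be:

    backend nova_api
        # /healthcheck (oslo.middleware) answers 200 when the API process is
        # up; GET / (the unauthenticated version document) can be substituted
        # if the deployment does not expose /healthcheck.
        option httpchk GET /healthcheck
        http-check expect status 200
        server ctl1 192.0.2.11:8774 check inter 2s fall 3 rise 2
        server ctl2 192.0.2.12:8774 check inter 2s fall 3 rise 2

This keeps the check an HTTP request, so it can be filtered out of the access
logs by URL, instead of a bare TCP connect.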
>
> Not doing a real HTTP check means one falls back to a TCP check, which
> means that your logs are polluted with so many "client disconnected
> unexpectedly" (that's not the actual message, but that's what it is
> doing) since haproxy otherwise does a TCP connection, then closes it
> before what a normal HTTP query would do.
>
> I've been told to use other URLs, which isn't the way to go. I very much
> think the /healthcheck URL was well designed, and should be used.
>
> > that was not the only concern raised
> > either as the default content of the detail responce wich include package infomation was considerd
> > a possible security vulnerablity so with out agreeing on what kindo fo info can be retruned, its format and
> > wether this would be a admin only endpoint or a public endpoint tenant can check potentially without auth
> > i dont think we should be procedding with this as a goal.
>
> Are you aware that one could simply use "/" of most projects without
> auth, and get the version of the project? Example with Nova:
>
> curl -s https://api.example.com/compute/ | jq .
>
> {
>   "versions": [
>     {
>       "id": "v2.0",
>       "status": "SUPPORTED",
>       "version": "",
>       "min_version": "",
>       "updated": "2011-01-21T11:33:21Z",
>       "links": [
>         {
>           "rel": "self",
>           "href": "https://api.example.com/v2/"
>         }
>       ]
>     },
>     {
>       "id": "v2.1",
>       "status": "CURRENT",
>       "version": "2.79",
>       "min_version": "2.1",
>       "updated": "2013-07-23T11:33:21Z",
>       "links": [
>         {
>           "rel": "self",
>           "href": "https://clint1-api.cloud.infomaniak.ch/v2.1/"
>         }
>       ]
>     }
>   ]
> }

Yes, so this is the endpoint I would expect people to use as an alternative to
/healthcheck; I'm not suggesting we do not.

> I believe "version": "2.79" is the microversion of the Nova API, which
> therefore, exposes what version of Nova (here: Train). Am I correct?

No, you are not. It does not expose the package information; it tells you the
max microversion the API supports, but that is a different thing. We don't
always bump the microversion in a release: Ussuri and Victoria share the same
microversion
https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#maximum-in-ussuri-and-victoria
and the microversion also won't change on a stable branch no matter what bugs
exist or have been patched.

> I
> believe we also must leave this, because clients must be able to
> discover the micro-version of the API, right?

Yes, without this no client can determine what API version is supported by a
specific cloud. This is intended to be a public endpoint with no auth for that
reason.

>
> Then, if I'm right, isn't this a much more annoying problem than having
> a /healtcheck URL which could anyway be filtered by an HAProxy in front?

I'm not following; this was intended to be public from the start and does not
have the same issue as the healthcheck API.

> (note: I don't think the above is really a problem, since the
> micro-version doesn't tell if Nova has been patched for security or not...)

Correct, it's not, and it is the endpoint that I would suggest using in the
absence of a /healthcheck endpoint, until we can develop one that actually
reports the health of the service and not just the API.

>
> > https://github.com/openstack/oslo.middleware/blob/master/oslo_middleware/healthcheck/__init__.py#L150-L152
> > ^ is concerning from a security point of view.
>
> This needs to be explicitly enabled in the api-paste.ini, and is
> otherwise not displayed. Here's what I get in my Train deployment with
> the query suggested in the example you gave:
>
> {
>   "detailed": false,
>   "reasons": []
> }
>
> The code in here:
> https://github.com/openstack/oslo.middleware/blob/master/oslo_middleware/healthcheck/__init__.py#L387
>
> shows I'm right. That's also well documented here:
> https://docs.openstack.org/oslo.middleware/latest/reference/healthcheck_plugins.html
>
> See where it says this:
>
> # set to True to enable detailed output, False is the default
> detailed = False
>
> in the api-paste.ini example.
>
> So here, the point you are making isn't IMO valid.
>
> > nova support configurable middelway still through the api-paste.ini file
> > so untill we have a useful health check
>
> As we discussed earlier, what is *not* useful, is any healthcheck that
> would do more than what oslo.middleware does right now without caching
> things, because that /healthcheck URL is typically called multiple times
> per seconds (in our deployment: at least 6 times per seconds), so it
> needs to reply fast. So I hope that the super-nice-healthcheck thingy
> wont fry my server CPUs ... :)

What we were thinking was basically that, from the API service handling the
request, we would confirm that the DB and message bus were accessible, and
that the API instance could reach the conductor and maybe the scheduler
services, or assert they were active via a DB check. If the generic oslo ping
RPC was added we could use that, but I think dansmith had a simpler proposal:
cache it based on whether we were able to connect during normal operation, and
just have the API check look at the in-memory value. I.e. if the last attempt
to read from the DB failed to connect, we would set a global variable, e.g.
DB_ACCESSIBLE=False, and the next time it succeeded we would set it back to
True. The health check would just read the global, so there should be little
to no overhead vs what oslo does; this would basically cache the last known
state, and the health check is just doing the equivalent of "return
DB_ACCESSIBLE and RPC_ACCESSIBLE" (a rough sketch of this idea follows at the
end of this message). If detail=true was set it could do a more advanced check
and look at the service statuses etc., which would be a DB query, but
detail=false is just a memory read of two variables.

>
> What is *not* useful as well, is delaying such a trivial patch for more
> than 6 months, just in the hope that in a distant future, we may have
> something better.

But as you yourself pointed out, almost every service has a / endpoint that is
used for microversion discovery and is public, so not implementing
/healthcheck in nova does not block you from using / as the healthcheck URL,
and you can enable the oslo endpoint if you choose to by enabling the
middleware in your local deployment.
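As a concrete illustration of that last point, enabling the existing
oslo.middleware endpoint is only a paste configuration change. The snippet
below is a sketch based on the oslo.middleware healthcheck documentation
rather than nova's shipped api-paste.ini, so the section names, the composite
it is mapped into, and the disable-file path are assumptions to adapt to the
actual file:

    [app:healthcheck]
    paste.app_factory = oslo_middleware:Healthcheck.app_factory
    # disable_by_file lets an operator take an API node out of the haproxy
    # pool by touching a file, without stopping the service.
    backends = disable_by_file
    disable_by_file_path = /etc/nova/healthcheck_disable
    # 'detailed = True' adds the verbose output that raised the security
    # concerns discussed above; leaving it at the default (False) keeps the
    # response to the minimal document shown earlier in the thread.

    # Then route the path in the existing urlmap composite, for example:
    # [composite:osapi_compute]
    # use = call:nova.api.openstack.urlmap:urlmap_factory
    # /healthcheck: healthcheck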
for usecause outside fo haproxy failover a bad health check is arguable worse then no healthcheck. im not unsympathic to your request but with what oslo does by default we would basically have to document that this should not be used to monitor the healthy of the nova service to prempt the bug reports we would get from customers related to this. we have already got several bug reports to the status of vm not matching reality when connectivity to the cell is down. e.g. when we cant connect to the cell database if the vm is stoped say via a power off vis ssh then its state will not be reflected in a nova show. if we were willing to add a big warning and clearly call out that this is just saying the api is accesable but not necessarily functional then i would be more ok with what olso provides but it does not tell you anything about the health of nova or if any other api request will actually work. i would suggest adding this to the nova ptg etherpad if you want to move this forward in nova in particular. > > Cheers, > > Thomas Goirand (zigo) >