From miguel at mlavalle.com Mon Sep 2 03:26:47 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Sun, 1 Sep 2019 22:26:47 -0500 Subject: [openstack-dev] [neutron] Cancelling Neutron weekly meeting on September 2nd Message-ID: Hi Neutrinos, September 2nd is a holiday in the USA, so we will cancel our weekly meeting. We will resume on Tuesday 10th Best regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From tbechtold at suse.com Mon Sep 2 04:12:27 2019 From: tbechtold at suse.com (Thomas Bechtold) Date: Mon, 2 Sep 2019 06:12:27 +0200 Subject: [all][tc] PDF Community Goal Update In-Reply-To: References: Message-ID: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> Hi, On 8/27/19 7:58 AM, Akihiro Motoki wrote: [...] > How to get started > ------------------ > > - "How to get started" section in the PDF goal etherpad [1] explains > the minimum steps. > You can find useful examples there too. This is a bit confusing because the goal[1] mentions that there should be no extra tox target declared for the gate job. But the etherpad explains that there should be a new tox target[2]. So do we need a new tox target in the project repo? Or is that optional and just for local testing? Cheers, Tom [1] https://governance.openstack.org/tc/goals/selected/train/pdf-doc-generation.html#completion-criteria [2] https://etherpad.openstack.org/p/train-pdf-support-goal > - To build PDF docs locally, you need to install LaTex related > packages. See "To test locally" in the etherpad [1]. > - If you hit problems during PDF build, check the common problems > etherpad [2]. We are collecting knowledges there. > - If you have questions, feel free to ask #openstack-doc IRC channel. > > Also Please sign up your name to "Project volunteers" in [1]. > > Useful links > ------------ > > [1] https://etherpad.openstack.org/p/train-pdf-support-goal > [2] https://etherpad.openstack.org/p/pdf-goal-train-common-problems > [3] Ongoing reviews: > https://review.opendev.org/#/q/topic:build-pdf-docs+(status:open+OR+status:merged) > > Thanks, > Akihiro Motoki (amotoki) > > From sundar.nadathur at intel.com Mon Sep 2 04:52:24 2019 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Mon, 2 Sep 2019 04:52:24 +0000 Subject: [cyborg][election][ptl] PTL candidacy for Ussuri Message-ID: <1CC272501B5BC543A05DB90AA509DED5276073B3@fmsmsx122.amr.corp.intel.com> Hello all, I would like to announce my candidacy for the PTL role of Cyborg for the Ussuri cycle. I have been involved with Cyborg since 2018 Rocky PTG, and have had the privilege of serving as Cyborg PTL for the Train cycle. In the Train cycle, Cyborg saw some important developments. We reached an agreement on integration with Nova at the PTG, and the spec that I wrote based on that agreement has been merged. We have seen new developers join the community. We have seen existing Cyborg drivers getting updated and new Cyborg drivers being proposed. We are also in the process of developing a tempest plugin for Cyborg. In the U cycle, I'd aim to build on this foundation. While we may support a certain set of VM operations with accelerators with Nova in Train, we can expand on that set in U. We should also focus on Day 2 operations like performance monitoring and health monitoring for accelerator devices. I would like to formalize and expand on the driver addition/development process. Thank you for your support. Regards, Sundar -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cjeanner at redhat.com Mon Sep 2 06:25:03 2019 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Mon, 2 Sep 2019 08:25:03 +0200 Subject: [tripleo] Proposing Damien Ciabrini as core on TripleO/HA In-Reply-To: <20190830122850.GA5248@holtby> References: <20190830122850.GA5248@holtby> Message-ID: <93c43263-2d57-ad0f-bc17-0b0620053a5b@redhat.com> Of course +1 ! On 8/30/19 2:28 PM, Michele Baldessari wrote: > Hi all, > > Damien (dciabrin on IRC) has always been very active in all HA things in > TripleO and I think it is overdue for him to have core rights on this > topic. So I'd like to propose to give him core permissions on any > HA-related code in TripleO. > > Please vote here and in a week or two we can then act on this. > > Thanks, > -- Cédric Jeanneret (He/Him/His) Software Engineer - OpenStack Platform Red Hat EMEA https://www.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From amotoki at gmail.com Mon Sep 2 07:07:19 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Mon, 2 Sep 2019 16:07:19 +0900 Subject: [all][tc] PDF Community Goal Update In-Reply-To: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> Message-ID: On Mon, Sep 2, 2019 at 1:12 PM Thomas Bechtold wrote: > > Hi, > > On 8/27/19 7:58 AM, Akihiro Motoki wrote: > > [...] > > > How to get started > > ------------------ > > > > - "How to get started" section in the PDF goal etherpad [1] explains > > the minimum steps. > > You can find useful examples there too. > > This is a bit confusing because the goal[1] mentions that there should > be no extra tox target declared for the gate job. > But the etherpad explains that there should be a new tox target[2]. > > So do we need a new tox target in the project repo? Or is that optional > and just for local testing? The new tox target in the project repo is required now. The PDF doc will be generated only when the "pdf-docs" tox target does exists. When the goal is defined the docs team thought the doc gate job can handle the PDF build without extra tox env and zuul job configuration. However, during implementing the zuul job support it turns out at least a new tox env or an extra zuul job configuration is required in each project to make the docs job fail when PDF build failure is detected. As a result, we changes the approach and the new tox target is now required in each project repo. Perhaps we need to update the description of the goal definition document. Thanks, Akihiro > > Cheers, > > Tom > > [1] > https://governance.openstack.org/tc/goals/selected/train/pdf-doc-generation.html#completion-criteria > [2] https://etherpad.openstack.org/p/train-pdf-support-goal > > > - To build PDF docs locally, you need to install LaTex related > > packages. See "To test locally" in the etherpad [1]. > > - If you hit problems during PDF build, check the common problems > > etherpad [2]. We are collecting knowledges there. > > - If you have questions, feel free to ask #openstack-doc IRC channel. > > > > Also Please sign up your name to "Project volunteers" in [1]. 
> > > > Useful links > > ------------ > > > > [1] https://etherpad.openstack.org/p/train-pdf-support-goal > > [2] https://etherpad.openstack.org/p/pdf-goal-train-common-problems > > [3] Ongoing reviews: > > https://review.opendev.org/#/q/topic:build-pdf-docs+(status:open+OR+status:merged) > > > > Thanks, > > Akihiro Motoki (amotoki) > > > > From ccamacho at redhat.com Mon Sep 2 07:34:45 2019 From: ccamacho at redhat.com (Carlos Camacho Gonzalez) Date: Mon, 2 Sep 2019 09:34:45 +0200 Subject: [tripleo] Proposing Damien Ciabrini as core on TripleO/HA In-Reply-To: <93c43263-2d57-ad0f-bc17-0b0620053a5b@redhat.com> References: <20190830122850.GA5248@holtby> <93c43263-2d57-ad0f-bc17-0b0620053a5b@redhat.com> Message-ID: +1 On Mon, Sep 2, 2019 at 8:36 AM Cédric Jeanneret wrote: > > Of course +1 ! > > On 8/30/19 2:28 PM, Michele Baldessari wrote: > > Hi all, > > > > Damien (dciabrin on IRC) has always been very active in all HA things in > > TripleO and I think it is overdue for him to have core rights on this > > topic. So I'd like to propose to give him core permissions on any > > HA-related code in TripleO. > > > > Please vote here and in a week or two we can then act on this. > > > > Thanks, > > > > -- > Cédric Jeanneret (He/Him/His) > Software Engineer - OpenStack Platform > Red Hat EMEA > https://www.redhat.com/ > From chx769467092 at 163.com Mon Sep 2 07:43:38 2019 From: chx769467092 at 163.com (=?GBK?B?tN6648/j?=) Date: Mon, 2 Sep 2019 15:43:38 +0800 (CST) Subject: [QA][nova][Concurrent performance] Message-ID: <40b37395.8d74.16cf0ede539.Coremail.chx769467092@163.com> Hello everyone! Why the performance of Stein concurrent creation of VM is not as good as that of Ocata? Create 250 VMs concurrently, the O version only needs 160s, but the S version needs 250s. Security_group and port_security functions are disabled. Among 250 VMs, there are single network card and multi-network card. Using neutron-openvswitch-agent. Regards, Cuihx -------------- next part -------------- An HTML attachment was scrubbed... URL: From weslepeng at gmail.com Mon Sep 2 07:58:00 2019 From: weslepeng at gmail.com (Wesley Peng) Date: Mon, 2 Sep 2019 15:58:00 +0800 Subject: [QA][nova][Concurrent performance] In-Reply-To: <40b37395.8d74.16cf0ede539.Coremail.chx769467092@163.com> References: <40b37395.8d74.16cf0ede539.Coremail.chx769467092@163.com> Message-ID: Hi on 2019/9/2 15:43, 崔恒香 wrote: > Why the performance of Stein concurrent creation of VM is not as good as > that of Ocata? > Create 250 VMs concurrently, the O version only needs 160s, but the S > version needs 250s. Openstack's execution environment is complicated. It's hard to say O is faster than S from the architecture viewpoint. Are you sure two systems' running environment are totally the same for comparision? including the software, hardware, network connection etc. regards. From rico.lin.guanyu at gmail.com Mon Sep 2 08:17:19 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Mon, 2 Sep 2019 16:17:19 +0800 Subject: [heat][election][ptl] Heat PTL candidacy for Ussuri cycle Message-ID: Dear Heat members, I would like to announce my candidacy as the Heat project team leader for Ussuri cycle. We have been suffered from a lack of people to help on review. So if you're reading this, please join us, and help with review and features. One of our current strategies is to make a worklist to target for each release and try to make those items finish on time. Also, another strategy is to get better integration with other projects or even cross communities. 
We have triggered some discussions with SIGs and other projects to try to figure out where we can keep this integration moving. It appears we still have a lot of work to do. My plan for the next cycle is to keep those two strategies, and extend from there if we have time.

Please consider my candidacy. Thank you.

Rico Lin (ricolin)

--
May The Force of OpenStack Be With You,
*Rico Lin* irc: ricolin
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jean-philippe at evrard.me Mon Sep 2 08:46:55 2019
From: jean-philippe at evrard.me (Jean-Philippe Evrard)
Date: Mon, 02 Sep 2019 10:46:55 +0200
Subject: [election][tc] Candidacy for TC
Message-ID: <0b089dd6bc0226e3652167335f8a2818300137aa.camel@evrard.me>

Hello everyone,

I am hereby announcing my candidacy for a renewed position on the OpenStack Technical Committee (TC).

I have been following the OpenStack ecosystem since Kilo. I went through multiple companies and wore multiple hats (a cloud end-user, an OpenStack advocate in meetups and at FOSDEM, a product owner of the cloud strategy, architect of a community cloud, a deployer, a developer, a team lead, a technology analyst), which gives me a unique view on OpenStack and other adjacent communities. I am now working full time on OpenStack for SUSE.

During my time at the TC, I worked on refactoring the TC health check process, worked on community goals selection, and worked on different TC activities to help both the community and my company.

During my first term, I have experienced or seen pain in the OpenStack processes: project health checks, community goals, and release name selection are just a few examples. I "learnt the ropes" during that first year, but I don't think the quality of life of the OpenStack contributors has increased, which was one of my personal goals.

If I get elected, I want to change the TC from the inside, and through that, change OpenStack. These are not just changes I can drive from the outside as they involve mindset changes. Without further ado, here are a few of my crazy ideas.

First, I want to make the TC a birthplace for innovation, instead of being so process oriented. I have the impression our processes have dulled innovation. I want to remove processes and naming conventions, and just allow people to propose ideas (if possible, the wildest and craziest ideas) to the TC. I believe this would help people feel empowered to change OpenStack.

Given that we are more and more organising ourselves into teams of specific interest, I would like the TC to issue more official stances on how to implement OpenStack-wide changes/best-practices with the help of field experts. That would mean issuing more "goals" for the projects and defining a roadmap more actively. (I would love to see recommendations on using the latest features of the Python language to simplify our code base, for example!)

Finally, I would like us to try a new kind of periodic "leadership" meeting. In those meetings PTLs and SIG chairs would discuss the issues they have recently been facing. That means sharing all together, and being a ground for proposing new ideas again. PTLs are too overburdened by their work and don't have the occasion to share experience/expertise/crazy ideas on tech debt reduction, for example. I believe this would bring people closer together.

Thank you,
Jean-Philippe (evrardjp)

From geguileo at redhat.com Mon Sep 2 08:52:43 2019
From: geguileo at redhat.com (Gorka Eguileor)
Date: Mon, 2 Sep 2019 10:52:43 +0200
Subject: [ptl][cinder] U Cycle PTL Non-Candidacy ...
In-Reply-To: <98f3c4f3-7de8-a81b-87c7-1c4fdb2d08d4@gmail.com> References: <98f3c4f3-7de8-a81b-87c7-1c4fdb2d08d4@gmail.com> Message-ID: <20190902085243.5wxdyfrauhqridhf@localhost> Jay, Thank you for your hard work as PTL, core reviewer, and coder. I'm really glad to hear this is not a goodbye and you will be staying with us in Cinder. :-) Cheers, Gorka. On 30/08, Jay Bryant wrote: > All, > > I just wanted to communicate that I am not going to be running for another > term as Cinder's PTL. > > It has been an honor to lead the Cinder team for the last two years.  When I > started working with OpenStack nearly 6 years ago, leading the Cinder team > was one of my goals and I appreciate the team trusting me with this > responsibility for the last 4 cycles. > > I have enjoyed watching the project evolve over the last couple of years, > going from a focus on getting new features in place to a focus on ensuring > that customers get reliable storage management with an ever improving user > experience. > > Cinder's value in the storage community outside of OpenStack has been > validated as other SDS solutions have leveraged it to provide storage > management for the many vendors that Cinder supports. Cinder continues to > grow by adding things like cinderlib, making it relevant not only in > virtualized environments but also for containerized environments.  I am glad > that I have been able to help this evolution happen. > > As PTLs have done in the past, it is time for me to pursue other > opportunities in the OpenStack ecosystem and hand over the reigns to a new > leader.  Cinder has a great team and will continue to do great things.  Fear > not, I am not going to go anywhere, I plan to continue to stay active in > Cinder for the foreseeable future. > > Again, thank you for the opportunity to be Cinder's PTL, it has been a great > ride! > > Sincerely, > > Jay Bryant > > (irc: jungleboyj) > > > From chx769467092 at 163.com Mon Sep 2 08:54:41 2019 From: chx769467092 at 163.com (=?GBK?B?tN6648/j?=) Date: Mon, 2 Sep 2019 16:54:41 +0800 (CST) Subject: [QA][nova][Concurrent performance] In-Reply-To: References: <40b37395.8d74.16cf0ede539.Coremail.chx769467092@163.com> Message-ID: <1c7c5b6c.a53d.16cf12ef0ef.Coremail.chx769467092@163.com> Hi At 2019-09-02 15:58:00, "Wesley Peng" wrote: >Hi > >on 2019/9/2 15:43, 崔恒香 wrote: >> Why the performance of Stein concurrent creation of VM is not as good as >> that of Ocata? >> Create 250 VMs concurrently, the O version only needs 160s, but the S >> version needs 250s. > >Openstack's execution environment is complicated. >It's hard to say O is faster than S from the architecture viewpoint. >Are you sure two systems' running environment are totally the same for >comparision? including the software, hardware, network connection etc. The same set of servers, after testing version O, re-install the system, and then test version S. First deployed version O, the operating system is Ubuntu 16.04. Then deploy the version S, the operating system is ubuntu18.04. The port_security function is disabled.But in the S version environment, adding a lot of flows on br-int. Does this slow down the creation of VM? 
The flows is as follows:(for port qvo83b4285e-b5) cookie=0x9ca1d4c6ecdcb31f, duration=257335.826s, table=0, n_packets=0, n_bytes=0, priority=10,icmp6,in_port="qvo83b4285e-b5",icmp_type=136 actions=resubmit(,24) cookie=0x9ca1d4c6ecdcb31f, duration=257335.823s, table=0, n_packets=19, n_bytes=798, priority=10,arp,in_port="qvo83b4285e-b5" actions=resubmit(,24) cookie=0x9ca1d4c6ecdcb31f, duration=257335.831s, table=0, n_packets=95, n_bytes=10680, priority=9,in_port="qvo83b4285e-b5" actions=resubmit(,25) cookie=0x9ca1d4c6ecdcb31f, duration=257335.829s, table=24, n_packets=0, n_bytes=0, priority=2,icmp6,in_port="qvo83b4285e-b5",icmp_type=136,nd_target=fe80::f816:3eff:fe39:1601 actions=resubmit(,60) cookie=0x9ca1d4c6ecdcb31f, duration=257335.826s, table=24, n_packets=19, n_bytes=798, priority=2,arp,in_port="qvo83b4285e-b5",arp_spa=30.0.1.180 actions=resubmit(,25) cookie=0x9ca1d4c6ecdcb31f, duration=257335.841s, table=25, n_packets=114, n_bytes=11478, priority=2,in_port="qvo83b4285e-b5",dl_src=fa:16:3e:39:16:01 actions=resubmit(,60) Regards, Cuihx -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon Sep 2 08:59:25 2019 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 2 Sep 2019 10:59:25 +0200 Subject: [oslo] Proposing Gabriele Santomaggio as oslo.messaging core In-Reply-To: References: Message-ID: +1 for me! Welcome on board Gabriele! Sorry for my late response I was on PTO... Le mer. 21 août 2019 21:50, Doug Hellmann a écrit : > > > > On Aug 21, 2019, at 10:25 AM, Ben Nemec wrote: > > > > Hello Norsk, > > > > It is my pleasure to propose Gabriele Santomaggio (gsantomaggio) as a > new member of the oslo.messaging core team. He has been contributing to the > project for about a cycle now and has gotten up to speed on our development > practices. Oh, and he wrote the book on RabbitMQ[0]. :-) > > > > Obviously we think he'd make a good addition to the core team. If there > are no objections, I'll make that happen in a week. > > > > Thanks. > > > > -Ben > > > > 0: http://shop.oreilly.com/product/9781849516501.do > > > > +1 > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From weslepeng at gmail.com Mon Sep 2 09:02:12 2019 From: weslepeng at gmail.com (Wesley Peng) Date: Mon, 2 Sep 2019 17:02:12 +0800 Subject: [QA][nova][Concurrent performance] In-Reply-To: <1c7c5b6c.a53d.16cf12ef0ef.Coremail.chx769467092@163.com> References: <40b37395.8d74.16cf0ede539.Coremail.chx769467092@163.com> <1c7c5b6c.a53d.16cf12ef0ef.Coremail.chx769467092@163.com> Message-ID: <08274916-caba-4348-ed1e-4bd04c1f463d@gmail.com> Hi on 2019/9/2 16:54, 崔恒香 wrote: > The port_security function is disabled.But in the S version environment, adding a lot of flows on br-int. Does this slow down the creation of VM? why adding flows on S but not equivalent on O? sure too many flows take performance down. regards. 
From mbultel at redhat.com Mon Sep 2 09:31:53 2019 From: mbultel at redhat.com (Mathieu Bultel) Date: Mon, 2 Sep 2019 05:31:53 -0400 (EDT) Subject: [tripleo] Proposing Damien Ciabrini as core on TripleO/HA In-Reply-To: References: <20190830122850.GA5248@holtby> <93c43263-2d57-ad0f-bc17-0b0620053a5b@redhat.com> Message-ID: <197210146.10371673.1567416713833.JavaMail.zimbra@redhat.com> +1 :) ----- Original Message ----- From: Carlos Camacho Gonzalez To: Cédric Jeanneret Cc: OpenStack Discuss Sent: Mon, 02 Sep 2019 03:34:45 -0400 (EDT) Subject: Re: [tripleo] Proposing Damien Ciabrini as core on TripleO/HA +1 On Mon, Sep 2, 2019 at 8:36 AM Cédric Jeanneret wrote: > > Of course +1 ! > > On 8/30/19 2:28 PM, Michele Baldessari wrote: > > Hi all, > > > > Damien (dciabrin on IRC) has always been very active in all HA things in > > TripleO and I think it is overdue for him to have core rights on this > > topic. So I'd like to propose to give him core permissions on any > > HA-related code in TripleO. > > > > Please vote here and in a week or two we can then act on this. > > > > Thanks, > > > > -- > Cédric Jeanneret (He/Him/His) > Software Engineer - OpenStack Platform > Red Hat EMEA > https://www.redhat.com/ > From zhangbailin at inspur.com Mon Sep 2 09:32:03 2019 From: zhangbailin at inspur.com (=?utf-8?B?QnJpbiBaaGFuZyjlvKDnmb7mnpcp?=) Date: Mon, 2 Sep 2019 09:32:03 +0000 Subject: [QA][nova][Concurrent performance] Message-ID: <11d8b9b3b3534717ba2c1008aa4f56bf@inspur.com> Hi > on 2019/9/2 16:54, 崔恒香 wrote: > The port_security function is disabled.But in the S version environment, > adding a lot of flows on br-int. Does this slow down the creation of VM? > why adding flows on S but not equivalent on O? True,if you want to compared the performance of the Stein and Ocata, you should keep their configuration as the same, otherwise it's not representative. > sure too many flows take performance down. @崔恒香 I think you can set the configuration item " vif_plugging_is_fatal=False" [1] in the nova-compute.conf, and then compare the performance of Ocata and Stein version by creating the same quantity servers. And you can combine with the configuration item " vif_plugging_timeout" to use. This way, you can verify if you are creating servers slow because of network. [1] https://docs.openstack.org/nova/stein/configuration/config.html#DEFAULT.vif_plugging_is_fatal [2] https://docs.openstack.org/nova/stein/configuration/config.html#DEFAULT.vif_plugging_timeout > regards. Brin Zhang From ianyrchoi at gmail.com Mon Sep 2 09:45:53 2019 From: ianyrchoi at gmail.com (Ian Y. Choi) Date: Mon, 2 Sep 2019 18:45:53 +0900 Subject: [I18n] PTL Non-Candidacy Message-ID: <76659038-d673-3bc0-e97c-817cccdbf36a@gmail.com> Hello I18n team, As announced by [1], now is U-cycle (Ussuri) PTL nomination period, and only less than 2 days are left from now. I will not run for I18n PTL for U-cycle because I am now serving as an PTL/TC election official and the role conflicts with running for PTL [2]. I ran for election official to better understand OpenStack ecosystem and more interact with community members, not to leave from I18n team. I will still stay around I18n team, but it would be so great if someone runs for I18n PTL on upcoming cycle. Also, I shared the current status of I18n team on last July [3]. If you have not read this, please read for more information. 
Note that I will be at Shanghai Summit + PTG for more I18n sessions as an official team succeeding from my current PTL activities - please prioritize to participate in upcoming Summit + PTG with any kind of ways - both online and offline participation should be definitely fine. With many thanks, /Ian [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008679.html [2] https://wiki.openstack.org/wiki/Election_Officiating_Guidelines [3] http://lists.openstack.org/pipermail/openstack-i18n/2019-July/003439.html From a.settle at outlook.com Mon Sep 2 10:41:54 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Mon, 2 Sep 2019 10:41:54 +0000 Subject: [all][tc] PDF Community Goal Update In-Reply-To: References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> Message-ID: On Mon, 2019-09-02 at 16:07 +0900, Akihiro Motoki wrote: > On Mon, Sep 2, 2019 at 1:12 PM Thomas Bechtold > wrote: > > > > Hi, > > > > On 8/27/19 7:58 AM, Akihiro Motoki wrote: > > > > [...] > > > > > How to get started > > > ------------------ > > > > > > - "How to get started" section in the PDF goal etherpad [1] > > > explains > > > the minimum steps. > > > You can find useful examples there too. > > > > This is a bit confusing because the goal[1] mentions that there > > should > > be no extra tox target declared for the gate job. > > But the etherpad explains that there should be a new tox target[2]. > > > > So do we need a new tox target in the project repo? Or is that > > optional > > and just for local testing? > > The new tox target in the project repo is required now. > The PDF doc will be generated only when the "pdf-docs" tox target > does exists. > > When the goal is defined the docs team thought the doc gate job can > handle the PDF build > without extra tox env and zuul job configuration. However, during > implementing the zuul job support > it turns out at least a new tox env or an extra zuul job > configuration > is required in each project > to make the docs job fail when PDF build failure is detected. As a > result, we changes the approach > and the new tox target is now required in each project repo. > > Perhaps we need to update the description of the goal definition > document. This is something I can propose. I will update here when I have updated. Thanks, Alex > > Thanks, > Akihiro > > > > > Cheers, > > > > Tom > > > > [1] > > https://governance.openstack.org/tc/goals/selected/train/pdf-doc-ge > > neration.html#completion-criteria > > [2] https://etherpad.openstack.org/p/train-pdf-support-goal > > > > > - To build PDF docs locally, you need to install LaTex related > > > packages. See "To test locally" in the etherpad [1]. > > > - If you hit problems during PDF build, check the common problems > > > etherpad [2]. We are collecting knowledges there. > > > - If you have questions, feel free to ask #openstack-doc IRC > > > channel. > > > > > > Also Please sign up your name to "Project volunteers" in [1]. 
> > > > > > Useful links > > > ------------ > > > > > > [1] https://etherpad.openstack.org/p/train-pdf-support-goal > > > [2] https://etherpad.openstack.org/p/pdf-goal-train-common-proble > > > ms > > > [3] Ongoing reviews: > > > https://review.opendev.org/#/q/topic:build-pdf-docs+(status:open+ > > > OR+status:merged) > > > > > > Thanks, > > > Akihiro Motoki (amotoki) > > > > > > > > -- Alexandra Settle IRC: asettle From a.settle at outlook.com Mon Sep 2 10:45:26 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Mon, 2 Sep 2019 10:45:26 +0000 Subject: not running for the TC this term In-Reply-To: <96E71EEE-9BE9-45BC-B302-47CC32D51A41@doughellmann.com> References: <96E71EEE-9BE9-45BC-B302-47CC32D51A41@doughellmann.com> Message-ID: On Mon, 2019-08-26 at 08:57 -0400, Doug Hellmann wrote: > Since nominations open this week, I wanted to go ahead and let you > all know that I will not be seeking re-election to the Technical > Committee this term. This is some very sad, but unsurprising news. > > My role within Red Hat has been changing over the last year, and > while I am still working on projects related to OpenStack it is no > longer my sole focus. I will still be around, but it is better for me > to make room on the TC for someone with more time to devote to it. Good luck with your exciting new role! > > It’s hard to believe it has been 6 years since I first joined the > Technical Committee. So much has happened in our community in that > time, and I want to thank all of you for the trust you have placed in > me through it all. It has been an honor to serve and help build the > community. Thank you for all the support and energy you have put into this community, the projects, and the people. Your level-headed approach to dealing with key issues, tough conversations, and difficult decisions have had a massive affect on me and influenced so many projects and people for the better. Thank you for all that you've done - and good luck with everything that is to come. Cheers, Alex > > Thank you, > Doug > > -- Alexandra Settle IRC: asettle From a.settle at outlook.com Mon Sep 2 10:46:04 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Mon, 2 Sep 2019 10:46:04 +0000 Subject: [tc][elections] Not running for reelection to TC this term In-Reply-To: <20190826131938.ouya5phflfzqoexn@yuggoth.org> References: <20190826131938.ouya5phflfzqoexn@yuggoth.org> Message-ID: On Mon, 2019-08-26 at 13:19 +0000, Jeremy Stanley wrote: > I've been on the OpenStack Technical Committee continuously for > several years, and would like to take this opportunity to thank > everyone in the community for their support and for the honor of > being chosen to represent you. I plan to continue participating in > the community, including in TC-led activities, but am stepping back > from reelection this round for a couple of reasons. > > First, I want to provide others with an opportunity to serve our > community on the TC. I hope that by standing aside for now, others > will be encouraged to run. A regular influx of fresh opinions helps > us maintain the requisite level of diversity to engage in productive > debate. > > Second, the scheduling circumstances for this election, with the TC > and PTL activities combined, will be a bit more complicated for our > election officials. I'd prefer to stay engaged in officiating so > that we can ensure it goes as smoothly for everyone a possible. To > do this without risking a conflict of interest, I need to not be > running for office. 
> > It's quite possible I'll run again in 6 months, but for now I'm > planning to help behind the scenes instead. Best of luck to all who > decide to run for election to any of our leadership roles! So sad to hear this! But I'm glad to see you'll be around, regardless. You have been a fantastic source of knowledge and guidance. Thank you, -- Alexandra Settle IRC: asettle From a.settle at outlook.com Mon Sep 2 10:46:42 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Mon, 2 Sep 2019 10:46:42 +0000 Subject: [tc][elections] Not seeking re-election to TC In-Reply-To: References: Message-ID: On Mon, 2019-08-26 at 12:22 -0400, Julia Kreger wrote: > Greetings everyone, > > I wanted to officially let everyone know that I will not be running > for re-election to the TC. :( > > I have enjoyed serving on the TC for the past two years. Due to some > changes in my personal and professional lives, It will not be > possible > for me to serve during this next term. Totally understandable! All the best and no doubt you'll still be around :) thank you for all your help and guidance over your term. > > Thanks everyone! > > -Julia > -- Alexandra Settle IRC: asettle From a.settle at outlook.com Mon Sep 2 10:47:16 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Mon, 2 Sep 2019 10:47:16 +0000 Subject: [tc] not seeking reelection In-Reply-To: References: Message-ID: On Tue, 2019-08-27 at 09:59 -0500, Lance Bragstad wrote: Hi all, Now that the nomination period is open for TC candidates, I'd like to say that I won't be running for a second term on the TC. Sad news! My time on the TC has enriched my understanding of open-source communities and I appreciate all the time people put into helping me get up-to-speed. I wish the best of luck to folks putting their hat in the ring this week! And you were so good at it! Thank you for everything, it has always been so much fun working with you :) Thanks all, Lance -- Alexandra Settle > IRC: asettle -------------- next part -------------- An HTML attachment was scrubbed... URL: From frode.nordahl at canonical.com Mon Sep 2 11:24:04 2019 From: frode.nordahl at canonical.com (Frode Nordahl) Date: Mon, 2 Sep 2019 13:24:04 +0200 Subject: [charms] Ussuri Cycle PTL Candidacy Message-ID: Hello all, I would like to announce my candidacy as PTL for the OpenStack Charms project for the Ussuri cycle. The project has made great progress in the Train cycle under James's capable leadership. Some examples are; further Python3 stabilization and Python2 dependency removal, multi-model support was implemented in the Zaza functional test framework, improvements were made to SSH host key handling for our charmed deployment of Nova, Neutron DVR support was improved, actions to help handle cold start of a Percona Cluster and a tool for retrofitting existing cloud images for use as Octavia Amphora was implemented. We also provided preview charmed support for Masakari. For the Ussuri cycle we look to further improve existing features as well as implement new charm features and new charms. Over several cycles we have developed a good framework for our reactive charm development of OpenStack related charms. This framework has also been adopted for use by non-OpenStack components. I think it is worth taking some time to analyze which building blocks attract non-OpenStack components, and perhaps move the generally applicable parts of the framework down a layer to make it available for general consumption. 
In the spirit of Open Development this will provide us with benefits we can reap for OpenStack in the long term. Cheers, -- Frode Nordahl -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Mon Sep 2 12:39:09 2019 From: katonalala at gmail.com (Lajos Katona) Date: Mon, 2 Sep 2019 14:39:09 +0200 Subject: [neutron] Bug Deputy August 26 - September 01 Message-ID: Hi, Here is the bug deputy report for the week August 26 - September 01: As far as I see only one is without assignee (#1842150 ) - Medium - [L2][OVS] add accepted egress fdb flows (#1841622): assigned - DHCP port information incomplete during the DHCP port setup (#1841636): assigned / in progress - Pyroute2 netns.ns_pids() will fail if during the function loop, one namespace is deleted (#1841753) In progress - Make the MTU attribute not nullable (#1842261) assigned - Low - rootwrap sudo process goes into defunct state (#1841682) assigned - neutron_dynamic_routing.services.bgp.bgp_plugin.BgpPlugin DBError (#1841788) assigned neutron_dynamic_routing - ML2 mech driver sometimes receives network context without provider attributes in delete_network_postcommit (#1841967) in progress / assigned - Undecided - excessive SQL query fanout on port list with many trunk ports (#1842150) Trunk/neutron performance - Dupilicate: - Port-Forwarding can't be set to different protocol in the same IP and Port (#1841741) duplicate of https://bugs.launchpad.net/neutron/+bug/1799155 - Whislist - [L2] stop processing ports twice in ovs-agent (#1841865) - kolla-ansible - Neutron bootstrap failing on Ubuntu bionic with Cannot change column 'network_id (#1841907) fix released in kolla - Old bugs that reappeared: - openvswitch firewall flows cause flooding on integration bridge (#1732067): High, assigned - Invalid: - instance ingress bandwidth limiting doesn't works in ocata. (#1841700) - Vlan network with vlan_id outside of available ranges for physical network can be created always (#1842052) Regards Lajos -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Mon Sep 2 12:55:09 2019 From: pierre at stackhpc.com (Pierre Riteau) Date: Mon, 2 Sep 2019 14:55:09 +0200 Subject: [blazar][election][ptl] PTL candidacy for Ussuri Message-ID: Hi, I would like to submit my candidacy to serve as PTL of Blazar for the Ussuri release cycle. I served as PTL during the Stein and Train cycles and I am willing to continue in this role. The Train release cycle has been less active than previous ones, with core contributors being less available due to other commitments. To keep the project healthy, it is essential that we grow our community further. As an example, we started running an additional IRC meeting in a timezone compatible with the Americas, which has proved helpful with getting more people involved in the community. I would like to continue this effort in the upcoming cycle, release all the new features that are currently in progress, and work together to fix the main issues that are blocking further adoption of Blazar. Thank you for your support, Pierre Riteau (priteau) From james.page at canonical.com Mon Sep 2 13:27:27 2019 From: james.page at canonical.com (James Page) Date: Mon, 2 Sep 2019 14:27:27 +0100 Subject: [charms][election][ptl] PTL non-candidacy for Ussuri Message-ID: Hi All As Frode has kindly offered to pickup the PTL role this cycle I won't be putting myself forward as a PTL candidate for OpenStack Charms for the Ussuri release cycle. 
Thanks James -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmendra.kushwaha at india.nec.com Mon Sep 2 14:37:14 2019 From: dharmendra.kushwaha at india.nec.com (Dharmendra Kushwaha) Date: Mon, 2 Sep 2019 14:37:14 +0000 Subject: [tacker][election][ptl] PTL candidacy for Ussuri Message-ID: Hello Everyone, I would like to announce my candidacy again for Tacker PTL role for Ussuri cycle. I am Dharmendra Kushwaha known as dkushwaha on IRC, active member of Tacker community since Mitaka release. I run as Tacker PTL in last two cycles. I would like to thanks all who supported Tacker with their contributions in Train cycle. It is a great experience for me to working in Tacker project with very supportive contributors team. In Train cycle we planned limited features level activities and more towards project stability, bug fixes & code coverage things. Team is working on some rich features like VNF packages support, resource force delete, enhancements in containerized VNFs, and couple of other improvements. Along with daily Tacker activities, my priority for Tacker for U cycle will be more towards: * Tacker CI/CD Improvement: - Focus to introduce more functional and scenario tests. * Tacker stability & production ready: - Focus to have more error-handling and significant logging. - More user friendly documentation. * More towards NFV-MANO rich features: - Make Tacker more ESTI compatible. * More enhancements in VNF Forwarding Graph area. * More work on container based VNFs. You can find my complete contributions here: http://stackalytics.com/?release=all&project_type=all&metric=commits&user_id=dharmendra-kushwaha Thanks for reading and consideration my candidacy. Thanks & Regards Dharmendra Kushwaha IRC: dkushwaha ________________________________ The contents of this e-mail and any attachment(s) are confidential and intended for the named recipient(s) only. It shall not attach any liability on the originator or NECTI or its affiliates. Any views or opinions presented in this email are solely those of the author and may not necessarily reflect the opinions of NECTI or its affiliates. Any form of reproduction, dissemination, copying, disclosure, modification, distribution and / or publication of this message without the prior written consent of the author of this e-mail is strictly prohibited. If you have received this email in error please delete it and notify the sender immediately. From a.settle at outlook.com Mon Sep 2 14:41:43 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Mon, 2 Sep 2019 14:41:43 +0000 Subject: [ptl] [docs] [election] PTL Candidacy for Ussuri Message-ID: Hey all, I would like to submit my candidacy for the documentation team's PTL for the Ussuri cycle. Stephen Finucane (Train PTL) will be unofficially serving alongside me in a co-PTL capacity so we can equally address documentation-related tasks and discussions. I served as the documentation PTL for Pike, and am currently serving as an elected member of the Technical Committee in the capacity of vice chair. I have been a part of the community since the beginning of 2014, and have seen the highs and the lows and continue to love working for and with this community. The definition of documentation for OpenStack has been rapidly changing and the future of the documentation team continues to evolve and change. I would like that opportunity to help guide the documentation team, and potentially finish what myself, Petr, Stephen and many others have started and carried on. 
Thanks, Alex -- Alexandra Settle IRC: asettle From gmann at ghanshyammann.com Mon Sep 2 14:52:46 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 02 Sep 2019 23:52:46 +0900 Subject: [election][tc] TC Candidacy Message-ID: <16cf276c89c.ac4c7ff3132284.441171358608330433@ghanshyammann.com> Hi All, I would like to announce my candidacy for OpenStack Technical Committee position. First of all, thanks for giving me the opportunity as the technical committee in the previous term. It has been a learning process for me to understand the community and its technicality in a much broader way. There are a lot of things to do for me which I targetted last year but not finished. I am fortunate to work in this community which help me to learn a lot on daily basis. While being a TC, I got more opportunities to talk and work with multiple projects and awesome contributors. Thank you everyone for your support and hardwork. Along with my QA and Nova role, I tried to target broader and cross projects work as my TC responsibility. Migrating the OpenStack CI/CD from Xenial to Bionic, updating the python testing, a current ongoing community goal of IPv6 deployment and testing are the main work as part of this. Obviously it is not necessary to be a TC to do community-wide work but as TC it gives more understanding and actual benefits as overall. For those who do not know, let me introduce myself. I have joined the OpenStack community since 2012 as operator and started as a full-time upstream contributor since 2014 during mid of Ice-House release. Currently, I am PTL for the QA Program since the Rocky cycle and active contributor in QA projects and Nova. Also, I have been randomly contributing in many other projects for example, to Tempest plugins for bug fix and tempest compatibility changes. Along with that, I am actively involved in programs helping new contributors in OpenStack. 1. As a mentor in the Upstream Institute Training since Barcelona Summit (Oct 2016)[1]. 2. FirstContact SIG [2] to help new contributors to onboard in OpenStack. It's always a great experience to introduce OpenStack upstream workflow to new contributors and encourage them to start contribution. I feel that is very much needed in OpenStack. Hosting Upstream Training in Tokyo was a great experience. TC direction has always been valuable and helps to keep the common standards in OpenStack. There are always room for improvements and so does in TC. In the last cycle, TC started an effort to ask the community about "what they expect from TC" but I think we did not get much feedback from the community. But these kind of effort are really great and I think making these practice in every cycle or year is needed. This is my personal interest or opinion. As TC, which is there to set and govern the technical direction and common standard in OpenStack, I think we should also participate in doing more coding. Every TC members are from some projects and contribute a lot of code there. But as TC let's make a practice to do more coding for community-wide efforts. Getting the use case or common problem from users and try to fix them by themselves if no one there. There is no restriction of doing that currently but making this as practice will help the community. Let me list down the area I want to work in my second TC term as well (few are continue from my last term target and few new): * Share Project teams work for Common Goals: This is very important for me as TC and I tried to do this at some extent. 
I helped on OpenStack gate testing migration from Xenial to Bionic and also I am doing the IPv6 community goal in Train cycle. My strategy is always to do the things by myself if there is no one there instead of keeping things in the backlog. I will be continuing this effort as much as possible. * Users/Operators and Developers interaction: Users and Operators are the most important part of any product and improving the users and developers interaction is much needed for any software. I still feel we are lacking in this area. There are few projects which get user feedback from time to time. Nova is great example to see many users or operators engaged with developers as direct contribution or meetup or ML etc. There are many other projects which are doing good in this area. But there are many projects who do not have much interaction or feedback from users. I would like to try a few ideas to improve this. Not just project-wise but as overall OpenStack. * TC and Developers interaction: There is good amount of effort to improve the interaction between PTL and TC in last couple of years. Health tracker was a good example and now TC Liasion. I would like to extend this interaction to each developer, not just PTL. We need some practical mechanism to have frequent discussions between TC and developers. At this stage, I do not know how to do that but I will be working on this in my next term. One way is to help them in term of coding, user feedback etc and then encourage them to take part in TC engagements. Thank you for reading and considerating my candidacy. Refernce: * Blogs: https://ghanshyammann.com * Review: http://stackalytics.com/?release=all&metric=marks&user_id=ghanshyammann&project_type=all * Commit: http://stackalytics.com/?release=all&metric=commits&user_id=ghanshyammann&project_type=all * Foundation Profile: https://www.openstack.org/community/members/profile/6461 * IRC (Freenode): gmann [1] https://wiki.openstack.org/wiki/OpenStack_Upstream_Institute_Occasions https://wiki.openstack.org/wiki/OpenStack_Upstream_Institute [2] https://wiki.openstack.org/wiki/First_Contact_SIG - Ghanshyam Mann (gmann) From gkotton at vmware.com Mon Sep 2 14:57:09 2019 From: gkotton at vmware.com (Gary Kotton) Date: Mon, 2 Sep 2019 14:57:09 +0000 Subject: [QA][nova][Concurrent performance] In-Reply-To: References: <40b37395.8d74.16cf0ede539.Coremail.chx769467092@163.com>, Message-ID: Hi, When we did our testing a few months ago we saw the same thing with the degradation of the performance. Keystone was a notable bottleneck. Thanks Gary ________________________________ From: Wesley Peng Sent: Monday, September 2, 2019 10:58 AM To: openstack-discuss at lists.openstack.org Subject: Re: [QA][nova][Concurrent performance] Hi on 2019/9/2 15:43, 崔恒香 wrote: > Why the performance of Stein concurrent creation of VM is not as good as > that of Ocata? > Create 250 VMs concurrently, the O version only needs 160s, but the S > version needs 250s. Openstack's execution environment is complicated. It's hard to say O is faster than S from the architecture viewpoint. Are you sure two systems' running environment are totally the same for comparision? including the software, hardware, network connection etc. regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.page at canonical.com Mon Sep 2 15:51:48 2019 From: james.page at canonical.com (James Page) Date: Mon, 2 Sep 2019 16:51:48 +0100 Subject: [neutron][networking-infoblox] current status? 
Message-ID: Hi networking-infoblox developers What's the current status of this driver for Neutron? I've been working on packaging for Ubuntu today and don't see a release for Stein as well as a few reviews in gerrit that have been open for a while with no activity so I was wondering whether this project still had focus from Infoblox. Cheers James -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Sep 2 17:08:57 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 2 Sep 2019 17:08:57 +0000 Subject: [all][elections][ptl][tc] Conbined PTL/TC Nominations Last Days Message-ID: <20190902170857.hrtnmn3yrcvq543i@yuggoth.org> A quick reminder that we are in the last hours for declaring PTL and TC candidacies. Nominations are open until Sep 03, 2019 23:45 UTC. If you want to stand for election, don't delay, follow the instructions at [1] to make sure the community knows your intentions. Make sure your nomination has been submitted to the openstack/election repository and approved by election officials. Election statistics[2]: Nominations started @ 2019-08-27 23:45:00 UTC Nominations end @ 2019-09-03 23:45:00 UTC Nominations duration : 7 days, 0:00:00 Nominations remaining : 1 day, 6:41:51 Nominations progress : 81.73% --------------------------------------------------- Projects[1] : 63 Projects with candidates : 39 ( 61.90%) Projects with election : 0 ( 0.00%) --------------------------------------------------- Need election : 0 () Need appointment : 24 (Adjutant Cyborg Designate Freezer Horizon I18n Infrastructure Loci Manila Monasca Nova Octavia OpenStackAnsible OpenStackSDK OpenStack_Helm Oslo Placement PowerVMStackers Rally Release_Management Requirements Telemetry Winstackers Zun) =================================================== Stats gathered @ 2019-09-02 17:03:09 UTC This means that with approximately one day left, 24 projects will be deemed leaderless. In this case the TC will oversee PTL selection as described by [3]. We also need at least three more candidates to fill the six open seats on the OpenStack Technical committee. Thank you, [1] https://governance.openstack.org/election/#how-to-submit-a-candidacy [2] Any open reviews at https://review.openstack.org/#/q/is:open+project:openstack/election have not been factored into these stats. [3] https://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html -- Jeremy Stanley, on behalf of the OpenStack Technical Election Officials -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From e0ne at e0ne.info Mon Sep 2 18:22:37 2019 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Mon, 2 Sep 2019 21:22:37 +0300 Subject: [Horizon] [stable] Adding Radomir Dopieralski to horizon-stable-maint In-Reply-To: References: Message-ID: Almost two weeks passed without any objections. I would like to ask Stable team to add Rodomir to the horizon-stable-maint group. Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Tue, Aug 20, 2019 at 4:28 PM Ivan Kolodyazhny wrote: > Hi team, > > I'd like to propose adding Radomir Dopieralski to the horizon-stable-maint > team. > He's doing good quality reviews for stable branches [1] on a regular basis > and > I think Radomir will be a good member of our small group. 
> > [1] > https://review.opendev.org/#/q/reviewer:openstack%2540sheep.art.pl+NOT+branch:master > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From e0ne at e0ne.info Mon Sep 2 18:30:11 2019 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Mon, 2 Sep 2019 21:30:11 +0300 Subject: [horizon][ptl][election] PTL Non-Candidacy Message-ID: Hi team, It has been a pleasure and a big honour to lead Horizon team for the last three cycles. Beeing a PTL is a hard full-time job. Unfortunately, my job priorities changed and I'm not feeling that I could spend enough time as a PTL for the next cycle. There are a lot of things to be done in Horizon which we started and planned. I'm not going away from the community and will continue to contribute to the project. I'm pretty sure that with a new PTL we'll have a good time to work on at least during the next U cycle. Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Sep 2 19:31:20 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 2 Sep 2019 15:31:20 -0400 Subject: [all][tc] PDF Community Goal Update In-Reply-To: References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> Message-ID: > On Sep 2, 2019, at 3:07 AM, Akihiro Motoki wrote: > > On Mon, Sep 2, 2019 at 1:12 PM Thomas Bechtold wrote: >> >> Hi, >> >> On 8/27/19 7:58 AM, Akihiro Motoki wrote: >> >> [...] >> >>> How to get started >>> ------------------ >>> >>> - "How to get started" section in the PDF goal etherpad [1] explains >>> the minimum steps. >>> You can find useful examples there too. >> >> This is a bit confusing because the goal[1] mentions that there should >> be no extra tox target declared for the gate job. >> But the etherpad explains that there should be a new tox target[2]. >> >> So do we need a new tox target in the project repo? Or is that optional >> and just for local testing? > > The new tox target in the project repo is required now. > The PDF doc will be generated only when the "pdf-docs" tox target does exists. > > When the goal is defined the docs team thought the doc gate job can > handle the PDF build > without extra tox env and zuul job configuration. However, during > implementing the zuul job support > it turns out at least a new tox env or an extra zuul job configuration > is required in each project > to make the docs job fail when PDF build failure is detected. As a > result, we changes the approach > and the new tox target is now required in each project repo. The whole point of structuring the goal the way we did was that we do not want to update every single repo this cycle so we could roll out PDF building transparently. We said we would allow the job to pass even if the PDF build failed, because this was phase 1 of making all of this work. The plan was to 1. extend the current job to make PDF building optional; 2. examine the results to see how many repos need significant work; 3. add a feature flag via a setting somewhere in the repo to control whether the job fails if PDFs cannot be built. That avoids a second doc job running in parallel, and still allows us to roll out the PDF build requirement over time when we have enough information to do so. > > Perhaps we need to update the description of the goal definition document. I don’t think it’s a good idea to change the scope of the goal at this point in the release cycle. 
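Regardless of where it is wired in (a per-repo tox target or the shared docs job), the PDF build itself boils down to roughly two steps, assuming the LaTeX dependencies listed in the goal etherpad are installed:

    # roughly what the Train pdf-docs environments end up running
    sphinx-build -b latex doc/source doc/build/pdf
    make -C doc/build/pdf

Either way, the doc/source tree needs to build cleanly with the latex builder for any of the approaches above to pass.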
> > Thanks, > Akihiro > >> >> Cheers, >> >> Tom >> >> [1] >> https://governance.openstack.org/tc/goals/selected/train/pdf-doc-generation.html#completion-criteria >> [2] https://etherpad.openstack.org/p/train-pdf-support-goal >> >>> - To build PDF docs locally, you need to install LaTex related >>> packages. See "To test locally" in the etherpad [1]. >>> - If you hit problems during PDF build, check the common problems >>> etherpad [2]. We are collecting knowledges there. >>> - If you have questions, feel free to ask #openstack-doc IRC channel. >>> >>> Also Please sign up your name to "Project volunteers" in [1]. >>> >>> Useful links >>> ------------ >>> >>> [1] https://etherpad.openstack.org/p/train-pdf-support-goal >>> [2] https://etherpad.openstack.org/p/pdf-goal-train-common-problems >>> [3] Ongoing reviews: >>> https://review.opendev.org/#/q/topic:build-pdf-docs+(status:open+OR+status:merged) >>> >>> Thanks, >>> Akihiro Motoki (amotoki) From mthode at mthode.org Mon Sep 2 21:54:44 2019 From: mthode at mthode.org (Matthew Thode) Date: Mon, 2 Sep 2019 16:54:44 -0500 Subject: [requirements][election][ptl] PTL canidacy for ussuri Message-ID: <20190902215444.v3klpjxzyvfvqnxk@mthode.org> I would like to announce my candidacy for PTL of the Requirements project for the Ussuri cycle. The following will be my goals for the cycle, in order of importance: 1. The primary goal is to keep a tight rein on global-requirements and upper-constraints updates. (Keep things working well) 2. Un-cap requirements where possible (stuff like cmd2). 3. Publish constraints and requirements to streamline the freeze process. 4. Audit global-requirements and upper-constraints for redundancies. One of the rules we have for new entrants to global-requirements and/or upper-constraints is that they be non-redundant. Keeping that rule in mind, audit the list of requirements for possible redundancies and if possible, reduce the number of requirements we manage. JSON libs are on the short list this go around. I look forward to continue working with you in this cycle, as your PTL or not. Thanks for your time, Matthew Thode IRC: prometheanfire -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From zhangbailin at inspur.com Tue Sep 3 01:54:43 2019 From: zhangbailin at inspur.com (=?utf-8?B?QnJpbiBaaGFuZyjlvKDnmb7mnpcp?=) Date: Tue, 3 Sep 2019 01:54:43 +0000 Subject: =?utf-8?B?cmU6IFtsaXN0cy5vcGVuc3RhY2sub3Jn5Luj5Y+RXVtlbGVjdGlvbl1bdGNd?= =?utf-8?Q?_TC_Candidacy?= Message-ID: <1822cfc26fe14eddb46b318af8e694af@inspur.com> Agree with the gmann campaign TC. He has always been concerned about the work of the OpenStack community (mainly nova) and has provided help to contributors (including me) on different projects in the community. I think that TC will make him more broad-minded and need to provide more people who contribute to the community. Brin From kevin at cloudnull.com Tue Sep 3 02:17:09 2019 From: kevin at cloudnull.com (Carter, Kevin) Date: Mon, 2 Sep 2019 21:17:09 -0500 Subject: [election][tc] Candidacy for TC Message-ID: Hello Everyone, Some of you I've known for years; others are reading this and wondering who I am. While I may not know everyone in our ever-expanding community, I'd love the opportunity to get to know more of you, and I'd be honored to represent this community as a member of the TC. 
At this time, I would like to (re)introduce myself[6] and announce my candidacy for the upcoming TC election. A bit about me and why OpenStack is where my heart is. I have had the pleasure of working with OpenStack since 2012 and on OpenStack since 2013. My contributions to the community have not focused on core services, so I'm sure I've never crossed paths with a lot of folks reading this. However, I've spent considerable time working on the deployment, operations, and scale components of OpenStack. I got my start in OpenStack as an administrator of public clouds in 2012. I transitioned to Private Clouds in 2013. I had the pleasure to join a system engineering and the development team; focused on platform operations and deployments. It was through my time in the Private Cloud where when I began working on tooling that would eventually become the OpenStack-Ansible project. I was PTL of OpenStack-Ansible from its inception[2] until Liberty. I remained a core reviewer with the project, continuing to work on every facet of the tooling, until very recently. In 2015, at the Vancouver OpenStack summit, the simple OpenStack initiative was announced[5]. I took an active role in this effort as it aimed to promote better cross-community collaboration in a time where folks were somewhat siloed. While this particular effort never really took off, it establishes credibility in trying to work with people not necessarily focused on OpenStack. In my opinion, the simple OpenStack initiative was a success as it helped pave the way to future relationships while also serving as a precursor to other efforts just now starting (e.g., the Ansible-SIG[4]). This year, my journey charted a slightly new course. In mid-2019, I put on a Red fedora and have begun working on TripleO. I joined an incredible team within the Deployment Framework, and I'm looking forward to the new challenges ahead of me as I dive deeper into developing cloud tooling for the enterprise. Rest assured, I'm still working on OpenStack, I'm still trying to build a more perfect cloud, and I'm still in love with our community. So now that you know a little bit about me, I bet your reading this and wondering, why I'm running for the TC. To put it simply, I believe my experience running, building, and developing both public and private clouds puts me in a position to add a distinct voice to the TC. As a member of the TC, I would like to partner with everyone interested to help build a better, more engaged, fraternity of Stackers. I also think we can do more cross-project (cross-community) collaboration. We've done some fantastic work in this space, and I'd like to take up this mantle to continue our collaborative march to success. OpenStack has been a tremendous community to be involved with. My success as an individual is directly tied to the community, and if elected to the TC, it would be my honor to give back to the community in this new capacity. I will focus on bringing different points of view to the table. I will concentrate on collaboration. I will reach out to new and old projects alike. I will work tirelessly to assist anyone who requests my help. Finally, while it goes without saying, I will do everything I can to promote OpenStack's future success. Thank you for your consideration. 
-- Kevin Carter IRC: Cloudnull [0] https://www.stackalytics.com/?metric=commits&release=all&module=openstackansible-group&user_id=kevin.carter at rackspace.com [1] https://www.stackalytics.com/?metric=commits&release=all&module=openstackansible-group&user_id=kevin-carter [2] http://lists.openstack.org/pipermail/openstack-operators/2014-December/005683.html [3] https://www.stackalytics.com/?metric=commits&release=all&module=tripleo-group&user_id=kevin-carter [4] https://review.opendev.org/#/c/676428/1/sigs.yaml [5] https://www.ansible.com/blog/simple-openstack [6] https://www.openstack.org/community/speakers/profile/758/kevin-carter -------------- next part -------------- An HTML attachment was scrubbed... URL: From chx769467092 at 163.com Tue Sep 3 02:22:19 2019 From: chx769467092 at 163.com (=?GBK?B?tN6648/j?=) Date: Tue, 3 Sep 2019 10:22:19 +0800 (CST) Subject: [qa][nova][migrate]CPU doesn't have compatibility Message-ID: <59101c82.4fb9.16cf4ee15c0.Coremail.chx769467092@163.com> Hi This is my ERROR info(ocata): 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 5229, in check_can_live_migrate_destination 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server disk_over_commit) 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 5240, in _do_check_can_live_migrate_destination 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server block_migration, disk_over_commit) 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5670, in check_can_live_migrate_destination 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server self._compare_cpu(None, source_cpu_info, instance) 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5938, in _compare_cpu 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server raise exception.InvalidCPUInfo(reason=m % {'ret': ret, 'u': u}) 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server InvalidCPUInfo: Unacceptable CPU info: CPU doesn't have compatibility. -------------- next part -------------- An HTML attachment was scrubbed... URL: From weslepeng at gmail.com Tue Sep 3 02:30:54 2019 From: weslepeng at gmail.com (Wesley Peng) Date: Tue, 3 Sep 2019 10:30:54 +0800 Subject: [qa][nova][migrate]CPU doesn't have compatibility In-Reply-To: <59101c82.4fb9.16cf4ee15c0.Coremail.chx769467092@163.com> References: <59101c82.4fb9.16cf4ee15c0.Coremail.chx769467092@163.com> Message-ID: <45468d23-f9c2-bfd6-2021-7129db8afc07@gmail.com> on 2019/9/3 10:22, 崔恒香 wrote: > 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server  raise > exception.InvalidCPUInfo(reason=m % {'ret': ret, 'u': u}) > 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server > InvalidCPUInfo: Unacceptable CPU info: CPU doesn't have compatibility. Are you implementing a live migration? It seems you have a uncompatible CPU in peer host. regards. 
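When two compute hosts expose different CPU feature sets, the usual way to keep live migration working in both directions is to stop exposing the host CPU directly and instead pin guests to a baseline model that every host supports. A minimal sketch of the relevant nova.conf settings on each compute node follows; the model name is only an assumption and should be checked against what both hosts actually offer (for example with "virsh cpu-models x86_64" or "virsh domcapabilities" on the older machine):

    [libvirt]
    # present a common baseline CPU to guests instead of the host CPU
    cpu_mode = custom
    cpu_model = SandyBridge

nova-compute has to be restarted after the change, and guests only pick up the new CPU definition once they are started or hard-rebooted afterwards, so the compatibility error can persist for instances created before the change.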
From chx769467092 at 163.com Tue Sep 3 03:36:39 2019 From: chx769467092 at 163.com (=?GBK?B?tN6648/j?=) Date: Tue, 3 Sep 2019 11:36:39 +0800 (CST) Subject: [qa][nova][migrate]CPU doesn't have compatibility In-Reply-To: <45468d23-f9c2-bfd6-2021-7129db8afc07@gmail.com> References: <59101c82.4fb9.16cf4ee15c0.Coremail.chx769467092@163.com> <45468d23-f9c2-bfd6-2021-7129db8afc07@gmail.com> Message-ID: <4d965462.63d8.16cf53223a7.Coremail.chx769467092@163.com> At 2019-09-03 10:30:54, "Wesley Peng" wrote: > > >on 2019/9/3 10:22, 崔恒香 wrote: >> 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server raise >> exception.InvalidCPUInfo(reason=m % {'ret': ret, 'u': u}) >> 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server >> InvalidCPUInfo: Unacceptable CPU info: CPU doesn't have compatibility. > >Are you implementing a live migration? It seems you have a uncompatible >CPU in peer host. > Yes, It's a live migration. root at compute101:~# cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c 24 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz root at compute102:~# cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c 24 Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz We can migrate the vm from compute102 to compute101 Successfully. compute101 to compute102 ERROR info: the CPU is incompatible with host CPU: Host CPU does not provide required features: f16c, rdrand, fsgsbase, smep, erms -------------- next part -------------- An HTML attachment was scrubbed... URL: From weslepeng at gmail.com Tue Sep 3 03:40:34 2019 From: weslepeng at gmail.com (Wesley Peng) Date: Tue, 3 Sep 2019 11:40:34 +0800 Subject: [qa][nova][migrate]CPU doesn't have compatibility In-Reply-To: <4d965462.63d8.16cf53223a7.Coremail.chx769467092@163.com> References: <59101c82.4fb9.16cf4ee15c0.Coremail.chx769467092@163.com> <45468d23-f9c2-bfd6-2021-7129db8afc07@gmail.com> <4d965462.63d8.16cf53223a7.Coremail.chx769467092@163.com> Message-ID: on 2019/9/3 11:36, 崔恒香 wrote: > We can migrate the vm from compute102 to compute101 Successfully. > compute101 to compute102 ERROR info: the CPU is incompatible with host > CPU: Host CPU does not provide required features: f16c, rdrand, > fsgsbase, smep, erms > > The error has said, cpu of compute101 is lower than compute102, some incompatible issues happened. To live migrate, you'd better have all hosts with the same hardwares, including cpu/mem/disk etc. regards. From rony.khan at brilliant.com.bd Tue Sep 3 06:19:17 2019 From: rony.khan at brilliant.com.bd (Md. farhad Hasan Khan) Date: Tue, 3 Sep 2019 12:19:17 +0600 Subject: Openstack IPv6 neutron confiuraton In-Reply-To: <024201d54c1c$391aa360$ab4fea20$@brilliant.com.bd> References: <57C0039B-67D9-4699-B642-70C9EF7AB733@redhat.com> <62-5d43f000-5-50968180@101299267> <024201d54c1c$391aa360$ab4fea20$@brilliant.com.bd> Message-ID: <09f901d5621f$83ea0d40$8bbe27c0$@brilliant.com.bd> Hi Jens, Thanks for your nice documentation. Thanks & B'Rgds, Farhad -----Original Message----- From: Md. Farhad Hasan Khan [mailto:rony.khan at brilliant.com.bd] Sent: Tuesday, August 6, 2019 12:00 PM To: 'Core System' Subject: FW: Openstack IPv6 neutron confiuraton -----Original Message----- From: Jens Harbott [mailto:frickler at x-ion.de] Sent: Friday, August 2, 2019 2:10 PM To: Slawek Kaplonski Cc: rony.khan at brilliant.com.bd; OpenStack Discuss Subject: Re: Openstack IPv6 neutron confiuraton On Friday, August 02, 2019 09:16 CEST, Slawek Kaplonski wrote: > Hi, > > In tenant networks IPv6 packets are going same way as IPv4 packets. 
> There is no differences between IPv4 and IPv6 AFAIK. > In https://docs.openstack.org/neutron/latest/admin/deploy-ovs.html You > can find some deployment examples and explanation when ovs mechanism > driver is used and in > https://docs.openstack.org/neutron/latest/admin/deploy-lb.html > there is similar doc for linuxbridge driver. For private networking this is true, if you want public connectivity with IPv6, you need to be aware that there is no SNAT and no floating IPs with IPv6. Instead you need to assign globally routable IPv6 addresses directly to your tenant subnets and use address-scopes plus neutron-dynamic-routing in order to make sure that these addresses get indeed routed to the internet. I have written a small guide how to do this[1], feedback is welcome. [1] https://cloudbau.github.io/openstack/neutron/networking/ipv6/2017/09/11/neutron-pike-ipv6.html > There are differences with e.g. how DHCP is handled for IPv6. Please > check https://docs.openstack.org/neutron/latest/admin/config-ipv6.html > for details. Also noting that the good reference article at the end of this doc sadly has disappeared, though you can still find it via the web archives. See also https://review.opendev.org/674018 From missile0407 at gmail.com Tue Sep 3 07:51:28 2019 From: missile0407 at gmail.com (Eddie Yen) Date: Tue, 3 Sep 2019 15:51:28 +0800 Subject: [kolla-ansible] Correct way to add/remove nodes. Message-ID: Hi, I wanna know the correct way to add/remove nodes since I can't find the completely document or tutorial about this part. Here's what I know for now. For addition: 1. Install OS and setting up network on new servers. 2. Add new server's information into /etc/hosts and inventory file 3. Do bootstrapping to these servers by using bootstrap-servers with --limit option 4. (For Ceph OSD node) Add disk label to the disks that will become OSD. 5. Deploy again. For deletion (Compute): 1. Do migration if there're VMs exist on target node. 2. Set nova-compute service down on target node. Then remove the service from nova cluster. 3. Disable all Neutron agents on target node and remove from Neutron cluster. 4. Using kolla-ansible to stop all containers on target node. 5. Cleanup all containers and left settings by using cleanup-containers and cleanup-host script. For deletion (Ceph OSD node): 1. Remove all OSDs on target node by following Ceph tutorial. 2. Using kolla-ansible to stop all containers on target node. 3. Cleanup all containers and left settings by using cleanup-containers and cleanup-host script. Now I'm not sure about Controller if there's one controller down and want to add another one into HA cluster. My thought is that add into cluster first, then delete the informations about corrupted controller. But I have no clue about the details. Only about Ceph controller (mon, rgw, mds,. etc) Does anyone has experience about this? Many thanks, Eddie. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Sep 3 08:36:30 2019 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 3 Sep 2019 10:36:30 +0200 Subject: [tc] Gradually reduce TC to 11 members over 2020 In-Reply-To: References: <18d408c7-b396-0d3f-b5d8-ae537b25e5f7@redhat.com> <20190827155742.tpleittnofdnmhe5@yuggoth.org> Message-ID: <5649a103-0bed-467b-8a0c-23a27ed56562@openstack.org> Jean-Philippe Evrard wrote: > On Tue, 2019-08-27 at 12:50 -0400, Jim Rollenhagen wrote: >> I agree even numbers are not a problem. I don't think (hope?) 
>> the existing TC would merge anything that went 7-6 anyway. > > Agreed with that. > > And because I didn't write my opinion on the topic: > - I agree on the reduction. I don't know what the sweet spot is. 9 > might be it. > - If all the candidates and the election officials are ok with reduced > seats this time, we could start doing it now. > > It seems the last point isn't obvious, so in the meantime could we > propose the plan for the reduction by proposing a governance change? To close on that: It's too late to change for the current election, however if we don't get any new candidate, then the TC would mechanically get reduced to 11 already. Based on how the election turns out, once it is over I'll propose a governance change to gradually transition to 9 or 11 members, which will affect future elections. Cheers, -- Thierry Carrez (ttx) From sfinucan at redhat.com Tue Sep 3 09:54:49 2019 From: sfinucan at redhat.com (Stephen Finucane) Date: Tue, 03 Sep 2019 10:54:49 +0100 Subject: [all][tc] PDF Community Goal Update In-Reply-To: References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> Message-ID: <9430fe6726ca53328abb588b21c1823055cdaca3.camel@redhat.com> On Mon, 2019-09-02 at 15:31 -0400, Doug Hellmann wrote: > > On Sep 2, 2019, at 3:07 AM, Akihiro Motoki wrote: [snip] > > When the goal is defined the docs team thought the doc gate job can > > handle the PDF build > > without extra tox env and zuul job configuration. However, during > > implementing the zuul job support > > it turns out at least a new tox env or an extra zuul job configuration > > is required in each project > > to make the docs job fail when PDF build failure is detected. As a > > result, we changes the approach > > and the new tox target is now required in each project repo. > > The whole point of structuring the goal the way we did was that we do > not want to update every single repo this cycle so we could roll out > PDF building transparently. We said we would allow the job to pass > even if the PDF build failed, because this was phase 1 of making all > of this work. > > The plan was to 1. extend the current job to make PDF building > optional; 2. examine the results to see how many repos need > significant work; 3. add a feature flag via a setting somewhere in > the repo to control whether the job fails if PDFs cannot be built. > That avoids a second doc job running in parallel, and still allows us > to roll out the PDF build requirement over time when we have enough > information to do so. Unfortunately when we tried to implement this we found that virtually every project we looked at required _some_ amount of tweaks just to build, let alone look sensible. This was certainly true of the big service projects (nova, neutron, cinder, ...) which all ran afoul of a bug [1] in the Sphinx LaTeX builder. Given the issues with previous approach, such as the inability to easily reproduce locally and the general "hackiness" of the thing, along with the fact that we now had to submit changes against projects anyway, a collective decision was made [2] to drop that plan and persue the 'pdfdocs' tox target approach. If we're concerned about the difficulty of closing this out this cycle, I'd be in favour of just limiting our scope. IMO, the service projects are the ones that would benefit most from PDF documentation. These are the things people actually use and they tend to have the most complete documentation. 
Libraries can be self-documenting (yes, I know), in so far as once can use introspection, existing code examples, and the 'help' built-in to piece together what information they need. We should aim to close that gap long-term, but for now requiring modifications to ensure we have _some_ PDFs sounds a lot better than requiring no modifications and having no PDFs. Cheers, Stephen [1] https://github.com/sphinx-doc/sphinx/issues/3099 [2] http://eavesdrop.openstack.org/irclogs/%23openstack-doc/%23openstack-doc.2019-08-21.log.html#t2019-08-21T13:19:01 > > Perhaps we need to update the description of the goal definition document. > > I don’t think it’s a good idea to change the scope of the goal at > this point in the release cycle. From nate.johnston at redhat.com Tue Sep 3 11:28:04 2019 From: nate.johnston at redhat.com (Nate Johnston) Date: Tue, 3 Sep 2019 07:28:04 -0400 Subject: [election][tc] Candidacy for TC Message-ID: <20190903112705.x2vani3wdjmumhlz@bishop> Hello everyone, I would like to nominate myself for a position on the OpenStack Technical Committee. I started working in OpenStack in the Kilo release. I have always been focused on the networking aspects of OpenStack, mostly from a telco perspective. I had a two-cycle absence during Ocata/Pike when my former employer made a strategic decision to de-emphasize OpenStack. But I came back a year ago, and now I am a core reviewer for the Neutron project. I have never served on the TC. I deeply love OpenStack, the community of people that have come together to make cloud technology available in a completely open way for the world. It is indisputable that OpenStack is a truly global project now. I think the work that lies ahead of us is to cement OpenStack's place as a fundamental building block upon which future technologies are built. Now that StarlingX, Zuul, and Airship are also under the OpenStack foundation I think it will be more important for some of the strategic vision for the future evolution of OpenStack to come from the TC. Here are the main things I would focus on as a member of the TC: 1.) IPv6-only cloud computing: The incredible proliferation of network addressable devices will only accelerate. Some forward-thinking enterprises are already switching over to mostly, or entirely, IPv6 networking. Here we have an established advantage, as OpenStack supported IPv6 before any of the big public clouds, and we can continue to lean into the future by providing a well tested and documented IPv6-only option. 2.) A continued focus on Edge: Edge is not just for large enterprises with hundreds of widely spread points of presence. An edge deployment could also serve a small-to-medium business with a thin presence in two remote locations. The work to drive towards an edge architecture, combined with improvements in stability and ease of use, will make OpenStack an option in new areas, and I think that will be vitally important for our future. 3.) Making the experience of both operator and developer easier. I think this can be accomplished in a number of ways: by making the systems we use to develop and test our code more similar to operational clouds by moving beyond Devstack in the gate. 4.) Dealing with the contraction of the contributor community: There is much more documentation around what happens when a project begins than for what happens when it is no longer actively maintained. 
I think there is a lot of ambiguity that we ought to clear up for our users to clearly delineate a process of stepping down support as a project loses vitality, so that we are clearly communicating what they should expect from us as a community. Thank you very much for reading, and for considering my candidacy. Nate Johnston IRC: njohnston From witold.bedyk at suse.com Tue Sep 3 11:41:40 2019 From: witold.bedyk at suse.com (Witek Bedyk) Date: Tue, 3 Sep 2019 13:41:40 +0200 Subject: [monasca][election][ptl] PTL Candidacy for Ussuri Message-ID: <46300de4-fa89-fe84-a746-fbe181a4a930@suse.com> Hello everyone, I would like to announce my candidacy to continue as the PTL of Monasca for the Ussuri release. After looking at my candidacy statement for the last cycle I would like to keep most of the defined themes. In the next release I would like to focus on the following goals: * strengthen the community and improve active participation and contribution * consolidate the project by concentrating on the core functionality (metrics, logs, events), cleaning up technical debt; in particular, I would like to continue driving the work on replacing the thresholding engine * collaborate with Telemetry project to identify and solve possible gaps and allow users to seamlessly migrate to Monasca * continue working on and promoting containerized deployment and Prometheus integration * continue to improve the documentation * collaborate with other OpenStack projects, e.g. by contributing to self-healing and auto-scaling SIGs Thank you for considering my candidacy. Best greetings Witek From fungi at yuggoth.org Tue Sep 3 12:01:01 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 3 Sep 2019 12:01:01 +0000 Subject: [all][tc] PDF Community Goal Update In-Reply-To: <9430fe6726ca53328abb588b21c1823055cdaca3.camel@redhat.com> References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> <9430fe6726ca53328abb588b21c1823055cdaca3.camel@redhat.com> Message-ID: <20190903120100.b72uecrxnan32wni@yuggoth.org> On 2019-09-03 10:54:49 +0100 (+0100), Stephen Finucane wrote: [...] > If we're concerned about the difficulty of closing this out this > cycle, I'd be in favour of just limiting our scope. [...] If the goal needs a major overhaul this late in the cycle, when projects need to be shifting their focus to release-related activities, it may be wise to defer this goal to Ussuri. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mnaser at vexxhost.com Tue Sep 3 12:31:03 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 3 Sep 2019 08:31:03 -0400 Subject: [openstack-ansible][election] PTL candidacy for Ussuri Message-ID: I'd like to announce my candidacy for OpenStack Ansible PTL. Since my last time as PTL, I mentioned the hopes of doing a few things during the cycle: # Simplifying scenarios for usage and testing We made some really good progress on this but I think our testing has improved (see below) but there's still a big wall to climb to reach a production environment. # Using integrated repository for testing and dropping role tests This effort has been largely completed. I'm very happy of the results and I'm hoping to drop the tests/ folder once we figure out the linting stuff. # Progress on switching all deployments to use Python3 This is pretty much blocked and waiting until CentOS 8 is out so we can make an all EL8 release. 
# Eventual addition of CentOS 8 option The lack of availability has impared this :( # Reduction in number of config variables (encouraging overrides) We slowly started to do this for a few roles but it seems like this is a very long term thing. # Increase cooperation with other deployment projects (i.e. TripleO) This has started to happen over a few roles and the newly formed Ansible SIG which should encompass a lot of the work. I would like us to be able to continue to catch up on our technical debt as I think at this point, OSA is pretty much feature complete for the most part so it's about doing things that make the maintainership easy moving onwards. I would also like to start dropping some of the old releases that still have open branches. We don't have anyone that works on them at the moment and it's better to end them than leave them stale. Thank you for your consideration. -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From doug at doughellmann.com Tue Sep 3 12:42:21 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 3 Sep 2019 08:42:21 -0400 Subject: [all][tc] PDF Community Goal Update In-Reply-To: <9430fe6726ca53328abb588b21c1823055cdaca3.camel@redhat.com> References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> <9430fe6726ca53328abb588b21c1823055cdaca3.camel@redhat.com> Message-ID: <160D24A7-DE66-45DA-BBB8-AFD916D00004@doughellmann.com> > On Sep 3, 2019, at 5:54 AM, Stephen Finucane wrote: > > On Mon, 2019-09-02 at 15:31 -0400, Doug Hellmann wrote: >>> On Sep 2, 2019, at 3:07 AM, Akihiro Motoki wrote: > > [snip] > >>> When the goal is defined the docs team thought the doc gate job can >>> handle the PDF build >>> without extra tox env and zuul job configuration. However, during >>> implementing the zuul job support >>> it turns out at least a new tox env or an extra zuul job configuration >>> is required in each project >>> to make the docs job fail when PDF build failure is detected. As a >>> result, we changes the approach >>> and the new tox target is now required in each project repo. >> >> The whole point of structuring the goal the way we did was that we do >> not want to update every single repo this cycle so we could roll out >> PDF building transparently. We said we would allow the job to pass >> even if the PDF build failed, because this was phase 1 of making all >> of this work. >> >> The plan was to 1. extend the current job to make PDF building >> optional; 2. examine the results to see how many repos need >> significant work; 3. add a feature flag via a setting somewhere in >> the repo to control whether the job fails if PDFs cannot be built. >> That avoids a second doc job running in parallel, and still allows us >> to roll out the PDF build requirement over time when we have enough >> information to do so. > > Unfortunately when we tried to implement this we found that virtually > every project we looked at required _some_ amount of tweaks just to > build, let alone look sensible. This was certainly true of the big > service projects (nova, neutron, cinder, ...) which all ran afoul of a > bug [1] in the Sphinx LaTeX builder. 
Given the issues with previous > approach, such as the inability to easily reproduce locally and the > general "hackiness" of the thing, along with the fact that we now had > to submit changes against projects anyway, a collective decision was > made [2] to drop that plan and persue the 'pdfdocs' tox target > approach. We wanted to avoid making a bunch of the same changes to projects just to add the PDF building instructions. If the *content* of a project’s documentation needs work, that’s different. We should make those changes. > > If we're concerned about the difficulty of closing this out this cycle, > I'd be in favour of just limiting our scope. IMO, the service projects > are the ones that would benefit most from PDF documentation. These are > the things people actually use and they tend to have the most complete > documentation. Libraries can be self-documenting (yes, I know), in so > far as once can use introspection, existing code examples, and the > 'help' built-in to piece together what information they need. We should > aim to close that gap long-term, but for now requiring modifications to > ensure we have _some_ PDFs sounds a lot better than requiring no > modifications and having no PDFs. > > Cheers, > Stephen > > [1] https://github.com/sphinx-doc/sphinx/issues/3099 > [2] http://eavesdrop.openstack.org/irclogs/%23openstack-doc/%23openstack-doc.2019-08-21.log.html#t2019-08-21T13:19:01 > >>> Perhaps we need to update the description of the goal definition document. >> >> I don’t think it’s a good idea to change the scope of the goal at >> this point in the release cycle. > > > From sean.mcginnis at gmx.com Tue Sep 3 12:54:22 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 3 Sep 2019 07:54:22 -0500 Subject: [RelMgmt][election] PTL candidacy Message-ID: <20190903125422.GA28241@sm-workstation> Greetings! I would like to submit my name to continue as the release management PTL for the Ussuri release. I think release management is one of those critical functions for a project like OpenStack that often gets overlooked or just doesn't have the level of awareness that some of the other projects have. I've been PTL or active core since the Queens release. We have a lot of the release mechanisms automated now, but we still need to keep things running smoothly and handling any of the little issues that always pop up. My day job role isn't as focused on OpenStack as it had been, but I will still be able to devote enough time to help keep the fires out and guide anyone else that would like to get ready to take over the reins. Thank you for your consideration. 
Sean McGinnis (smcginnis) From sfinucan at redhat.com Tue Sep 3 13:04:53 2019 From: sfinucan at redhat.com (Stephen Finucane) Date: Tue, 03 Sep 2019 14:04:53 +0100 Subject: [all][tc] PDF Community Goal Update In-Reply-To: <160D24A7-DE66-45DA-BBB8-AFD916D00004@doughellmann.com> References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> <9430fe6726ca53328abb588b21c1823055cdaca3.camel@redhat.com> <160D24A7-DE66-45DA-BBB8-AFD916D00004@doughellmann.com> Message-ID: <7a4f103390cb2b9e4ec107b94f2e1e0dd2c500f0.camel@redhat.com> On Tue, 2019-09-03 at 08:42 -0400, Doug Hellmann wrote: > > On Sep 3, 2019, at 5:54 AM, Stephen Finucane wrote: > > > > On Mon, 2019-09-02 at 15:31 -0400, Doug Hellmann wrote: > > > > On Sep 2, 2019, at 3:07 AM, Akihiro Motoki wrote: > > > > [snip] > > > > > > When the goal is defined the docs team thought the doc gate job can > > > > handle the PDF build > > > > without extra tox env and zuul job configuration. However, during > > > > implementing the zuul job support > > > > it turns out at least a new tox env or an extra zuul job configuration > > > > is required in each project > > > > to make the docs job fail when PDF build failure is detected. As a > > > > result, we changes the approach > > > > and the new tox target is now required in each project repo. > > > > > > The whole point of structuring the goal the way we did was that we do > > > not want to update every single repo this cycle so we could roll out > > > PDF building transparently. We said we would allow the job to pass > > > even if the PDF build failed, because this was phase 1 of making all > > > of this work. > > > > > > The plan was to 1. extend the current job to make PDF building > > > optional; 2. examine the results to see how many repos need > > > significant work; 3. add a feature flag via a setting somewhere in > > > the repo to control whether the job fails if PDFs cannot be built. > > > That avoids a second doc job running in parallel, and still allows us > > > to roll out the PDF build requirement over time when we have enough > > > information to do so. > > > > Unfortunately when we tried to implement this we found that virtually > > every project we looked at required _some_ amount of tweaks just to > > build, let alone look sensible. This was certainly true of the big > > service projects (nova, neutron, cinder, ...) which all ran afoul of a > > bug [1] in the Sphinx LaTeX builder. Given the issues with previous > > approach, such as the inability to easily reproduce locally and the > > general "hackiness" of the thing, along with the fact that we now had > > to submit changes against projects anyway, a collective decision was > > made [2] to drop that plan and persue the 'pdfdocs' tox target > > approach. > > We wanted to avoid making a bunch of the same changes to projects just to > add the PDF building instructions. If the *content* of a project’s documentation > needs work, that’s different. We should make those changes. I thought the only reason to hack the docs venv in a Zuul job was to avoid having to mass patch projects to add tox configuration? As such, if we're already having to mass patch projects because they don't build otherwise, why wouldn't we add the tox configuration? Was there another reason to pursue the zuul-only approach that I've forgotten about/never knew? Stephen > > If we're concerned about the difficulty of closing this out this cycle, > > I'd be in favour of just limiting our scope. 
IMO, the service projects > > are the ones that would benefit most from PDF documentation. These are > > the things people actually use and they tend to have the most complete > > documentation. Libraries can be self-documenting (yes, I know), in so > > far as once can use introspection, existing code examples, and the > > 'help' built-in to piece together what information they need. We should > > aim to close that gap long-term, but for now requiring modifications to > > ensure we have _some_ PDFs sounds a lot better than requiring no > > modifications and having no PDFs. > > > > Cheers, > > Stephen > > > > [1] https://github.com/sphinx-doc/sphinx/issues/3099 > > [2] http://eavesdrop.openstack.org/irclogs/%23openstack-doc/%23openstack-doc.2019-08-21.log.html#t2019-08-21T13:19:01 > > > > > > Perhaps we need to update the description of the goal definition document. > > > > > > I don’t think it’s a good idea to change the scope of the goal at > > > this point in the release cycle. From doug at doughellmann.com Tue Sep 3 13:15:05 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 3 Sep 2019 09:15:05 -0400 Subject: [all][tc] PDF Community Goal Update In-Reply-To: <7a4f103390cb2b9e4ec107b94f2e1e0dd2c500f0.camel@redhat.com> References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> <9430fe6726ca53328abb588b21c1823055cdaca3.camel@redhat.com> <160D24A7-DE66-45DA-BBB8-AFD916D00004@doughellmann.com> <7a4f103390cb2b9e4ec107b94f2e1e0dd2c500f0.camel@redhat.com> Message-ID: <6C2701AC-6305-45C6-A62D-7FF0B43DD0F2@doughellmann.com> > On Sep 3, 2019, at 9:04 AM, Stephen Finucane wrote: > > On Tue, 2019-09-03 at 08:42 -0400, Doug Hellmann wrote: >>> On Sep 3, 2019, at 5:54 AM, Stephen Finucane wrote: >>> >>> On Mon, 2019-09-02 at 15:31 -0400, Doug Hellmann wrote: >>>>> On Sep 2, 2019, at 3:07 AM, Akihiro Motoki wrote: >>> >>> [snip] >>> >>>>> When the goal is defined the docs team thought the doc gate job can >>>>> handle the PDF build >>>>> without extra tox env and zuul job configuration. However, during >>>>> implementing the zuul job support >>>>> it turns out at least a new tox env or an extra zuul job configuration >>>>> is required in each project >>>>> to make the docs job fail when PDF build failure is detected. As a >>>>> result, we changes the approach >>>>> and the new tox target is now required in each project repo. >>>> >>>> The whole point of structuring the goal the way we did was that we do >>>> not want to update every single repo this cycle so we could roll out >>>> PDF building transparently. We said we would allow the job to pass >>>> even if the PDF build failed, because this was phase 1 of making all >>>> of this work. >>>> >>>> The plan was to 1. extend the current job to make PDF building >>>> optional; 2. examine the results to see how many repos need >>>> significant work; 3. add a feature flag via a setting somewhere in >>>> the repo to control whether the job fails if PDFs cannot be built. >>>> That avoids a second doc job running in parallel, and still allows us >>>> to roll out the PDF build requirement over time when we have enough >>>> information to do so. >>> >>> Unfortunately when we tried to implement this we found that virtually >>> every project we looked at required _some_ amount of tweaks just to >>> build, let alone look sensible. This was certainly true of the big >>> service projects (nova, neutron, cinder, ...) which all ran afoul of a >>> bug [1] in the Sphinx LaTeX builder. 
Given the issues with previous >>> approach, such as the inability to easily reproduce locally and the >>> general "hackiness" of the thing, along with the fact that we now had >>> to submit changes against projects anyway, a collective decision was >>> made [2] to drop that plan and persue the 'pdfdocs' tox target >>> approach. >> >> We wanted to avoid making a bunch of the same changes to projects just to >> add the PDF building instructions. If the *content* of a project’s documentation >> needs work, that’s different. We should make those changes. > > I thought the only reason to hack the docs venv in a Zuul job was to > avoid having to mass patch projects to add tox configuration? As such, > if we're already having to mass patch projects because they don't build > otherwise, why wouldn't we add the tox configuration? Was there another > reason to pursue the zuul-only approach that I've forgotten about/never > knew? I expected to need to fix formatting (even up to the point of commenting things out, like we found with the giant config sample files). Those are content changes, and would be mostly unique across projects. I wanted to avoid a large number of roughly identical changes to add tox environments, zuul jobs, etc. because having a lot of patches like that across all the repos makes extra work for small gain, especially when we can get the same results with a small number of changes in one repository. The approach we discussed was to update the docs job to run some extra steps using scripts that lived in the openstackdocstheme repository. That shouldn’t require adding any extra software or otherwise modifying the tox environments. Did that approach not work out? Doug From kevin at cloudnull.com Tue Sep 3 13:51:00 2019 From: kevin at cloudnull.com (Carter, Kevin) Date: Tue, 3 Sep 2019 08:51:00 -0500 Subject: [tripleo] Proposing Damien Ciabrini as core on TripleO/HA In-Reply-To: <20190830122850.GA5248@holtby> References: <20190830122850.GA5248@holtby> Message-ID: +1 -- Kevin Carter IRC: Cloudnull On Fri, Aug 30, 2019 at 7:33 AM Michele Baldessari wrote: > Hi all, > > Damien (dciabrin on IRC) has always been very active in all HA things in > TripleO and I think it is overdue for him to have core rights on this > topic. So I'd like to propose to give him core permissions on any > HA-related code in TripleO. > > Please vote here and in a week or two we can then act on this. > > Thanks, > -- > Michele Baldessari > C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykarel at redhat.com Tue Sep 3 14:03:12 2019 From: ykarel at redhat.com (Yatin Karel) Date: Tue, 3 Sep 2019 19:33:12 +0530 Subject: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: <20190814192440.GA3048@sm-workstation> References: <20190814192440.GA3048@sm-workstation> Message-ID: On Thu, Aug 15, 2019 at 12:54 AM Sean McGinnis wrote: > > > > > > > Bringing this backup to see what we need to do to get the stable/ocata > > > branches ended for the TripleO projects. I'm bringing this up > > > because we have https://review.openstack.org/#/c/647009/ which is for > > > the upcoming rename but CI is broken and we have no interest in > > > continue to keep the stable/ocata branches alive (or fix ci for them). 
> > > > > So we had a discussion yesterday in TripleO meeting regarding EOL of > > Ocata and Pike Branches for TripleO projects, and there was no clarity > > regarding the process of making the branches EOL(is just pushing a > > change to openstack/releases(deliverables/ocata/.yaml) > > creating ocata-eol tag enough or something else is also needed), can > > someone from Release team point us in the right direction. > > > > > Thanks, > > > -Alex > > > > > It would appear we have additional information we should add to somewhere like: > > https://docs.openstack.org/project-team-guide/stable-branches.html > > or > > https://releases.openstack.org/#references > > I believe it really is just a matter of requesting the new tag in the > openstack/releases repo. There is a good example of this when Tony did it for > TripleO's stable/newton branch: > > https://review.opendev.org/#/c/583856/ Thanks Sean, so ocata-eol[1] and pike-eol[2] patches were proposed for TripleO and they are merged, both ocata-eol and pike-eol tags got created after the patches merged. But still stable/ocata and stable/pike branches exist. Can someone from Release Team get them cleared so there is no option left to get cherry-pick proposed to these EOL branches. If any step from TripleO maintainers is needed please guide. [1] https://review.opendev.org/#/c/677478/ [2] https://review.opendev.org/#/c/678154/ > > I think I recall there were some additional steps Tony took at the time, but I > think everything is now covered by the automated process. Tony, please correct > me if I am wrong. > > Not sure if it applies, but you may want to see if there are any Zuul jobs that > need to be cleaned up or anything of that sort. > > We do say branches will be in unmaintained in the Extended Maintenance phase > for six months before going End of Life. Looking at Ocata, that happened April > 5 of this year. Six months would put it at the beginning of October. But I > think if the team knows they will not be accepting any more patches to these > branches, then it is better to get it clearly marked as EOL so proper > expectations are set. > > Sean Thanks and Regards Yatin Karel From amotoki at gmail.com Tue Sep 3 14:12:30 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 3 Sep 2019 23:12:30 +0900 Subject: [all][tc] PDF Community Goal Update In-Reply-To: <6C2701AC-6305-45C6-A62D-7FF0B43DD0F2@doughellmann.com> References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> <9430fe6726ca53328abb588b21c1823055cdaca3.camel@redhat.com> <160D24A7-DE66-45DA-BBB8-AFD916D00004@doughellmann.com> <7a4f103390cb2b9e4ec107b94f2e1e0dd2c500f0.camel@redhat.com> <6C2701AC-6305-45C6-A62D-7FF0B43DD0F2@doughellmann.com> Message-ID: On Tue, Sep 3, 2019 at 10:18 PM Doug Hellmann wrote: > > > > > On Sep 3, 2019, at 9:04 AM, Stephen Finucane wrote: > > > > On Tue, 2019-09-03 at 08:42 -0400, Doug Hellmann wrote: > >>> On Sep 3, 2019, at 5:54 AM, Stephen Finucane wrote: > >>> > >>> On Mon, 2019-09-02 at 15:31 -0400, Doug Hellmann wrote: > >>>>> On Sep 2, 2019, at 3:07 AM, Akihiro Motoki wrote: > >>> > >>> [snip] > >>> > >>>>> When the goal is defined the docs team thought the doc gate job can > >>>>> handle the PDF build > >>>>> without extra tox env and zuul job configuration. However, during > >>>>> implementing the zuul job support > >>>>> it turns out at least a new tox env or an extra zuul job configuration > >>>>> is required in each project > >>>>> to make the docs job fail when PDF build failure is detected. 
As a > >>>>> result, we changes the approach > >>>>> and the new tox target is now required in each project repo. > >>>> > >>>> The whole point of structuring the goal the way we did was that we do > >>>> not want to update every single repo this cycle so we could roll out > >>>> PDF building transparently. We said we would allow the job to pass > >>>> even if the PDF build failed, because this was phase 1 of making all > >>>> of this work. > >>>> > >>>> The plan was to 1. extend the current job to make PDF building > >>>> optional; 2. examine the results to see how many repos need > >>>> significant work; 3. add a feature flag via a setting somewhere in > >>>> the repo to control whether the job fails if PDFs cannot be built. > >>>> That avoids a second doc job running in parallel, and still allows us > >>>> to roll out the PDF build requirement over time when we have enough > >>>> information to do so. > >>> > >>> Unfortunately when we tried to implement this we found that virtually > >>> every project we looked at required _some_ amount of tweaks just to > >>> build, let alone look sensible. This was certainly true of the big > >>> service projects (nova, neutron, cinder, ...) which all ran afoul of a > >>> bug [1] in the Sphinx LaTeX builder. Given the issues with previous > >>> approach, such as the inability to easily reproduce locally and the > >>> general "hackiness" of the thing, along with the fact that we now had > >>> to submit changes against projects anyway, a collective decision was > >>> made [2] to drop that plan and persue the 'pdfdocs' tox target > >>> approach. > >> > >> We wanted to avoid making a bunch of the same changes to projects just to > >> add the PDF building instructions. If the *content* of a project’s documentation > >> needs work, that’s different. We should make those changes. > > > > I thought the only reason to hack the docs venv in a Zuul job was to > > avoid having to mass patch projects to add tox configuration? As such, > > if we're already having to mass patch projects because they don't build > > otherwise, why wouldn't we add the tox configuration? Was there another > > reason to pursue the zuul-only approach that I've forgotten about/never > > knew? > > I expected to need to fix formatting (even up to the point of commenting things > out, like we found with the giant config sample files). Those are content changes, > and would be mostly unique across projects. > > I wanted to avoid a large number of roughly identical changes to add tox environments, > zuul jobs, etc. because having a lot of patches like that across all the repos makes > extra work for small gain, especially when we can get the same results with a small > number of changes in one repository. > > The approach we discussed was to update the docs job to run some extra steps using > scripts that lived in the openstackdocstheme repository. That shouldn’t require > adding any extra software or otherwise modifying the tox environments. Did that approach > not work out? We explored ways only to update the docs job to run extra commands to build PDF docs, but there is one problem that the job cannot know whether PDF build is ready or not. If we ignore an error from PDF build, it works for repositories which are not ready for PDF build, but we cannot prevent PDF build failure again for repositories ready for PDF build As my project team hat of neutron team, we don't want to have PDF build failure again once the PDF build starts to work. 
To avoid this, stephenfin, asettle, AJaeger and I agree that some flag to determine if the PDF build is ready or not is needed. As of now, "pdf-docs" tox env is used as the flag. Another way we considered is a variable in openstack-tox-docs job, but we cannot pass a variable to zuul project template, so we didn't use this way. If there is a more efficient way, I am happy to use it. Thanks, Akihiro From morgan.fainberg at gmail.com Tue Sep 3 14:55:45 2019 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Tue, 3 Sep 2019 07:55:45 -0700 Subject: [keystone] Weekly meeting for September 3rd 2019 In-Reply-To: References: Message-ID: This is a reminder, there is no keystone weekly meeting today, September 3rd, 2019. Have a great start to your week everyone! —Morgan On Fri, Aug 30, 2019 at 14:02 Morgan Fainberg wrote: > As of this time, we are planning to skip the keystone weekly meeting for > 2019-09-03. This is to allow for work to continue with less interruption as > well as US-based folks who have Labor Day (2019-09-02 this year) off to > continue to make progress in light of the abbreviated week. > > As always, please feel free to join us on irc (freenode) in > #openstack-keystone if you have any questions. I am also available (irc > nic: kmalloc ). > > Cheers, > --Morgan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Tue Sep 3 15:06:18 2019 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 3 Sep 2019 10:06:18 -0500 Subject: [oslo][ptl][election] PTL candidacy for Ussuri Message-ID: <168ec916-9d64-adc2-31e6-c707ba956745@nemebean.com> See my election review for more details: https://review.opendev.org/#/c/679803/1/candidates/u/Oslo/openstack%2540nemebean.com Thanks. -Ben From gcerami at redhat.com Tue Sep 3 15:35:03 2019 From: gcerami at redhat.com (Gabriele Cerami) Date: Tue, 3 Sep 2019 16:35:03 +0100 Subject: [TripleO][CI] Outage on the rdoprojects.org server is causing jobs to fail Message-ID: <20190903153503.lh2hym6sjgxuiqet@localhost> Hi, this weekend, and outage caused trunk.rdoprojects.org servers to become unreachable. As main effect, all tripleo ci jobs were unable to download and install dlrn repositories for the needed hashes and failed. The outage has been resolved yesterday but there's a problem with DNS propagation and we're still seeing DNS queries returning incorrect IPs, and as a result, jobs are not consistently passing. We would advise to limit the rechecks until we are sure the DNS results are stable. Thanks. From mriedemos at gmail.com Tue Sep 3 15:37:36 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 3 Sep 2019 10:37:36 -0500 Subject: [Horizon] [stable] Adding Radomir Dopieralski to horizon-stable-maint In-Reply-To: References: Message-ID: <2d44f14b-582b-131b-4fa8-1a9d5f0fdd96@gmail.com> On 9/2/2019 1:22 PM, Ivan Kolodyazhny wrote: > Almost two weeks passed without any objections. > > I would like to ask Stable team to add Rodomir to the > horizon-stable-maint group. Done. -- Thanks, Matt From ianyrchoi at gmail.com Tue Sep 3 15:43:32 2019 From: ianyrchoi at gmail.com (Ian Y. 
Choi) Date: Wed, 4 Sep 2019 00:43:32 +0900 Subject: [all][tc] PDF Community Goal Update In-Reply-To: References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> <9430fe6726ca53328abb588b21c1823055cdaca3.camel@redhat.com> <160D24A7-DE66-45DA-BBB8-AFD916D00004@doughellmann.com> <7a4f103390cb2b9e4ec107b94f2e1e0dd2c500f0.camel@redhat.com> <6C2701AC-6305-45C6-A62D-7FF0B43DD0F2@doughellmann.com> Message-ID: <878ebb98-3204-7ce3-8ca6-b516ae7921a2@gmail.com> Akihiro Motoki wrote on 9/3/2019 11:12 PM: > On Tue, Sep 3, 2019 at 10:18 PM Doug Hellmann wrote: >> >> >>> On Sep 3, 2019, at 9:04 AM, Stephen Finucane wrote: >>> >>> On Tue, 2019-09-03 at 08:42 -0400, Doug Hellmann wrote: >>>>> On Sep 3, 2019, at 5:54 AM, Stephen Finucane wrote: >>>>> >>>>> On Mon, 2019-09-02 at 15:31 -0400, Doug Hellmann wrote: >>>>>>> On Sep 2, 2019, at 3:07 AM, Akihiro Motoki wrote: >>>>> [snip] >>>>> >>>>>>> When the goal is defined the docs team thought the doc gate job can >>>>>>> handle the PDF build >>>>>>> without extra tox env and zuul job configuration. However, during >>>>>>> implementing the zuul job support >>>>>>> it turns out at least a new tox env or an extra zuul job configuration >>>>>>> is required in each project >>>>>>> to make the docs job fail when PDF build failure is detected. As a >>>>>>> result, we changes the approach >>>>>>> and the new tox target is now required in each project repo. >>>>>> The whole point of structuring the goal the way we did was that we do >>>>>> not want to update every single repo this cycle so we could roll out >>>>>> PDF building transparently. We said we would allow the job to pass >>>>>> even if the PDF build failed, because this was phase 1 of making all >>>>>> of this work. >>>>>> >>>>>> The plan was to 1. extend the current job to make PDF building >>>>>> optional; 2. examine the results to see how many repos need >>>>>> significant work; 3. add a feature flag via a setting somewhere in >>>>>> the repo to control whether the job fails if PDFs cannot be built. >>>>>> That avoids a second doc job running in parallel, and still allows us >>>>>> to roll out the PDF build requirement over time when we have enough >>>>>> information to do so. >>>>> Unfortunately when we tried to implement this we found that virtually >>>>> every project we looked at required _some_ amount of tweaks just to >>>>> build, let alone look sensible. This was certainly true of the big >>>>> service projects (nova, neutron, cinder, ...) which all ran afoul of a >>>>> bug [1] in the Sphinx LaTeX builder. Given the issues with previous >>>>> approach, such as the inability to easily reproduce locally and the >>>>> general "hackiness" of the thing, along with the fact that we now had >>>>> to submit changes against projects anyway, a collective decision was >>>>> made [2] to drop that plan and persue the 'pdfdocs' tox target >>>>> approach. >>>> We wanted to avoid making a bunch of the same changes to projects just to >>>> add the PDF building instructions. If the *content* of a project’s documentation >>>> needs work, that’s different. We should make those changes. >>> I thought the only reason to hack the docs venv in a Zuul job was to >>> avoid having to mass patch projects to add tox configuration? As such, >>> if we're already having to mass patch projects because they don't build >>> otherwise, why wouldn't we add the tox configuration? Was there another >>> reason to pursue the zuul-only approach that I've forgotten about/never >>> knew? 
>> I expected to need to fix formatting (even up to the point of commenting things >> out, like we found with the giant config sample files). Those are content changes, >> and would be mostly unique across projects. >> >> I wanted to avoid a large number of roughly identical changes to add tox environments, >> zuul jobs, etc. because having a lot of patches like that across all the repos makes >> extra work for small gain, especially when we can get the same results with a small >> number of changes in one repository. >> >> The approach we discussed was to update the docs job to run some extra steps using >> scripts that lived in the openstackdocstheme repository. That shouldn’t require >> adding any extra software or otherwise modifying the tox environments. Did that approach >> not work out? > We explored ways only to update the docs job to run extra commands to > build PDF docs, > but there is one problem that the job cannot know whether PDF build is > ready or not. > If we ignore an error from PDF build, it works for repositories which > are not ready for PDF build, > but we cannot prevent PDF build failure again for repositories ready > for PDF build > As my project team hat of neutron team, we don't want to have PDF > build failure again > once the PDF build starts to work. > To avoid this, stephenfin, asettle, AJaeger and I agree that some flag > to determine if the PDF build > is ready or not is needed. As of now, "pdf-docs" tox env is used as the flag. > Another way we considered is a variable in openstack-tox-docs job, but > we cannot pass a variable > to zuul project template, so we didn't use this way. > If there is a more efficient way, I am happy to use it. > > Thanks, > Akihiro > Hello, Sorry for joining in this thread late, but to I first would like to try to figure out the current status regarding the current discussion on the thread: - openstackdocstheme has docstheme-build-pdf script [1] - build-pdf-docs Zuul job in openstack-zuul-jobs pre-installs all required packages [2] - Current guidance for project repos is that 1) is to just add to latex_documents settings [3] and add pdf-docs environment for trigger [4] - Project repos additionally need to change more for successful PDF builds like adding more options on conf.py [5] and changing more on rst files to explictly options like [6] . Now my questions from comments are: a) How about checking an option in somewhere else like .zuul.yaml or using grep in docs env part, not doing grep to check the existance of "pdf-docs" tox env [3]? b) Can we call docstheme-build-pdf in openstackdocstheme [1] instead of direct Sphinx & make commands in "pdf-docs" environment [4]? c) Ultimately, would executing docstheme-build-pdf command in build-pdf-docs Zuul job with another kind of trigger like bullet a) be feasible and/or be implemented by the end of this cycle? With many thanks, /Ian [1] https://review.opendev.org/#/c/665163/ [2] https://review.opendev.org/#/c/664555/25/roles/prepare-build-pdf-docs/tasks/main.yaml at 3 [3] https://review.opendev.org/#/c/678393/4/doc/source/conf.py [4] https://review.opendev.org/#/c/678393/4/tox.ini [5] https://review.opendev.org/#/c/678747/1/doc/source/conf.py at 270 [6] https://review.opendev.org/#/c/678747/1/doc/source/index.rst at 13 From anmar.salih1 at gmail.com Tue Sep 3 15:54:07 2019 From: anmar.salih1 at gmail.com (Anmar Salih) Date: Tue, 3 Sep 2019 11:54:07 -0400 Subject: Need help trigger aodh alarm Message-ID: Hey all, I need help trigger aodh alarm to execute a simple function. 
I am following the instructions here but it doesn't work. Here are my system configurations: 1- Operating system (Ubuntu16 server) -> running on a virtual machine 2- Devstack local.conf file. 3- Devstack Stein release. Note: I tried to install Devstack on Ubuntu16 desktop and Ubuntu18 desktop but no luck. This link shows the error output screen I received during the installation on Ubuntu18 desktop. Thank you in advance. Best Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Tue Sep 3 16:04:36 2019 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 3 Sep 2019 11:04:36 -0500 Subject: [ptl] [docs] [election] PTL Candidacy for Ussuri In-Reply-To: References: Message-ID: Do we even still have a docs PTL position now that docs has become a SIG? On 9/2/19 9:41 AM, Alexandra Settle wrote: > Hey all, > > I would like to submit my candidacy for the documentation team's PTL > for the Ussuri cycle. > > Stephen Finucane (Train PTL) will be unofficially serving alongside me > in a co-PTL capacity so we can equally address documentation-related > tasks and discussions. > > I served as the documentation PTL for Pike, and am currently serving as > an elected member of the Technical Committee in the capacity of vice > chair. I have been a part of the community since the beginning of 2014, > and have seen the highs and the lows and continue to love working for > and with this community. > > The definition of documentation for OpenStack has been rapidly changing > and the future of the documentation team continues to evolve and > change. I would like the opportunity to help guide the documentation > team, and potentially finish what myself, Petr, Stephen and many others > have started and carried on. > > Thanks, > > Alex > From ianyrchoi at gmail.com Tue Sep 3 16:09:14 2019 From: ianyrchoi at gmail.com (Ian Y. Choi) Date: Wed, 4 Sep 2019 01:09:14 +0900 Subject: [ALL][UC] The UC Special Election results Message-ID: <1e034eb4-042c-0bb1-dd94-eb6677ee6f0e@gmail.com> Hello all, On behalf of the User Committee Elections officials, I am pleased to announce the results of the UC elections for the special election 2019 [1]. Please join me in congratulating the winner: Jaesuk Ahn! Together with the result from the previous UC election last month [2], a total of two winners (Mohamed Elsakhawy, Jaesuk Ahn) will serve on the UC for one year. Thank you, - Ed & Ian [1] https://governance.openstack.org/uc/reference/uc-election-sep2019.html [2] http://lists.openstack.org/pipermail/user-committee/2019-August/002870.html From gcerami at redhat.com Tue Sep 3 16:35:18 2019 From: gcerami at redhat.com (Gabriele Cerami) Date: Tue, 3 Sep 2019 17:35:18 +0100 Subject: [TripleO][CI] Outage on the rdoprojects.org server is causing jobs to fail Message-ID: <20190903163518.zf2zlt5hvk32a4fq@localhost> Hi, this weekend, an outage caused the trunk.rdoprojects.org servers to become unreachable. The main effect was that all tripleo ci jobs were unable to download and install dlrn repositories for the needed hashes and failed. The outage was resolved yesterday, but there is a problem with DNS propagation and we are still seeing DNS queries returning incorrect IPs; as a result, jobs are not consistently passing. You may see problems downloading repos, building changes, or installing packages. We would advise limiting rechecks until we are sure the DNS results are stable. Thanks.
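If you are waiting to recheck, one rough way to see whether your resolver has picked up the corrected records (a sketch assuming the usual trunk.rdoproject.org hostname; the authoritative IPs are whatever the RDO infra team publishes) is:

  # compare the answer from your local resolver with a public one
  dig +short trunk.rdoproject.org
  dig +short trunk.rdoproject.org @8.8.8.8
  # if the two answers differ, a stale record is probably still cached locally

If the answers match and repos download cleanly, a recheck is more likely to be useful.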
From mark at stackhpc.com Tue Sep 3 16:49:57 2019 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 3 Sep 2019 17:49:57 +0100 Subject: [kolla] Kayobe Stein release now available Message-ID: Hi, I'm pleased to announce that the first Stein cycle release for Kayobe is now available - 6.0.0. Thanks to everyone who contributed. Release notes: https://docs.openstack.org/releasenotes/kayobe/stein.html Join us on #openstack-kolla to help make the Train cycle release even better. Cheers, Mark From aj at suse.com Tue Sep 3 17:36:28 2019 From: aj at suse.com (Andreas Jaeger) Date: Tue, 3 Sep 2019 19:36:28 +0200 Subject: [ptl] [docs] [election] PTL Candidacy for Ussuri In-Reply-To: References: Message-ID: On 03/09/2019 18.04, Ben Nemec wrote: > Do we even still have a docs PTL position now that docs has become a SIG? That transition is not yet effective, Andreas -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg GF: Felix Imendörffer; HRB 247165 (AG München) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From fungi at yuggoth.org Tue Sep 3 17:54:11 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 3 Sep 2019 17:54:11 +0000 Subject: [elections][ptl][tc][adjutant][cyborg][designate][i18n][manila][nova][openstacksdk][placement][powervmstackers][winstackers] Missing PTL/TC Candidates! Message-ID: <20190903175410.rbhkdrimut6uccex@yuggoth.org> A final reminder, we are now into the last few hours for declaring PTL and TC candidacies. Nominations are open until Sep 03, 2019 23:45 UTC. If you want to stand for election, don't delay, follow the instructions to make sure the community knows your intentions: https://governance.openstack.org/election/#how-to-submit-a-candidacy Make sure your nomination has been submitted to the openstack/election repository and approved by election officials. With approximately six hours remaining, the 10 projects tagged in the Subject line of this message will be deemed leaderless if no eligible nominees step forward. In this case the TC will directly oversee PTL selection/appointment. We also need at least one more TC candidate to have enough to fill the six open seats on the OpenStack Technical committee. Thank you, -- Jeremy Stanley, on behalf of the OpenStack Technical Election Officials -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From whayutin at redhat.com Tue Sep 3 18:14:07 2019 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 3 Sep 2019 12:14:07 -0600 Subject: [TripleO][CI] Outage on the rdoprojects.org server is causing jobs to fail In-Reply-To: <20190903163518.zf2zlt5hvk32a4fq@localhost> References: <20190903163518.zf2zlt5hvk32a4fq@localhost> Message-ID: On Tue, Sep 3, 2019 at 10:43 AM Gabriele Cerami wrote: > Hi, > > this weekend, and outage caused trunk.rdoprojects.org servers to become > unreachable. > As main effect, all tripleo ci jobs were unable to download and install > dlrn repositories for the needed hashes and failed. > > The outage has been resolved yesterday but there's a problem with DNS > propagation and we're still seeing DNS queries returning incorrect IPs, > and as a result, jobs are not consistently passing. > You may see problems dowloading repos, building changes, installing > packages > > We would advise to limit the rechecks until we are sure the DNS results > are stable. > > Thanks. 
Just adding a little more clarity. You can see the pass rate of TripleO jobs start to drop on 8/31 and recover on 9/3 in the screenshot. We have not yet fully recovered quite yet, we will update this thread when that is the case. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ci_job_pass_fail_rate.png Type: image/png Size: 52623 bytes Desc: not available URL: From sean.mcginnis at gmx.com Tue Sep 3 19:03:37 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 3 Sep 2019 14:03:37 -0500 Subject: [infra] Re: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: References: <20190814192440.GA3048@sm-workstation> Message-ID: <20190903190337.GA14785@sm-workstation> > > Thanks Sean, so ocata-eol[1] and pike-eol[2] patches were proposed for > TripleO and they are merged, both ocata-eol and pike-eol tags got > created after the patches merged. But still stable/ocata and > stable/pike branches exist. Can someone from Release Team get them > cleared so there is no option left to get cherry-pick proposed to > these EOL branches. If any step from TripleO maintainers is needed > please guide. > > [1] https://review.opendev.org/#/c/677478/ > [2] https://review.opendev.org/#/c/678154/ > The release automation can only create branches, not remove them. That is something the infra team would need to do. I can't recall how this was handled in the past. Maybe someone from infra can shed some light on how EOL'ing stable branches should be handled for the no longer needed stable/* branches. Sean From jungleboyj at gmail.com Tue Sep 3 19:07:29 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Tue, 3 Sep 2019 14:07:29 -0500 Subject: [elections][tc] Announcing Candidacy for OpenStack Technical Committee Message-ID: <847a4225-6a98-4edc-aed5-f9934ec80457@gmail.com> Dear OpenStack Community, This note is to officially announce my candidacy for the OpenStack TC. For those of you that don’t know me, allow me to introduce myself:  [1] I have been active in the OpenStack Community since early in 2013. After nine years of working on Super Computing Solutions with IBM I moved to OpenStack and started working on Neutron (Quantum back in those days) before moving over to a relatively new project called Cinder. Within IBM I worked to create the processes for IBM’s storage driver development teams to interact with and contribute to the Cinder project. I became a core member of the Cinder team in the middle of 2013 and have remained active ever since. [2] [3] I have been the Cinder PTL for the last two years, starting in the Queens release. Beyond Cinder I have sought opportunities to work across OpenStack projects. I was the liaison between the Cinder and Documentation teams. This lead to opportunities to learn how OpenStack’s documentation is developed and enabled me to help improve the Cinder team’s documentation.  I have also served for quite some time as the liaison to the Oslo team. Education and on-boarding of new team members has been a focus of my tenure in OpenStack.  I started helping to lead the OpenStack Upstream Institute at the fall 2016 Summit in Barcelona.  After Barcelona I helped to revise the education to meet the needs of future Upstream Institute sessions and have coordinated with my current employer, Lenovo, to sponsor each OpenStack Upstream Institute at Summits since the Spring 2017 Summit in Boston.  
Lenovo will even be hosting the OUI session in Shanghai!  I also created Cinder’s on-boarding education which I have presented at each Summit since the Fall 2017 Summit in Sydney.  I have sought opportunities to mentor new contributors both within my employers and from the community in general. I have a long experience working with OpenStack and a broad understanding of how the community works, I also feel that I have a breadth of technical experience that will benefit the TC.  I have experience in High Performance Computing and feel I can understand and represent the needs of the HPC community.  I have years of experience in the storage realm and can represent the unique concerns that storage vendors bring to OpenStack and the subsequent concerns that our OpenStack distributors have supporting OpenStack and its many drivers. Since moving from IBM to Lenovo my focus has changed from development of OpenStack to developing solutions that leverage OpenStack. I have enjoyed the opportunity to become a consumer of OpenStack as it has given me an opportunity to better understand the versatility and complexity of OpenStack.  I have been working with customers with an interest in both telco applications, particularly where Edge computing is concerned, as well as enterprise customers.  Given my latest work I feel that I am able to understand and represent many interests from the OpenStack community. If elected to the TC here are some of the concerns that I would like to address: * Ensure that we continue to improve our on-boarding and educational processes.  The days where people are assigned to only work on OpenStack are gone.  The easier we make it for people to successfully contribute to and leverage OpenStack, the more likely they will be to continue to contribute. * Improve documentation of OpenStack and Project processes. There have been a lot of discussions lately regarding undocumented processes.  There is a lot of tribal knowledge involved in OpenStack and this too makes it hard for new contributors to integrate. Improving the documentation/description of ‘how we build OpenStack’ is crucial. * I would like the community as a whole to seek ways to make OpenStack more consumable by our users and distributors.  I think the move to having longer lived stable branches has been a good step, but it has not resolved all the issues posed by customers that stay on older releases of OpenStack. The stable backport policies need to be readdressed to seek a solution that allows vendors to backport code for their customers, improving OpenStack’s usability without risking its stability. * I would like to continue the work that has been started to increase the visibility of the TC’s contributions to OpenStack and increase the effort to have the TC be a resource to the whole community. * I want to seek opportunities for OpenStack to continue to inter-operate with other cloud solutions.  Virtualization is not the only cloud approach available and customers, very often, do not want just one solution or the other.  OpenStack needs to continue to expand to address these concerns to remain vibrant and relevant. I hope that the thoughts above have resonated with you and appreciate you considering me for a position on the Technical Committee.  I am passionate about OpenStack and believe that we have a community like no other in the industry.  It would be a great honor to represent this community in a new capacity. 
Sincerely, Jay Bryant [1]  Foundation Profile: https://www.openstack.org/community/members/profile/8348/jay-bryant [2]  Reviews:  https://www.stackalytics.com/?user_id=jsbryant [3]  Commits: https://www.stackalytics.com/?user_id=jsbryant&metric=commits IRC (Freenode):  jungleboyj From fungi at yuggoth.org Tue Sep 3 19:22:49 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 3 Sep 2019 19:22:49 +0000 Subject: [infra] Re: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: <20190903190337.GA14785@sm-workstation> References: <20190814192440.GA3048@sm-workstation> <20190903190337.GA14785@sm-workstation> Message-ID: <20190903192248.b2mqozqobsxqgj7e@yuggoth.org> On 2019-09-03 14:03:37 -0500 (-0500), Sean McGinnis wrote: [...] > The release automation can only create branches, not remove them. > That is something the infra team would need to do. > > I can't recall how this was handled in the past. Maybe someone > from infra can shed some light on how EOL'ing stable branches > should be handled for the no longer needed stable/* branches. We've done it different ways. Sometimes it's been someone from the OpenDev/Infra sysadmins who volunteers to just delete the list of branches requested, but more recently for large batches related to EOL work we've temporarily elevated permissions for a member of the Stable Branch (now Extended Maintenance SIG?) or Release teams. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at nemebean.com Tue Sep 3 19:28:01 2019 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 3 Sep 2019 14:28:01 -0500 Subject: [ptl] [docs] [election] PTL Candidacy for Ussuri In-Reply-To: References: Message-ID: On 9/3/19 12:36 PM, Andreas Jaeger wrote: > On 03/09/2019 18.04, Ben Nemec wrote: >> Do we even still have a docs PTL position now that docs has become a SIG? > > That transition is not yet effective, Ah, didn't realize that. Thanks. From amy at demarco.com Tue Sep 3 19:35:21 2019 From: amy at demarco.com (Amy Marrich) Date: Tue, 3 Sep 2019 14:35:21 -0500 Subject: [Horizon] Help making custom theme Message-ID: For the Grace Hopper Conference's Open Source Day we're doing a Horizon based workshop for OpenStack (running Devstack Pike). The end goal is to have the attendee teams create their own OpenStack theme supporting a humanitarian effort of their choice in a few hours. I've tried modifying the material theme thinking it would be the easiest route to go but that might not be the best way to go about this.:) I've been getting some assistance from e0ne in the Horizon channel and my logo now shows up on the login page, and I had already gotten the SITE_BRAND attributes and the theme itself to show up after changing the local_settings.py. If anyone has some tips or a tutorial somewhere it would be greatly appreciated and I will gladly put together a tutorial for the repo when done. Thanks! Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Tue Sep 3 19:45:46 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 3 Sep 2019 19:45:46 +0000 Subject: [Horizon] Help making custom theme In-Reply-To: References: Message-ID: <20190903194546.rug3bagpglhzdyio@yuggoth.org> On 2019-09-03 14:35:21 -0500 (-0500), Amy Marrich wrote: > For the Grace Hopper Conference's Open Source Day we're doing a > Horizon based workshop for OpenStack (running Devstack Pike). [...] I'm thrilled to see you were able to make it happen, thanks for representing our community there! Out of curiosity though, why Pike? I expect there's a really good reason you're stuck doing it on an almost two-year-old release, but I lack sufficient imagination to guess what it might be. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From amy at demarco.com Tue Sep 3 20:05:24 2019 From: amy at demarco.com (Amy Marrich) Date: Tue, 3 Sep 2019 15:05:24 -0500 Subject: [Horizon] Help making custom theme In-Reply-To: <20190903194546.rug3bagpglhzdyio@yuggoth.org> References: <20190903194546.rug3bagpglhzdyio@yuggoth.org> Message-ID: Jeremy, It's what I could get running on City Network's generously provided infrastructure. I wasn't getting the same results installing there as locally, for instance had to turn off etcd but I don't have to on my local virtual box instance. I'd get partially through master and stein installs and then errors so I kind of stopped at a good installation and quickly made a 'golden' image and moved on to working on the workshop itself. I also attempted packstack and had errors as well so it just made more sense to move on vs continuously pestering Florian.:). Note: Errors could definitely be a result of me trying to run as lean as possible and not take advantage of the resources being donated. Amy (spotz) On Tue, Sep 3, 2019 at 2:47 PM Jeremy Stanley wrote: > On 2019-09-03 14:35:21 -0500 (-0500), Amy Marrich wrote: > > For the Grace Hopper Conference's Open Source Day we're doing a > > Horizon based workshop for OpenStack (running Devstack Pike). > [...] > > I'm thrilled to see you were able to make it happen, thanks for > representing our community there! Out of curiosity though, why Pike? > I expect there's a really good reason you're stuck doing it on an > almost two-year-old release, but I lack sufficient imagination to > guess what it might be. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Tue Sep 3 20:11:12 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 3 Sep 2019 16:11:12 -0400 Subject: [openstack-ansible] weekly office hours Message-ID: Hi everyone, Here’s the update of what happened in this week’s OpenStack Ansible Office Hours. - The 42.3 clean up and job state matrix are still pending. - We decided to retire Ocata and are thinking of retiring Pike since they are not being used or maintained anymore. - We also discussed the future of OSA. We noticed a lot of operators are going more towards containers or kubernetes and the contributions and traction for OSA have decreased, so we’re wondering if we should start thinking about adopting a new direction in the future. I suggest that you read the eavesdrop for the last point and would like to ask for input. Thanks! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 
514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From fungi at yuggoth.org Tue Sep 3 20:13:09 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 3 Sep 2019 20:13:09 +0000 Subject: [Horizon] Help making custom theme In-Reply-To: References: <20190903194546.rug3bagpglhzdyio@yuggoth.org> Message-ID: <20190903201308.gfhyg6p7eybtnsuw@yuggoth.org> On 2019-09-03 15:05:24 -0500 (-0500), Amy Marrich wrote: > It's what I could get running on City Network's generously > provided infrastructure. I wasn't getting the same results > installing there as locally, for instance had to turn off etcd but > I don't have to on my local virtual box instance. I'd get > partially through master and stein installs and then errors so I > kind of stopped at a good installation and quickly made a 'golden' > image and moved on to working on the workshop itself. I also > attempted packstack and had errors as well so it just made more > sense to move on vs continuously pestering Florian.:). > > Note: Errors could definitely be a result of me trying to run as > lean as possible and not take advantage of the resources being > donated. [...] Ahh, sorry to hear it was a struggle! I'm definitely not questioning your expedient choices, just want to be sure that any bugs you encountered get tracked somewhere so we can try to fix them. Thanks! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mnaser at vexxhost.com Tue Sep 3 20:13:21 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 3 Sep 2019 16:13:21 -0400 Subject: [tc] weekly update Message-ID: Hi everyone, Here’s the update for what happened in the OpenStack TC this week. You can get more information by checking for changes in openstack/governance repository. # General changes - Added kayobe as a deliverable of the kolla project: https://review.opendev.org/#/c/669299/ Thanks! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From amy at demarco.com Tue Sep 3 20:22:39 2019 From: amy at demarco.com (Amy Marrich) Date: Tue, 3 Sep 2019 15:22:39 -0500 Subject: [Horizon] Help making custom theme In-Reply-To: <20190903201308.gfhyg6p7eybtnsuw@yuggoth.org> References: <20190903194546.rug3bagpglhzdyio@yuggoth.org> <20190903201308.gfhyg6p7eybtnsuw@yuggoth.org> Message-ID: Jeremy, Ha you should know me better then that:) I just couldn't be sure if it was something I was doing or not and I didn't want to keep bumping up RAM and cores on the instances as I do try to be a good guest:). I do think the etcd thing is interesting as I never ran into that on Rackspace or on my local VM, but it could be a difference in the Ubuntu being used even though the same version. Amy (spotz) On Tue, Sep 3, 2019 at 3:14 PM Jeremy Stanley wrote: > On 2019-09-03 15:05:24 -0500 (-0500), Amy Marrich wrote: > > It's what I could get running on City Network's generously > > provided infrastructure. I wasn't getting the same results > > installing there as locally, for instance had to turn off etcd but > > I don't have to on my local virtual box instance. I'd get > > partially through master and stein installs and then errors so I > > kind of stopped at a good installation and quickly made a 'golden' > > image and moved on to working on the workshop itself. 
I also > > attempted packstack and had errors as well so it just made more > > sense to move on vs continuously pestering Florian.:). > > > > Note: Errors could definitely be a result of me trying to run as > > lean as possible and not take advantage of the resources being > > donated. > [...] > > Ahh, sorry to hear it was a struggle! I'm definitely not questioning > your expedient choices, just want to be sure that any bugs you > encountered get tracked somewhere so we can try to fix them. Thanks! > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Albert.Braden at synopsys.com Tue Sep 3 20:31:15 2019 From: Albert.Braden at synopsys.com (Albert Braden) Date: Tue, 3 Sep 2019 20:31:15 +0000 Subject: Nova causes MySQL timeouts Message-ID: It looks like nova is keeping mysql connections open until they time out. How are others responding to this issue? Do you just ignore the mysql errors, or is it possible to change configuration so that nova closes and reopens connections before they time out? Or is there a way to stop mysql from logging these aborted connections without hiding real issues? Aborted connection 10726 to db: 'nova' user: 'nova' host: 'asdf' (Got timeout reading communication packets) -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaetan.trellu at incloudus.com Tue Sep 3 20:36:47 2019 From: gaetan.trellu at incloudus.com (=?ISO-8859-1?Q?Ga=EBtan_Trellu?=) Date: Tue, 03 Sep 2019 16:36:47 -0400 Subject: Nova causes MySQL timeouts In-Reply-To: Message-ID: An HTML attachment was scrubbed... URL: From openstack at fried.cc Tue Sep 3 21:12:19 2019 From: openstack at fried.cc (Eric Fried) Date: Tue, 3 Sep 2019 16:12:19 -0500 Subject: [nova][ptl] Eric Fried candidacy for Nova U PTL Message-ID: <918a3e7d-fbf5-51da-44bc-5e4e7501b094@fried.cc> I would be honored to continue serving as the Nova PTL in the Ussuri release [0]. Please note that I will not be in Shanghai. As Train PTL, I am working to delegate the project update [1]. If reelected for Ussuri, I intend to do the same for PTG responsibilities, including doing as much as possible via "virtual pre-PTG" on the mailing list. Being PTL for Train has been a growth experience. It has forced me to take a broader view of the project versus my previous focus on my topics of interest [2]. The flip-side is that I have had less time to devote to those things, and that has been a sacrifice. As such, I intend to be bolder about delegating this time around. In my stump speech for Train [3] I expressed a desire to grow contributor participation. I feel we have seen positive movement with new and existing non-cores showing improved code and review activity. Let's maintain the encouraging atmosphere and continue to grow in this space. However, core participation has not seen the same health, and it shows in the relatively low volume of feature work that has been accomplished to date in Train (more on this below). This has been one of my main frustrations as Nova Cat-Herder: like cats, Nova cores are mysterious beings motivated by forces beyond my ability to control. I would like to find ways to make core review activity more consistent as a step toward being able to predict more accurately what we can expect to get done in a cycle. This should make everyone's (project) managers happier, a delicious treat made with real tuna. Feature-wise, I was disappointed in the lack of progress exploiting nested resource providers. 
The Placement team worked hard to deliver the dependencies [4] to allow us to express things like subtree affinity for NUMA, but Nova missed the boat (train) due to lack of resource [5] and inability to agree on how to move forward [6]. Expressing NUMA in Placement is going to be the next major inflection point for scheduling robustness and performance; we need to get serious about making it a priority. But first we should finish what we started, closing on the many almost-there features that are looking risky for Train. We should be conservative about committing to new features until those are done. Thanks, Eric Fried (efried) (say it like "freed") [0] https://review.opendev.org/679862 [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/thread.html#8459 [2] meaning things my employer cares about, areas where I have background/expertise, and things that sound fun and/or further some mission of Nova/OpenStack [3] https://opendev.org/openstack/election/src/branch/master/candidates/train/Nova/openstack at fried.cc [4] https://docs.openstack.org/placement/latest/specs/train/approved/2005575-nested-magic-1.html [5] insert Placement joke here [6] https://review.opendev.org/#/c/650963/ From tpb at dyncloud.net Tue Sep 3 21:30:11 2019 From: tpb at dyncloud.net (Tom Barron) Date: Tue, 3 Sep 2019 17:30:11 -0400 Subject: [manila][ptl] Non-candidacy for the Ussuri Cycle Message-ID: <20190903213011.dxj5254weftgw3tp@barron.net> I want to thank the Manila community for allowing me to serve as PTL for the last three cycles, but it is time for us to change it up! I know it's late for this announcement, but I wanted first to make sure that I won't be leaving an unfilled vacancy and events conspired such that that took longer than anticipated :) So stay tuned for an announcment shortly of a nomination for a new PTL for Ussuri -- I'm sure you'll be as pleased as I am with what you'll read. I'm not going anywhere. I'll be working on Manila itself and helping the new PTL, as well as working with actual Manila deployments and the use of Manila as open infrastructure by adjacent communities. Thanks again! -- Tom Barron From cboylan at sapwetik.org Tue Sep 3 21:35:01 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 03 Sep 2019 14:35:01 -0700 Subject: review.opendev.org Outage September 16 to Perform Project Renames Message-ID: <25fb5c5d-4ee5-47c6-80d0-df1f857856d0@www.fastmail.com> Hello, We will be taking a Gerrit outage of about an hour on September 16 at 14:00 UTC to perform project renames. Please let us know if this scheduling does not work for some reason. We have tried to schedule this for a quiet time at the end of OpenStack's Train release cycle. Also, if you'd like to rename a project, now is the time to start prepping for that. Feel free to ask us any questions you have or bring up your concerns with us. Thank you for your patience, Clark From anlin.kong at gmail.com Tue Sep 3 21:45:34 2019 From: anlin.kong at gmail.com (Lingxian Kong) Date: Wed, 4 Sep 2019 09:45:34 +1200 Subject: Need help trigger aodh alarm In-Reply-To: References: Message-ID: On Wed, Sep 4, 2019 at 3:57 AM Anmar Salih wrote: > Hey all, > > I need help trigger aodh alarm to execute a simple function. I am > following the instructions here > but it does't > work. > Hi Anmar, Could you please provide more information? e.g. does Qinling webhook itself work? Is the alarm created successfully? Is the python script in the guide executed successfully? Any related error logs? 
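For example, a minimal way to exercise the webhook in isolation is roughly the following (a sketch; substitute the webhook URL that "openstack webhook create" returned, and note that the webhook is meant to be invoked with POST, so a plain GET from a browser is not a useful test):

  # invoke the webhook directly; this should create a new function execution
  curl -X POST <webhook_url> -H "Content-Type: application/json" -d '{}'

  # then check whether an execution was recorded and whether it succeeded
  openstack function execution list

If an execution shows up here but nothing appears when the alarm fires, the problem is more likely on the Aodh side.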
- Best regards, Lingxian Kong Catalyst Cloud -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Tue Sep 3 22:00:59 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 3 Sep 2019 17:00:59 -0500 Subject: [qa][nova][migrate]CPU doesn't have compatibility In-Reply-To: References: <59101c82.4fb9.16cf4ee15c0.Coremail.chx769467092@163.com> <45468d23-f9c2-bfd6-2021-7129db8afc07@gmail.com> <4d965462.63d8.16cf53223a7.Coremail.chx769467092@163.com> Message-ID: On 9/2/2019 10:40 PM, Wesley Peng wrote: >> We can migrate the vm from compute102 to compute101 Successfully. >> compute101 to compute102 ERROR info: the CPU is incompatible with host >> CPU: Host CPU does not provide required features: f16c, rdrand, >> fsgsbase, smep, erms >> >> > > The error has said, cpu of compute101 is lower than compute102, some > incompatible issues happened. To live migrate, you'd better have all > hosts with the same hardwares, including cpu/mem/disk etc [1] and [2] may be helpful. [1] https://www.openstack.org/videos/summits/berlin-2018/effective-virtual-cpu-configuration-in-nova [2] https://docs.openstack.org/nova/latest/admin/configuration/hypervisor-kvm.html#specify-the-cpu-model-of-kvm-guests -- Thanks, Matt From adriant at catalyst.net.nz Tue Sep 3 23:12:06 2019 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Wed, 4 Sep 2019 11:12:06 +1200 Subject: [adjutant][ptl] Adrian Turjak as Adjutant U cycle PTL Message-ID: <2f91b121-023a-299c-aeac-8939761da50a@catalyst.net.nz> Hello OpenStackers, I'm submitting myself as the PTL for Adjutant during the U cycle. At this time I think I'm still the best suited to continue leading the project, with the best understanding of the codebase and the direction that the service is taking. The Train cycle was sadly not as productive as I'd have liked, purely because of how much big refactor work we've been in the middle of. The progress of that though has been good, and it should lay the groundwork for a very productive U cycle. The planned work for the next cycle is: - introduce partial policy support rather than relying on hardcoded decorators. - finish the long planned support for sub-project management - add project (and resource) termination logic - rework the identity manager as a pluggable construct. Cheers, Adrian Turjak From gouthampravi at gmail.com Tue Sep 3 23:47:56 2019 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Tue, 3 Sep 2019 16:47:56 -0700 Subject: [manila][ptl][election] PTL candidacy for Ussuri Message-ID: Greetings Zorillas & other Stackers, I would like to submit my candidacy to be the PTL of Manila for the Ussuri cycle. I have been a contributor to OpenStack since the Liberty release and a maintainer of Manila and its associated deliverables since the Ocata release. I have had the opportunity to work closely with, and learn from, two stellar engineers who have served as PTLs so far. I've also had the privilege of collaborating with contributors from varied backgrounds. This taught me the technical aspects of orchestrating Open Infrastructure Storage at cloud scale. I attribute the tremendous growth of the project to each of us in the project internalizing and espousing the "OpenStack Way" of upstream open-source development. My strongest qualification for this job is that I wake up excited about the problems we're solving. As an engineer I see features left to implement; as an ambassador, I see untapped use cases; as a maintainer, I see new contributors and technical debt. 
So, if you'll have me, as the PTL, I will work towards maturing Manila, tackling its technical debt, advocating its usage and sustaining its neutrality. I'll also continue doing the thing I love most: mentoring new members and preserving this well-knit community. In the near term, I propose that you and I: - Continue hard on the path to growing contributors: Stein/Train was an exciting time for us; we worked hard on this goal! We lowered the barrier of entry for new contributors by relaxing our review norms [1] and provided quick and easy tutorials [2] to bootstrap with our free and open source storage drivers, among many other things. We had an opportunity to mentor interns under Outreachy [3], Google Summer of Code [4] and the Open University of Israel [5] internship programs. Let's do more of this and ensure we have able successors. Let's also mentor reviewers and create more maintainers. - Complete integration to openstackclient/openstacksdk and evolve manila-csi by reaching feature parity to the rich feature-set we already provide. - Continue the work on reliability, availability and fault tolerance of individual components and allow for more flexible deployment scenarios. - Gather feedback from edge/telco/scientific computing consumers and address pain points. Thank you for your support, Goutham Pacha Ravi IRC: gouthamr [1] https://docs.openstack.org/manila/latest/contributor/manila-review-policy.html [2] https://docs.openstack.org/manila/latest/contributor/development-environment-devstack.html [3] https://www.outreachy.org/apply/rounds/may-2019-august-2019-outreachy-internships/#openstack-openstack-manila-integration-with-openstack-cli-os [4] https://summerofcode.withgoogle.com/projects/#5067835716403200 [5] https://review.opendev.org/#/q/committer:gilboa.nir%2540gmail.com+status:merged [6] Candidacy submission: https://review.opendev.org/679881 From gouthampravi at gmail.com Tue Sep 3 23:55:20 2019 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Tue, 3 Sep 2019 16:55:20 -0700 Subject: [manila][ptl] Non-candidacy for the Ussuri Cycle In-Reply-To: <20190903213011.dxj5254weftgw3tp@barron.net> References: <20190903213011.dxj5254weftgw3tp@barron.net> Message-ID: Thank you so much for your tireless service as our fearless leader Tom. While we'll allow you to retire as PTL for a term or few, I'm glad we'll retain you as a guide and mentor. Thanks for reposing faith in your protégées, I'm only one of many. I'm going to attempt to steal your shoes and try them out. On Tue, Sep 3, 2019 at 2:33 PM Tom Barron wrote: > I want to thank the Manila community for allowing me to serve as PTL > for the last three cycles, but it is time for us to change it up! I > know it's late for this announcement, but I wanted first to make sure > that I won't be leaving an unfilled vacancy and events conspired such > that that took longer than anticipated :) > > So stay tuned for an announcment shortly of a nomination for a new PTL > for Ussuri -- I'm sure you'll be as pleased as I am with what you'll > read. > > I'm not going anywhere. I'll be working on Manila itself and helping > the new PTL, as well as working with actual Manila deployments and the > use of Manila as open infrastructure by adjacent communities. > > Thanks again! > > -- Tom Barron > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From missile0407 at gmail.com Wed Sep 4 01:02:46 2019 From: missile0407 at gmail.com (Eddie Yen) Date: Wed, 4 Sep 2019 09:02:46 +0800 Subject: [kolla-ansible] Correct way to add/remove nodes. In-Reply-To: References: Message-ID:

OK, I think I found a probable answer about removing a controller, referring to Oracle's documentation. I know there are differences between kollacli and kolla-ansible, but I think a few of the mechanisms are the same.

To remove a controller that is having a problem (for example, a controller that is already dead because of a hardware failure):
1. Remove the node from the inventory.
2. Run kolla-ansible reconfigure, then check that all the information has been updated.

To add the new controller, just follow the same addition steps from my previous mail.

For now I don't have enough machines to try this, but this is the approach I would take. Please correct me if something is wrong.

Many thanks,
Eddie.

Eddie Yen wrote on Tue, Sep 3, 2019 at 3:51 PM:
> Hi,
>
> I want to know the correct way to add/remove nodes since I can't find a
> complete document or tutorial about this part.
>
> Here's what I know for now.
>
> For addition:
> 1. Install the OS and set up the network on the new servers.
> 2. Add the new server's information to /etc/hosts and the inventory file.
> 3. Bootstrap these servers by using bootstrap-servers with the
> --limit option.
> 4. (For a Ceph OSD node) Add the disk label to the disks that will become OSDs.
> 5. Deploy again.
>
>
> For deletion (Compute):
> 1. Migrate any VMs that exist on the target node.
> 2. Set the nova-compute service down on the target node, then remove the service
> from the nova cluster.
> 3. Disable all Neutron agents on the target node and remove them from the Neutron
> cluster.
> 4. Use kolla-ansible to stop all containers on the target node.
> 5. Clean up all containers and leftover settings by using the cleanup-containers
> and cleanup-host scripts.
>
>
> For deletion (Ceph OSD node):
> 1. Remove all OSDs on the target node by following the Ceph tutorial.
> 2. Use kolla-ansible to stop all containers on the target node.
> 3. Clean up all containers and leftover settings by using the cleanup-containers
> and cleanup-host scripts.
>
>
>
> Now I'm not sure what to do about controllers if one controller is down and I want
> to add another one into the HA cluster. My thought is to add the new one into the cluster
> first, then delete the information about the corrupted controller. But I have
> no clue about the details, only about the Ceph controller services (mon, rgw, mds, etc.)
>
> Does anyone have experience with this?
>
>
> Many thanks,
> Eddie.
> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Sep 4 01:07:23 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 04 Sep 2019 10:07:23 +0900 Subject: [cyborg][election][ptl] PTL candidacy for Ussuri In-Reply-To: <1CC272501B5BC543A05DB90AA509DED5276073B3@fmsmsx122.amr.corp.intel.com> References: <1CC272501B5BC543A05DB90AA509DED5276073B3@fmsmsx122.amr.corp.intel.com> Message-ID: <16cf9cfd6af.c725c38b180568.3968014926200808575@ghanshyammann.com> Hi Sundar, I think you missed adding the nomination on gerrit. - https://governance.openstack.org/election/#how-to-submit-a-candidacy The nomination period has passed now. -gmann ---- On Mon, 02 Sep 2019 13:52:24 +0900 Nadathur, Sundar wrote ---- > > Hello all, > I would like to announce my candidacy for the PTL role of Cyborg for the Ussuri cycle. > > I have been involved with Cyborg since 2018 Rocky PTG, and have had the privilege of serving as Cyborg PTL for the Train cycle. > > In the Train cycle, Cyborg saw some important developments.
We reached an agreement on integration with Nova at the PTG, and the spec that I wrote based on that agreement has been merged. We have seen new developers join the community. We have seen existing Cyborg drivers getting updated and new Cyborg drivers being proposed. We are also in the process of developing a tempest plugin for Cyborg. > > In the U cycle, I’d aim to build on this foundation. While we may support a certain set of VM operations with accelerators with Nova in Train, we can expand on that set in U. We should also focus on Day 2 operations like performance monitoring and health monitoring for accelerator devices. I would like to formalize and expand on the driver addition/development process. > > Thank you for your support. > > Regards, > Sundar > > From fungi at yuggoth.org Wed Sep 4 02:49:41 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 4 Sep 2019 02:49:41 +0000 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results Message-ID: <20190904024941.qaapsjuddklree26@yuggoth.org> Thank you to all candidates who put their name forward for Project Team Lead (PTL) and Technical Committee (TC) in this election. A healthy, open process breeds trust in our decision making capability thank you to all those who make this process possible. Now for the results of the PTL election process, please join me in extending congratulations to the following PTLs: * Adjutant : Adrian Turjak * Barbican : Douglas Mendizábal * Blazar : Pierre Riteau * Cinder : Brian Rosmaita * Cloudkitty : Luka Peschke * Congress : Eric Kao * Documentation : Alexandra Settle * Ec2 Api : Andrey Pavlov * Freezer : geng chc * Glance : Abhishek Kekane * Heat : Rico Lin * Horizon : Akihiro Motoki * Infrastructure : Clark Boylan * Ironic : Julia Kreger * Karbor : Pengju Jiao * Keystone : Colleen Murphy * Kolla : Mark Goddard * Kuryr : Michał Dulko * Loci : Pete Birley * Magnum : Feilong Wang * Manila : Goutham Pacha Ravi * Masakari : Sampath Priyankara * Mistral : Renat Akhmerov * Monasca : Witek Bedyk * Murano : Rong Zhu * Neutron : Sławek Kapłoński * Nova : Eric Fried * Octavia : Adam Harwell * OpenStack Charms : Frode Nordahl * Openstack Chef : Jens Harbott * OpenStack Helm : Pete Birley * OpenStackAnsible : Mohammed Naser * OpenStackClient : Dean Troyer * Oslo : Ben Nemec * Packaging Rpm : Javier Peña * Puppet OpenStack : Shengping Zhong * Qinling : Lingxian Kong * Quality Assurance : Ghanshyam Mann * Rally : Andrey Kurilin * Release Management : Sean McGinnis * Requirements : Matthew Thode * Sahara : Jeremy Freudberg * Searchlight : Trinh Nguyen * Senlin : XueFeng Liu * Solum : Rong Zhu * Storlets : Kota Tsuyuzaki * Swift : Tim Burke * Tacker : dharmendra kushwaha * Telemetry : Rong Zhu * Tricircle : chi zhang * Tripleo : Wes Hayutin * Trove : Lingxian Kong * Vitrage : Eyal Bar-Ilan * Watcher : canwei li * Zaqar : wang hao * Zun : Feng Shengqin Also please join me in congratulating the 6 newly elected members of the TC: Ghanshyam Mann (gmann) Jean-Philippe Evrard (evrardjp) Jay Bryant (jungleboyj) Kevin Carter (cloudnull) Kendall Nelson (diablo_rojo) Nate Johnston (njohnston) Full results: because there were only as many TC candidates as open seats, no poll was held and all candidates were acclaimed Elections: Election process details and results are also available here: https://governance.openstack.org/election/ -- Jeremy Stanley, on behalf of the OpenStack Technical Election Officials -------------- next part -------------- A non-text attachment was 
scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From anmar.salih1 at gmail.com Wed Sep 4 02:51:34 2019 From: anmar.salih1 at gmail.com (Anmar Salih) Date: Tue, 3 Sep 2019 22:51:34 -0400 Subject: Need help trigger aodh alarm - All the steps I went through by details. In-Reply-To: References: Message-ID: Hi Lingxian, First of all, I would like to apologize because the email is pretty long. I listed all the steps I went through just to make sure that I did everything correctly. Here are the configurations of the environment I am using: * Operating system (Ubuntu16 server) running on virtual machine. * Openstack version 3.19.0 * Aodh version 1.2.0 ( I executed *aodh --version* command and got response, so I am assuming aodh is working ) * Here is local.conf file I used to install devstack. * Here is a list for all of components I have in my environment after installation. 1- First step is to add the runtime environment by openstack runtime create --name python27 openstackqinling/python-runtime. One minute later the status of runtime switched to available. 2- Creating *hello_world.py* function ( exactly as mentioned at the website) . 3- Creating qinling function by openstack function create --runtime eaeeb0b6-4257-4f17-a336-892c3ec28a3e --entry hello_world.main --file hello_world.py . I got a response that is the function is created. Exactly as mentioned at the website. 4- Creating the webhook for the function by: openstack webhook create --function 07edc434-a4b8-424a-8d3a-af253aa31bf8 . Here is a screen capture for the response. I tried to copy and paste the webhook_url " http://192.168.1.155:7070/v1/webhooks/c5608648-bd73-478f-b452-ad1eabf93328/invoke" into my internet browser, so I got 404 not found. I am not sure if this is normal response or I have something wrong here. 5- Next step is to create an event alarm in Aodh by: aodh alarm create --name qinling-alarm --type event --alarm-action http://192.168.1.155:7070/v1/webhooks/c5608648-bd73-478f-b452-ad1eabf93328/invoke --repeat-action false --event-type compute.instance.create . The response is a little bit different than the one at the website. 6- Simulating an event trigger . 7- Downloading the script and modify the project and file id. by: curl -sSO https://raw.githubusercontent.com/lingxiankong/qinling_utils/master/aodh_notifier_simulator.py . So I have the following config and file id . 8- Executing the aodh alarm simulator by: python aodh_notifier_simulator.py . So I got this response : No handlers could be found for logger "oslo_messaging.notify.messaging" Message sent 9- Checking aodh alarm history by aodh alarm-history show ea16edb9-2000-471b-88e5-46f54208995e -f yaml . So I got this response 10- Last step is to check the function execution in qinling and here is the response . (empty bracket). I am not sure what is the problem. Best wishes. Anmar Salih. On Tue, Sep 3, 2019 at 5:45 PM Lingxian Kong wrote: > On Wed, Sep 4, 2019 at 3:57 AM Anmar Salih wrote: > >> Hey all, >> >> I need help trigger aodh alarm to execute a simple function. I am >> following the instructions here >> but it does't >> work. >> > > Hi Anmar, > > Could you please provide more information? e.g. does Qinling webhook > itself work? Is the alarm created successfully? Is the python script in the > guide executed successfully? Any related error logs? > > - > Best regards, > Lingxian Kong > Catalyst Cloud > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Wed Sep 4 02:57:54 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 04 Sep 2019 11:57:54 +0900 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <20190904024941.qaapsjuddklree26@yuggoth.org> References: <20190904024941.qaapsjuddklree26@yuggoth.org> Message-ID: <16cfa35052b.edce4b75181025.6895418677759907250@ghanshyammann.com> Thanks Jeremy and all election official for another flawless job. -gmann ---- On Wed, 04 Sep 2019 11:49:41 +0900 Jeremy Stanley wrote ---- > Thank you to all candidates who put their name forward for Project > Team Lead (PTL) and Technical Committee (TC) in this election. A > healthy, open process breeds trust in our decision making capability > thank you to all those who make this process possible. > > Now for the results of the PTL election process, please join me in > extending congratulations to the following PTLs: > > * Adjutant : Adrian Turjak > * Barbican : Douglas Mendizábal > * Blazar : Pierre Riteau > * Cinder : Brian Rosmaita > * Cloudkitty : Luka Peschke > * Congress : Eric Kao > * Documentation : Alexandra Settle > * Ec2 Api : Andrey Pavlov > * Freezer : geng chc > * Glance : Abhishek Kekane > * Heat : Rico Lin > * Horizon : Akihiro Motoki > * Infrastructure : Clark Boylan > * Ironic : Julia Kreger > * Karbor : Pengju Jiao > * Keystone : Colleen Murphy > * Kolla : Mark Goddard > * Kuryr : Michał Dulko > * Loci : Pete Birley > * Magnum : Feilong Wang > * Manila : Goutham Pacha Ravi > * Masakari : Sampath Priyankara > * Mistral : Renat Akhmerov > * Monasca : Witek Bedyk > * Murano : Rong Zhu > * Neutron : Sławek Kapłoński > * Nova : Eric Fried > * Octavia : Adam Harwell > * OpenStack Charms : Frode Nordahl > * Openstack Chef : Jens Harbott > * OpenStack Helm : Pete Birley > * OpenStackAnsible : Mohammed Naser > * OpenStackClient : Dean Troyer > * Oslo : Ben Nemec > * Packaging Rpm : Javier Peña > * Puppet OpenStack : Shengping Zhong > * Qinling : Lingxian Kong > * Quality Assurance : Ghanshyam Mann > * Rally : Andrey Kurilin > * Release Management : Sean McGinnis > * Requirements : Matthew Thode > * Sahara : Jeremy Freudberg > * Searchlight : Trinh Nguyen > * Senlin : XueFeng Liu > * Solum : Rong Zhu > * Storlets : Kota Tsuyuzaki > * Swift : Tim Burke > * Tacker : dharmendra kushwaha > * Telemetry : Rong Zhu > * Tricircle : chi zhang > * Tripleo : Wes Hayutin > * Trove : Lingxian Kong > * Vitrage : Eyal Bar-Ilan > * Watcher : canwei li > * Zaqar : wang hao > * Zun : Feng Shengqin > > Also please join me in congratulating the 6 newly elected members of > the TC: > > Ghanshyam Mann (gmann) > Jean-Philippe Evrard (evrardjp) > Jay Bryant (jungleboyj) > Kevin Carter (cloudnull) > Kendall Nelson (diablo_rojo) > Nate Johnston (njohnston) > > Full results: because there were only as many TC candidates as open > seats, no poll was held and all candidates were > acclaimed > > Elections: > > Election process details and results are also available here: > https://governance.openstack.org/election/ > > -- > Jeremy Stanley, on behalf of the OpenStack Technical Election Officials > From andre at florath.net Wed Sep 4 06:07:31 2019 From: andre at florath.net (Andreas Florath) Date: Wed, 04 Sep 2019 08:07:31 +0200 Subject: [heat] Resource handling in Heat stacks Message-ID: Hello! 
Can please anybody tell me, if all resources which are created within a Heat stack belong to the stack in the way that all the resources are freed / deleted when the stack is deleted? IMHO all resources which are created during the initial creation or update of a stack, even if they are ephemeral or only internal created, must be deleted when the stack is deleted by OpenStack Heat itself. Correct? My question might see obvious, but I did not find an explicit hint in the documentation stating this. The reason for my question: I have a Heat template which uses two images to create a server (using block_device_mapping_v2). Every time I run an 'openstack stack create' and 'openstack stack delete' cycle one ephemeral volume is left over / gets not deleted. For me this sounds like a problem in OpenStack (Heat). (It looks that this is at least similar to https://review.opendev.org/#/c/341008/ which never made it into master.) Kind regards Andre From ramishra at redhat.com Wed Sep 4 06:34:40 2019 From: ramishra at redhat.com (Rabi Mishra) Date: Wed, 4 Sep 2019 12:04:40 +0530 Subject: [heat] Resource handling in Heat stacks In-Reply-To: References: Message-ID: On Wed, Sep 4, 2019 at 11:41 AM Andreas Florath wrote: > Hello! > > > Can please anybody tell me, if all resources which are created > within a Heat stack belong to the stack in the way that > all the resources are freed / deleted when the stack is deleted? > > IMHO all resources which are created during the initial creation or > update of a stack, even if they are ephemeral or only internal > created, must be deleted when the stack is deleted by OpenStack Heat > itself. Correct? > > My question might see obvious, but I did not find an explicit hint in > the documentation stating this. > > > The reason for my question: I have a Heat template which uses two > images to create a server (using block_device_mapping_v2). Every time > I run an 'openstack stack create' and 'openstack stack delete' cycle > one ephemeral volume is left over / gets not deleted. > I think it's due toe delete_on_termination[1] property of bdmv2 which is interpreted as 'False', if not specified. You can set it to 'True' to delete the volumes along with server. I've not checked if it's different from how nova api behaves though. [1] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Nova::Server-prop-block_device_mapping_v2-*-delete_on_termination > For me this sounds like a problem in OpenStack (Heat). > (It looks that this is at least similar to > https://review.opendev.org/#/c/341008/ > which never made it into master.) > > > Kind regards > > Andre > > > > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Wed Sep 4 07:29:04 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 04 Sep 2019 09:29:04 +0200 Subject: [openstack-ansible] weekly office hours In-Reply-To: References: Message-ID: On Tue, 2019-09-03 at 16:11 -0400, Mohammed Naser wrote: > Hi everyone, > > Here’s the update of what happened in this week’s OpenStack Ansible > Office Hours. > > - The 42.3 clean up and job state matrix are still pending. > - We decided to retire Ocata and are thinking of retiring Pike since > they are not being used or maintained anymore. > - We also discussed the future of OSA. 
We noticed a lot of operators > are going more towards containers or kubernetes and the contributions > and traction for OSA have decreased, so we’re wondering if we should > start thinking about adopting a new direction in the future. > > I suggest that you read the eavesdrop for the last point and would > like to ask for input. > > Thanks! > > Regards, > Mohammed > Thanks for the summary! I totally enjoy those every week. I hope some other will step up and say they like them too, or better, write those to free some of your time! :) Regards, JP From andre at florath.net Wed Sep 4 07:51:01 2019 From: andre at florath.net (Andreas Florath) Date: Wed, 04 Sep 2019 09:51:01 +0200 Subject: [heat] Resource handling in Heat stacks In-Reply-To: References: Message-ID: <0f3f727581dc68f4f1ab26ed2ef47686811dbe07.camel@florath.net> Many thanks! Works like a charm! Suggestion: document default value of 'delete_on_termination'. 😉 Kind regards Andre On Wed, 2019-09-04 at 12:04 +0530, Rabi Mishra wrote: > On Wed, Sep 4, 2019 at 11:41 AM Andreas Florath > wrote: > > Hello! > > > > > > > > > > > > Can please anybody tell me, if all resources which are created > > > > within a Heat stack belong to the stack in the way that > > > > all the resources are freed / deleted when the stack is deleted? > > > > > > > > IMHO all resources which are created during the initial creation or > > > > update of a stack, even if they are ephemeral or only internal > > > > created, must be deleted when the stack is deleted by OpenStack > > Heat > > > > itself. Correct? > > > > > > > > My question might see obvious, but I did not find an explicit hint > > in > > > > the documentation stating this. > > > > > > > > > > > > The reason for my question: I have a Heat template which uses two > > > > images to create a server (using block_device_mapping_v2). Every > > time > > > > I run an 'openstack stack create' and 'openstack stack delete' > > cycle > > > > one ephemeral volume is left over / gets not deleted. > > > > > I think it's due toe delete_on_termination[1] property of bdmv2 which > is interpreted as 'False', if not specified. You can set it to 'True' > to delete the volumes along with server. I've not checked if it's > different from how nova api behaves though. > > [1] > https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Nova::Server-prop-block_device_mapping_v2-*-delete_on_termination > > > For me this sounds like a problem in OpenStack (Heat). > > > > (It looks that this is at least similar to > > > > https://review.opendev.org/#/c/341008/ > > > > which never made it into master.) > > > > > > > > > > > > Kind regards > > > > > > > > Andre > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andres at opennodecloud.com Wed Sep 4 08:43:49 2019 From: andres at opennodecloud.com (Andres Toomsalu) Date: Wed, 4 Sep 2019 11:43:49 +0300 Subject: [ops] Introducing Waldur - open-source platform for openstack operators Message-ID: <2e1770e6-e975-8090-4e35-5ae6c434a29e@opennodecloud.com> I would like to introduce open-source Waldur platform - targeted for openstack cloud operators and implementing service delivery pipeline towards end customers. 
It uses a modular approach and has the following core features:

* integrated marketplace and self-service for the end users (REST API
  and webapp)
* built-in accounting, integrations with backend billing systems and
  payment gateways
* built-in customer support/servicedesk functionality and integrations
  with backend servicedesk systems (Atlassian Servicedesk for example)
* built-in organisation membership and user management
* multi-cloud (i.e. multiple openstack API endpoints) support inside
  customer project containers

This diagram provides an overview of the available modules and their
relations: https://waldur.com/assets/doc/waldur-diagram.pdf

Waldur openstack support is fairly mature - it has been successfully used
in production deployments since 2015. Its web-based self-service for end
users implements somewhat opinionated openstack tenant and resource
management - based on our real-world experience and some best practices
derived from it. Horizon access provisioning and out-of-band tenant change
synchronization are also supported.

More details about the Waldur platform and its openstack support can be
found here: https://waldur.com/#openstack

Source code is available from the OpenNode github repositories:

https://github.com/opennode/waldur-mastermind
https://github.com/opennode/waldur-homeport

Documentation is available here: http://docs.waldur.com

All the best,

Andres Toomsalu
andres at opennodeloud.com

From wesley.peng1 at googlemail.com Wed Sep 4 08:51:09 2019
From: wesley.peng1 at googlemail.com (Wesley Peng)
Date: Wed, 4 Sep 2019 16:51:09 +0800
Subject: [ops] Introducing Waldur - open-source platform for openstack operators
In-Reply-To: <2e1770e6-e975-8090-4e35-5ae6c434a29e@opennodecloud.com>
References: <2e1770e6-e975-8090-4e35-5ae6c434a29e@opennodecloud.com>
Message-ID: 

on 2019/9/4 16:43, Andres Toomsalu wrote:
> I would like to introduce open-source Waldur platform - targeted for
> openstack cloud operators and implementing service delivery pipeline
> towards end customers.
It uses modular approach and has the following >> core features: >> >> * integrated marketplace and self-service for the end users (REST API >> and webapp) >> * built-in accounting, integrations with backend billing systems and >> payment gateways >> * built-in customer support/servicedesk functionality and >> integrations with backend servicedesk systems (Atlassian Servicedesk >> for example) >> * built-in organisation membership and user management >> * multi-cloud (ie multiple openstack API endpoints) support inside >> customer project containers > > Nice to know it,thanks. > btw, does it support registrar's stuff? like domain management, DNS > operations, CDN setup etc. > > regards. > -- ---------------------------------------------- Andres Toomsalu,andres at opennodecloud.com http://www.opennodecloud.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From wesley.peng1 at googlemail.com Wed Sep 4 09:12:52 2019 From: wesley.peng1 at googlemail.com (Wesley Peng) Date: Wed, 4 Sep 2019 17:12:52 +0800 Subject: [ops] Introducing Waldur - open-source platform for openstack operators In-Reply-To: <2e1770e6-e975-8090-4e35-5ae6c434a29e@opennodecloud.com> References: <2e1770e6-e975-8090-4e35-5ae6c434a29e@opennodecloud.com> Message-ID: <2a90e6ac-6e46-2080-c80f-f8ecb49dbe56@googlemail.com> Hi on 2019/9/4 16:43, Andres Toomsalu wrote: > All the best, > > Andres Toomsalu > andres at opennodeloud.com Your signature's domain is typo. Should it be: opennodecloud.com, a "c" gets lost. regards. From andres at opennodecloud.com Wed Sep 4 09:16:36 2019 From: andres at opennodecloud.com (Andres Toomsalu) Date: Wed, 4 Sep 2019 12:16:36 +0300 Subject: [ops] Introducing Waldur - open-source platform for openstack operators In-Reply-To: <2a90e6ac-6e46-2080-c80f-f8ecb49dbe56@googlemail.com> References: <2e1770e6-e975-8090-4e35-5ae6c434a29e@opennodecloud.com> <2a90e6ac-6e46-2080-c80f-f8ecb49dbe56@googlemail.com> Message-ID: <11fc43e8-2faa-ead9-6f47-1a53e76b491c@opennodecloud.com>  Correct - its andres at opennodecloud.com yes. Thank you for spotting! Wesley Peng wrote on 04/09/2019 12:12: > Hi > > on 2019/9/4 16:43, Andres Toomsalu wrote: >> All the best, >> >> Andres Toomsalu >> andres at opennodeloud.com > > Your signature's domain is typo. > Should it be: opennodecloud.com, a "c" gets lost. > > regards. > From cdent+os at anticdent.org Wed Sep 4 09:32:59 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 4 Sep 2019 10:32:59 +0100 (BST) Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <20190904024941.qaapsjuddklree26@yuggoth.org> References: <20190904024941.qaapsjuddklree26@yuggoth.org> Message-ID: On Wed, 4 Sep 2019, Jeremy Stanley wrote: > Thank you to all candidates who put their name forward for Project > Team Lead (PTL) and Technical Committee (TC) in this election. A > healthy, open process breeds trust in our decision making capability > thank you to all those who make this process possible. Congratulations and thank you to the people taking on these roles. We need to talk about the fact that there was no opportunity to vote in these "elections" (PTL or TC) because there were insufficient candidates. No matter the quality of new leaders (this looks like a good group), something is amiss. 
We danced around these issue for the two years I was on the TC, but we never did anything concrete to significantly change things, carrying on doing things in the same way in a world where those ways no longer seemed to fit. We can't claim any "seem" about it any more: OpenStack governance and leadership structures do not fit and we need to figure out the necessary adjustments. I haven't got any new ideas (which is part of why I left the TC). My position has always been that with a vendor and enterprise led project like OpenStack, where those vendors and enterprises are operating in a huge market, staffing the commonwealth in a healthy fashion is their responsibility. In large part because they are responsible for making OpenStack resistant to "casual" contribution in the first place (e.g., "hardware defined software"). We get people, sometimes, but it is not healthy: i may see different cross-sections of the community than others do, but i feel like there's been a strong tone of burnout since 2012 [1] We drastically need to change the expectations we place on ourselves in terms of velocity. [1] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-09-04.log.html#t2019-09-04T00:26:35 > Ghanshyam Mann (gmann) > Jean-Philippe Evrard (evrardjp) > Jay Bryant (jungleboyj) > Kevin Carter (cloudnull) > Kendall Nelson (diablo_rojo) > Nate Johnston (njohnston) Since there was no need to vote, there was no need to campaign, which means we will be missing out on the Q&A period. I've found those very useful for understanding the issues that are present in the community and for generating ideas on what to about them. I think it is good to have that process anyway so I'll start: What do you think we, as a community, can do about the situation described above? What do you as a TC member hope to do yourself? Thanks -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From pawel.konczalski at everyware.ch Wed Sep 4 09:46:35 2019 From: pawel.konczalski at everyware.ch (Pawel Konczalski) Date: Wed, 4 Sep 2019 11:46:35 +0200 Subject: Octavia LB flavor recommendation for Amphora VMs Message-ID: Hello everyone / Octavia Team, what is your experience / recommendation for a Octavia flavor with is used to deploy Amphora VM for small / mid size setups? (RAM / Cores / HDD) BR Pawel -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5227 bytes Desc: not available URL: From mark at stackhpc.com Wed Sep 4 10:00:27 2019 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 4 Sep 2019 11:00:27 +0100 Subject: [kolla] Cancelling today's meeting Message-ID: Hi, I can't make today's meeting, and we're missing a number of cores so I'll cancel. Please get in touch on IRC if there is anything to update. Cheers, Mark From jean-philippe at evrard.me Wed Sep 4 10:27:37 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 04 Sep 2019 12:27:37 +0200 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: References: <20190904024941.qaapsjuddklree26@yuggoth.org> Message-ID: <5adcf773a6f9a4e5771eebe2e801a3ea77692e74.camel@evrard.me> On Wed, 2019-09-04 at 10:32 +0100, Chris Dent wrote: > > We need to talk about the fact that there was no opportunity to vote > in these "elections" (PTL or TC) because there were insufficient > candidates. > (snipped) I think people agreed on reducing the TC members to 9. 
This will not change things fundamentally, but will open the chance for elections. > We can't claim any "seem" about it any more: OpenStack governance > and leadership structures do not fit and we need to figure out > the necessary adjustments. I will propose a series of adjustments, but these are not crazy ideas. I would like to brainstorm that with you, as I might have some more crazy ideas. > We drastically need to change the expectations we place on ourselves > in terms of velocity. I think there are a few ideas floating around. OpenStack is more stable nowadays too. I want to bring more fun and less pressure in OpenStack. This is something the TC will need to speak with the foundation, as it might impact them (impact on events for example). Good that we have some members on the foundation onboard :) > Since there was no need to vote, there was no need to campaign, > which means we will be missing out on the Q&A period. In fact I was looking forward the Q&A. I am weirdly not considering myself elected without this! AMA :) > What do you think we, as a community, can do about the situation > described above? What do you as a TC member hope to do yourself? This is by far too big to answer in a single email, and I would prefer if we split that into a different thread(s), if you don't mind :) My candidacy letter also wants to address some of those points, but not all of them, so I am glad you're raising them. What I would like to see: changes in the TC, changes in the release cadence, tech debt reduction, make the code (more) fun to deal with, allow us to try new things. Regards, JP From allprog at gmail.com Wed Sep 4 10:32:46 2019 From: allprog at gmail.com (=?UTF-8?B?QW5kcsOhcyBLw7Z2aQ==?=) Date: Wed, 4 Sep 2019 12:32:46 +0200 Subject: Invite Oleg Ovcharuk to join the Mistral Core Team Message-ID: I would like to invite Oleg Ovcharuk to join the Mistral Core Team. Oleg has been a very active and enthusiastic contributor to the project. He has definitely earned his way into our community. Thank you, Andras -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Wed Sep 4 12:18:34 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 4 Sep 2019 14:18:34 +0200 Subject: [keystone][edge] Edge Hacking Days - September 6,9, 13 In-Reply-To: <017B8545-8153-42E0-99B1-CF3775DBD4CA@gmail.com> References: <017B8545-8153-42E0-99B1-CF3775DBD4CA@gmail.com> Message-ID: Hi, I hope you had a great Summer and ready to dive back into edge computing with some new energy! :) We have three potential days for September based on the Doodle poll: September 6, 9, 13 and we are using the same etherpad for tracking and ideas: https://etherpad.openstack.org/p/osf-edge-hacking-days As a reminder, the Edge Hacking Days initiative is a remote gathering to work on edge computing related items, such as the reference architecture work or feature development or bug fixing items in relevant OpenStack services. __Please sign up on the etherpad for the days when you are available with time slots (including your time zone) when you are planning to be around if you’re interested in joining.__ Let me know if you have any questions. Thanks and Best Regards, Ildikó > On 2019. Aug 15., at 15:40, Ildiko Vancsa wrote: > > Hi, > > It is a friendly reminder that we are having the second edge hacking days in August this Friday (August 16). 
> > The dial-in information is the same, you can find the details here: https://etherpad.openstack.org/p/osf-edge-hacking-days > > If you’re interested in joining please __add your name and the time period (with time zone) when you will be available__ on these dates. You can also add topics that you would be interested in working on. > > We will keep on working on two items: > * Keystone to Keystone federation testing in DevStack > * Building the centralized edge reference architecture on Packet HW using TripleO > > Please let me know if you have any questions. > > See you on Friday! :) > > Thanks, > Ildikó From a.settle at outlook.com Wed Sep 4 12:30:08 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Wed, 4 Sep 2019 12:30:08 +0000 Subject: [all] [tc] PDF goal change Message-ID: Hi all, Further work on the PDF generation showed that many projects have project logos as svg files with the name PROJECT.svg, those get converted to PROJECT.pdf and collide with the default name used here. We have now changed the default to "doc-PROJECT.pdf" to have a unique name. The review [1] merged yesterday, so please be aware of this change when you are working on the goal. Thanks, Alex [1] https://review.opendev.org/#/c/679777 -- Alexandra Settle IRC: asettle From gaetan.trellu at incloudus.com Wed Sep 4 13:27:23 2019 From: gaetan.trellu at incloudus.com (=?ISO-8859-1?Q?Ga=EBtan_Trellu?=) Date: Wed, 04 Sep 2019 09:27:23 -0400 Subject: Need help trigger aodh alarm - All the steps I went through by details. In-Reply-To: Message-ID: <56d312af-2b52-49e4-afbc-446162cb08c8@email.android.com> An HTML attachment was scrubbed... URL: From nate.johnston at redhat.com Wed Sep 4 13:53:49 2019 From: nate.johnston at redhat.com (Nate Johnston) Date: Wed, 4 Sep 2019 09:53:49 -0400 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <16cfa35052b.edce4b75181025.6895418677759907250@ghanshyammann.com> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <16cfa35052b.edce4b75181025.6895418677759907250@ghanshyammann.com> Message-ID: <20190904135349.3vlueuttca6quztv@bishop> On Wed, Sep 04, 2019 at 11:57:54AM +0900, Ghanshyam Mann wrote: > Thanks Jeremy and all election official for another flawless job. > > -gmann I agree - thanks to the election officials! I had a short stint as an election official and I can say, it is a far more complicated job than it appears. And their work is essential to the functioning of the community. Nate > ---- On Wed, 04 Sep 2019 11:49:41 +0900 Jeremy Stanley wrote ---- > > Thank you to all candidates who put their name forward for Project > > Team Lead (PTL) and Technical Committee (TC) in this election. A > > healthy, open process breeds trust in our decision making capability > > thank you to all those who make this process possible. 
> > > > Now for the results of the PTL election process, please join me in > > extending congratulations to the following PTLs: > > > > * Adjutant : Adrian Turjak > > * Barbican : Douglas Mendizábal > > * Blazar : Pierre Riteau > > * Cinder : Brian Rosmaita > > * Cloudkitty : Luka Peschke > > * Congress : Eric Kao > > * Documentation : Alexandra Settle > > * Ec2 Api : Andrey Pavlov > > * Freezer : geng chc > > * Glance : Abhishek Kekane > > * Heat : Rico Lin > > * Horizon : Akihiro Motoki > > * Infrastructure : Clark Boylan > > * Ironic : Julia Kreger > > * Karbor : Pengju Jiao > > * Keystone : Colleen Murphy > > * Kolla : Mark Goddard > > * Kuryr : Michał Dulko > > * Loci : Pete Birley > > * Magnum : Feilong Wang > > * Manila : Goutham Pacha Ravi > > * Masakari : Sampath Priyankara > > * Mistral : Renat Akhmerov > > * Monasca : Witek Bedyk > > * Murano : Rong Zhu > > * Neutron : Sławek Kapłoński > > * Nova : Eric Fried > > * Octavia : Adam Harwell > > * OpenStack Charms : Frode Nordahl > > * Openstack Chef : Jens Harbott > > * OpenStack Helm : Pete Birley > > * OpenStackAnsible : Mohammed Naser > > * OpenStackClient : Dean Troyer > > * Oslo : Ben Nemec > > * Packaging Rpm : Javier Peña > > * Puppet OpenStack : Shengping Zhong > > * Qinling : Lingxian Kong > > * Quality Assurance : Ghanshyam Mann > > * Rally : Andrey Kurilin > > * Release Management : Sean McGinnis > > * Requirements : Matthew Thode > > * Sahara : Jeremy Freudberg > > * Searchlight : Trinh Nguyen > > * Senlin : XueFeng Liu > > * Solum : Rong Zhu > > * Storlets : Kota Tsuyuzaki > > * Swift : Tim Burke > > * Tacker : dharmendra kushwaha > > * Telemetry : Rong Zhu > > * Tricircle : chi zhang > > * Tripleo : Wes Hayutin > > * Trove : Lingxian Kong > > * Vitrage : Eyal Bar-Ilan > > * Watcher : canwei li > > * Zaqar : wang hao > > * Zun : Feng Shengqin > > > > Also please join me in congratulating the 6 newly elected members of > > the TC: > > > > Ghanshyam Mann (gmann) > > Jean-Philippe Evrard (evrardjp) > > Jay Bryant (jungleboyj) > > Kevin Carter (cloudnull) > > Kendall Nelson (diablo_rojo) > > Nate Johnston (njohnston) > > > > Full results: because there were only as many TC candidates as open > > seats, no poll was held and all candidates were > > acclaimed > > > > Elections: > > > > Election process details and results are also available here: > > https://governance.openstack.org/election/ > > > > -- > > Jeremy Stanley, on behalf of the OpenStack Technical Election Officials > > > > From amotoki at gmail.com Wed Sep 4 14:06:11 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Wed, 4 Sep 2019 23:06:11 +0900 Subject: [all][tc] PDF Community Goal Update In-Reply-To: <878ebb98-3204-7ce3-8ca6-b516ae7921a2@gmail.com> References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> <9430fe6726ca53328abb588b21c1823055cdaca3.camel@redhat.com> <160D24A7-DE66-45DA-BBB8-AFD916D00004@doughellmann.com> <7a4f103390cb2b9e4ec107b94f2e1e0dd2c500f0.camel@redhat.com> <6C2701AC-6305-45C6-A62D-7FF0B43DD0F2@doughellmann.com> <878ebb98-3204-7ce3-8ca6-b516ae7921a2@gmail.com> Message-ID: On Wed, Sep 4, 2019 at 12:43 AM Ian Y. 
Choi wrote: > > Akihiro Motoki wrote on 9/3/2019 11:12 PM: > > On Tue, Sep 3, 2019 at 10:18 PM Doug Hellmann wrote: > >> > >> > >>> On Sep 3, 2019, at 9:04 AM, Stephen Finucane wrote: > >>> > >>> On Tue, 2019-09-03 at 08:42 -0400, Doug Hellmann wrote: > >>>>> On Sep 3, 2019, at 5:54 AM, Stephen Finucane wrote: > >>>>> > >>>>> On Mon, 2019-09-02 at 15:31 -0400, Doug Hellmann wrote: > >>>>>>> On Sep 2, 2019, at 3:07 AM, Akihiro Motoki wrote: > >>>>> [snip] > >>>>> > >>>>>>> When the goal is defined the docs team thought the doc gate job can > >>>>>>> handle the PDF build > >>>>>>> without extra tox env and zuul job configuration. However, during > >>>>>>> implementing the zuul job support > >>>>>>> it turns out at least a new tox env or an extra zuul job configuration > >>>>>>> is required in each project > >>>>>>> to make the docs job fail when PDF build failure is detected. As a > >>>>>>> result, we changes the approach > >>>>>>> and the new tox target is now required in each project repo. > >>>>>> The whole point of structuring the goal the way we did was that we do > >>>>>> not want to update every single repo this cycle so we could roll out > >>>>>> PDF building transparently. We said we would allow the job to pass > >>>>>> even if the PDF build failed, because this was phase 1 of making all > >>>>>> of this work. > >>>>>> > >>>>>> The plan was to 1. extend the current job to make PDF building > >>>>>> optional; 2. examine the results to see how many repos need > >>>>>> significant work; 3. add a feature flag via a setting somewhere in > >>>>>> the repo to control whether the job fails if PDFs cannot be built. > >>>>>> That avoids a second doc job running in parallel, and still allows us > >>>>>> to roll out the PDF build requirement over time when we have enough > >>>>>> information to do so. > >>>>> Unfortunately when we tried to implement this we found that virtually > >>>>> every project we looked at required _some_ amount of tweaks just to > >>>>> build, let alone look sensible. This was certainly true of the big > >>>>> service projects (nova, neutron, cinder, ...) which all ran afoul of a > >>>>> bug [1] in the Sphinx LaTeX builder. Given the issues with previous > >>>>> approach, such as the inability to easily reproduce locally and the > >>>>> general "hackiness" of the thing, along with the fact that we now had > >>>>> to submit changes against projects anyway, a collective decision was > >>>>> made [2] to drop that plan and persue the 'pdfdocs' tox target > >>>>> approach. > >>>> We wanted to avoid making a bunch of the same changes to projects just to > >>>> add the PDF building instructions. If the *content* of a project’s documentation > >>>> needs work, that’s different. We should make those changes. > >>> I thought the only reason to hack the docs venv in a Zuul job was to > >>> avoid having to mass patch projects to add tox configuration? As such, > >>> if we're already having to mass patch projects because they don't build > >>> otherwise, why wouldn't we add the tox configuration? Was there another > >>> reason to pursue the zuul-only approach that I've forgotten about/never > >>> knew? > >> I expected to need to fix formatting (even up to the point of commenting things > >> out, like we found with the giant config sample files). Those are content changes, > >> and would be mostly unique across projects. > >> > >> I wanted to avoid a large number of roughly identical changes to add tox environments, > >> zuul jobs, etc. 
because having a lot of patches like that across all the repos makes > >> extra work for small gain, especially when we can get the same results with a small > >> number of changes in one repository. > >> > >> The approach we discussed was to update the docs job to run some extra steps using > >> scripts that lived in the openstackdocstheme repository. That shouldn’t require > >> adding any extra software or otherwise modifying the tox environments. Did that approach > >> not work out? > > We explored ways only to update the docs job to run extra commands to > > build PDF docs, > > but there is one problem that the job cannot know whether PDF build is > > ready or not. > > If we ignore an error from PDF build, it works for repositories which > > are not ready for PDF build, > > but we cannot prevent PDF build failure again for repositories ready > > for PDF build > > As my project team hat of neutron team, we don't want to have PDF > > build failure again > > once the PDF build starts to work. > > To avoid this, stephenfin, asettle, AJaeger and I agree that some flag > > to determine if the PDF build > > is ready or not is needed. As of now, "pdf-docs" tox env is used as the flag. > > Another way we considered is a variable in openstack-tox-docs job, but > > we cannot pass a variable > > to zuul project template, so we didn't use this way. > > If there is a more efficient way, I am happy to use it. > > > > Thanks, > > Akihiro > > > Hello, > > > Sorry for joining in this thread late, but to I first would like to try > to figure out the current status regarding the current discussion on the > thread: > > - openstackdocstheme has docstheme-build-pdf script [1] > > - build-pdf-docs Zuul job in openstack-zuul-jobs pre-installs all > required packages [2] > > - Current guidance for project repos is that 1) is to just add to > latex_documents settings [3] and add pdf-docs environment for trigger [4] > > - Project repos additionally need to change more for successful PDF > builds like adding more options on conf.py [5] and changing more on rst > files to explictly options like [6] . Thanks Ian. Your understanding on the current situations is correct. Good summary, thanks. > > > Now my questions from comments are: > > a) How about checking an option in somewhere else like .zuul.yaml or > using grep in docs env part, not doing grep to check the existance of > "pdf-docs" tox env [3]? I am not sure how your suggestion works more efficiently than the current pdf-docs tox env approach. We explored an option to introduce a flag variable to the openstack-tox-docs job but we use a zuul project-template which wraps openstack-tox-docs job and another job. The current zuul project-template does not accept a variable and projects who want to specify a flag explicitly needs to copy the content of the project-template. Considering this we gave up this route. Regarding "using grep in docs env part", I haven't understood what you think, but it looks similar to the current approach. > > b) Can we call docstheme-build-pdf in openstackdocstheme [1] instead of > direct Sphinx & make commands in "pdf-docs" environment [4]? It can, but I am not sure whether we need to update the current proposed patches. The only advantage of using docstheme-build-pdf is that we don't need to change project repositories when we update the command lines in future, but it sounds a matter of taste. 
> > c) Ultimately, would executing docstheme-build-pdf command in > build-pdf-docs Zuul job with another kind of trigger like bullet a) be > feasible and/or be implemented by the end of this cycle? We can, but again it is a matter of taste to me and most important thing is how we handle a flag to enable PDF build. Thanks, Akihiro > > > > With many thanks, > > > /Ian > > > [1] https://review.opendev.org/#/c/665163/ > > [2] > https://review.opendev.org/#/c/664555/25/roles/prepare-build-pdf-docs/tasks/main.yaml at 3 > > [3] https://review.opendev.org/#/c/678393/4/doc/source/conf.py > > [4] https://review.opendev.org/#/c/678393/4/tox.ini > > [5] https://review.opendev.org/#/c/678747/1/doc/source/conf.py at 270 > > [6] https://review.opendev.org/#/c/678747/1/doc/source/index.rst at 13 > From skaplons at redhat.com Wed Sep 4 14:37:05 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 4 Sep 2019 16:37:05 +0200 Subject: [neutron] CI issues Message-ID: <2BBD3139-A073-42D1-8A2A-A4847F9CBA4D@redhat.com> Hi neutrinos, We are currently having some issues in our gate. Please see [1], [2] and [3] for details. If Your Neutron patch failed on neutron-functional, neutron-functional-python27 or networking-ovn-tempest-dsvm-ovs-release jobs, please don’t recheck before all those issues will be solved. Recheck will not help and You will only use infra resources. [1] https://bugs.launchpad.net/neutron/+bug/1842659 [2] https://bugs.launchpad.net/neutron/+bug/1842482 [3] https://bugs.launchpad.net/bugs/1842657 — Slawek Kaplonski Senior software engineer Red Hat From dtantsur at redhat.com Wed Sep 4 15:24:14 2019 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 4 Sep 2019 17:24:14 +0200 Subject: [ironic] opensuse-15 jobs are temporary non-voting on bifrost Message-ID: <979bbec8-1f94-458a-aab0-f4d6327078ab@redhat.com> Hi all, JFYI we had to disable opensuse-15 jobs because they kept failing with repository issues. Help with debugging appreciated. Dmitry From mnaser at vexxhost.com Wed Sep 4 15:54:37 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 4 Sep 2019 11:54:37 -0400 Subject: [tc] monthly meeting agenda Message-ID: Hi everyone, Here’s the agenda for our monthly TC meeting. It will happen tomorrow (Thursday the 5th) at 1400 UTC in #openstack-tc and I will be your chair. If you can’t attend, please put your name in the “Apologies for Absence” section. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting * Follow up on past action items ** mnaser to contact Alan to mention that TC will have some presence at Shanghai leadership meeting ** ricolin update SIG guidelines to simplify the process for new SIGs ** ttx contact interested parties in a new 'large scale' SIG (help with mnaser, jroll reaching out to Verizon Media) * Active Initiatives ** mugsie to sync with dhellmann or release-team to resolve proposal bot for project-template patches ** Shanghai TC sessions: https://etherpad.openstack.org/p/PVG-TC-brainstorming ** Forum selection commitee: http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008188.html ** Make goal selection a two-step process (needs reviews at https://review.opendev.org/#/c/667932/ ) Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. 
http://vexxhost.com From mnaser at vexxhost.com Wed Sep 4 16:20:12 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 4 Sep 2019 12:20:12 -0400 Subject: [ansible-sig] weekly meetings Message-ID: Hi everyone, For those interested in getting involved, the ansible-sig meetings will be held weekly on Fridays at 2:00 pm UTC starting next week (13 September 2019). Looking forward to discussing details and ideas with all of you! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From chris at openstack.org Wed Sep 4 16:23:58 2019 From: chris at openstack.org (Chris Hoge) Date: Wed, 4 Sep 2019 09:23:58 -0700 Subject: Thank you Stackers for five amazing years! Message-ID: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> Hi everyone, After more than nine years working in cloud computing and on OpenStack, I've decided that it is time for a change and will be moving on from the OpenStack Foundation. For the last five years I've had the honor of helping to support this vibrant community, and I'm going to deeply miss being a part of it. OpenStack has been a central part of my life for so long that it's hard to imagine a work life without it. I'm proud to have helped in some small way to create a lasting project and community that has, and will continue to, transform how infrastructure is managed. September 12 will officially be my last day with the OpenStack Foundation. As I make the move away from my responsibilities, I'll be working with community members to help ensure continuity of my efforts. Thank you to everyone for building such an incredible community filled with talented, smart, funny, and kind people. You've built something special here, and we're all better for it. I'll still be involved with open source. If you ever want to get in touch, be it with questions about work I've been involved with or to talk about some exciting new tech or to just catch up over a tasty meal, I'm just a message away in all the usual places. Sincerely, Chris chris at hogepodge.com Twitter/IRC/everywhere else: @hogepodge From mnaser at vexxhost.com Wed Sep 4 16:30:36 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 4 Sep 2019 12:30:36 -0400 Subject: Thank you Stackers for five amazing years! In-Reply-To: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> References: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> Message-ID: On Wed, Sep 4, 2019 at 12:26 PM Chris Hoge wrote: > > Hi everyone, > > After more than nine years working in cloud computing and on OpenStack, I've > decided that it is time for a change and will be moving on from the OpenStack > Foundation. For the last five years I've had the honor of helping to support > this vibrant community, and I'm going to deeply miss being a part of it. > OpenStack has been a central part of my life for so long that it's hard to > imagine a work life without it. I'm proud to have helped in some small way to > create a lasting project and community that has, and will continue to, > transform how infrastructure is managed. > > September 12 will officially be my last day with the OpenStack Foundation. As I > make the move away from my responsibilities, I'll be working with community > members to help ensure continuity of my efforts. > > Thank you to everyone for building such an incredible community filled with > talented, smart, funny, and kind people. 
You've built something special here, > and we're all better for it. I'll still be involved with open source. If you > ever want to get in touch, be it with questions about work I've been involved > with or to talk about some exciting new tech or to just catch up over a tasty > meal, I'm just a message away in all the usual places. Thanks for being such a great asset in our community, your work across many different communities and involvement (specifically within the interaction across other projects, likes Kubernetes) has definitely left a long left impact! > Sincerely, > Chris > > chris at hogepodge.com > Twitter/IRC/everywhere else: @hogepodge -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From juliaashleykreger at gmail.com Wed Sep 4 16:38:55 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 4 Sep 2019 12:38:55 -0400 Subject: Thank you Stackers for five amazing years! In-Reply-To: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> References: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> Message-ID: Chris, Thank you for everything you've done! We wouldn't be here without your hard work! -Julia On Wed, Sep 4, 2019 at 12:28 PM Chris Hoge wrote: > > Hi everyone, > > After more than nine years working in cloud computing and on OpenStack, I've > decided that it is time for a change and will be moving on from the OpenStack > Foundation. For the last five years I've had the honor of helping to support > this vibrant community, and I'm going to deeply miss being a part of it. > OpenStack has been a central part of my life for so long that it's hard to > imagine a work life without it. I'm proud to have helped in some small way to > create a lasting project and community that has, and will continue to, > transform how infrastructure is managed. > > September 12 will officially be my last day with the OpenStack Foundation. As I > make the move away from my responsibilities, I'll be working with community > members to help ensure continuity of my efforts. > > Thank you to everyone for building such an incredible community filled with > talented, smart, funny, and kind people. You've built something special here, > and we're all better for it. I'll still be involved with open source. If you > ever want to get in touch, be it with questions about work I've been involved > with or to talk about some exciting new tech or to just catch up over a tasty > meal, I'm just a message away in all the usual places. > > Sincerely, > Chris > > chris at hogepodge.com > Twitter/IRC/everywhere else: @hogepodge From jungleboyj at gmail.com Wed Sep 4 17:15:01 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Wed, 4 Sep 2019 12:15:01 -0500 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: References: <20190904024941.qaapsjuddklree26@yuggoth.org> Message-ID: <3148cdbd-f232-4247-a40c-a0f8c2614df4@gmail.com> Chris, Thank you for your questions.  I agree that not having the election deprived the community of a chance to get to know the candidates better so I am happy to help out here.  :-) Hope my thoughts in-line below make sense! Jay On 9/4/2019 5:32 AM, Chris Dent wrote: > On Wed, 4 Sep 2019, Jeremy Stanley wrote: > >> Thank you to all candidates who put their name forward for Project >> Team Lead (PTL) and Technical Committee (TC) in this election. 
A >> healthy, open process breeds trust in our decision making capability >> thank you to all those who make this process possible. > > Congratulations and thank you to the people taking on these roles. > > We need to talk about the fact that there was no opportunity to vote > in these "elections" (PTL or TC) because there were insufficient > candidates. No matter the quality of new leaders (this looks like a > good group), something is amiss. We danced around these issue for > the two years I was on the TC, but we never did anything concrete to > significantly change things, carrying on doing things in the same > way in a world where those ways no longer seemed to fit. > > We can't claim any "seem" about it any more: OpenStack governance > and leadership structures do not fit and we need to figure out > the necessary adjustments. > I was surprised that we didn't have any PTL elections.  I don't know that this is all bad.  At least in the case of the Cinder team it seems to be a process that we have just kind-of internalized.  I got my chance to be PTL and was ready for a break.  I had reached out to Brian Rosmaita some time ago and had been grooming him to take over.  I had discussions with other people knew Brian was interested, so we went forward that way. I think this is a natural progression for where OpenStack is at right now.  There isn't a lot of contention over how the project needs to be  run right now.  In the future that may change and I think having our election process is important for if and when that happens. > I haven't got any new ideas (which is part of why I left the TC). > My position has always been that with a vendor and enterprise led > project like OpenStack, where those vendors and enterprises are > operating in a huge market, staffing the commonwealth in a healthy > fashion is their responsibility. In large part because they are > responsible for making OpenStack resistant to "casual" contribution > in the first place (e.g., "hardware defined software"). > > We get people, sometimes, but it is not healthy: > >     i may see different cross-sections of the community than others >     do, but i feel like there's been a strong tone of burnout since >     2012 [1] > This is a very real concern for me.  We do have a very few people who have taken over a lot of responsibility for OpenStack and are getting burned out.  We also need to have more companies start investing in OpenStack again.  We can't, however, force them to participate. I know from my last year or so at Lenovo that there are customers with real interest in OpenStack.  OpenStack is running in the real world.  I don't know if it is just working for people or if the customers are modifying it themselves and not contributing back. It would be interesting to get numbers on this.  Not sure how we can do that.  I am afraid, in the past, that the community got a reputation of being 'too hard to contribute to'.  If that perception is still hurting us now it is something that we need to address. I think that some of the lack of participation is also due to cultural differences in the geos where OpenStack has been expanding.  That is a very hard problem to address. > We drastically need to change the expectations we place on ourselves > in terms of velocity. 
> > [1] > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-09-04.log.html#t2019-09-04T00:26:35 > >>  Ghanshyam Mann (gmann) >>  Jean-Philippe Evrard (evrardjp) >>  Jay Bryant (jungleboyj) >>  Kevin Carter (cloudnull) >>  Kendall Nelson (diablo_rojo) >>  Nate Johnston (njohnston) > > Since there was no need to vote, there was no need to campaign, > which means we will be missing out on the Q&A period. I've found > those very useful for understanding the issues that are present in > the community and for generating ideas on what to about them. I > think it is good to have that process anyway so I'll start: > > What do you think we, as a community, can do about the situation > described above? What do you as a TC member hope to do yourself? > I addressed this a bit in my candidacy note.  I think that we need to continue to improve our education and on-boarding processes.  Though I don't think it is hard to contribute successfully to OpenStack, there is a lot of tribal knowledge required to be successful in OpenStack.  Documenting those things will help. I would like to work with the foundation to reach out to companies and find out why they are less likely to participate than they used to be.  People are using OpenStack ... why aren't they contributing.  Perhaps it is a question that we could add to the user survey.  I know when I had the foundation reach out to companies that were about to lose their drivers from Cinder, we got responses.  So, I think that is a path we could consider. > Thanks > From Albert.Braden at synopsys.com Wed Sep 4 17:18:43 2019 From: Albert.Braden at synopsys.com (Albert Braden) Date: Wed, 4 Sep 2019 17:18:43 +0000 Subject: Nova causes MySQL timeouts In-Reply-To: References: Message-ID: We’re not setting max_pool_size nor max_overflow option presently. I googled around and found this document: https://docs.openstack.org/keystone/stein/configuration/config-options.html Document says: [api_database] connection_recycle_time = 3600 (Integer) Timeout before idle SQL connections are reaped. max_overflow = None (Integer) If set, use this value for max_overflow with SQLAlchemy. max_pool_size = None (Integer) Maximum number of SQL connections to keep open in a pool. [database] connection_recycle_time = 3600 (Integer) Timeout before idle SQL connections are reaped. min_pool_size = 1 (Integer) Minimum number of SQL connections to keep open in a pool. max_overflow = 50 (Integer) If set, use this value for max_overflow with SQLAlchemy. max_pool_size = None (Integer) Maximum number of SQL connections to keep open in a pool. If min_pool_size is >0, would that cause at least 1 connection to remain open until it times out? What are the recommended values for these, to allow unused connections to close before they time out? Is “min_pool_size = 0” an acceptable setting? My settings are default: [api_database]: #connection_recycle_time = 3600 #max_overflow = #max_pool_size = [database]: #connection_recycle_time = 3600 #min_pool_size = 1 #max_overflow = 50 #max_pool_size = 5 It’s not obvious what max_overflow does. Where can I find a document that explains more about these settings? From: Gaëtan Trellu Sent: Tuesday, September 3, 2019 1:37 PM To: Albert Braden Cc: openstack-discuss at lists.openstack.org Subject: Re: Nova causes MySQL timeouts Hi Albert, It is a configuration issue, have a look to max_pool_size and max_overflow options under [database] section. Keep in mind than more workers you will have more connections will be opened on the database. 
Gaetan (goldyfruit) On Sep 3, 2019 4:31 PM, Albert Braden > wrote: It looks like nova is keeping mysql connections open until they time out. How are others responding to this issue? Do you just ignore the mysql errors, or is it possible to change configuration so that nova closes and reopens connections before they time out? Or is there a way to stop mysql from logging these aborted connections without hiding real issues? Aborted connection 10726 to db: 'nova' user: 'nova' host: 'asdf' (Got timeout reading communication packets) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Wed Sep 4 17:21:03 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Wed, 4 Sep 2019 12:21:03 -0500 Subject: Thank you Stackers for five amazing years! In-Reply-To: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> References: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> Message-ID: <592533e2-c4e0-ff37-14cf-e00ebcfec832@gmail.com> Chris, Thank you for all you have done!  Sorry to see you go. Wishing you the best of luck with your future endeavors! Jay On 9/4/2019 12:23 PM, Chris Hoge wrote: > Hi everyone, > > After more than nine years working in cloud computing and on OpenStack, I've > decided that it is time for a change and will be moving on from the OpenStack > Foundation. For the last five years I've had the honor of helping to support > this vibrant community, and I'm going to deeply miss being a part of it. > OpenStack has been a central part of my life for so long that it's hard to > imagine a work life without it. I'm proud to have helped in some small way to > create a lasting project and community that has, and will continue to, > transform how infrastructure is managed. > > September 12 will officially be my last day with the OpenStack Foundation. As I > make the move away from my responsibilities, I'll be working with community > members to help ensure continuity of my efforts. > > Thank you to everyone for building such an incredible community filled with > talented, smart, funny, and kind people. You've built something special here, > and we're all better for it. I'll still be involved with open source. If you > ever want to get in touch, be it with questions about work I've been involved > with or to talk about some exciting new tech or to just catch up over a tasty > meal, I'm just a message away in all the usual places. > > Sincerely, > Chris > > chris at hogepodge.com > Twitter/IRC/everywhere else: @hogepodge From amy at demarco.com Wed Sep 4 17:28:17 2019 From: amy at demarco.com (Amy Marrich) Date: Wed, 4 Sep 2019 12:28:17 -0500 Subject: Thank you Stackers for five amazing years! In-Reply-To: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> References: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> Message-ID: Thanks for everything you've done over the years you will be missed! Amy (spotz) On Wed, Sep 4, 2019 at 11:24 AM Chris Hoge wrote: > Hi everyone, > > After more than nine years working in cloud computing and on OpenStack, > I've > decided that it is time for a change and will be moving on from the > OpenStack > Foundation. For the last five years I've had the honor of helping to > support > this vibrant community, and I'm going to deeply miss being a part of it. > OpenStack has been a central part of my life for so long that it's hard to > imagine a work life without it. 
I'm proud to have helped in some small way > to > create a lasting project and community that has, and will continue to, > transform how infrastructure is managed. > > September 12 will officially be my last day with the OpenStack Foundation. > As I > make the move away from my responsibilities, I'll be working with community > members to help ensure continuity of my efforts. > > Thank you to everyone for building such an incredible community filled with > talented, smart, funny, and kind people. You've built something special > here, > and we're all better for it. I'll still be involved with open source. If > you > ever want to get in touch, be it with questions about work I've been > involved > with or to talk about some exciting new tech or to just catch up over a > tasty > meal, I'm just a message away in all the usual places. > > Sincerely, > Chris > > chris at hogepodge.com > Twitter/IRC/everywhere else: @hogepodge > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Wed Sep 4 19:35:28 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 4 Sep 2019 12:35:28 -0700 Subject: [tc] Campaign Question: Treating the Problem, not just the symptoms- Burnout, No Polling, etc Message-ID: Hello :) Wanted to split the question Chris Dent asked here[1] into its own thread so people down the road and those tracking it now can find more easily. To kind of rephrase for everyone (Chris, correct me if I am wrong or not getting all of it): What do you think we, as a community, can do about the lack of candidates for roles like TC or PTL? How can we adjust, as a community, to make our governance structures fit better? In what wasy can we address and prevent burnout? I think JP[2] and Jay[3] already started to enumerate good ideas on the other thread, so to summarize/ expand/ add to their lists: - Reducing the number of TC members to 9, and maybe someday down to 7. When we were having polls for every election (maybe not every project) it was at a time where the electorate (and theoretically the number of possible candidates) was also huge. Since we have move past the hype curve and stabilized as a project, the number of polls we've (I say we because I used to be and still plan to help with elections) had to make have decreased. It seems to be a matter of proportions. - Continuing to improve education and onboarding process. Agreed 100%, but this should be an ongoing focus for everyone too- every contributor TC, PTL, or otherwise. The best way to get more people involved faster is a lower barrier to entry, but we all know that. Yes some things like gerrit and IRC are hard for people to get past and likely won't be changing for our community any time soon, but there are things like that with every community (I don't know if you have ever tried to push patches to k8s but their tagging of PRs is something they are working on making less complicated and better documented). Breaking down the onboarding process we have at the moment into smaller modules and clearly documenting the progression through those modules for new comers to easily find and work through is important. Also, though, having that be the only place that we, as a community, point to (meaning no duplicate information in multiple places like we have today) when new contributors have issues. - Better documentation of tribal knowledge. 
I proposed as a community goal for the U release[4], to formalize project specific onboarding information (some teams have already done this) and project specific guides for PTLs (I know we already have the broad strokes for all PTLs documented fairly well, but there's always project specific stuff) so that when there is a turn over mid release, its easier for someone new to step up. - Utilize the user survey to gather info about how/why contribution is happening or why they aren't contributing if that's the case. There are already several questions there from the TC about this topic in the survey, but perhaps they can be re-framed if we aren't getting the info we want from them. As a reminder, here they are: -- To which projects does your organization contribute maintenance resources, such as patches for bug fixes and code reviews on master or stable branches? -- What prevents you or your organization from contributing more maintenance resources, or makes contributing difficult? -- How do members of your organization contribute to OpenStack? I think the real issue is getting larger vendors of OpenStack to get their users to take the user survey. We have a pretty solid reach as it is, but there are a lot of people using OpenStack that don't take the survey that we don't know about even because they are confidential (their results can still be confidential if they take the survey). - Longer release cycle. I know this has come up a dozen or more times (and I'm a little sorry for bringing it up again), but I think OpenStack has stabilized enough that 6 months is a little short and now may finally be the time to lengthen things a bit. 9 months might be a better fit. With longer release cycles comes more time to get work done as well which I've heard has been a complaint of more part time contributors when this discussion has come up in the past. - Co-PTL type position? I've noticed and talked to several PTLs on a variety of projects that need a little extra help with the role. They either don't feel like they have all the experience they need to be PTL yet and so they want the previous PTL to help out still or maybe they want to do it, but there are enough variables in their day to day work (or lack of overlap tz wise with most of the other contributors to that project), that having a backup person to help out and backfill when they need help. - Talking to each other. I really honestly think just talking to one another could help too. When you find yourself in a conversation with someone about how unmotivated they are because they have a ton of work to do. You might offer to take something off their plate. Or help them see maybe they need to not take on anything new till some other work gets wrapped up. We are a community that succeeds together, so if you see someone burning themselves out do what you can to help lighten their load (helping directly is great, but there are plenty of other people in our community that you could call on to help too). Hopefully goes without saying, but don't burn yourself out trying to help someone else either. Some of these things are more actionable, others are still high level and need to have concrete actions tied to them, but I think there are plenty of things we can do to make progress here. 
-Kendall (diablo_rojo) [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009084.html [2] http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009087.html [3] http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009101.html [4] https://etherpad.openstack.org/p/PVG-u-series-goals -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Wed Sep 4 19:36:45 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 4 Sep 2019 12:36:45 -0700 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <3148cdbd-f232-4247-a40c-a0f8c2614df4@gmail.com> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <3148cdbd-f232-4247-a40c-a0f8c2614df4@gmail.com> Message-ID: Started a new thread to organize all this info better: http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009105.html -Kendall (diablo_rojo) On Wed, Sep 4, 2019 at 10:16 AM Jay Bryant wrote: > Chris, > > Thank you for your questions. I agree that not having the election > deprived the community of a chance to get to know the candidates better > so I am happy to help out here. :-) > > Hope my thoughts in-line below make sense! > > Jay > > On 9/4/2019 5:32 AM, Chris Dent wrote: > > On Wed, 4 Sep 2019, Jeremy Stanley wrote: > > > >> Thank you to all candidates who put their name forward for Project > >> Team Lead (PTL) and Technical Committee (TC) in this election. A > >> healthy, open process breeds trust in our decision making capability > >> thank you to all those who make this process possible. > > > > Congratulations and thank you to the people taking on these roles. > > > > We need to talk about the fact that there was no opportunity to vote > > in these "elections" (PTL or TC) because there were insufficient > > candidates. No matter the quality of new leaders (this looks like a > > good group), something is amiss. We danced around these issue for > > the two years I was on the TC, but we never did anything concrete to > > significantly change things, carrying on doing things in the same > > way in a world where those ways no longer seemed to fit. > > > > We can't claim any "seem" about it any more: OpenStack governance > > and leadership structures do not fit and we need to figure out > > the necessary adjustments. > > > I was surprised that we didn't have any PTL elections. I don't know > that this is all bad. At least in the case of the Cinder team it seems > to be a process that we have just kind-of internalized. I got my chance > to be PTL and was ready for a break. I had reached out to Brian > Rosmaita some time ago and had been grooming him to take over. I had > discussions with other people knew Brian was interested, so we went > forward that way. > > I think this is a natural progression for where OpenStack is at right > now. There isn't a lot of contention over how the project needs to be > run right now. In the future that may change and I think having our > election process is important for if and when that happens. > > > I haven't got any new ideas (which is part of why I left the TC). > > My position has always been that with a vendor and enterprise led > > project like OpenStack, where those vendors and enterprises are > > operating in a huge market, staffing the commonwealth in a healthy > > fashion is their responsibility. 
In large part because they are > > responsible for making OpenStack resistant to "casual" contribution > > in the first place (e.g., "hardware defined software"). > > > > We get people, sometimes, but it is not healthy: > > > > i may see different cross-sections of the community than others > > do, but i feel like there's been a strong tone of burnout since > > 2012 [1] > > > This is a very real concern for me. We do have a very few people who > have taken over a lot of responsibility for OpenStack and are getting > burned out. We also need to have more companies start investing in > OpenStack again. We can't, however, force them to participate. > > I know from my last year or so at Lenovo that there are customers with > real interest in OpenStack. OpenStack is running in the real world. I > don't know if it is just working for people or if the customers are > modifying it themselves and not contributing back. It would be > interesting to get numbers on this. Not sure how we can do that. I am > afraid, in the past, that the community got a reputation of being 'too > hard to contribute to'. If that perception is still hurting us now it > is something that we need to address. > > I think that some of the lack of participation is also due to cultural > differences in the geos where OpenStack has been expanding. That is a > very hard problem to address. > > > We drastically need to change the expectations we place on ourselves > > in terms of velocity. > > > > [1] > > > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-09-04.log.html#t2019-09-04T00:26:35 > > > >> Ghanshyam Mann (gmann) > >> Jean-Philippe Evrard (evrardjp) > >> Jay Bryant (jungleboyj) > >> Kevin Carter (cloudnull) > >> Kendall Nelson (diablo_rojo) > >> Nate Johnston (njohnston) > > > > Since there was no need to vote, there was no need to campaign, > > which means we will be missing out on the Q&A period. I've found > > those very useful for understanding the issues that are present in > > the community and for generating ideas on what to about them. I > > think it is good to have that process anyway so I'll start: > > > > What do you think we, as a community, can do about the situation > > described above? What do you as a TC member hope to do yourself? > > > I addressed this a bit in my candidacy note. I think that we need to > continue to improve our education and on-boarding processes. Though I > don't think it is hard to contribute successfully to OpenStack, there is > a lot of tribal knowledge required to be successful in OpenStack. > Documenting those things will help. > > I would like to work with the foundation to reach out to companies and > find out why they are less likely to participate than they used to be. > People are using OpenStack ... why aren't they contributing. Perhaps it > is a question that we could add to the user survey. I know when I had > the foundation reach out to companies that were about to lose their > drivers from Cinder, we got responses. So, I think that is a path we > could consider. > > > Thanks > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Wed Sep 4 20:53:20 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 4 Sep 2019 16:53:20 -0400 Subject: [tc] Campaign Question: Treating the Problem, not just the symptoms- Burnout, No Polling, etc In-Reply-To: References: Message-ID: <99048F8B-EE87-4A3A-A689-8F05F8EBDBBE@doughellmann.com> > On Sep 4, 2019, at 3:35 PM, Kendall Nelson wrote: > > - Talking to each other. I really honestly think just talking to one another could help too. When you find yourself in a conversation with someone about how unmotivated they are because they have a ton of work to do. You might offer to take something off their plate. Or help them see maybe they need to not take on anything new till some other work gets wrapped up. We are a community that succeeds together, so if you see someone burning themselves out do what you can to help lighten their load (helping directly is great, but there are plenty of other people in our community that you could call on to help too). Hopefully goes without saying, but don't burn yourself out trying to help someone else either. I would take this a step further, and remind everyone in leadership positions that your job is not to do things *for* anyone, but to enable others to do things *for themselves*. Open source is based on collaboration, and ensuring there is a healthy space for that collaboration is your responsibility. You are neither a free workforce nor a charity. By all means, you should help people to achieve their goals in a reasonable way by reducing barriers, simplifying processes, and making tools reusable. But do not for a minute believe that you have to do it all for them, even if you think they have a great idea. Make sure you say “yes, you should do that” more often than “yes, I will do that." Doug From mthode at mthode.org Wed Sep 4 21:23:11 2019 From: mthode at mthode.org (Matthew Thode) Date: Wed, 4 Sep 2019 16:23:11 -0500 Subject: [zaqar][requirements] - release zaqarclient please? Message-ID: <20190904212311.6ruqv3vxqopw6ohb@mthode.org> Hi Zaqar team, I tried to contact you via IRC but that didn't seem to go very well. The requirements team is looking for a release of the client so that we can move on updating jsonschema. We'd like to update it before the freeze time (starts Monday the 9th) so that all the other projects that use it can use it for a while in gate before release. held back because waiting for zaqarclient -jsonschema===3.0.2 +jsonschema===2.6.0 Is it possible to release a new version of the client (for instance as novaclient just recently did (among others))? Thanks, -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From morgan.fainberg at gmail.com Wed Sep 4 23:20:58 2019 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Wed, 4 Sep 2019 16:20:58 -0700 Subject: Thank you Stackers for five amazing years! In-Reply-To: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> References: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> Message-ID: Chris, Thanks for all the hard work and being an amazing part of this community. I hope we continue to run across each other professionally (conferences or otherwise). 
Best wishes and good luck on your new endeavors, --Morgan On Wed, Sep 4, 2019 at 9:26 AM Chris Hoge wrote: > Hi everyone, > > After more than nine years working in cloud computing and on OpenStack, > I've > decided that it is time for a change and will be moving on from the > OpenStack > Foundation. For the last five years I've had the honor of helping to > support > this vibrant community, and I'm going to deeply miss being a part of it. > OpenStack has been a central part of my life for so long that it's hard to > imagine a work life without it. I'm proud to have helped in some small way > to > create a lasting project and community that has, and will continue to, > transform how infrastructure is managed. > > September 12 will officially be my last day with the OpenStack Foundation. > As I > make the move away from my responsibilities, I'll be working with community > members to help ensure continuity of my efforts. > > Thank you to everyone for building such an incredible community filled with > talented, smart, funny, and kind people. You've built something special > here, > and we're all better for it. I'll still be involved with open source. If > you > ever want to get in touch, be it with questions about work I've been > involved > with or to talk about some exciting new tech or to just catch up over a > tasty > meal, I'm just a message away in all the usual places. > > Sincerely, > Chris > > chris at hogepodge.com > Twitter/IRC/everywhere else: @hogepodge > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Thu Sep 5 00:53:23 2019 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 04 Sep 2019 19:53:23 -0500 Subject: Open Infrastructure Summit Shanghai: Forum Submissions Open Message-ID: <5D705C83.8020203@openstack.org> Hello Everyone! We are now accepting Forum [1] submissions for the 2019 Open Infrastructure Summit in Shanghai [2]. Please submit your ideas through the Summit CFP tool [3] through September20th. Don't forget to put your brainstorming etherpad up on the Shanghai Forum page [4]. This is not a classic conference track with speakers and presentations. OSF community members (participants in development teams, operators, working groups, SIGs, and other interested individuals) discuss the topics they want to cover and get alignment on and we welcome your participation. The Forum is your opportunity to help shape the development of future project releases. More information about the Forum [1]. Keep in mind, Forum submissions are for discussions, not presentations. The timeline for submissions is as follows: Sep 4th | Formal topic submission tool opens: https://cfp.openstack.org. Sep 20th | Deadline for proposing Forum topics. Scheduling committee meeting to make draft agenda. Sep 30th | Draft Forum schedule published. Crowd sourced session conflict detection. Forum promotion begins. Oct 7th | Scheduling committee final meeting Oct 14th | Forum schedule final Nov 4-6| Forum Time! If you have questions or concerns, please reach out to speakersupport at openstack.org . Cheers, Jimmy [1] https://wiki.openstack.org/wiki/Forum [2] https://www.openstack.org/summit/shanghai-2019/ [3] https://cfp.openstack.org [4] https://wiki.openstack.org/wiki/Forum/Shanghai2019 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wesley.peng1 at googlemail.com Thu Sep 5 01:12:32 2019 From: wesley.peng1 at googlemail.com (Wesley Peng) Date: Thu, 5 Sep 2019 09:12:32 +0800 Subject: [ansible-sig] weekly meetings In-Reply-To: References: Message-ID: <7922685b-b7dd-3599-1fec-01c3cb4ce9bc@googlemail.com> Hi on 2019/9/5 0:20, Mohammed Naser wrote: > For those interested in getting involved, the ansible-sig meetings > will be held weekly on Fridays at 2:00 pm UTC starting next week (13 > September 2019). > > Looking forward to discussing details and ideas with all of you! Is it a onsite meeting? where is the location? thanks. From kevin at cloudnull.com Thu Sep 5 01:21:39 2019 From: kevin at cloudnull.com (Carter, Kevin) Date: Wed, 4 Sep 2019 20:21:39 -0500 Subject: [ansible-sig] weekly meetings In-Reply-To: <7922685b-b7dd-3599-1fec-01c3cb4ce9bc@googlemail.com> References: <7922685b-b7dd-3599-1fec-01c3cb4ce9bc@googlemail.com> Message-ID: Thanks Mohammed, I've added it to my calendar and look forward to getting started. -- Kevin Carter IRC: Cloudnull On Wed, Sep 4, 2019 at 8:17 PM Wesley Peng wrote: > Hi > > on 2019/9/5 0:20, Mohammed Naser wrote: > > For those interested in getting involved, the ansible-sig meetings > > will be held weekly on Fridays at 2:00 pm UTC starting next week (13 > > September 2019). > > > > Looking forward to discussing details and ideas with all of you! > > Is it a onsite meeting? where is the location? > This is a good question, I assume the meeting will be on IRC, on freenode, but what channel will we be using? #openstack-ansible-sig ? > > thanks. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Nitin.Uikey at nttdata.com Thu Sep 5 02:54:20 2019 From: Nitin.Uikey at nttdata.com (Uikey, Nitin) Date: Thu, 5 Sep 2019 02:54:20 +0000 Subject: [dev][tacker] Steps to setup tacker for testing VNF packages Message-ID: Hi All, Please find below the steps to set-up tacker for managing vnf packages. Steps to set-up tacker for managing vnf packages:- 1. Api-paste.ini [composite:tacker] /vnfpkgm/v1: vnfpkgmapi_v1 [composite:vnfpkgmapi_v1] use = call:tacker.auth:pipeline_factory noauth = request_id catch_errors extensions vnfpkgmapp_v1 keystone = request_id catch_errors authtoken keystonecontext extensions vnfpkgmapp_v1 [app:vnfpkgmapp_v1] paste.app_factory = tacker.api.vnfpkgm.v1.router:VnfpkgmAPIRouter.factory You can also copy api-paste.ini available in patch : https://review.opendev.org/#/c/675593 2. Configuration options changes : tacker.conf a) Periodic task to delete the vnf package artifacts from nodes and glance store. default configuration in tacker/tacker/conf/conductor.py vnf_package_delete_interval = 1800 b) Path to store extracted CSAR file on compute node default configuration in tacker/conf/vnf_package.py vnf_package_csar_path = /var/lib/tacker/vnfpackages/ vnf_package_csar_path should have Read and Write access (+rw) c) Path to store CSAR file at glance store default configuration in /devstack/lib/tacker filesystem_store_datadir = /var/lib/tacker/csar_files filesystem_store_datadir should have Read and Write access (+rw) 3. Apply python-tackerclient patches https://review.opendev.org/#/c/679956/ https://review.opendev.org/#/c/679957/ https://review.opendev.org/#/c/679958/ 4. Apply tosca parser changes https://review.opendev.org/#/c/675561/ 5. Sample CSAR file to create VNF package tacker/tacker/samples/vnf_packages/sample_vnf_pkg.zip 6. 
Commands to manage VNF packages

To create a VNF package - openstack vnfpack create --user-data key=value
A VNF package id will be generated by this command, which will be used in the other commands to manage the VNF package.

To upload the CSAR file
1. using direct path - openstack vnfpack upload --upload-method direct-file --path
2. using web - openstack vnfpack upload --upload-method web-download --path

To list all the VNF packages - openstack vnfpack list
To show a VNF package's details - openstack vnfpack show
To delete a VNF package - openstack vnfpack delete

Use the `openstack vnfpack --help` command for more information.

Regards,
Nitin Uikey

Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding.

From renat.akhmerov at gmail.com Thu Sep 5 04:32:38 2019
From: renat.akhmerov at gmail.com (Renat Akhmerov)
Date: Thu, 5 Sep 2019 11:32:38 +0700
Subject: Invite Oleg Ovcharuk to join the Mistral Core Team
In-Reply-To:
References:
Message-ID:

Andras,

You just went one step ahead of me! I was going to promote Oleg at the end of this week :) I'm glad that we coincided on this. Thanks! I'm for it with both hands!

Renat Akhmerov
@Nokia

On 4 Sep 2019, 17:33 +0700, András Kövi , wrote:
> I would like to invite Oleg Ovcharuk to join the Mistral Core Team. Oleg has been a very active and enthusiastic contributor to the project. He has definitely earned his way into our community.
>
> Thank you,
> Andras
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Nitin.Uikey at nttdata.com Thu Sep 5 06:05:50 2019
From: Nitin.Uikey at nttdata.com (Uikey, Nitin)
Date: Thu, 5 Sep 2019 06:05:50 +0000
Subject: [dev][tacker] Steps to setup tacker for testing VNF packages
In-Reply-To:
References:
Message-ID:

Hi All,

Small correction. Added `default_backend = file` because the `default_store` option is deprecated.

>c) Path to store CSAR file at glance store
>default configuration in /devstack/lib/tacker
>filesystem_store_datadir = /var/lib/tacker/csar_files
default_backend = file
>filesystem_store_datadir should have Read and Write access (+rw)

Regards,
Nitin Uikey

Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding.
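Putting the commands from the thread above together, a rough end-to-end walkthrough could look like the following. This is only an illustrative sketch: the package id is a made-up value, and exactly where the id is passed on the upload/show/delete command lines is an assumption here, so check `openstack vnfpack --help` for the exact syntax.

  openstack vnfpack create --user-data abc=xyz
  # assume the create output includes a package id such as
  # 7e3d6a42-0000-0000-0000-000000000001 (hypothetical value)
  openstack vnfpack upload --upload-method direct-file --path tacker/tacker/samples/vnf_packages/sample_vnf_pkg.zip 7e3d6a42-0000-0000-0000-000000000001
  openstack vnfpack list
  openstack vnfpack show 7e3d6a42-0000-0000-0000-000000000001
  openstack vnfpack delete 7e3d6a42-0000-0000-0000-000000000001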
From dirk at dmllr.de Thu Sep 5 08:11:55 2019 From: dirk at dmllr.de (=?UTF-8?B?RGlyayBNw7xsbGVy?=) Date: Thu, 5 Sep 2019 10:11:55 +0200 Subject: [zaqar][requirements] - release zaqarclient please? In-Reply-To: <20190904212311.6ruqv3vxqopw6ohb@mthode.org> References: <20190904212311.6ruqv3vxqopw6ohb@mthode.org> Message-ID: Hi Matthew, thanks for raising the topic. I created a review for this, requires approval from PTL / release liason: https://review.opendev.org/#/c/679842/ Greetings, Dirk From thierry at openstack.org Thu Sep 5 09:59:22 2019 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 5 Sep 2019 11:59:22 +0200 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: References: <20190904024941.qaapsjuddklree26@yuggoth.org> Message-ID: <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> Chris Dent wrote: > [...] > We need to talk about the fact that there was no opportunity to vote > in these "elections" (PTL or TC) because there were insufficient > candidates. No matter the quality of new leaders (this looks like a > good group), something is amiss. The reality is, with less hype around OpenStack, it's just harder to justify the time you spend on "stewardship" positions. The employer does not value having their employees hold those positions as much as they used to. That affects things like finding volunteers to officiate elections, finding candidates for the TC, and also finding PTLs for every project. As far as PTL/TC elections are concerned I'd suggest two things: - reduce the number of TC members from 13 to 9 (I actually proposed that 6 months ago at the PTG but that was not as popular then). A group of 9 is a good trade-off between the difficulty to get enough people to do project stewardship and the need to get a diverse set of opinions on governance decision. - allow "PTL" role to be multi-headed, so that it is less of a superhuman and spreading the load becomes more natural. We would not elect/choose a single person, but a ticket with one or more names on it. From a governance perspective, we still need a clear contact point and a "bucket stops here" voice. But in practice we could (1) contact all heads when we contact "the PTL", and (2) consider that as long as there is no dissent between the heads, it is "the PTL voice". To actually make it work in practice I'd advise to keep the number of heads low (think 1-3). > [...] > We drastically need to change the expectations we place on ourselves > in terms of velocity. In terms of results, train cycle activity (as represented by merged commits/day) is globally down 9.6% compared to Stein. Only considering "core" projects, that's down 3.8%. So maybe we still have the same expectations, but we are definitely reducing our velocity... Would you say we need to better align our expectations with our actual speed? Or that we should reduce our expectations further, to drive velocity further down? 
-- Thierry Carrez (ttx) From cdent+os at anticdent.org Thu Sep 5 10:04:39 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 5 Sep 2019 11:04:39 +0100 (BST) Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> Message-ID: On Thu, 5 Sep 2019, Thierry Carrez wrote: > So maybe we still have the same expectations, but we are definitely reducing > our velocity... Would you say we need to better align our expectations with > our actual speed? Or that we should reduce our expectations further, to drive > velocity further down? We should slow down enough that the vendors and enterprises start to suffer. If they never notice, then it's clear we're trying too hard and can chill out. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From gmann at ghanshyammann.com Thu Sep 5 10:33:02 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 05 Sep 2019 19:33:02 +0900 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> Message-ID: <16d00fc100d.104db03dc225299.3598510759501367665@ghanshyammann.com> ---- On Thu, 05 Sep 2019 19:04:39 +0900 Chris Dent wrote ---- > On Thu, 5 Sep 2019, Thierry Carrez wrote: > > > So maybe we still have the same expectations, but we are definitely reducing > > our velocity... Would you say we need to better align our expectations with > > our actual speed? Or that we should reduce our expectations further, to drive > > velocity further down? > > We should slow down enough that the vendors and enterprises start to > suffer. If they never notice, then it's clear we're trying too hard > and can chill out. +1 on this but instead of slow down and make vendors suffer we need the proper way to notify or make them understand about the future cutoff effect on OpenStack as software. I know we have been trying every possible way but I am sure there are much more managerial steps can be taken. I expect Board of Director to come forward on this as an accountable entity. TC should raise this as high priority issue to them (in meetings, joint leadership meeting etc). I am sure this has been brought up before, can we make OpenStack membership company to have a minimum set of developers to maintain upstream. With the current situation, I think it make sense to ask them to contribute manpower also along with membership fee. But again this is more of BoD and foundation area. I agree on ttx proposal to reduce the TC number to 9 or 7, I do not think this will make any difference or slow down on any of the TC activity. 9 or 7 members are enough in TC. As long as we get PTL(even without an election) we are in a good position. This time only 7 leaderless projects (6 actually with Cyborg PTL missing to propose nomination in election repo and only on ML) are not so bad number. But yes this is a sign of taking action before it goes into more worst situation. -gmann > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent From anlin.kong at gmail.com Thu Sep 5 10:32:54 2019 From: anlin.kong at gmail.com (Lingxian Kong) Date: Thu, 5 Sep 2019 22:32:54 +1200 Subject: Thank you Stackers for five amazing years! 
In-Reply-To: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> References: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> Message-ID: Thank you for all the amazing work you've done, either in OpenStack or in k8s/cloud-provider-openstack. We will miss you! - Best regards, Lingxian Kong Catalyst Cloud On Thu, Sep 5, 2019 at 4:27 AM Chris Hoge wrote: > Hi everyone, > > After more than nine years working in cloud computing and on OpenStack, > I've > decided that it is time for a change and will be moving on from the > OpenStack > Foundation. For the last five years I've had the honor of helping to > support > this vibrant community, and I'm going to deeply miss being a part of it. > OpenStack has been a central part of my life for so long that it's hard to > imagine a work life without it. I'm proud to have helped in some small way > to > create a lasting project and community that has, and will continue to, > transform how infrastructure is managed. > > September 12 will officially be my last day with the OpenStack Foundation. > As I > make the move away from my responsibilities, I'll be working with community > members to help ensure continuity of my efforts. > > Thank you to everyone for building such an incredible community filled with > talented, smart, funny, and kind people. You've built something special > here, > and we're all better for it. I'll still be involved with open source. If > you > ever want to get in touch, be it with questions about work I've been > involved > with or to talk about some exciting new tech or to just catch up over a > tasty > meal, I'm just a message away in all the usual places. > > Sincerely, > Chris > > chris at hogepodge.com > Twitter/IRC/everywhere else: @hogepodge > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tpb at dyncloud.net Thu Sep 5 11:36:36 2019 From: tpb at dyncloud.net (Tom Barron) Date: Thu, 5 Sep 2019 07:36:36 -0400 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <16d00fc100d.104db03dc225299.3598510759501367665@ghanshyammann.com> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> <16d00fc100d.104db03dc225299.3598510759501367665@ghanshyammann.com> Message-ID: <20190905113636.qwxa4fjxnju7tmip@barron.net> On 05/09/19 19:33 +0900, Ghanshyam Mann wrote: > ---- On Thu, 05 Sep 2019 19:04:39 +0900 Chris Dent wrote ---- > > On Thu, 5 Sep 2019, Thierry Carrez wrote: > > > > > So maybe we still have the same expectations, but we are definitely reducing > > > our velocity... Would you say we need to better align our expectations with > > > our actual speed? Or that we should reduce our expectations further, to drive > > > velocity further down? > > > > We should slow down enough that the vendors and enterprises start to > > suffer. If they never notice, then it's clear we're trying too hard > > and can chill out. > >+1 on this but instead of slow down and make vendors suffer we need the proper >way to notify or make them understand about the future cutoff effect on OpenStack >as software. I know we have been trying every possible way but I am sure there are >much more managerial steps can be taken. I expect Board of Director to come forward >on this as an accountable entity. TC should raise this as high priority issue to them (in meetings, >joint leadership meeting etc). 
> >I am sure this has been brought up before, can we make OpenStack membership company >to have a minimum set of developers to maintain upstream. With the current situation, I think >it make sense to ask them to contribute manpower also along with membership fee. But again >this is more of BoD and foundation area. +1 IIUC Gold Membership in the Foundation provides voting privileges at a cost of $50-200K/year and Corporate Sponsorship provides these plus various marketing benefits at a cost of $10-25K/year. So far as I can tell there is not a requirement of a commitment of contributors and maintainers with the exception of the (currently closed) Platinum Membership, which costs $500K/year and requires at least 2 FTE equivalents contributing to OpenStack. In general I see requirements for annual cash expenditure to the Foundation, as for membership in any joint commercial enterprise, but little that ensures the availability of skilled labor for ongoing maintenance of our projects. -- Tom Barron > >I agree on ttx proposal to reduce the TC number to 9 or 7, I do not think this will make any >difference or slow down on any of the TC activity. 9 or 7 members are enough in TC. > >As long as we get PTL(even without an election) we are in a good position. This time only >7 leaderless projects (6 actually with Cyborg PTL missing to propose >nomination in election repo and only on ML) are >not so bad number. But yes this is a sign of taking action before it goes into more worst situation. > >-gmann > > > > > -- > > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > > freenode: cdent > > From smooney at redhat.com Thu Sep 5 11:41:29 2019 From: smooney at redhat.com (Sean Mooney) Date: Thu, 05 Sep 2019 12:41:29 +0100 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> Message-ID: On Thu, 2019-09-05 at 11:04 +0100, Chris Dent wrote: > On Thu, 5 Sep 2019, Thierry Carrez wrote: > > > So maybe we still have the same expectations, but we are definitely reducing > > our velocity... Would you say we need to better align our expectations with > > our actual speed? Or that we should reduce our expectations further, to drive > > velocity further down? > > We should slow down enough that the vendors and enterprises start to > suffer. If they never notice, then it's clear we're trying too hard > and can chill out. well openstack has already slowed alot. i think i dont really agree with Thierry's assertion that lack of participation is driven by vendors being less interested in openstack. i have not felt that at least in my time at redhat. when i was at intel i did feel that part of the reason that the investment that was been made was reducing was not driven by the lack of hype but by how slow adding some feature that really mattered were in openstack already. there are still feature that i had working in lab envs that were proposed upstream and are only now finally being addressed/fix that have been in flight for 4+ years. im not trying to pick on any project in particular with that comment because i have experience several multi cycle delays acorss several project either directly or via the people i work with day to day, in the time i have been working on openstack. our core teams to a lot of really good work, they do land alot of important feature and have been driving to improve the quality of the code and our documentation. 
Asking a core to also take on the durties of PTL is a lot on top of that. Until recently i assumed as i think many did that to run for PTL you had to be a core team member, not that i was really considering it in anycase but similarly many people assume to be a stable core you have to a core or to be on the technical commit you have to be well technical. part of the lack of engagement might be that not everyone knows they can tack part in some of the governance activities be they technical or organisational. i comment on TC and governace topics from time to time but i also personally feel that getting involed with either a PTL role or TC role would be a daunting task, even though i know many of the people invovled, it would still be out of my comfort zone. which is why if feel comfortable engaging with the campaigns and voting in the election but have never self nominated. spreading the load would help with that. > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent From gmann at ghanshyammann.com Thu Sep 5 11:46:43 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 05 Sep 2019 20:46:43 +0900 Subject: [goals][IPv6-Only Deployments and Testing] Week R-6 Update Message-ID: <16d013f861a.ccea0963228871.1777128122392549699@ghanshyammann.com> Hello Everyone, Below is the progress on Ipv6 goal during R6 week. I am preparing the legacy base job for IPv6 deployment. NOTE: As first step, I am going to set up the job with all IPv6 deployment setting and basic verification whether service listens on IPv6 or not. As the second step, we will add post-run script to audit the tcpdump/logs etc for unwanted IPv4 traffic. Summary: * Number of Ipv6 jobs proposed Projects: 28 * Number of pass projects: 18 ** Number of project merged out of pass project: 13 * Number of failing projects: 10 Storyboard: ========= - https://storyboard.openstack.org/#!/story/2005477 Current status: ============ 1. Cinder and devstack fix are merged for cinder IPv6 job and I did recheck on cinder patch. 2. Preparing the legacy base job with IPv6 setting - https://review.opendev.org/#/c/680233/ 3. Zun, Watcher, Telemetry(thanks to zhurong) are merged. I have proposed to run telemetry ipv6 job on Panko and Aodh gate also. 4. This week new projects ipv6 jobs patch and status: - Tacker: link: https://review.opendev.org/#/c/676918/ status: Current functional jobs are n-v so I am not sure IPv6 job will pass or not. waiting for gate result. Need Help from the project team: 1. Monasca: waiting for new kafka client patches merge - https://review.opendev.org/#/c/674814/2 2. Sahara: https://review.opendev.org/#/c/676903/ Job is failing to start the sahara service. I could not find the logs for sahara service(it shows an empty log under apache). Need help from sahara team. 3. Searchlight: https://review.opendev.org/#/c/678391/ python-searchlightclient error, Trinh will be looking into this. 4. Senlin: https://review.opendev.org/#/c/676910/ Not able to connect on auth url - https://zuul.opendev.org/t/openstack/build/0ad3b4aac0424ad78171ca7546421f5e/log/job-output.txt#43011 5. qinling: https://review.opendev.org/#/c/673506/1 logs are not there so i did recheck to get the fresh log for debugging. IPv6 missing support found: ===================== 1. https://review.opendev.org/#/c/673397/ 2. https://review.opendev.org/#/c/673449/ 3. https://review.opendev.org/#/c/677524/ How you can help: ============== - Each project needs to look for and review the ipv6 job patch. 
- Verify it works fine on ipv6 and no ipv4 used in conf etc - Any other specific scenario needs to be added as part of project IPv6 verification. - Help on debugging and fix the bug in IPv6 job is failing. Everything related to this goal can be found under this topic: Topic: https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged) How to define and run new IPv6 Job on project side: ======================================= - I prepared a wiki page to describe this section - https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing Review suggestion: ============== - Main goal of these jobs will be whether your service is able to listen on IPv6 and can communicate to any other services either OpenStack or DB or rabbitmq etc on IPv6 or not. So check your proposed job with that point of view. If anything missing, comment on patch. - One example was - I missed to configure novnc address to IPv6- https://review.opendev.org/#/c/672493/ - base script as part of 'devstack-tempest-ipv6' will do basic checks for endpoints on IPv6 and some devstack var setting. But if your project needs more specific verification then it can be added in project side job as post-run playbooks as described in wiki page[1]. [1] https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing -gmann From marek.lycka at ultimum.io Thu Sep 5 11:52:24 2019 From: marek.lycka at ultimum.io (=?UTF-8?B?TWFyZWsgTHnEjWth?=) Date: Thu, 5 Sep 2019 13:52:24 +0200 Subject: [Horizon] Paging and Angular... Message-ID: Hi all, I took apart the Horizon paging mechanism while working on [1] and have a few of findings: - Paging is unimplemented/turned off for many (if not most) panels, not just Routers and Networks - Currently, single page data loads could potentially bump up against API hard limits - Sorting is also broken in places where paging is enabled (Old images...), see [2] - The Networks table loads data via three API calls due to neutron API limitations, which makes the marker based mechanism unusable - There is at least one more minor bug which breaks pagination, there may be more While some of these things may be fixable in different hacky and/or inefficient ways, we already have Angular implementations which fix many of them and make improving and fixing the rest easier. Since Angular ports would help with other unrelated issues as well and allow us to start deprecating old code, I was wondering two things: 1) What would it take to increase the priority of Angularization in general? 2) Can the Code Review process be modified/improved to increase the chance for Angularization changes to be code reviewed and merged if they do happen? My previous attempts in this area have failed because of lack of code reviewers... Since full Angularization is still the goal for Horizon as far as I know, I'd rather spend time doing that than hacking solutions to different problems in legacy code which is slated deprecation. Best Regards, Marek [1] https://bugs.launchpad.net/horizon/+bug/1746184 [2] https://bugs.launchpad.net/horizon/+bug/1782732 -- Marek Lyčka Linux Developer Ultimum Technologies s.r.o. Na Poříčí 1047/26, 11000 Praha 1 Czech Republic marek.lycka at ultimum.io *https://ultimum.io * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Thu Sep 5 13:24:57 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 5 Sep 2019 09:24:57 -0400 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> Message-ID: <65A0A6F8-CF5A-4403-B4D7-54B4B37A23AE@doughellmann.com> > On Sep 5, 2019, at 6:04 AM, Chris Dent wrote: > > On Thu, 5 Sep 2019, Thierry Carrez wrote: > >> So maybe we still have the same expectations, but we are definitely reducing our velocity... Would you say we need to better align our expectations with our actual speed? Or that we should reduce our expectations further, to drive velocity further down? > > We should slow down enough that the vendors and enterprises start to > suffer. If they never notice, then it's clear we're trying too hard > and can chill out. As much as I support the labor movement, I don’t think *starting* from an adversarial “we’ll show them!” position with our employers and potential contributors is the most effective way to establish the sort of change we want. It would much more likely instill the idea that this community won’t work with new contributors, which isn’t going to be any healthier than the current situation over the long term. That said, I do agree with the “chill out” approach. Do what you can and then emphasize collaboration over doing things for non-contributors, to turn them into contributors. Be honest about the need for help, and clear about what sort of help is needed, so that someone who *is* motivated can get involved. And make it easy for others to join and fulfill those needs, so the bureaucracy doesn’t demotivate them into looking for other communities to join instead. Also, accept that either approach is going to mean things will not be done, and that is OK. Look for ways to minimize the amount of effort for tasks that must be done, but let “good ideas” go. If they’re good enough, and you make it possible for others to contribute, someone will step up. But if that doesn’t happen, it should not be a source of stress for anyone. That means the “good idea” doesn’t meet the bar of economic viability. Doug From doug at doughellmann.com Thu Sep 5 14:59:06 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 5 Sep 2019 10:59:06 -0400 Subject: [winstackers][powervmstackers][tc] removing winstackers and PowerVMStackers from TC governance Message-ID: <0CCB5020-D524-4304-8682-A015AEDB7C50@doughellmann.com> Following the U cycle election there was no candidate for either the winstackers or powervmstackers team PTL role. This is the second cycle in a row where that problem has occurred for both teams, which indicates that the teams are not active in the community. During the TC meeting today [1] we discussed removing the teams from governance, so I have proposed the patches to do that [2][3]. 
Doug [1] http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-09-05-14.00.log.html#l-148 [2] https://review.opendev.org/680438 remove powervmstackers team [3] https://review.opendev.org/680439 remove winstackers team From adrianc at mellanox.com Thu Sep 5 15:10:17 2019 From: adrianc at mellanox.com (Adrian Chiris) Date: Thu, 5 Sep 2019 15:10:17 +0000 Subject: [tc][neutron] Supported Linux distributions and their kernel Message-ID: Greetings, I was wondering what is the guideline in regards to which kernels are supported by OpenStack in the various Linux distributions. Looking at [1], Taking for example latest CentOS major (7): Every "minor" version is released with a different kernel version, the oldest being released in 2014 (CentOS 7.0, kernel 3.10.0-123) and the newest released in 2018 (CentOS 7.6, kernel 3.10.0-957) While I understand that OpenStack projects are expected to support all CentOS 7.x releases. Does the same applies for the kernels they originally came out with? The reason I'm asking, is because I was working on doing some cleanup in neutron [2] for a workaround introduced because of an old kernel bug, It is unclear to me if it is safe to introduce this change. [1] https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions [2] https://review.opendev.org/#/c/677095/ Thanks, Adrian. -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Thu Sep 5 15:10:33 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 5 Sep 2019 11:10:33 -0400 Subject: [winstackers][powervmstackers][tc] removing winstackers and PowerVMStackers from TC governance In-Reply-To: <0CCB5020-D524-4304-8682-A015AEDB7C50@doughellmann.com> References: <0CCB5020-D524-4304-8682-A015AEDB7C50@doughellmann.com> Message-ID: <466A5D87-5936-4F05-91D9-36ACD680FFA4@doughellmann.com> > On Sep 5, 2019, at 10:59 AM, Doug Hellmann wrote: > > Following the U cycle election there was no candidate for either the winstackers or powervmstackers team PTL role. This is the second cycle in a row where that problem has occurred for both teams, which indicates that the teams are not active in the community. During the TC meeting today [1] we discussed removing the teams from governance, so I have proposed the patches to do that [2][3]. > > Doug > > [1] http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-09-05-14.00.log.html#l-148 > [2] https://review.opendev.org/680438 remove powervmstackers team > [3] https://review.opendev.org/680439 remove winstackers team I neglected to mention that we did consider both teams as good candidates for SIGs, but will leave it up to the contributors on those teams to propose creating the SIGs, if they choose to do so. Doug From cboylan at sapwetik.org Thu Sep 5 15:20:43 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 05 Sep 2019 08:20:43 -0700 Subject: [tc][neutron] Supported Linux distributions and their kernel In-Reply-To: References: Message-ID: <5e84afec-ca3b-4a9f-969a-69f4f748c893@www.fastmail.com> On Thu, Sep 5, 2019, at 8:10 AM, Adrian Chiris wrote: > > Greetings, > > I was wondering what is the guideline in regards to which kernels are > supported by OpenStack in the various Linux distributions. 
> > > Looking at [1], Taking for example latest CentOS major (7): > > Every “minor” version is released with a different kernel version, > > the oldest being released in 2014 (CentOS 7.0, kernel 3.10.0-123) and > the newest released in 2018 (CentOS 7.6, kernel 3.10.0-957) > > > While I understand that OpenStack projects are expected to support all > CentOS 7.x releases. It is my understanding that CentOS (and RHEL?) only support the current/latest point release of their distro [3]. We only test against that current point release. I don't expect we can be expected to support a distro release which the distro doesn't even support. All that to say I would only worry about the most recent point release. > > Does the same applies for the kernels they _originally_ came out with? > > > The reason I’m asking, is because I was working on doing some cleanup > in neutron [2] for a workaround introduced because of an old kernel bug, > > It is unclear to me if it is safe to introduce this change. > > > [1] > https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions > > [2] https://review.opendev.org/#/c/677095/ [3] https://wiki.centos.org/FAQ/General#head-dcca41e9a3d5ac4c6d900a991990fd11930867d6 From gmann at ghanshyammann.com Thu Sep 5 15:23:14 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 06 Sep 2019 00:23:14 +0900 Subject: [placement][ptl][tc] Call for Placement PTL position Message-ID: <16d0205c0b1.b18639584545.7154839133743907603@ghanshyammann.com> Hello Everyone, With Ussuri Cycle PTL election completed, we left with Placement project as leaderless[1]. In today TC meeting[2], we discussed the few possibilities and decided to reach out to the eligible candidates to serve the PTL position. We would like to know if anyone from Placement core team, Nova core team or PTL (as placement main consumer) of any other interested/related developer is interested to take the PTL position? [1] https://governance.openstack.org/election/results/ussuri/ptl.html [2] http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-09-05-14.00.log.html#l-250 -TC (gmann) From cdent+os at anticdent.org Thu Sep 5 16:20:39 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 5 Sep 2019 17:20:39 +0100 (BST) Subject: [placement][ptl][tc] Call for Placement PTL position In-Reply-To: <16d0205c0b1.b18639584545.7154839133743907603@ghanshyammann.com> References: <16d0205c0b1.b18639584545.7154839133743907603@ghanshyammann.com> Message-ID: On Fri, 6 Sep 2019, Ghanshyam Mann wrote: > With Ussuri Cycle PTL election completed, we left with Placement project as leaderless[1]. > In today TC meeting[2], we discussed the few possibilities and decided to reach out to the > eligible candidates to serve the PTL position. Thanks for being concerned about this, but it would have been useful if you included me (as the current PTL) and the rest of the Placement team in the discussion or at least confirmed plans with me before starting this seek-volunteers process. There are a few open questions we are still trying to resolve before we should jump to any decisions: * We are currently waiting to see if Tetsuro is available (he's been away for a few days). If he is, he'll be great, but we don't know yet if he can or wants to. 
* We've started, informally, discussing the option of pioneering the option of leaderless projects within Placement (we pioneer many other things there, may as well add that to the list) but without more discussion from the whole team (which can't happen because we don't have quorum of the actively involved people) and the TC it's premature. Leaderless would essentially mean consensually designating release liaisons and similar roles but no specific PTL. I think this is easily possible in a small in number, focused, and small feature-queue [1] group like Placement but would much harder in one of the larger groups like Nova. * We have several reluctant people who _can_ do it, but don't want to. Once we've explored the other ideas here and any others we can come up with, we can dredge one of those people up as a stand-in PTL, keeping the slot open. Because of [1] there's not much on the agenda for U. Since the Placement team is not planning to have an active presence at the PTG, nor planning to have much of a pre-PTG (as no one has stepped up with any feature ideas) we have some days or even weeks before it matters who the next PTL (if any) is, so if possible, let's not rush this. [1] It's been a design goal of mine from the start that Placement would quickly reach a position of stability and maturity that I liked to call "being done". By the end of Train we are expecting to be feature complete for any features that have been actively discussed in the recent past [2]. The main tasks in U will be responding to bug fixes and requests-for-explanations for the features that already exist (because people asked for them) but are not being used yet and getting the osc-placement client caught up. [2] The biggest thing that has been discussed as a "maybe we should do" for which there are no immediate plans is "resource provider sharding" or "one placement, many clouds". That's a thing we imagined people might ask for, but haven't yet, so there's little point doing it. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From mthode at mthode.org Thu Sep 5 16:25:17 2019 From: mthode at mthode.org (Matthew Thode) Date: Thu, 5 Sep 2019 11:25:17 -0500 Subject: [keystone][horizon][zaqar][tempest][requirements] library updates breaking projects Message-ID: <20190905162516.mxdxg4dl3epwwwfi@mthode.org> I emailed a while ago about problem updates and wanted to give an update. I'm hoping we can get these fixed before the freeze which is on Monday iirc. horizon This is a newer issue which e0ne and amotoki know about but no existing review to fix it. please test against https://review.opendev.org/680457 -semantic-version===2.8.1 +semantic-version===2.6.0 tempest STILL has failures I thought the following commit would fix it, but nope https://github.com/mtreinish/stestr/commit/136027c005fc437341bc23939a18a5f3314194f1 -stestr===2.5.1 +stestr===2.4.0 python-zaqarclient waiting on https://review.opendev.org/679842 may be merging today -jsonschema===3.0.2 +jsonschema===2.6.0 keystone a review is out there that seems to have tests passing https://review.opendev.org/677511/ -oauthlib===3.1.0 +oauthlib===3.0.2 -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From sean.mcginnis at gmx.com Thu Sep 5 16:57:01 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 5 Sep 2019 11:57:01 -0500 Subject: [release] Release countdown for week R-5, September 9-13 Message-ID: <20190905165701.GA29404@sm-workstation> Development Focus ----------------- We are getting close to the end of the Train cycle! Next week on September 12 is the train-3 milestone, also known as feature freeze. It's time to wrap up feature work in the services and their client libraries, and defer features that won't make it to the Ussuri cycle. General Information ------------------- This coming week is the deadline for client libraries: their last feature release needs to happen before "Client library freeze" on September 12. Only bugfix releases will be allowed beyond this point. When requesting those library releases, you can also include the stable/train branching request with the review (as an example, see the "branches" section here: https://opendev.org/openstack/releases/src/branch/master/deliverables/pike/os-brick.yaml#n2) September 12 is also the deadline for feature work in all OpenStack deliverables following the cycle-with-rc model. To help those projects produce a first release candidate in time, only bugfixes should be allowed in the master branch beyond this point. Any feature work past that deadline has to be approved by the team PTL. Finally, feature freeze is also the deadline for submitting a first version of your cycle-highlights. Cycle highlights are the raw data hat helps shape what is communicated in press releases and other release activity at the end of the cycle, avoiding direct contacts from marketing folks. See https://docs.openstack.org/project-team-guide/release-management.html#cycle-highlights for more details. Upcoming Deadlines & Dates -------------------------- Train-3 milestone (feature freeze): September 12 (R-5 week) RC1 deadline: September 26 (R-3 week) Train final release: October 16 Forum+PTG at Shanghai summit: November 4 From dirk at dmllr.de Thu Sep 5 18:09:58 2019 From: dirk at dmllr.de (=?UTF-8?B?RGlyayBNw7xsbGVy?=) Date: Thu, 5 Sep 2019 20:09:58 +0200 Subject: [ironic] opensuse-15 jobs are temporary non-voting on bifrost In-Reply-To: <979bbec8-1f94-458a-aab0-f4d6327078ab@redhat.com> References: <979bbec8-1f94-458a-aab0-f4d6327078ab@redhat.com> Message-ID: Hi Dmitry, Am Mi., 4. Sept. 2019 um 17:25 Uhr schrieb Dmitry Tantsur : > JFYI we had to disable opensuse-15 jobs because they kept failing with > repository issues. Help with debugging appreciated. The nodeset is incorrect, https://review.opendev.org/680450 should get you help started. Greetings, Dirk From bansalnehal26 at gmail.com Wed Sep 4 12:30:10 2019 From: bansalnehal26 at gmail.com (Nehal Bansal) Date: Wed, 4 Sep 2019 18:00:10 +0530 Subject: [Tacker] [Mistral] Regarding Inputs in Network Service Descriptors Message-ID: Hi, I have been trying to create a Network Service Descriptor which takes flavor, image, network_name as inputs from a parameter file and then passes it on to the VNF Descriptor but so far my attempts have been unsuccessful. Is there a standard template available because I could not find even a single one which took image_name or flavor_name from a parameter file. Thank you. Regards, Nehal Bansal -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kennelson11 at gmail.com Thu Sep 5 18:48:31 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 5 Sep 2019 11:48:31 -0700 Subject: [all][PTL] Call for Cycle Highlights for Train Message-ID: Hello Everyone! As you may or may not have read last week in the release update from Sean, its time to call out 'cycle-highlights' in your deliverables! As PTLs, you probably get many pings towards the end of every release cycle by various parties (marketing, management, journalists, etc) asking for highlights of what is new and what significant changes are coming in the new release. By putting them all in the same place it makes them easy to reference because they get compiled into a pretty website like this from Rocky[1] or this one for Stein[2]. We don't need a fully fledged marketing message, just a few highlights (3-4 ideally), from each project team. *The deadline for cycle highlights is the end of the R-5 week [3] on Sept 13th.* How To Reminder: ------------------------- Simply add them to the deliverables/train/$PROJECT.yaml in the openstack/releases repo similar to this: cycle-highlights: - Introduced new service to use unused host to mine bitcoin. The formatting options for this tag are the same as what you are probably used to with Reno release notes. Also, you can check on the formatting of the output by either running locally: tox -e docs And then checking the resulting doc/build/html/train/highlights.html file or the output of the build-openstack-sphinx-docs job under html/train/ highlights.html. Thanks :) -Kendall Nelson (diablo_rojo) [1] https://releases.openstack.org/rocky/highlights.html [2] https://releases.openstack.org/stein/highlights.html [3] https://releases.openstack.org/train/schedule.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Sep 5 19:13:10 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 5 Sep 2019 19:13:10 +0000 Subject: [tc] Campaign Question: Treating the Problem, not just the symptoms- Burnout, No Polling, etc In-Reply-To: <99048F8B-EE87-4A3A-A689-8F05F8EBDBBE@doughellmann.com> References: <99048F8B-EE87-4A3A-A689-8F05F8EBDBBE@doughellmann.com> Message-ID: <20190905191310.jacwzbion5zf3jhv@yuggoth.org> On 2019-09-04 16:53:20 -0400 (-0400), Doug Hellmann wrote: > > On Sep 4, 2019, at 3:35 PM, Kendall Nelson wrote: [...] > > Hopefully goes without saying, but don't burn yourself out > > trying to help someone else either. This is the point in the flight safety demonstration where we remind passengers to affix their own oxygen masks before assisting others. > I would take this a step further, and remind everyone in > leadership positions that your job is not to do things *for* > anyone, but to enable others to do things *for themselves*. Open > source is based on collaboration, and ensuring there is a healthy > space for that collaboration is your responsibility. You are > neither a free workforce nor a charity. By all means, you should > help people to achieve their goals in a reasonable way by reducing > barriers, simplifying processes, and making tools reusable. But do > not for a minute believe that you have to do it all for them, even > if you think they have a great idea. Make sure you say “yes, you > should do that” more often than “yes, I will do that." 
And also, as has been suggested to some extent in other responses on this thread, if there are expected things that go undone because there's nobody who has available time to do them, then it's a distinct possibility those things weren't necessary in the first place. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From skaplons at redhat.com Thu Sep 5 20:57:55 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 5 Sep 2019 22:57:55 +0200 Subject: [neutron] Open Infrastructure Summit Shanghai - forum topics ideas Message-ID: Hi Neutrinos, We want to collect some ideas about potential topics which can be proposed as sessions on Forum in Shanghai. I created etherpad [1]. If You have any idea for such potential topic, please add it there. If You don’t have any ideas, please also check etherpad - maybe You will be interested in one of topics proposed by others. We don’t have much time for that as deadline for CFP is 20th of September, so please don’t wait too long with writing there Your ideas :) [1] https://etherpad.openstack.org/p/neutron-shanghai-forum-brainstorming — Slawek Kaplonski Senior software engineer Red Hat From johnsomor at gmail.com Thu Sep 5 21:45:45 2019 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 5 Sep 2019 14:45:45 -0700 Subject: Octavia LB flavor recommendation for Amphora VMs In-Reply-To: References: Message-ID: Hi Pawel, For small deployments, the 1GB RAM, 1vCPU, 2GB disk (3GB with centos, etc) should work fine for you. You might even be able to drop the RAM lower if you will not be doing TLS. For example, my devstack amphora instance is allocated 1GB RAM, but is only using less than half that. (just because the flavor says 1GB it doesn't mean it uses all of that all of the time) Kernel page de-duplication will also help with actual consumption as the amphora images are mostly the same. If you are doing really large numbers of connections, and you are logging the tenant traffic flows locally, you might want to increase the available disk. Normal workloads will be fine with a smaller disk as the amphora do include log rotation. If you do not need the flow logs, there is a configuration setting to disable them. The main tuning you might want to do is setting the maximum amount of RAM it can consume. If you have a very large number of concurrent connections or are using TLS offloading, you might want to consider increasing the amount of RAM the amphora can consume. The HAProxy documentation states that it normally (non-TLS offload) uses around 32kB of RAM per established connection. You might start with that and see how that aligns to your application/use case. In testing I have done, adding additional vCPUs has very little impact on the performance (a small bump with the second CPU as the NIC interrupts can be split from the HAProxy processes). You can get pretty high throughput with a single vCPU. We expect once HAProxy 2.0 stabilizes and is available (the distros are not yet shipping it), we will look at enabling the threading support to vertically scale the amphora by adding vCPUs. Versions prior to 2.0 did not have good threading and the multi-process model breaks a bunch of features. If you really need more CPU now, you can always build a custom image with 2.0.x in it and use the "custom HAProxy template" configuration setting to add the threading settings.
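To put rough numbers on the sizing discussed above, a dedicated nova flavor along these lines is one possible starting point (the flavor name and exact values are only an illustration for this thread, not an Octavia default):

  # 1 vCPU / 1 GB RAM / 2 GB disk, roughly the small-deployment sizing described earlier
  openstack flavor create --vcpus 1 --ram 1024 --disk 2 --private m1.amphora

  # rough RAM budget for non-TLS HAProxy at ~32kB per established connection:
  #   1GB / 32kB ~= 32,000 concurrent connections, minus headroom for the OS and amphora agent

Whatever flavor you settle on is the one referenced by the amp_flavor_id option in the [controller_worker] section of octavia.conf; per-load-balancer overrides are what the Octavia flavors described below are for.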
Now with Octavia flavors, you can define flavors that select different nova flavors for the amphora at load balancer creation. For example, you can have a "bronze", "silver", "gold", each with different RAM allocations. We would also love to hear what you find with your deployment and applications. Michael On Wed, Sep 4, 2019 at 2:49 AM Pawel Konczalski wrote: > > Hello everyone / Octavia Team, > > what is your experience / recommendation for a Octavia flavor with is > used to deploy Amphora VM for small / mid size setups? (RAM / Cores / HDD) > > BR > > Pawel From davidmnoriega at gmail.com Thu Sep 5 21:58:42 2019 From: davidmnoriega at gmail.com (David M Noriega) Date: Thu, 5 Sep 2019 14:58:42 -0700 Subject: zuul and nodepool ansible roles Message-ID: How do I go about contributing to the zuul and nodepool roles? They do not have either a launchpad or storyboard page. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Sep 5 22:07:51 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 5 Sep 2019 22:07:51 +0000 Subject: zuul and nodepool ansible roles In-Reply-To: References: Message-ID: <20190905220751.arp4rj4c5kfek33r@yuggoth.org> On 2019-09-05 14:58:42 -0700 (-0700), David M Noriega wrote: > How do I go about contributing to the zuul and nodepool roles? > They do not have either a launchpad or storyboard page. To which zuul and nodepool roles are you referring? If you mean the ones which make up the Zuul project's standard library, you're looking for the https://opendev.org/zuul/zuul-jobs repository documented at https://zuul-ci.org/docs/zuul-jobs/ . Changes to content there can be proposed to the Gerrit service at review.opendev.org, and the Zuul community can be found in the #zuul channel on the Freenode IRC network or via the zuul-discuss at lists.zuul-ci.org mailing list. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Thu Sep 5 22:12:59 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 5 Sep 2019 22:12:59 +0000 Subject: zuul and nodepool ansible roles In-Reply-To: <20190905220751.arp4rj4c5kfek33r@yuggoth.org> References: <20190905220751.arp4rj4c5kfek33r@yuggoth.org> Message-ID: <20190905221259.qlhqbzeko4dk7a2g@yuggoth.org> On 2019-09-05 22:07:51 +0000 (+0000), Jeremy Stanley wrote: > On 2019-09-05 14:58:42 -0700 (-0700), David M Noriega wrote: > > How do I go about contributing to the zuul and nodepool roles? > > They do not have either a launchpad or storyboard page. > > To which zuul and nodepool roles are you referring? If you mean the > ones which make up the Zuul project's standard library, you're > looking for the https://opendev.org/zuul/zuul-jobs repository > documented at https://zuul-ci.org/docs/zuul-jobs/ . Changes to > content there can be proposed to the Gerrit service at > review.opendev.org, and the Zuul community can be found in the #zuul > channel on the Freenode IRC network or via the > zuul-discuss at lists.zuul-ci.org mailing list. I was just reminded in #zuul that https://zuul-ci.org/community.html is probably the best place to start. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From pabelanger at redhat.com Thu Sep 5 22:15:35 2019 From: pabelanger at redhat.com (Paul Belanger) Date: Thu, 5 Sep 2019 18:15:35 -0400 Subject: zuul and nodepool ansible roles In-Reply-To: References: Message-ID: <20190905221535.GA4782@localhost.localdomain> On Thu, Sep 05, 2019 at 02:58:42PM -0700, David M Noriega wrote: > How do I go about contributing to the zuul and nodepool roles? They do not > have either a launchpad or storyboard page. This is true, have not done the steps to set this up. I think we could do launchpad or storyboard if wanted. That send, I usually hang out in #openstack-windmill to answer some questions or watch for new patches to be created from time to time. For the moment, I would suggest IRC as even if bug trackers were enabled, I am not sure how often I'd be able to check them. -Paul From nate.johnston at redhat.com Thu Sep 5 22:31:37 2019 From: nate.johnston at redhat.com (Nate Johnston) Date: Thu, 5 Sep 2019 18:31:37 -0400 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> Message-ID: <20190905223137.i72s7n4tibkgypqf@bishop> On Thu, Sep 05, 2019 at 11:59:22AM +0200, Thierry Carrez wrote: > Chris Dent wrote: > > [...] > > We need to talk about the fact that there was no opportunity to vote > > in these "elections" (PTL or TC) because there were insufficient > > candidates. No matter the quality of new leaders (this looks like a > > good group), something is amiss. > > The reality is, with less hype around OpenStack, it's just harder to justify > the time you spend on "stewardship" positions. The employer does not value > having their employees hold those positions as much as they used to. That > affects things like finding volunteers to officiate elections, finding > candidates for the TC, and also finding PTLs for every project. > > As far as PTL/TC elections are concerned I'd suggest two things: > > - reduce the number of TC members from 13 to 9 (I actually proposed that 6 > months ago at the PTG but that was not as popular then). A group of 9 is a > good trade-off between the difficulty to get enough people to do project > stewardship and the need to get a diverse set of opinions on governance > decision. > > - allow "PTL" role to be multi-headed, so that it is less of a superhuman > and spreading the load becomes more natural. We would not elect/choose a > single person, but a ticket with one or more names on it. From a governance > perspective, we still need a clear contact point and a "bucket stops here" > voice. But in practice we could (1) contact all heads when we contact "the > PTL", and (2) consider that as long as there is no dissent between the > heads, it is "the PTL voice". To actually make it work in practice I'd > advise to keep the number of heads low (think 1-3). I think there was already an effort to allow the PTL to shed some of their duties, in the form of the Cross Project Liaisons [1] project. I thought that was a great way for more junior members of the community to get involved with stewardship and be recognized for that contribution, and perhaps be mentored up as they take a bit of load off the PTL. 
I think if we expand the roles to include more of the functions that PTLs feel the need to do themselves, then by doing so we (of necessity) document those parts of the job so that others can handle them. And perhaps projects can cooperate and pool resources - for example, the same person who is a liaison for Neutron to Oslo could probably be on the look out for issues of interest to Octavia as well, and so on. I think that this looks different for projects of different size; large projects can spread it out a bit, while for smaller ones more of a "triumvirate" approach would likely develop. Nate [1] https://wiki.openstack.org/wiki/CrossProjectLiaisons for those not familiar > > [...] > > We drastically need to change the expectations we place on ourselves > > in terms of velocity. > > In terms of results, train cycle activity (as represented by merged > commits/day) is globally down 9.6% compared to Stein. Only considering > "core" projects, that's down 3.8%. > > So maybe we still have the same expectations, but we are definitely reducing > our velocity... Would you say we need to better align our expectations with > our actual speed? Or that we should reduce our expectations further, to > drive velocity further down? > > -- > Thierry Carrez (ttx) > From openstack at fried.cc Thu Sep 5 22:32:38 2019 From: openstack at fried.cc (Eric Fried) Date: Thu, 5 Sep 2019 17:32:38 -0500 Subject: [winstackers][powervmstackers][tc] removing winstackers and PowerVMStackers from TC governance In-Reply-To: <466A5D87-5936-4F05-91D9-36ACD680FFA4@doughellmann.com> References: <0CCB5020-D524-4304-8682-A015AEDB7C50@doughellmann.com> <466A5D87-5936-4F05-91D9-36ACD680FFA4@doughellmann.com> Message-ID: <31bc5922-3480-2fb6-dade-f76dab1e9013@fried.cc> There are other factors at play here that arguably justify this action, but I'd like to posit that failure to put forward a PTL for teams of this nature should not by itself be grounds for de-governance-ification. Cf. the "no placement PTL" thread for discussion of leaderlessness being not only possible but potentially beneficial. efried From anlin.kong at gmail.com Thu Sep 5 22:55:49 2019 From: anlin.kong at gmail.com (Lingxian Kong) Date: Fri, 6 Sep 2019 10:55:49 +1200 Subject: Need help trigger aodh alarm - All the steps I went through by details. In-Reply-To: References: Message-ID: Hi Anmar, Please see my comments in-line below. - Best regards, Lingxian Kong Catalyst Cloud On Wed, Sep 4, 2019 at 2:51 PM Anmar Salih wrote: > Hi Lingxian, > > First of all, I would like to apologize because the email is pretty long. > I listed all the steps I went through just to make sure that I did > everything correctly. > No need to apologize, more information is always helpful to solve the problem. > 4- Creating the webhook for the function by: openstack webhook create > --function 07edc434-a4b8-424a-8d3a-af253aa31bf8 . Here is a screen capture > for the response. I tried to copy and paste > the webhook_url " > http://192.168.1.155:7070/v1/webhooks/c5608648-bd73-478f-b452-ad1eabf93328/invoke" into > my internet browser, so I got 404 not found. I am not sure if this is > normal response or I have something wrong here. > Like Gaetan said, the webhook is supposed to be invoked by http POST. 9- Checking aodh alarm history by aodh alarm-history show > ea16edb9-2000-471b-88e5-46f54208995e -f yaml . So I got this response > > > 10- Last step is to check the function execution in qinling and here is > the response . (empty bracket). I am not sure > what is the problem. 
> Yeah, from the output of alarm history, the alarm is not triggered, as a result, there won't be execution created by the webhook. Seems like the aodh-listener didn't receive the message or the message was ignored. Could you paste the aodh-listener log but make sure: 1. `debug = True` in /etc/aodh/aodh.conf 2. Trigger the python script again > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Thu Sep 5 23:11:16 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 05 Sep 2019 16:11:16 -0700 Subject: Root Ara report removed from Zuul Jobs Message-ID: <67d5aea3-2d92-4378-9af1-a9dc2bcad0cc@www.fastmail.com> Hello Everyone, We have removed the top level Zuul job Ara reports from our Zuul jobs. This was done to reduce the total number of objects we are uploading to our swift/ceph object stores as some clouds have indicated the total object volume is a bit high. Analysis showed that Ara represented a significant chunk of that data. We did not remove that information though. Zuul's build dashboard is able to render a similar report for builds. You can find that by clicking on the "Console" tab of a build. For exampe, here is one for a nova tox job: http://zuul.openstack.org/build/8e581b24d38b4e5c8ff046be081c4525/console We hope this makes our log storage easier to support while still providing the information you need to debug your jobs. Note jobs that run a nested ara-report are not affected by this. I think TripleO, OSA, and others do this. Thank you, Clark From gmann at ghanshyammann.com Fri Sep 6 00:26:13 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 06 Sep 2019 09:26:13 +0900 Subject: [placement][ptl][tc] Call for Placement PTL position In-Reply-To: References: <16d0205c0b1.b18639584545.7154839133743907603@ghanshyammann.com> Message-ID: <16d03f6dcc3.b88b3ffe13390.5078407317661040921@ghanshyammann.com> ---- On Fri, 06 Sep 2019 01:20:39 +0900 Chris Dent wrote ---- > On Fri, 6 Sep 2019, Ghanshyam Mann wrote: > > > With Ussuri Cycle PTL election completed, we left with Placement project as leaderless[1]. > > In today TC meeting[2], we discussed the few possibilities and decided to reach out to the > > eligible candidates to serve the PTL position. > > Thanks for being concerned about this, but it would have been useful > if you included me (as the current PTL) and the rest of the > Placement team in the discussion or at least confirmed plans with me > before starting this seek-volunteers process. > > There are a few open questions we are still trying to resolve > before we should jump to any decisions: > > * We are currently waiting to see if Tetsuro is available (he's been > away for a few days). If he is, he'll be great, but we don't know > yet if he can or wants to. Thanks Chris. we discussed it in yesterday TC meeting and there is no hurry or leaving placement team away from the discussion. You as Train PTL and other placement members are the only ones to decide and help to select the right candidate. I am also waiting to hear from Tetsuro about his planning. > > * We've started, informally, discussing the option of pioneering the > option of leaderless projects within Placement (we pioneer many > other things there, may as well add that to the list) but without > more discussion from the whole team (which can't happen because we > don't have quorum of the actively involved people) and the TC it's > premature. 
Leaderless would essentially mean consensually > designating release liaisons and similar roles but no specific > PTL. I think this is easily possible in a small in number, > focused, and small feature-queue [1] group like Placement but > would much harder in one of the larger groups like Nova. This is an interesting idea and needs more discussions seems. I am not against of Leaderless project approach with right point of contacts for TC/release team etc but this is going to be the new process under current governance. Because there are other projects (winstackers and PowerVMStackers in U) are in the queue of being removed from governance because continuously lacking the leader since a couple of cycles. So if we go for Leaderless approach then, those projects should be removed based on general-in-active projects not because of no PTL. Anyways IMO, let's first check all possibility if anyone from placement team (or nova as it is an almost same team) can serve as PTL. If no then we discuss about your idea. -gmann > > * We have several reluctant people who _can_ do it, but don't want > to. Once we've explored the other ideas here and any others we can > come up with, we can dredge one of those people up as a stand-in > PTL, keeping the slot open. Because of [1] there's not much on the > agenda for U. > > Since the Placement team is not planning to have an active presence > at the PTG, nor planning to have much of a pre-PTG (as no one has > stepped up with any feature ideas) we have some days or even weeks > before it matters who the next PTL (if any) is, so if possible, > let's not rush this. > > [1] It's been a design goal of mine from the start that Placement > would quickly reach a position of stability and maturity that I > liked to call "being done". By the end of Train we are expecting to > be feature complete for any features that have been actively > discussed in the recent past [2]. The main tasks in U will be > responding to bug fixes and requests-for-explanations for the > features that already exist (because people asked for them) but are > not being used yet and getting the osc-placement client caught up. > > [2] The biggest thing that has been discussed as a "maybe we should > do" for which there are no immediate plans is "resource provider > sharding" or "one placement, many clouds". That's a thing we > imagined people might ask for, but haven't yet, so there's little > point doing it. > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent From Albert.Braden at synopsys.com Fri Sep 6 00:37:32 2019 From: Albert.Braden at synopsys.com (Albert Braden) Date: Fri, 6 Sep 2019 00:37:32 +0000 Subject: Nova causes MySQL timeouts In-Reply-To: References: Message-ID: After more googling it appears that max_pool_size is a maximum limit on the number of connections that can stay open, and max_overflow is a maximum limit on the number of connections that can be temporarily opened when the pool has been consumed. It looks like the defaults are 5 and 10 which would keep 5 connections open all the time and allow 10 temp. Do I need to set max_pool_size to 0 and max_overflow to the number of connections that I want to allow? Is that a reasonable and correct configuration? Intuitively that doesn't seem right, to have a pool size of 0, but if the "pool" is a group of connections that will remain open until they time out, then maybe 0 is correct? 
From: Albert Braden Sent: Wednesday, September 4, 2019 10:19 AM To: openstack-discuss at lists.openstack.org Cc: Gaëtan Trellu Subject: RE: Nova causes MySQL timeouts We’re not setting max_pool_size nor max_overflow option presently. I googled around and found this document: https://docs.openstack.org/keystone/stein/configuration/config-options.html Document says: [api_database] connection_recycle_time = 3600 (Integer) Timeout before idle SQL connections are reaped. max_overflow = None (Integer) If set, use this value for max_overflow with SQLAlchemy. max_pool_size = None (Integer) Maximum number of SQL connections to keep open in a pool. [database] connection_recycle_time = 3600 (Integer) Timeout before idle SQL connections are reaped. min_pool_size = 1 (Integer) Minimum number of SQL connections to keep open in a pool. max_overflow = 50 (Integer) If set, use this value for max_overflow with SQLAlchemy. max_pool_size = None (Integer) Maximum number of SQL connections to keep open in a pool. If min_pool_size is >0, would that cause at least 1 connection to remain open until it times out? What are the recommended values for these, to allow unused connections to close before they time out? Is “min_pool_size = 0” an acceptable setting? My settings are default: [api_database]: #connection_recycle_time = 3600 #max_overflow = #max_pool_size = [database]: #connection_recycle_time = 3600 #min_pool_size = 1 #max_overflow = 50 #max_pool_size = 5 It’s not obvious what max_overflow does. Where can I find a document that explains more about these settings? From: Gaëtan Trellu > Sent: Tuesday, September 3, 2019 1:37 PM To: Albert Braden > Cc: openstack-discuss at lists.openstack.org Subject: Re: Nova causes MySQL timeouts Hi Albert, It is a configuration issue, have a look to max_pool_size and max_overflow options under [database] section. Keep in mind than more workers you will have more connections will be opened on the database. Gaetan (goldyfruit) On Sep 3, 2019 4:31 PM, Albert Braden > wrote: It looks like nova is keeping mysql connections open until they time out. How are others responding to this issue? Do you just ignore the mysql errors, or is it possible to change configuration so that nova closes and reopens connections before they time out? Or is there a way to stop mysql from logging these aborted connections without hiding real issues? Aborted connection 10726 to db: 'nova' user: 'nova' host: 'asdf' (Got timeout reading communication packets) -------------- next part -------------- An HTML attachment was scrubbed... URL: From Nitin.Uikey at nttdata.com Fri Sep 6 02:40:13 2019 From: Nitin.Uikey at nttdata.com (Uikey, Nitin) Date: Fri, 6 Sep 2019 02:40:13 +0000 Subject: [dev][tacker] Steps to setup tacker for testing VNF packages In-Reply-To: References: , Message-ID: Hi All, DB upgrade steps was missing in my previous mail. Sharing all the steps again. Steps to set-up tacker for managing vnf packages:- 1. Api-paste.ini [composite:tacker] /vnfpkgm/v1: vnfpkgmapi_v1 [composite:vnfpkgmapi_v1] use = call:tacker.auth:pipeline_factory noauth = request_id catch_errors extensions vnfpkgmapp_v1 keystone = request_id catch_errors authtoken keystonecontext extensions vnfpkgmapp_v1 [app:vnfpkgmapp_v1] paste.app_factory = tacker.api.vnfpkgm.v1.router:VnfpkgmAPIRouter.factory You can also copy api-paste.ini available in patch : https://review.opendev.org/#/c/675593 2. Configuration options changes : tacker.conf a) Periodic task to delete the vnf package artifacts from nodes and glance store. 
default configuration in tacker/tacker/conf/conductor.py vnf_package_delete_interval = 1800 b) Path to store extracted CSAR file on compute node default configuration in tacker/conf/vnf_package.py vnf_package_csar_path = /var/lib/tacker/vnfpackages/ vnf_package_csar_path should have Read and Write access (+rw) c) Path to store CSAR file at glance store default configuration in /devstack/lib/tacker default_backend = file filesystem_store_datadir = /var/lib/tacker/csar_files filesystem_store_datadir should have Read and Write access (+rw) 3. Apply python-tackerclient patches https://review.opendev.org/#/c/679956/ https://review.opendev.org/#/c/679957/ https://review.opendev.org/#/c/679958/ 4. Apply tosca parser changes https://review.opendev.org/#/c/675561/ 5. Upgrade the tacker Database to 9d425296f2c3 version tacker-db-manage --config-file /etc/tacker/tacker.conf upgrade 9d425296f2c3 6. Sample CSAR file to create VNF package tacker/tacker/samples/vnf_packages/sample_vnf_pkg.zip 7. Commands to manage VNF packages To create a VNF package - openstack vnfpack create —user-data key=value will be generated by this command which will be used in other commands to manage VNF Package. To upload the CSAR file 1. using direct path - openstack vnfpack upload --upload-method direct-file --path 2. using web - openstack vnfpack upload --upload-method web-download --path To list all the VNF Package - openstack vnfpack list To show a VNF package details - openstack vnfpack show To delete a VNF package - openstack vnfpack delete use `openstack vnfpack --help` command for more information Regards, Nitin Uikey Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. From dangtrinhnt at gmail.com Fri Sep 6 04:07:36 2019 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Fri, 6 Sep 2019 13:07:36 +0900 Subject: [all][ptl][tc][docs] Develope a code-review practices document Message-ID: Hi all, I find it's hard sometimes to handle situations in code-review, something likes solving conflicts while not upsetting developers, or suggesting a change to a patchset while still encouraging the committer, etc. I know there are already documents that guide us on how to do a code-review [2] and even projects develope their own procedures but I find they're more about technical issues rather than human communication. Currently reading Google's code-review practices [1] give me some inspiration to develop more human-centric code-review guidelines for OpenStack projects. IMO, it could be a great way to help project teams develop stronger relationship as well as encouraging newcomers. When the document is finalized, I then encourage PTLs to refer to that document in the project's docs. Let me know what you think and I will put a patchset after one or two weeks. 
[1] https://google.github.io/eng-practices/review/ [2] https://docs.openstack.org/project-team-guide/review-the-openstack-way.html [3] https://docs.openstack.org/doc-contrib-guide/docs-review.html [4] https://docs.openstack.org/nova/rocky/contributor/code-review.html [5] https://docs.openstack.org/neutron/pike/contributor/policies/code-reviews.html Bests, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From dikonoor at in.ibm.com Fri Sep 6 05:42:07 2019 From: dikonoor at in.ibm.com (Divya K Konoor) Date: Fri, 6 Sep 2019 11:12:07 +0530 Subject: [winstackers][powervmstackers][tc] removing winstackers and PowerVMStackers from TC governance In-Reply-To: <31bc5922-3480-2fb6-dade-f76dab1e9013@fried.cc> References: <0CCB5020-D524-4304-8682-A015AEDB7C50@doughellmann.com> <466A5D87-5936-4F05-91D9-36ACD680FFA4@doughellmann.com> <31bc5922-3480-2fb6-dade-f76dab1e9013@fried.cc> Message-ID: Missing the deadline for a PTL nomination cannot be the reason for removing governance. PowerVMStackers continue to be an active project and would want to be continued to be governed under OpenStack. For PTL, an eligible candidate can still be appointed . Regards, D i v y a K K o n o o r -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 14142563.gif Type: image/gif Size: 558 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: From luka.peschke at objectif-libre.com Fri Sep 6 08:28:20 2019 From: luka.peschke at objectif-libre.com (Luka Peschke) Date: Fri, 06 Sep 2019 10:28:20 +0200 Subject: [cloudkitty] Shift IRC meeting of september 6th Message-ID: <949db3c0.AM8AAEwUf8UAAAAAAAAAAAQR_QkAAAAAZtYAAAAAAAzbjABdchil@mailjet.com> Hello, Some CK cores are unavailable today, so we've decided to move today's meeting to next friday (the 13th) at 15h UTC / 17h CEST . Regards, Luka Peschke From thierry at openstack.org Fri Sep 6 08:48:02 2019 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 6 Sep 2019 10:48:02 +0200 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <20190905223137.i72s7n4tibkgypqf@bishop> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> <20190905223137.i72s7n4tibkgypqf@bishop> Message-ID: <0bbb4765-3e57-b7dc-11ef-50ed639ea5c0@openstack.org> Nate Johnston wrote: > On Thu, Sep 05, 2019 at 11:59:22AM +0200, Thierry Carrez wrote: >> - allow "PTL" role to be multi-headed, so that it is less of a superhuman >> and spreading the load becomes more natural. We would not elect/choose a >> single person, but a ticket with one or more names on it. From a governance >> perspective, we still need a clear contact point and a "bucket stops here" >> voice. But in practice we could (1) contact all heads when we contact "the >> PTL", and (2) consider that as long as there is no dissent between the >> heads, it is "the PTL voice". To actually make it work in practice I'd >> advise to keep the number of heads low (think 1-3). > > I think there was already an effort to allow the PTL to shed some of their > duties, in the form of the Cross Project Liaisons [1] project. 
I thought that > was a great way for more junior members of the community to get involved with > stewardship and be recognized for that contribution, and perhaps be mentored up > as they take a bit of load off the PTL. I think if we expand the roles to > include more of the functions that PTLs feel the need to do themselves, then by > doing so we (of necessity) document those parts of the job so that others can > handle them. And perhaps projects can cooperate and pool resources - for > example, the same person who is a liaison for Neutron to Oslo could probably be > on the look out for issues of interest to Octavia as well, and so on. Cross-project liaisons are a form of delegation. So yes, PTLs already can (and probably should) delegate most of their duties. And in a lot of teams it already works like that. But we have noticed that it can be harder to delegate tasks than share tasks. Basically, once someone is the PTL, it is tempting to just have them do all the PTL stuff (since they will do it by default if nobody steps up). That makes the job a bit intimidating, and it is sometimes hard to find candidates to fill it. If it's clear from day 0 that two or three people will share the tasks and be collectively responsible for those tasks to be covered, it might be less intimidating (easier to find 2 x 50% than 1 x 100% ?). -- Thierry Carrez (ttx) From thierry at openstack.org Fri Sep 6 09:05:16 2019 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 6 Sep 2019 11:05:16 +0200 Subject: [winstackers][powervmstackers][tc] removing winstackers and PowerVMStackers from TC governance In-Reply-To: References: <0CCB5020-D524-4304-8682-A015AEDB7C50@doughellmann.com> <466A5D87-5936-4F05-91D9-36ACD680FFA4@doughellmann.com> <31bc5922-3480-2fb6-dade-f76dab1e9013@fried.cc> Message-ID: Divya K Konoor wrote: > Missing the deadline for a PTL nomination cannot be the reason for > removing governance. I agree with that, but missing the deadline twice in a row is certainly a sign of some disconnect with the rest of the OpenStack community. Project teams require a minimal amount of reactivity and presence, so it is fair to question whether PowerVMStackers should continue as a project team in the future. > PowerVMStackers continue to be an active project > and would want to be continued to be governed under OpenStack. For PTL, > an eligible candidate can still be appointed . There is another option, to stay under OpenStack governance but without the constraints of a full project team: PowerVMStackers could be made an OpenStack SIG. I already proposed that 6 months ago (last time there was no PTL nominee for the team), on the grounds that interest in PowerVM was clearly a special interest, and a SIG might be a better way to regroup people interested in supporting PowerVM in OpenStack. The objection back then was that PowerVMStackers maintained a number of PowerVM-related code, plugins and drivers that should ideally be adopted by their consuming project teams (nova, neutron, ceilometer), and that making it a SIG would endanger that adoption process. I still think it makes sense to consider PowerVMStackers as a Special Interest Group. As long as the PowerVM-related code is not adopted by the consuming projects, it is arguably a special interest, and not a completely-integrated part of OpenStack components. The only difference in being a SIG (compared to being a project team) would be to reduce the amount of mandatory tasks (like designating a PTL every 6 months). 
You would still be able to own repositories, get room at OpenStack events, vote on TC election... It would seem to be the best solution in your case. -- Thierry Carrez (ttx) From marek.lycka at ultimum.io Fri Sep 6 09:33:40 2019 From: marek.lycka at ultimum.io (=?UTF-8?B?TWFyZWsgTHnEjWth?=) Date: Fri, 6 Sep 2019 11:33:40 +0200 Subject: [Horizon] Paging and Angular... In-Reply-To: References: Message-ID: Hi, > we need people familiar with Angular and Horizon's ways of using Angular (which seem to be very > non-standard) that would be willing to write and review code. Unfortunately the people who originally > introduced Angular in Horizon and designed how it is used are no longer interested in contributing, > and there don't seem to be any new people able to handle this. I've been working with Horizon's Angular for quite some time and don't mind keeping at it, but it's useless unless I can get my code merged, hence my original message. As far as attracting new developers goes, I think that removing some barriers to entry couldn't hurt - seeing commits simply lost to time being one of them. I can see it as being fairly demoralizing. > Personally, I think that a better long-time strategy would be to remove all > Angular-based views from Horizon, and focus on maintaining one language and one set of tools. Removing AngularJS wouldn't remove JavaScript from horizon. We'd still be left with a home-brewish framework (which is buggy as is). I don't think removing js completely is realistic either: we'd lose functionality and worsen user experience. I think that keeping Angular is the better alternative: 1) A lot of work has already been put into Angularization, solving many problems 2) Unlike legacy js, Angular code is covered by automated tests 3) Arguably, improvments are, on average, easier to add to Angular than pure js implementations Whatever reservations there may be about the current implementation can be identified and addressed, but all in all, I think removing it at this point would be counterproductive. M. čt 5. 9. 2019 v 14:28 odesílatel Radomir Dopieralski napsal: > Both of your questions have one answer: we need people familiar with > Angular and Horizon's ways of using Angular (which seem to be very > non-standard) that would be willing to write and review code. Unfortunately > the people who originally introduced Angular in Horizon and designed how it > is used are no longer interested in contributing, and there don't seem to > be any new people able to handle this. Personally, I think that a better > long-time strategy would be to remove all Angular-based views from Horizon, > and focus on maintaining one language and one set of tools. 
> > On Thu, Sep 5, 2019 at 1:52 PM Marek Lyčka wrote: > >> Hi all, >> >> I took apart the Horizon paging mechanism while working on [1] and have a >> few of findings: >> >> - Paging is unimplemented/turned off for many (if not most) panels, not >> just Routers and Networks >> - Currently, single page data loads could potentially bump up against API >> hard limits >> - Sorting is also broken in places where paging is enabled (Old >> images...), see [2] >> - The Networks table loads data via three API calls due to neutron API >> limitations, which makes the marker based mechanism unusable >> - There is at least one more minor bug which breaks pagination, there may >> be more >> >> While some of these things may be fixable in different hacky and/or >> inefficient ways, >> we already have Angular implementations which fix many of them and make >> improving >> and fixing the rest easier. >> >> Since Angular ports would help with other unrelated issues as well and >> allow us to >> start deprecating old code, I was wondering two things: >> >> 1) What would it take to increase the priority of Angularization in >> general? >> 2) Can the Code Review process be modified/improved to increase the >> chance for >> Angularization changes to be code reviewed and merged if they do >> happen? >> My previous attempts in this area have failed because of lack of code >> reviewers... >> >> Since full Angularization is still the goal for Horizon as far as I know, >> I'd rather >> spend time doing that than hacking solutions to different problems in >> legacy code >> which is slated deprecation. >> >> Best Regards, >> Marek >> >> [1] https://bugs.launchpad.net/horizon/+bug/1746184 >> [2] https://bugs.launchpad.net/horizon/+bug/1782732 >> >> -- >> Marek Lyčka >> Linux Developer >> >> Ultimum Technologies s.r.o. >> Na Poříčí 1047/26, 11000 Praha 1 >> Czech Republic >> >> marek.lycka at ultimum.io >> *https://ultimum.io * >> > -- Marek Lyčka Linux Developer Ultimum Technologies s.r.o. Na Poříčí 1047/26, 11000 Praha 1 Czech Republic marek.lycka at ultimum.io *https://ultimum.io * -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Fri Sep 6 09:36:38 2019 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 6 Sep 2019 11:36:38 +0200 Subject: [i18n][tc] The future of I18n Message-ID: <0ffa02d3-fef5-8fc3-1925-5c663b6c967d@openstack.org> Hi! The I18n project team had no PTL candidates for Ussuri, so the TC needs to decide what to do with it. It just happens that Ian kindly volunteered to be an election official, and therefore could not technically run for I18n PTL. So if Ian is still up for taking it, we could just go and appoint him. That said, I18n evolved a lot, to the point where it might fit the SIG profile better than the project team profile. As a reminder, project teams are responsible for producing OpenStack-the-software, and since they are all integral in the production of the software that we want to release on a time-based schedule, they come with a number of mandatory tasks (like designating a PTL every 6 months). SIGs (special interest groups) are OpenStack teams that work on a mission that is not directly producing a piece of the OpenStack release. SIG members are bound by their mission, rather than by a specific OpenStack release deliverable. There is no mandatory task, as it is OK if the group goes dormant for a while. 
The I18n team regroups translators, with an interest of making OpenStack (in general, not just the software) more accessible to non-English speakers. They currently try to translate the OpenStack user survey, the Horizon dashboard messages, and key documentation. It could still continue as a project team (since it still produces Horizon translations), but I'd argue that at this point it is not what defines them. The fact that they are translators is what defines them, which IMHO makes them fit the SIG profile better than the project team profile. They can totally continue proposing translation files for Horizon as a I18n SIG, so there would be no technical difference. Just less mandatory tasks for the team. Thoughts ? -- Thierry Carrez (ttx) From amotoki at gmail.com Fri Sep 6 10:59:39 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Fri, 6 Sep 2019 19:59:39 +0900 Subject: [keystone][horizon][zaqar][tempest][requirements] library updates breaking projects In-Reply-To: <20190905162516.mxdxg4dl3epwwwfi@mthode.org> References: <20190905162516.mxdxg4dl3epwwwfi@mthode.org> Message-ID: On Fri, Sep 6, 2019 at 1:26 AM Matthew Thode wrote: > > I emailed a while ago about problem updates and wanted to give an > update. I'm hoping we can get these fixed before the freeze which is on > Monday iirc. > > horizon > This is a newer issue which e0ne and amotoki know about but no existing > review to fix it. > please test against https://review.opendev.org/680457 > -semantic-version===2.8.1 > +semantic-version===2.6.0 I proposed a fix at https://review.opendev.org/#/c/680631/. It passes unit tests and a failure in the integration tests looks unrelated to the fix. -- Akihiro Motoki (amotoki) > > tempest STILL has failures > I thought the following commit would fix it, but nope > https://github.com/mtreinish/stestr/commit/136027c005fc437341bc23939a18a5f3314194f1 > -stestr===2.5.1 > +stestr===2.4.0 > > python-zaqarclient > waiting on https://review.opendev.org/679842 may be merging today > -jsonschema===3.0.2 > +jsonschema===2.6.0 > > keystone > a review is out there that seems to have tests passing > https://review.opendev.org/677511/ > -oauthlib===3.1.0 > +oauthlib===3.0.2 > > -- > Matthew Thode From cdent+os at anticdent.org Fri Sep 6 11:04:04 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 6 Sep 2019 12:04:04 +0100 (BST) Subject: [placement] update 19-35 Message-ID: HTML: https://anticdent.org/placement-update-19-35.html Let's have a placement update 19-35. Feature freeze is this week. We have a feature in progress (consumer types, see below) but it is not critical. # Most Important Three main things we should probably concern ourselves with in the immediate future: * We are currently without a PTL for Ussuri. There's some discussion about the options for dealing with this in an [email thread](http://lists.openstack.org/pipermail/openstack-discuss/2019-September/thread.html#9131). If you have ideas (or want to put yourself forward), please share. * We need to work on useful documentation for the features developed this cycle. * We need to create some [cycle highlights](http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009137.html). To help with that I've started [an etherpad](https://etherpad.openstack.org/p/placement-train-cycle-highlights). If I've forgotten anything, please make additions. # What's Changed * osc-placement 1.7.0 has been [released](https://pypi.org/project/osc-placement/). 
This adds support for [managing allocation ratios](https://review.opendev.org/#/q/topic:allocation-ratios+(status:open+OR+status:merged)) via aggregates, but adding a few different commands and args for inventory manipulation. * Work on consumer types exposed that placement needed to be first class in grenade to make sure database migrations are run. That [change has merged](https://review.opendev.org/679655). Until then placement was upgraded as part of nova. # Stories/Bugs (Numbers in () are the change since the last pupdate.) There are 24 (-1) stories in [the placement group](https://storyboard.openstack.org/#!/project_group/placement). 0 (0) are [untagged](https://storyboard.openstack.org/#!/worklist/580). 5 (0) are [bugs](https://storyboard.openstack.org/#!/worklist/574). 4 (0) are [cleanups](https://storyboard.openstack.org/#!/worklist/575). 11 (-1) are [rfes](https://storyboard.openstack.org/#!/worklist/594). 4 (0) are [docs](https://storyboard.openstack.org/#!/worklist/637). If you're interested in helping out with placement, those stories are good places to look. * Placement related nova [bugs not yet in progress](https://goo.gl/TgiPXb) on launchpad: 17 (0). * Placement related nova [in progress bugs](https://goo.gl/vzGGDQ) on launchpad: 6 (0). # osc-placement * Add support for multiple member_of. There's been some useful discussion about how to achieve this, and a consensus has emerged on how to get the best results. * `--amend` and `--aggregate` on resource provider inventory has merged and been release 1.7.0 (see above). # Main Themes ## Consumer Types Adding a type to consumers will allow them to be grouped for various purposes, including quota accounting. * I took this through to microversion and api-ref docs, so it is ready for wider review. If this doesn't make it in for Train, that's okay. The goal is to have it ready for Nova to start working with it when Nova is able. ## Cleanup Cleanup is an overarching theme related to improving documentation, performance and the maintainability of the code. The changes we are making this cycle are fairly complex to use and are fairly complex to write, so it is good that we're going to have plenty of time to clean and clarify all these things. Performance related explorations continue: * Refactor initialization of research context. This puts the code that might cause an exit earlier in the process so we can avoid useless work. One outcome of the performance work needs to be something like a _Deployment Considerations_ document to help people choose how to tweak their placement deployment to match their needs. The simple answer is use more web servers and more database servers, but that's often very wasteful. # Other Placement Miscellaneous changes can be found in [the usual place](https://review.opendev.org/#/q/project:openstack/placement+status:open). * Merge request log and request id middlewares is worth attention. It makes sure that _all_ log message from a single request use a global and local request id. There are three [os-traits changes](https://review.opendev.org/#/q/project:openstack/os-traits+status:open) being discussed. And zero [os-resource-classes changes](https://review.opendev.org/#/q/project:openstack/os-resource-classes+status:open). # Other Service Users This week (because of feature freeze) I will not be adding new finds to the list, just updating what was already on the list. 
* helm: add placement chart * libvirt: report pmem namespaces resources by provider tree * Nova: Remove PlacementAPIConnectFailure handling from AggregateAPI * Nova: WIP: Add a placement audit command * Nova: libvirt: Start reporting PCPU inventory to placement A part of * Nova: support move ops with qos ports * nova: Support filtering of hosts by forbidden aggregates * tempest: Add placement API methods for testing routed provider nets * openstack-helm: Build placement in OSH-images * Correct global_request_id sent to Placement * Nova: cross cell resize * Nova: Scheduler translate properties to traits * Nova: single pass instance info fetch in host manager * Nova: using provider config file for custom resource providers # End 🐎 -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From balazs.gibizer at est.tech Fri Sep 6 12:00:20 2019 From: balazs.gibizer at est.tech (=?utf-8?B?QmFsw6F6cyBHaWJpemVy?=) Date: Fri, 6 Sep 2019 12:00:20 +0000 Subject: [placement][ptl][tc] Call for Placement PTL position In-Reply-To: References: <16d0205c0b1.b18639584545.7154839133743907603@ghanshyammann.com> Message-ID: <1567771216.28660.0@smtp.office365.com> On Thu, Sep 5, 2019 at 6:20 PM, Chris Dent wrote: On Fri, 6 Sep 2019, Ghanshyam Mann wrote: With Ussuri Cycle PTL election completed, we left with Placement project as leaderless[1]. In today TC meeting[2], we discussed the few possibilities and decided to reach out to the eligible candidates to serve the PTL position. Thanks for being concerned about this, but it would have been useful if you included me (as the current PTL) and the rest of the Placement team in the discussion or at least confirmed plans with me before starting this seek-volunteers process. There are a few open questions we are still trying to resolve before we should jump to any decisions: * We are currently waiting to see if Tetsuro is available (he's been away for a few days). If he is, he'll be great, but we don't know yet if he can or wants to. * We've started, informally, discussing the option of pioneering the option of leaderless projects within Placement (we pioneer many other things there, may as well add that to the list) but without more discussion from the whole team (which can't happen because we don't have quorum of the actively involved people) and the TC it's premature. Leaderless would essentially mean consensually designating release liaisons and similar roles but no specific PTL. I think this is easily possible in a small in number, focused, and small feature-queue [1] group like Placement but would much harder in one of the larger groups like Nova. * We have several reluctant people who _can_ do it, but don't want to. Once we've explored the other ideas here and any others we can come up with, we can dredge one of those people up as a stand-in PTL, keeping the slot open. Because of [1] there's not much on the agenda for U. I guess I'm one of the reluctant people. I think technically I can do it but I don't want to commit to work when I don't see that I will have enough time to do it well. For me this is all about priorities and the amount of work I'm already commited to at the moment. Still I'm open to get tasks delegated to me, like doing the project update in Sanghai. 
Cheers, gibi Since the Placement team is not planning to have an active presence at the PTG, nor planning to have much of a pre-PTG (as no one has stepped up with any feature ideas) we have some days or even weeks before it matters who the next PTL (if any) is, so if possible, let's not rush this. [1] It's been a design goal of mine from the start that Placement would quickly reach a position of stability and maturity that I liked to call "being done". By the end of Train we are expecting to be feature complete for any features that have been actively discussed in the recent past [2]. The main tasks in U will be responding to bug fixes and requests-for-explanations for the features that already exist (because people asked for them) but are not being used yet and getting the osc-placement client caught up. [2] The biggest thing that has been discussed as a "maybe we should do" for which there are no immediate plans is "resource provider sharding" or "one placement, many clouds". That's a thing we imagined people might ask for, but haven't yet, so there's little point doing it. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent -------------- next part -------------- An HTML attachment was scrubbed... URL: From witold.bedyk at suse.com Fri Sep 6 12:00:22 2019 From: witold.bedyk at suse.com (Witek Bedyk) Date: Fri, 6 Sep 2019 14:00:22 +0200 Subject: [monasca] Review Priority flag Message-ID: <1ba4b730-2e6f-4b09-5fb1-1f20ef4b7970@suse.com> Hello Team, now that we have the possibility to label our code changes with Review-Priority I would like to start the discussion about formalizing its usage. Right now every core reviewer can set its value, but we haven't defined any rules on how to use it. I suggest a process of proposing the changes which should be prioritized in weekly team meeting or in the mailing list. Any core reviewer, preferably from a different company, could confirm such proposed change by setting RV +1. I hope it's simple enough. What do you think? Another topic is exposing the prioritized code changes to the reviewers. We can list them using the filter [1]. We could add the link to this filter to Contributor Guide [2] and Priorities page [3]. We should also go through the list every week in the meeting. Any other ideas? Thanks Witek [1] https://review.opendev.org/#/q/(projects:openstack/monasca+OR+project:openstack/python-monascaclient)+label:Review-Priority+is:open [2] https://docs.openstack.org/monasca-api/latest/contributor/index.html [3] http://specs.openstack.org/openstack/monasca-specs/ From openstack at sheep.art.pl Fri Sep 6 12:00:30 2019 From: openstack at sheep.art.pl (Radomir Dopieralski) Date: Fri, 6 Sep 2019 14:00:30 +0200 Subject: [Horizon] Paging and Angular... In-Reply-To: References: Message-ID: On Fri, Sep 6, 2019 at 11:33 AM Marek Lyčka wrote: > Hi, > > > we need people familiar with Angular and Horizon's ways of using Angular > (which seem to be very > > non-standard) that would be willing to write and review code. > Unfortunately the people who originally > > introduced Angular in Horizon and designed how it is used are no longer > interested in contributing, > > and there don't seem to be any new people able to handle this. > > I've been working with Horizon's Angular for quite some time and don't > mind keeping at it, but > it's useless unless I can get my code merged, hence my original message. 
> > As far as attracting new developers goes, I think that removing some > barriers to entry couldn't hurt - > seeing commits simply lost to time being one of them. I can see it as > being fairly demoralizing. > We can't review your patches, because we don't understand them. For the patches to be merged, we need more than one person, so that they can review each other's patches. > > Personally, I think that a better long-time strategy would be to remove > all > > Angular-based views from Horizon, and focus on maintaining one language > and one set of tools. > > Removing AngularJS wouldn't remove JavaScript from horizon. We'd still be > left with a home-brewish > framework (which is buggy as is). I don't think removing js completely is > realistic either: we'd lose > functionality and worsen user experience. I think that keeping Angular is > the better alternative: > > 1) A lot of work has already been put into Angularization, solving many > problems > 2) Unlike legacy js, Angular code is covered by automated tests > 3) Arguably, improvments are, on average, easier to add to Angular than > pure js implementations > > Whatever reservations there may be about the current implementation can be > identified and addressed, but > all in all, I think removing it at this point would be counterproductive. > JavaScript is fine. We all know how to write and how to review JavaScript code, and there doesn't have to be much of it — Horizon is not the kind of tool that has to bee all shiny and animated. It's a tool for getting work done. AngularJS is a problem, because you can't tell what the code does just by looking at the code, and so you can neither review nor fix it. There has been a lot of work put into mixing Horizon with Angular, but I disagree that it has solved problems, and in fact it has introduced a lot of regressions. Just to take a simple example, the translations are currently broken for en.AU and en.GB languages, and date display is not localized. And nobody cares. We had automated tests before Angular. There weren't many of them, because we also didn't have much JavaScript code. If I remember correctly, those tests were ripped out during the Angularization. Arguably, improvements are, on average, impossible to add to Angular, because the code makes no sense on its own. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Fri Sep 6 12:27:53 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 6 Sep 2019 13:27:53 +0100 (BST) Subject: [tc] Campaign Question: Treating the Problem, not just the symptoms- Burnout, No Polling, etc In-Reply-To: <99048F8B-EE87-4A3A-A689-8F05F8EBDBBE@doughellmann.com> References: <99048F8B-EE87-4A3A-A689-8F05F8EBDBBE@doughellmann.com> Message-ID: On Wed, 4 Sep 2019, Doug Hellmann wrote: > I would take this a step further, and remind everyone in > leadership positions that your job is not to do things *for* > anyone, but to enable others to do things *for themselves*. Open > source is based on collaboration, and ensuring there is a healthy > space for that collaboration is your responsibility. You are > neither a free workforce nor a charity. By all means, you should > help people to achieve their goals in a reasonable way by reducing > barriers, simplifying processes, and making tools reusable. But do > not for a minute believe that you have to do it all for them, even > if you think they have a great idea. Make sure you say “yes, you > should do that” more often than “yes, I will do that." 
This is very true, but I think it underestimates the many different forces that are involved in "doing work in OpenStack". These are, of course, very different from person to person, but I've got some observations (of course I do, everyone should). I suspect some of these are unique to my experience, but I suspect some of them are not. It would be useful (to me at least) to know where some of us have had similar experiences. Most people work on OpenStack because it is their job or is closely related to their job. But because it is "open source" and "a community" and "collaborative" doing what people ask for and helping others achieve what they need is but one small piece of the motivation and action calculus. Making "it" (various things: code, community, product, experiences of various kinds) "better" (again, very subjective and multi-dimensional) is very complicated. And it is further complicated by the roadblocks that can come up in the community. In the other thread that started this Sean said: the reason that the investment that was been made was reducing was not driven by the lack of hype but by how slow adding some feature that really mattered [1] One aspect of burn out comes from the combination of weathering these roadblocks and having a kind of optimism that says "I can, somehow change this or overcome this." Another is simply a dedication to quality, no matter the obstacles. This is tightly coupled with Sean's comments above. Improving the "developer experience" is rarely a priority and gets pushed on the back burner unless you dedicate the time to being core or PTL, which grants some license to "getting code merged". For some projects that is a _huge_ undertaking. My relatively good success at overcoming the obstacles but limited (that is, constrained to a small domain) at changing the root causes is why I'm now advocating chilling out. This is risky because the latency between code and related work done now and any feedback is insanely high. The improvements we've made recently to placement won't be in common use for 6 months to 3 years, depending on how we measure "common". Detaching or chilling out now doesn't have an impact for some time. That feedback latency also means figuring out what "better" or "quality" mean for a project is a guessing game. Making cycles longer will make that worse. A year ago when we started extracting placement I tried to make real the idea that full time cores should rarely write feature code and primarily be involved in helping "people to achieve their goals in a reasonable way by reducing barriers, simplifying processes, and making tools reusable". This only sort of worked. There were issues: * There were feature goals, but few people to do the work. * Our (OpenStack's) long term standards for what is or is not a barrier, good process and tooling are historically so low that bring them up to spec requires a vast amount of work. To me, the Placement changes made in Train were needed so that Placement could make a respectable claim to being "good". 75% of the changes (counting by commit) were made by 4 people. 43% by one. [2] The large amount of time required to be core, PTL or "get their code merged pretty easily" (in some projects) is a big portion of any job and given the contraction of interest in the community (but not in the product) from plenty of companies, there is lurking fear that the luxury of making that commitment, of being a "unicorn 100% upstream", will go away at any time. 
This increases the need to do all those "make it better" things _now_. Which, obviously, is a trap, and people who feel like that would be better off if they chilled out, but I would guess that people who feel that way do so because making it better (whatever it is) is important in and of itself. Especially when the lack of commitment from enterprises is waning: they don't care, so I must, because I care. In other projects, there's simply no one there to become core or a reluctance to get into leadership because it is perceived to be too time consuming (because for many people in leadership, the time consumption is very visible). Similarly, when there's a sense of waning interest, the guessing game described above for determining what matters is pressurized. "If I get this wrong, the risk of our irrelevance or even demise is increased, don't mess this up!". Also a trap. But both traps are compelling. I think we need to investigate changing our governance and leadership structures. We should have evolved away from them, but we haven't because power always strives to maintain itself even when it is no longer fit for purpose. TC, PTL, Core and even "projects" all need rigorous review and reconsideration to see if they are really supporting us ("us" in this case is "the people who make OpenStack") the way they should. If we are unable or unwilling to do that, then we need to enforce "contributing" enterprises to contribute sufficient resources to prop up the old power structures. [3] [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009123.html [2] That is, assuming stackalytics is correct today, it often isn't. [3] Perversely, I think this option (companies paying up) is the fundamentally right one from an economic standpoint, but that is because I don't believe that OpenStack is (currently and through the peak of its history) open source. It is collaborative inter-enterprise development that allows large companies to have a market in which they make a load of money. That takes money and people. If OpenStack were simpler and more contained and tried less hard to satisfy everyone, it could operate as an underlay (much like Linux) to some other market but for now it is the market. The pains we are having now may be the signs of a need for a shift to being an underlay (k8s and others are the new keen market now). If that's the case we can accelerate that shift by narrowing what we do. Trimming the fat. Making OpenStack much more turnkey with far fewer choices. But again, the current batch of engaged enterprises have not shown signs of wanting that narrowing. So they either need to change what they want or cough up the resources to support what they want in a healthy fashion. What we should do is strive to be healthy, whatever else happens. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From dtantsur at redhat.com Fri Sep 6 12:30:59 2019 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 6 Sep 2019 14:30:59 +0200 Subject: [ironic] opensuse-15 jobs are temporary non-voting on bifrost In-Reply-To: References: <979bbec8-1f94-458a-aab0-f4d6327078ab@redhat.com> Message-ID: <55cfa2ca-4f8b-1613-9321-80cf1eccae75@redhat.com> On 9/5/19 8:09 PM, Dirk Müller wrote: > Hi Dmitry, > > Am Mi., 4. Sept. 2019 um 17:25 Uhr schrieb Dmitry Tantsur : > >> JFYI we had to disable opensuse-15 jobs because they kept failing with >> repository issues. Help with debugging appreciated. > > The nodeset is incorrect, https://review.opendev.org/680450 should get > you help started. 
Thank you! > > > Greetings, > Dirk > From cdent+os at anticdent.org Fri Sep 6 12:50:52 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 6 Sep 2019 13:50:52 +0100 (BST) Subject: [tc] Campaign Question: Treating the Problem, not just the symptoms- Burnout, No Polling, etc In-Reply-To: References: Message-ID: On Wed, 4 Sep 2019, Kendall Nelson wrote: > To kind of rephrase for everyone (Chris, correct me if I am wrong or not > getting all of it): What do you think we, as a community, can do about the > lack of candidates for roles like TC or PTL? How can we adjust, as a > community, to make our governance structures fit better? In what wasy can > we address and prevent burnout? That's a useful and sufficient summary. Thanks for extracting things out like this. Very good email hygiene. > - Longer release cycle. I know this has come up a dozen or more times (and > I'm a little sorry for bringing it up again), but I think OpenStack has > stabilized enough that 6 months is a little short and now may finally be > the time to lengthen things a bit. 9 months might be a better fit. With > longer release cycles comes more time to get work done as well which I've > heard has been a complaint of more part time contributors when this > discussion has come up in the past. As I said in my other message in this thread, in response to Doug, I think that this might be counterproductive in terms of easing burnout. It's probably good for providing more time to get some things done, but it aggravates the pressure and risks involved in trying to predict what matters. Since I've already said enough over on that message, I'll not add more here. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From doug at stackhpc.com Fri Sep 6 13:05:05 2019 From: doug at stackhpc.com (Doug Szumski) Date: Fri, 6 Sep 2019 14:05:05 +0100 Subject: [monasca] Review Priority flag In-Reply-To: <1ba4b730-2e6f-4b09-5fb1-1f20ef4b7970@suse.com> References: <1ba4b730-2e6f-4b09-5fb1-1f20ef4b7970@suse.com> Message-ID: On 06/09/2019 13:00, Witek Bedyk wrote: > Hello Team, > > now that we have the possibility to label our code changes with > Review-Priority I would like to start the discussion about formalizing > its usage. Right now every core reviewer can set its value, but we > haven't defined any rules on how to use it. > > I suggest a process of proposing the changes which should be > prioritized in weekly team meeting or in the mailing list. Any core > reviewer, preferably from a different company, could confirm such > proposed change by setting RV +1. > > I hope it's simple enough. What do you think? Sounds good to me. > > Another topic is exposing the prioritized code changes to the > reviewers. We can list them using the filter [1]. We could add the > link to this filter to Contributor Guide [2] and Priorities page [3]. > We should also go through the list every week in the meeting. Any > other ideas? I think that is a good plan. Perhaps we could have a more general Gerrit dashboard which also includes a Review Priority section. Something like this perhaps? 
http://www.tinyurl.com/monasca > > Thanks > Witek > > > [1] > https://review.opendev.org/#/q/(projects:openstack/monasca+OR+project:openstack/python-monascaclient)+label:Review-Priority+is:open > [2] https://docs.openstack.org/monasca-api/latest/contributor/index.html > [3] http://specs.openstack.org/openstack/monasca-specs/ > From fungi at yuggoth.org Fri Sep 6 13:10:54 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 6 Sep 2019 13:10:54 +0000 Subject: [tc] Campaign Question: Treating the Problem, not just the symptoms- Burnout, No Polling, etc In-Reply-To: References: <99048F8B-EE87-4A3A-A689-8F05F8EBDBBE@doughellmann.com> Message-ID: <20190906131053.rofnz7zeoudctoif@yuggoth.org> On 2019-09-06 13:27:53 +0100 (+0100), Chris Dent wrote: [...] > Most people work on OpenStack because it is their job or is closely > related to their job. But because it is "open source" and "a > community" and "collaborative" doing what people ask for and helping > others achieve what they need is but one small piece of the > motivation and action calculus. [...] I don't know that this captures my motivation, at least. I chose my job so that I could assist in the creation and maintenance of OpenStack and similar free software, not the other way around. Maybe I'm in a minority within the community, but I suspect there are more folks than just me who feel the same. > I don't believe that OpenStack is (currently and through the peak > of its history) open source. It is collaborative inter-enterprise > development that allows large companies to have a market in which > they make a load of money. [...] Yes, making these tasks easier and less expensive for "large companies" like CERN, SKA, MOC, and all manner of other research and educational organizations is what causes this work to be worthwhile for me. I like that what we do provides a positive contribution to the sum total knowledge of our species. I personally think this aspect can't be overstated. What we do matters beyond the desire and ability for some self-serving commercial enterprises to take and give nothing back. The nature of modern business is exploitation, but it's not as if the commons of free software is the only resource they're exploiting to their own gain. I'm all for the people of our planet coming together to fight injustice or abuse by corporate and political powers, but the problem extends far, far beyond our community and pretending we can solve such abuse and oppression within OpenStack without looking at the bigger picture is short-sighted and naive. I'm disappointed that you don't think the software you're making is open source. I think the software I'm making is open source, and if I didn't I wouldn't be here. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cdent+os at anticdent.org Fri Sep 6 13:15:06 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 6 Sep 2019 14:15:06 +0100 (BST) Subject: [tc] Campaign Question: Treating the Problem, not just the symptoms- Burnout, No Polling, etc In-Reply-To: <20190906131053.rofnz7zeoudctoif@yuggoth.org> References: <99048F8B-EE87-4A3A-A689-8F05F8EBDBBE@doughellmann.com> <20190906131053.rofnz7zeoudctoif@yuggoth.org> Message-ID: On Fri, 6 Sep 2019, Jeremy Stanley wrote: > I'm disappointed that you don't think the software you're making is > open source. 
I think the software I'm making is open source, and if > I didn't I wouldn't be here. I'm disappointed too, I hope I've made that obvious. As I said at the start: everyone has different experiences. You and I have different ones, that is _good_. The reason I have stayed in OpenStack is because I've wanted to make it more "open source". So I think we're working to similar ends, but starting from different points. Again: that is _good_. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From corey.bryant at canonical.com Fri Sep 6 13:30:20 2019 From: corey.bryant at canonical.com (Corey Bryant) Date: Fri, 6 Sep 2019 09:30:20 -0400 Subject: [goal][python3] Train unit tests weekly update (goal-1) Message-ID: This is the goal-1 weekly update for the "Update Python 3 test runtimes for Train" goal [1]. There is only 1 week remaining for completion of Train community goals [2]. == How can you help? == If your project has failing tests please take a look and help fix. Python 3.7 unit tests will be self-testing in Zuul. Failing patches: https://review.openstack.org/#/q/topic:python3-train +status:open+(+label:Verified-1+OR+label:Verified-2+) If your project has patches with successful tests please help get them merged. Open patches needing reviews: https://review.openstack.org/#/q/topic:python3 -train+is:open Patch automation scripts needing review: https://review.opendev.org/#/c/666934 == Ongoing Work == We're down to 8 projects with failing tests that need fixing, and 3 projects with successful tests that should be ready to merge. I've been working to contact PTLs for these projects to help finish them up. Thank you to all who have contributed their time and fixes to enable patches to land! == Completed Work == All patches have been submitted to all applicable projects for this goal. Merged patches: https://review.openstack.org/#/q/topic:python3-train +is:merged == What's the Goal? == To ensure (in the Train cycle) that all official OpenStack repositories with Python 3 unit tests are exclusively using the 'openstack-python3-train-jobs' Zuul template or one of its variants (e.g. 'openstack-python3-train-jobs-neutron') to run unit tests, and that tests are passing. This will ensure that all official projects are running py36 and py37 unit tests in Train. For complete details please see [1]. == Reference Material == [1] Goal description: https://governance.openstack.org/tc/goals/train/ python3-updates.html [2] Train release schedule: https://releases.openstack.org/train /schedule.html (see R-5 for "Train Community Goals Completed") Storyboard: https://storyboard.openstack.org/#!/story/2005924 Porting to Python 3.7: https://docs.python.org/3/whatsnew/3.7.html#porting-to-python-3-7 Python Update Process: https://opendev.org/openstack/governance/src/branch/master/resolutions/20181024-python-update-process.rst Train runtimes: https://opendev.org/openstack/governance/src/branch/master/reference/runtimes/ train.rst Thanks, Corey -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mnaser at vexxhost.com Fri Sep 6 13:34:41 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 6 Sep 2019 09:34:41 -0400 Subject: [placement][ptl][tc] Call for Placement PTL position In-Reply-To: <1567771216.28660.0@smtp.office365.com> References: <16d0205c0b1.b18639584545.7154839133743907603@ghanshyammann.com> <1567771216.28660.0@smtp.office365.com> Message-ID: On Fri, Sep 6, 2019 at 8:04 AM Balázs Gibizer wrote: > > > > On Thu, Sep 5, 2019 at 6:20 PM, Chris Dent wrote: > > On Fri, 6 Sep 2019, Ghanshyam Mann wrote: > > With Ussuri Cycle PTL election completed, we left with Placement project as leaderless[1]. In today TC meeting[2], we discussed the few possibilities and decided to reach out to the eligible candidates to serve the PTL position. > > Thanks for being concerned about this, but it would have been useful if you included me (as the current PTL) and the rest of the Placement team in the discussion or at least confirmed plans with me before starting this seek-volunteers process. There are a few open questions we are still trying to resolve before we should jump to any decisions: * We are currently waiting to see if Tetsuro is available (he's been away for a few days). If he is, he'll be great, but we don't know yet if he can or wants to. * We've started, informally, discussing the option of pioneering the option of leaderless projects within Placement (we pioneer many other things there, may as well add that to the list) but without more discussion from the whole team (which can't happen because we don't have quorum of the actively involved people) and the TC it's premature. Leaderless would essentially mean consensually designating release liaisons and similar roles but no specific PTL. I think this is easily possible in a small in number, focused, and small feature-queue [1] group like Placement but would much harder in one of the larger groups like Nova. * We have several reluctant people who _can_ do it, but don't want to. Once we've explored the other ideas here and any others we can come up with, we can dredge one of those people up as a stand-in PTL, keeping the slot open. Because of [1] there's not much on the agenda for U. > > > I guess I'm one of the reluctant people. I think technically I can do it but I don't want to commit to work when I don't see that I will have enough time to do it well. For me this is all about priorities and the amount of work I'm already commited to at the moment. Still I'm open to get tasks delegated to me, like doing the project update in Sanghai. If it's okay with you, would you like to share what are some of the priorities and work that you feel is placed on a PTL which makes you reluctant? PS, by no means I am trying to push for you to be PTL if you're not currently interested, but I want to hear some of the community thoughts about this (and feel free to reply privately) > Cheers, > gibi > > Since the Placement team is not planning to have an active presence at the PTG, nor planning to have much of a pre-PTG (as no one has stepped up with any feature ideas) we have some days or even weeks before it matters who the next PTL (if any) is, so if possible, let's not rush this. [1] It's been a design goal of mine from the start that Placement would quickly reach a position of stability and maturity that I liked to call "being done". By the end of Train we are expecting to be feature complete for any features that have been actively discussed in the recent past [2]. 
The main tasks in U will be responding to bug fixes and requests-for-explanations for the features that already exist (because people asked for them) but are not being used yet and getting the osc-placement client caught up. [2] The biggest thing that has been discussed as a "maybe we should do" for which there are no immediate plans is "resource provider sharding" or "one placement, many clouds". That's a thing we imagined people might ask for, but haven't yet, so there's little point doing it. > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From jon at csail.mit.edu Fri Sep 6 13:37:59 2019 From: jon at csail.mit.edu (Jonathan Proulx) Date: Fri, 6 Sep 2019 09:37:59 -0400 Subject: [i18n][tc] The future of I18n In-Reply-To: <0ffa02d3-fef5-8fc3-1925-5c663b6c967d@openstack.org> References: <0ffa02d3-fef5-8fc3-1925-5c663b6c967d@openstack.org> Message-ID: <20190906133759.obgszlvqexgam5n3@csail.mit.edu> I'd be lead by how the people working in the space want to organize, but... Seems like SIG would be a good fit as I18N is extremely cross project, presumably everything has text output even if it's just logging and not enduser focused. my 2¢ -Jon On Fri, Sep 06, 2019 at 11:36:38AM +0200, Thierry Carrez wrote: :Hi! : :The I18n project team had no PTL candidates for Ussuri, so the TC needs to :decide what to do with it. It just happens that Ian kindly volunteered to be :an election official, and therefore could not technically run for I18n PTL. :So if Ian is still up for taking it, we could just go and appoint him. : :That said, I18n evolved a lot, to the point where it might fit the SIG :profile better than the project team profile. : :As a reminder, project teams are responsible for producing :OpenStack-the-software, and since they are all integral in the production of :the software that we want to release on a time-based schedule, they come with :a number of mandatory tasks (like designating a PTL every 6 months). : :SIGs (special interest groups) are OpenStack teams that work on a mission :that is not directly producing a piece of the OpenStack release. SIG members :are bound by their mission, rather than by a specific OpenStack release :deliverable. There is no mandatory task, as it is OK if the group goes :dormant for a while. : :The I18n team regroups translators, with an interest of making OpenStack (in :general, not just the software) more accessible to non-English speakers. They :currently try to translate the OpenStack user survey, the Horizon dashboard :messages, and key documentation. : :It could still continue as a project team (since it still produces Horizon :translations), but I'd argue that at this point it is not what defines them. :The fact that they are translators is what defines them, which IMHO makes :them fit the SIG profile better than the project team profile. They can :totally continue proposing translation files for Horizon as a I18n SIG, so :there would be no technical difference. Just less mandatory tasks for the :team. : :Thoughts ? : :-- :Thierry Carrez (ttx) : From jfrancoa at redhat.com Fri Sep 6 13:38:17 2019 From: jfrancoa at redhat.com (Jose Luis Franco Arza) Date: Fri, 6 Sep 2019 15:38:17 +0200 Subject: [tripleo] Proposing Damien Ciabrini as core on TripleO/HA In-Reply-To: References: <20190830122850.GA5248@holtby> Message-ID: +1 with my eyes closed! 
I though he was already core. On Tue, Sep 3, 2019 at 3:55 PM Carter, Kevin wrote: > +1 > > -- > > Kevin Carter > IRC: Cloudnull > > > On Fri, Aug 30, 2019 at 7:33 AM Michele Baldessari > wrote: > >> Hi all, >> >> Damien (dciabrin on IRC) has always been very active in all HA things in >> TripleO and I think it is overdue for him to have core rights on this >> topic. So I'd like to propose to give him core permissions on any >> HA-related code in TripleO. >> >> Please vote here and in a week or two we can then act on this. >> >> Thanks, >> -- >> Michele Baldessari >> C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Fri Sep 6 14:34:25 2019 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 6 Sep 2019 09:34:25 -0500 Subject: [oslo] Nova causes MySQL timeouts In-Reply-To: References: Message-ID: <02fa1644-34a1-0fdf-9048-a668ae86de76@nemebean.com> Tagging with oslo as this sounds related to oslo.db. On 9/5/19 7:37 PM, Albert Braden wrote: > After more googling it appears that max_pool_size is a maximum limit on > the number of connections that can stay open, and max_overflow is a > maximum limit on the number of connections that can be temporarily > opened when the pool has been consumed. It looks like the defaults are 5 > and 10 which would keep 5 connections open all the time and allow 10 temp. > > Do I need to set max_pool_size to 0 and max_overflow to the number of > connections that I want to allow? Is that a reasonable and correct > configuration? Intuitively that doesn't seem right, to have a pool size > of 0, but if the "pool" is a group of connections that will remain open > until they time out, then maybe 0 is correct? I don't think so. According to [0] and [1], a pool_size of 0 means unlimited. You could probably set it to 1 to minimize the number of connections kept open, but then I expect you'll have overhead from having to re-open connections frequently. It sounds like you could use a NullPool to eliminate connection pooling entirely, but I don't think we support that in oslo.db. Based on the error message you're seeing, I would take a look at connection_recycle_time[2]. I seem to recall seeing a comment that the recycle time needs to be shorter than any of the timeouts in the path between the service and the db (so anything like haproxy or mysql itself). Shortening that, or lengthening intervening timeouts, might get rid of these disconnection messages. 0: https://docs.openstack.org/oslo.db/stein/reference/opts.html#database.max_pool_size 1: https://docs.sqlalchemy.org/en/13/core/pooling.html#sqlalchemy.pool.QueuePool.__init__ 2: https://docs.openstack.org/oslo.db/stein/reference/opts.html#database.connection_recycle_time > > *From:* Albert Braden > *Sent:* Wednesday, September 4, 2019 10:19 AM > *To:* openstack-discuss at lists.openstack.org > *Cc:* Gaëtan Trellu > *Subject:* RE: Nova causes MySQL timeouts > > We’re not setting max_pool_size nor max_overflow option presently. I > googled around and found this document: > > https://docs.openstack.org/keystone/stein/configuration/config-options.html > > > Document says: > > [api_database] > > connection_recycle_time = 3600               (Integer) Timeout before > idle SQL connections are reaped. > > max_overflow = None                                   (Integer) If set, > use this value for max_overflow with SQLAlchemy. 
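To make that a bit more concrete, here is a rough sketch of the [database] options being discussed, with purely illustrative values (the option names are the oslo.db ones linked above; the numbers are assumptions, not recommendations, and need to fit your own haproxy/mysql timeouts):

    [database]
    # Pooled connections stay open between requests; keep the pool small.
    max_pool_size = 5
    # Temporary connections allowed beyond the pool during bursts; they
    # are closed again once checked back in.
    max_overflow = 10
    # Recycle pooled connections after this many seconds. This should be
    # shorter than any idle timeout between the service and MySQL
    # (haproxy timeouts, mysql wait_timeout, etc.), otherwise you keep
    # seeing the "Aborted connection ... Got timeout" messages.
    connection_recycle_time = 600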
> > max_pool_size = None                                  (Integer) Maximum > number of SQL connections to keep open in a pool. > > [database] > > connection_recycle_time = 3600               (Integer) Timeout before > idle SQL connections are reaped. > > min_pool_size = 1                                            (Integer) > Minimum number of SQL connections to keep open in a pool. > > max_overflow = 50                                          (Integer) If > set, use this value for max_overflow with SQLAlchemy. > > max_pool_size = None                                  (Integer) Maximum > number of SQL connections to keep open in a pool. > > If min_pool_size is >0, would that cause at least 1 connection to remain > open until it times out? What are the recommended values for these, to > allow unused connections to close before they time out? Is > “min_pool_size = 0” an acceptable setting? > > My settings are default: > > [api_database]: > > #connection_recycle_time = 3600 > > #max_overflow = > > #max_pool_size = > > [database]: > > #connection_recycle_time = 3600 > > #min_pool_size = 1 > > #max_overflow = 50 > > #max_pool_size = 5 > > It’s not obvious what max_overflow does. Where can I find a document > that explains more about these settings? > > *From:* Gaëtan Trellu > > *Sent:* Tuesday, September 3, 2019 1:37 PM > *To:* Albert Braden > > *Cc:* openstack-discuss at lists.openstack.org > > *Subject:* Re: Nova causes MySQL timeouts > > Hi Albert, > > It is a configuration issue, have a look to max_pool_size > and max_overflow options under [database] section. > > Keep in mind than more workers you will have more connections will be > opened on the database. > > Gaetan (goldyfruit) > > On Sep 3, 2019 4:31 PM, Albert Braden > wrote: > > It looks like nova is keeping mysql connections open until they time > out. How are others responding to this issue? Do you just ignore the > mysql errors, or is it possible to change configuration so that nova > closes and reopens connections before they time out? Or is there a > way to stop mysql from logging these aborted connections without > hiding real issues? > > Aborted connection 10726 to db: 'nova' user: 'nova' host: 'asdf' > (Got timeout reading communication packets) > From jungleboyj at gmail.com Fri Sep 6 14:37:39 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Fri, 6 Sep 2019 09:37:39 -0500 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> Message-ID: <38c6f889-2b82-1a59-f00d-699fb04df6f3@gmail.com> > > - reduce the number of TC members from 13 to 9 (I actually proposed > that 6 months ago at the PTG but that was not as popular then). A > group of 9 is a good trade-off between the difficulty to get enough > people to do project stewardship and the need to get a diverse set of > opinions on governance decision. > I am in support of this.  Seems appropriate to support the level of participation in OpenStack. > - allow "PTL" role to be multi-headed, so that it is less of a > superhuman and spreading the load becomes more natural. We would not > elect/choose a single person, but a ticket with one or more names on > it. From a governance perspective, we still need a clear contact point > and a "bucket stops here" voice. 
But in practice we could (1) contact > all heads when we contact "the PTL", and (2) consider that as long as > there is no dissent between the heads, it is "the PTL voice". To > actually make it work in practice I'd advise to keep the number of > heads low (think 1-3). > No concerns with this given that it has been something we have unofficially done in Cinder for years.  I couldn't have gotten things done the way I did without help from Sean McGinnis.  Now that the torch has been passed to Brian I plan to continue to support him there. >> [...] >> We drastically need to change the expectations we place on ourselves >> in terms of velocity. > > In terms of results, train cycle activity (as represented by merged > commits/day) is globally down 9.6% compared to Stein. Only considering > "core" projects, that's down 3.8%. > > So maybe we still have the same expectations, but we are definitely > reducing our velocity... Would you say we need to better align our > expectations with our actual speed? Or that we should reduce our > expectations further, to drive velocity further down? > In the case of Cinder our velocity is slowing due to reduced review activity.  That is soon going to be a big problem and we have had little luck at encouraging to do more reviews again.  I have also found that we have had to get better at saying 'No' to things.  This is in the interest of avoiding burnout.  There is a lot we want to do but if it isn't a priority for someone it simply isn't going to get done.  Prioritizing the work has become increasingly important. As has been touched upon in other discussions, I think we have a culture where it is difficult for them to say no to things.  It is great that people care about OpenStack and want to make things happen but it can't be at the cost of people  burning out.  To some extent we need to slow velocity.  If corporations don't step up to start helping out then we must be doing what needs to get done. From jungleboyj at gmail.com Fri Sep 6 14:38:28 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Fri, 6 Sep 2019 09:38:28 -0500 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> Message-ID: <4a9a49e2-911c-5e55-d7d3-4115859a000c@gmail.com> On 9/5/2019 5:04 AM, Chris Dent wrote: > On Thu, 5 Sep 2019, Thierry Carrez wrote: > >> So maybe we still have the same expectations, but we are definitely >> reducing our velocity... Would you say we need to better align our >> expectations with our actual speed? Or that we should reduce our >> expectations further, to drive velocity further down? > > We should slow down enough that the vendors and enterprises start to > suffer. If they never notice, then it's clear we're trying too hard > and can chill out. > I actually agree with this!  :-)  We need them to start helping us prioritize. 
From jungleboyj at gmail.com Fri Sep 6 14:42:34 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Fri, 6 Sep 2019 09:42:34 -0500 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <16d00fc100d.104db03dc225299.3598510759501367665@ghanshyammann.com> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> <16d00fc100d.104db03dc225299.3598510759501367665@ghanshyammann.com> Message-ID: On 9/5/2019 5:33 AM, Ghanshyam Mann wrote: > ---- On Thu, 05 Sep 2019 19:04:39 +0900 Chris Dent wrote ---- > > On Thu, 5 Sep 2019, Thierry Carrez wrote: > > > > > So maybe we still have the same expectations, but we are definitely reducing > > > our velocity... Would you say we need to better align our expectations with > > > our actual speed? Or that we should reduce our expectations further, to drive > > > velocity further down? > > > > We should slow down enough that the vendors and enterprises start to > > suffer. If they never notice, then it's clear we're trying too hard > > and can chill out. > > +1 on this but instead of slow down and make vendors suffer we need the proper > way to notify or make them understand about the future cutoff effect on OpenStack > as software. I know we have been trying every possible way but I am sure there are > much more managerial steps can be taken. I expect Board of Director to come forward > on this as an accountable entity. TC should raise this as high priority issue to them (in meetings, > joint leadership meeting etc). Agreed.  I think that partially falls into the community's hands itself.  I have spent years advocating for OpenStack in my company and have started having success.  The problem is that it is a slow process.  I am hoping that others are doing the same and we will start seeing a reverse in the trend.  Otherwise, I think we need help from the foundation leadership to reach out and start re-engaging companies. > > I am sure this has been brought up before, can we make OpenStack membership company > to have a minimum set of developers to maintain upstream. With the current situation, I think > it make sense to ask them to contribute manpower also along with membership fee. But again > this is more of BoD and foundation area. I had this thought, but it is quite likely that then I would not be able to contribute anymore.  :-(  So, I fear that could be a slippery slope for many people. > > I agree on ttx proposal to reduce the TC number to 9 or 7, I do not think this will make any > difference or slow down on any of the TC activity. 9 or 7 members are enough in TC. > > As long as we get PTL(even without an election) we are in a good position. This time only > 7 leaderless projects (6 actually with Cyborg PTL missing to propose nomination in election repo and only on ML) are > not so bad number. But yes this is a sign of taking action before it goes into more worst situation. 
> > -gmann > > > > > -- > > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > > freenode: cdent > > From openstack at nemebean.com Fri Sep 6 15:01:29 2019 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 6 Sep 2019 10:01:29 -0500 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <0bbb4765-3e57-b7dc-11ef-50ed639ea5c0@openstack.org> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> <20190905223137.i72s7n4tibkgypqf@bishop> <0bbb4765-3e57-b7dc-11ef-50ed639ea5c0@openstack.org> Message-ID: <918e56aa-d9c3-88e9-22fc-c7da12990f97@nemebean.com> On 9/6/19 3:48 AM, Thierry Carrez wrote: > Nate Johnston wrote: >> On Thu, Sep 05, 2019 at 11:59:22AM +0200, Thierry Carrez wrote: >>> - allow "PTL" role to be multi-headed, so that it is less of a >>> superhuman >>> and spreading the load becomes more natural. We would not elect/choose a >>> single person, but a ticket with one or more names on it. From a >>> governance >>> perspective, we still need a clear contact point and a "bucket stops >>> here" >>> voice. But in practice we could (1) contact all heads when we contact >>> "the >>> PTL", and (2) consider that as long as there is no dissent between the >>> heads, it is "the PTL voice". To actually make it work in practice I'd >>> advise to keep the number of heads low (think 1-3). >> >> I think there was already an effort to allow the PTL to shed some of >> their >> duties, in the form of the Cross Project Liaisons [1] project.  I >> thought that >> was a great way for more junior members of the community to get >> involved with >> stewardship and be recognized for that contribution, and perhaps be >> mentored up >> as they take a bit of load off the PTL.  I think if we expand the >> roles to >> include more of the functions that PTLs feel the need to do >> themselves, then by >> doing so we (of necessity) document those parts of the job so that >> others can >> handle them.  And perhaps projects can cooperate and pool resources - for >> example, the same person who is a liaison for Neutron to Oslo could >> probably be >> on the look out for issues of interest to Octavia as well, and so on. > > Cross-project liaisons are a form of delegation. So yes, PTLs already > can (and probably should) delegate most of their duties. And in a lot of > teams it already works like that. But we have noticed that it can be > harder to delegate tasks than share tasks. Basically, once someone is > the PTL, it is tempting to just have them do all the PTL stuff (since > they will do it by default if nobody steps up). > > That makes the job a bit intimidating, and it is sometimes hard to find > candidates to fill it. If it's clear from day 0 that two or three people > will share the tasks and be collectively responsible for those tasks to > be covered, it might be less intimidating (easier to find 2 x 50% than 1 > x 100% ?). > Just to play a bit of devil's advocate here, in many cases if a problem is everyone's problem then it becomes no one's problem because everyone assumes someone else will deal with it. This is why it usually works better to ask a specific person to volunteer for something than to put out a broad call for *someone* to volunteer. That said, maybe this ties into what Doug wrote earlier that if something doesn't get done maybe it wasn't that important in the first place. 
I'm not entirely sure I agree with that, but if it's going to be our philosophy going forward then this might be a non-issue. I'll also say that for me specifically, having the PTL title gives me a lever to use downstream. People don't generally question you spending time on a project you're leading. The same isn't necessarily true of being a core to whom PTL duties were delegated. Again, I'm not necessarily opposed to this, I just want to point out some potential drawbacks from my perspective. From rosmaita.fossdev at gmail.com Fri Sep 6 15:21:30 2019 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 6 Sep 2019 11:21:30 -0400 Subject: [cinder][ops] Shanghai Forum - Cinder Topic Planning Message-ID: <64b7e9be-0210-129f-fe1e-c4455e7c944d@gmail.com> The Cinder Community would like to get in on some Forum action in Shanghai, but to do that we need to have some topics to propose. You don't have to actively be working on Cinder to propose a topic, and you don't have to be present to win. The point of the Forum sessions is to get feedback from operators and users about the current state of the software, get some ideas about what should be in the next release, and have some strategic discussion about The Future. So whether you can attend or not, if you have some ideas you'd like us to discuss, feel free to propose a topic: https://etherpad.openstack.org/p/cinder-shanghai-forum-proposals The deadline for proposals to the Foundation is 20 September, so if you could get your idea down on the etherpad before the Cinder weekly meeting on Wednesday 18 September 16:00 UTC, that will give the Cinder team time to look them over. thanks! brian From mriedemos at gmail.com Fri Sep 6 15:46:15 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 6 Sep 2019 10:46:15 -0500 Subject: [nova] Deprecating the XenAPI driver Message-ID: After discussing this at the Train PTG and logging a quality warning in the driver 3 months ago [1] with no response, the nova team is now formally deprecating the XenAPI driver [2]. There has been no working third party CI for the driver for at least a release and no clear maintainers of the driver in nova anymore. If you're using the driver in production, please speak up now otherwise technically the driver could be removed as early as the Ussuri release. [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006744.html [2] https://review.opendev.org/#/c/680732/ -- Thanks, Matt From francois.scheurer at everyware.ch Fri Sep 6 15:59:29 2019 From: francois.scheurer at everyware.ch (Francois Scheurer) Date: Fri, 6 Sep 2019 17:59:29 +0200 Subject: [keystone] cannot use 'openstack trust list' without admin role Message-ID: <29841c08-d255-2ee4-346a-bcce04b7f4ad@everyware.ch> Dear Keystone Experts, I have an issue with the openstack client in stage (using Rocky), using a user 'fsc' without 'admin' role and with password auth. 'openstack trust create/show' works. 'openstack trust list' is denied. But keystone policy.json says:     "identity:create_trust": "user_id:%(trust.trustor_user_id)s",     "identity:list_trusts": "",     "identity:list_roles_for_trust": "",     "identity:get_role_for_trust": "",     "identity:delete_trust": "",     "identity:get_trust": "", So "openstack list trusts" is always allowed. In keystone log (I replaced the uid's by names in the ouput below) I see that 'identity:list_trusts()' was actually granted but just after that a_*admin_required()*_ is getting checked and fails... I wonder why... 
There is also a flag*is_admin_project=True* in the rbac creds for some reason... Any clue? Many thanks in advance! Cheers Francois #openstack --os-cloud stage-fsc trust create --project fscproject --role creator fsc fsc #=> fail because of the names and policy rules, but using uid's it works openstack --os-cloud stage-fsc trust create --project aeac4b07d8b144178c43c65f29fa9dac --role 085180eeaf354426b01908cca8e82792 3e9b1a4fe95048a3b98fb5abebd44f6c 3e9b1a4fe95048a3b98fb5abebd44f6c +--------------------+----------------------------------+ | Field              | Value                            | +--------------------+----------------------------------+ | deleted_at         | None                             | | expires_at         | None                             | | id                 | e74bcdf125e049c69c2e0ab1b182df5b | | impersonation      | False                            | | project_id         | fscproject | | redelegation_count | 0                                | | remaining_uses     | None                             | | roles              | creator                          | | trustee_user_id    | fsc | | trustor_user_id    | fsc | +--------------------+----------------------------------+ openstack --os-cloud stage-fsc trust show e74bcdf125e049c69c2e0ab1b182df5b +--------------------+----------------------------------+ | Field              | Value                            | +--------------------+----------------------------------+ | deleted_at         | None                             | | expires_at         | None                             | | id                 | e74bcdf125e049c69c2e0ab1b182df5b | | impersonation      | False                            | | project_id         | fscproject | | redelegation_count | 0                                | | remaining_uses     | None                             | | roles              | creator                          | | trustee_user_id    | fsc | | trustor_user_id    | fsc | +--------------------+----------------------------------+ #this fails: openstack --os-cloud stage-fsc trust list *You are not authorized to perform the requested action: admin_required. (HTTP 403)* -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From sean.mcginnis at gmx.com Fri Sep 6 16:14:56 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 6 Sep 2019 11:14:56 -0500 Subject: [release][heat][nova][openstackclient][openstacksdk][nova] Pending final library releases Message-ID: <20190906161456.GA14051@sm-workstation> Hey everyone, Yesterday was the deadline for any non-client library releases. In order to make sure we include any unreleased commits, and to make sure those commits are able to make it onto a stable/train branch, releases were proposed by the Release Management team for any libs that looked like they needed it. We've had most of them +1'd by the team's PTL or release liaison and have been able to process those requests. There are still a few with no responses from the teams though. https://review.opendev.org/#/q/topic:train-3+status:open If you are a PTL or release liaison for one of the teams tagged in the subject line, please take a look and either +1 if things are ready, or if there are any last minute critical fixes going in, update the patches with a more appropriate commit hash to tag from. 
For any release requests not ack'd by the teams, we will need to proceed with these by Monday morning to make sure updates make it out and any dependency issues are flushed out before the client lib and other upcoming freezes. I will also submit patches to create the stable/train branch for any of these libs that have not already done so. If there are any questions or concerns about any of this, please reach out here or in the #openstack-release channel and we'll do what we can to help out. Thanks! Sean From dale at bewley.net Fri Sep 6 16:44:16 2019 From: dale at bewley.net (Dale Bewley) Date: Fri, 6 Sep 2019 09:44:16 -0700 Subject: [Horizon] Paging and Angular... In-Reply-To: References: Message-ID: As an uninformed user I would just like to say Horizon is seen _as_ Openstack to new users and I appreciate ever effort to improve it. Without discounting past work, the Horizon experience leaves much to be desired and it colors the perspective on the entire platform. On Fri, Sep 6, 2019 at 05:01 Radomir Dopieralski wrote: > > > On Fri, Sep 6, 2019 at 11:33 AM Marek Lyčka > wrote: > >> Hi, >> >> > we need people familiar with Angular and Horizon's ways of using >> Angular (which seem to be very >> > non-standard) that would be willing to write and review code. >> Unfortunately the people who originally >> > introduced Angular in Horizon and designed how it is used are no longer >> interested in contributing, >> > and there don't seem to be any new people able to handle this. >> >> I've been working with Horizon's Angular for quite some time and don't >> mind keeping at it, but >> it's useless unless I can get my code merged, hence my original message. >> >> As far as attracting new developers goes, I think that removing some >> barriers to entry couldn't hurt - >> seeing commits simply lost to time being one of them. I can see it as >> being fairly demoralizing. >> > > We can't review your patches, because we don't understand them. For the > patches to be merged, we > need more than one person, so that they can review each other's patches. > > >> > Personally, I think that a better long-time strategy would be to remove >> all >> > Angular-based views from Horizon, and focus on maintaining one language >> and one set of tools. >> >> Removing AngularJS wouldn't remove JavaScript from horizon. We'd still be >> left with a home-brewish >> framework (which is buggy as is). I don't think removing js completely is >> realistic either: we'd lose >> functionality and worsen user experience. I think that keeping Angular is >> the better alternative: >> >> 1) A lot of work has already been put into Angularization, solving many >> problems >> 2) Unlike legacy js, Angular code is covered by automated tests >> 3) Arguably, improvments are, on average, easier to add to Angular than >> pure js implementations >> >> Whatever reservations there may be about the current implementation can >> be identified and addressed, but >> all in all, I think removing it at this point would be counterproductive. >> > > JavaScript is fine. We all know how to write and how to review JavaScript > code, and there doesn't > have to be much of it — Horizon is not the kind of tool that has to bee > all shiny and animated. It's a tool > for getting work done. AngularJS is a problem, because you can't tell what > the code does just by looking > at the code, and so you can neither review nor fix it. 
> > There has been a lot of work put into mixing Horizon with Angular, but I > disagree that it has solved problems, > and in fact it has introduced a lot of regressions. Just to take a simple > example, the translations are currently > broken for en.AU and en.GB languages, and date display is not localized. > And nobody cares. > > We had automated tests before Angular. There weren't many of them, because > we also didn't have much JavaScript code. > If I remember correctly, those tests were ripped out during the > Angularization. > > Arguably, improvements are, on average, impossible to add to Angular, > because the code makes no sense on its own. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Fri Sep 6 17:16:48 2019 From: smooney at redhat.com (Sean Mooney) Date: Fri, 06 Sep 2019 18:16:48 +0100 Subject: [tc][neutron] Supported Linux distributions and their kernel In-Reply-To: References: Message-ID: <2b3f791f1ac861d3a9cb8c73b438c51044fa6ed1.camel@redhat.com> On Thu, 2019-09-05 at 15:10 +0000, Adrian Chiris wrote: > Greetings, > I was wondering what is the guideline in regards to which kernels are supported by OpenStack in the various Linux > distributions. > > Looking at [1], Taking for example latest CentOS major (7): > Every "minor" version is released with a different kernel version, > the oldest being released in 2014 (CentOS 7.0, kernel 3.10.0-123) and the newest released in 2018 (CentOS 7.6, kernel > 3.10.0-957)
For what it's worth, once CentOS 8 is out (which should be soonish) I hope that will not be an issue, so in Ussuri the bug can be fixed without regard for CentOS 7, at least on master.
> > While I understand that OpenStack projects are expected to support all CentOS 7.x releases.
I actually don't know if it's reasonable to expect all CentOS 7.x versions to be supported. Downstream we do not support OSP on all RHEL 7 versions for all releases. After a certain point, to receive new z-stream versions of OSP you need to move to a later RHEL release. If you continue to run the old x.y.z version on older RHEL it's supported, but the latest .z is only tested/supported on the latest RHEL 7.x. Expecting all OpenStack projects to support the kernel from 7.0 is probably an unrealistic requirement; it would mean 10 years of support for that kernel, or at least until we EOL it. We don't test with old kernels in the gate as far as I know, but I also don't know if we have a policy for this.
> Does the same applies for the kernels they originally came out with? > > The reason I'm asking, is because I was working on doing some cleanup in neutron [2] for a workaround introduced > because of an old kernel bug, > It is unclear to me if it is safe to introduce this change. > > [1] https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions > [2] https://review.opendev.org/#/c/677095/ > > Thanks, > Adrian.
> From smooney at redhat.com Fri Sep 6 17:29:18 2019 From: smooney at redhat.com (Sean Mooney) Date: Fri, 06 Sep 2019 18:29:18 +0100 Subject: Re: [tc][neutron] Supported Linux distributions and their kernel In-Reply-To: <5e84afec-ca3b-4a9f-969a-69f4f748c893@www.fastmail.com> References: <5e84afec-ca3b-4a9f-969a-69f4f748c893@www.fastmail.com> Message-ID: On Thu, 2019-09-05 at 08:20 -0700, Clark Boylan wrote: > On Thu, Sep 5, 2019, at 8:10 AM, Adrian Chiris wrote: > > > > Greetings, > > > > I was wondering what is the guideline in regards to which kernels are > > supported by OpenStack in the various Linux distributions. > > > > > > Looking at [1], Taking for example latest CentOS major (7): > > > > Every "minor" version is released with a different kernel version, > > > > the oldest being released in 2014 (CentOS 7.0, kernel 3.10.0-123) and > > the newest released in 2018 (CentOS 7.6, kernel 3.10.0-957) > > > > > > While I understand that OpenStack projects are expected to support all > > CentOS 7.x releases. > > It is my understanding that CentOS (and RHEL?) only support the current/latest point release of their distro [3].
Yes, each Red Hat OpenStack Platform (OSP) z-stream (x.y.z) release is tested and packaged only for the latest point release of RHEL. We support customers on older .z releases if they are also on the version of RHEL it was tested with, but we do expect customers to upgrade to the new RHEL minor version when they update their OpenStack to a newer .z release. This is because we update QEMU and other products as part of the RHEL minor release, and we need to ensure that Nova works with the QEMU and KVM it was tested with.
> We only test against that current point release. I don't expect we can be expected to support a distro release which > the distro doesn't even support.
Yeah, I think that is sane. Also, if we are being totally honest, old kernels have bugs, many of which are security bugs, so anyone running the original kernel any OS shipped with is deploying a vulnerable cloud.
> > All that to say I would only worry about the most recent point release.
We might want to update the doc to that effect. It currently says latest CentOS major https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions perhaps it should be the latest CentOS point/minor release, since that is what we actually test with. Also, CentOS 8 has apparently completed the RC work, so hopefully we will see a release soon: https://wiki.centos.org/About/Building_8 I have no info on CentOS, but for Ussuri I hope we will have moved to CentOS 8 and Python 3 only.
> > > > > Does the same applies for the kernels they _originally_ came out with? > > > > > > The reason I'm asking, is because I was working on doing some cleanup > > in neutron [2] for a workaround introduced because of an old kernel bug, > > > > It is unclear to me if it is safe to introduce this change. > > > > > > [1] > > https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions > > > > [2] https://review.opendev.org/#/c/677095/ > [3] https://wiki.centos.org/FAQ/General#head-dcca41e9a3d5ac4c6d900a991990fd11930867d6 > From ianyrchoi at gmail.com Fri Sep 6 17:37:23 2019 From: ianyrchoi at gmail.com (Ian Y.
Choi) Date: Sat, 7 Sep 2019 02:37:23 +0900 Subject: [i18n][tc] The future of I18n In-Reply-To: <20190906133759.obgszlvqexgam5n3@csail.mit.edu> References: <0ffa02d3-fef5-8fc3-1925-5c663b6c967d@openstack.org> <20190906133759.obgszlvqexgam5n3@csail.mit.edu> Message-ID: <817c9cf8-ca12-146b-af49-3f4345402888@gmail.com> Hello, First of all, thanks a lot for raising into this thread. Please see inline: Jonathan Proulx wrote on 9/6/2019 10:37 PM: > I'd be lead by how the people working in the space want to organize, but... > > Seems like SIG would be a good fit as I18N is extremely cross project, > presumably everything has text output even if it's just logging and > not enduser focused. > > my 2¢ > -Jon > > On Fri, Sep 06, 2019 at 11:36:38AM +0200, Thierry Carrez wrote: > :Hi! > : > :The I18n project team had no PTL candidates for Ussuri, so the TC needs to > :decide what to do with it. It just happens that Ian kindly volunteered to be > :an election official, and therefore could not technically run for I18n PTL. > :So if Ian is still up for taking it, we could just go and appoint him. I love I18n, and I could not imagine OpenStack world without I18n - I would like to take I18n PTL role for Ussuari cycle if there is no objection. > : > :That said, I18n evolved a lot, to the point where it might fit the SIG > :profile better than the project team profile. > : > :As a reminder, project teams are responsible for producing > :OpenStack-the-software, and since they are all integral in the production of > :the software that we want to release on a time-based schedule, they come with > :a number of mandatory tasks (like designating a PTL every 6 months). > : > :SIGs (special interest groups) are OpenStack teams that work on a mission > :that is not directly producing a piece of the OpenStack release. SIG members > :are bound by their mission, rather than by a specific OpenStack release > :deliverable. There is no mandatory task, as it is OK if the group goes > :dormant for a while. > : > :The I18n team regroups translators, with an interest of making OpenStack (in > :general, not just the software) more accessible to non-English speakers. They > :currently try to translate the OpenStack user survey, the Horizon dashboard > :messages, and key documentation. > : > :It could still continue as a project team (since it still produces Horizon > :translations), but I'd argue that at this point it is not what defines them. > :The fact that they are translators is what defines them, which IMHO makes > :them fit the SIG profile better than the project team profile. They can > :totally continue proposing translation files for Horizon as a I18n SIG, so > :there would be no technical difference. Just less mandatory tasks for the > :team. > : > :Thoughts ? First of all, I would like to more clarify the scope of which artifacts I18n team deals with. Regarding translation contributions to upstream official projects, I18n team started with 1) user-facing strings (e.g., dashboards), 2) non-user-facing strings (e.g., log messages) and 3) openstack-manuals documentation. The second one is not active after no real support for maintaining to translate log messages, and the third one is now expanded to some of project documents which there are the demand of translation like openstack-helm, openstack-ansible, and horizon ([2] includes the list of Docs team repos, project documents for operators and part of SIG). 
Based on this background, I can say that the I18n team is currently involved in a total of 19 dashboard projects [3] and 6 official project document repositories. Although the number of translated words is not larger than in previous cycles [4], the portion related to upstream official projects is not small. Since the I18n team's release activities [5] are rather stable, I think that from this perspective having the I18n team become a SIG makes sense, but please kindly consider the following: - Translators who have contributed translations to official OpenStack projects are currently regarded as ATC and APC of the I18n project. It would be great if the OpenStack TC and the official project teams regarded those translation contributions as ATC and APC of the corresponding official projects, if the I18n team becomes a SIG. - Zanata (the translation platform, instance: translate.openstack.org) is no longer maintained as an open source project. The I18n team wanted to move to a translation platform other than Zanata [6], but the current I18n team members don't have enough technical bandwidth to do that (FYI: the Fedora team just started to migrate from Zanata to Weblate [7] - not easy work, and their Council agreed to spend a non-trivial budget on it). Regardless of whether the I18n team is an official team or a SIG, such a migration to a new translation platform needs support from the current governance (TC, UC, Foundation, Board of Directors, ...). - My other brief understanding of the difference between an official team and a SIG, from the perspective of the Four Opens, is that SIGs and working groups seem to have some flexibility to use non-open-source tools for communication. For example, as PTL I currently encourage all translators to come to the tools official teams use, such as IRC, mailing lists, and Launchpad (note: the I18n team has not migrated from Launchpad to Storyboard) - I like to use them, and I strongly believe that using such tools helps assure that the team follows the Four Opens well. But sometimes I run into a different reality - local language teams prefer their own communication channels. I might need to think more about how the I18n team as a SIG would communicate with its members, but I think the team members may want to look into how to better communicate with the language teams (e.g., using Hangouts, Slack, and so on, based on their feedback), and to try communication tools which are more comfortable for translators who have little development background. Note that I have not discussed the details with the team members - I am still open in my thinking, would like to hear more opinions from the team members, and originally wanted to expand this discussion from that perspective during the upcoming PTG in Shanghai with more Chinese translators. And dear OpenStackers, including I18n team members & translators: please kindly share your sincere thoughts.
With many thanks, /Ian [1] http://lists.openstack.org/pipermail/openstack-dev/2017-March/114191.html [2] https://translate.openstack.org/version-group/view/doc-resources/projects [3] https://translate.openstack.org/version-group/view/Train-dashboard-translation/projects [4] http://lists.openstack.org/pipermail/openstack-discuss/2019-July/007989.html [5] https://docs.openstack.org/i18n/latest/release_management.html [6] https://blueprints.launchpad.net/openstack-i18n/+spec/renew-translation-platform [7] https://fedoraproject.org/wiki/L10N_Move_to_Weblate > : > :-- > :Thierry Carrez (ttx) > : From zbitter at redhat.com Fri Sep 6 17:44:47 2019 From: zbitter at redhat.com (Zane Bitter) Date: Fri, 6 Sep 2019 13:44:47 -0400 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <20190905113636.qwxa4fjxnju7tmip@barron.net> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> <16d00fc100d.104db03dc225299.3598510759501367665@ghanshyammann.com> <20190905113636.qwxa4fjxnju7tmip@barron.net> Message-ID: <7cdee1c1-3541-17cf-5a9b-05a6f872c134@redhat.com> On 5/09/19 7:36 AM, Tom Barron wrote: > On 05/09/19 19:33 +0900, Ghanshyam Mann wrote: >> ---- On Thu, 05 Sep 2019 19:04:39 +0900 Chris Dent >> wrote ---- >> > On Thu, 5 Sep 2019, Thierry Carrez wrote: >> > >> > > So maybe we still have the same expectations, but we are >> definitely reducing >> > > our velocity... Would you say we need to better align our >> expectations with >> > > our actual speed? Or that we should reduce our expectations >> further, to drive >> > > velocity further down? >> > >> > We should slow down enough that the vendors and enterprises start to >> > suffer. If they never notice, then it's clear we're trying too hard >> > and can chill out. >> >> +1 on this but instead of slow down and make vendors suffer we need >> the proper >> way to notify or make them understand about the future cutoff effect >> on OpenStack >> as software. I know we have been trying every possible way but I am >> sure there are >> much more managerial steps can be taken.  I expect Board of Director >> to come forward >> on this as an accountable entity. TC should raise this as high >> priority issue to them (in meetings, >> joint leadership meeting etc). >> >> I am sure this has been brought up before, can we make OpenStack >> membership company >> to have a minimum set of developers to maintain upstream. With the >> current situation, I think >> it make sense to ask them to contribute manpower also along with >> membership fee.  But again >> this is more of BoD and foundation area. > > +1 > > IIUC Gold Membership in the Foundation provides voting privileges at a > cost of $50-200K/year and Corporate Sponsorship provides these plus > various marketing benefits at a cost of $10-25K/year.  So far as I can > tell there is not a requirement of a commitment of contributors and > maintainers with the exception of the (currently closed) Platinum > Membership, which costs $500K/year and requires at least 2 FTE > equivalents contributing to OpenStack. Even this incredibly minimal requirement was famously not met for years by one platinum member, and a (different) platinum member was accepted without ever having contributed upstream in the past or apparently ever intending to in the future. What I'm saying is that if this a the mechanism we want to use to drive contributions, I can tell you now how it's gonna work out. 
The question we should be asking ourselves is why companies see value in being sponsors of the foundation but not in contributing upstream, and how we convince them of the value of the latter. One initiative the TC started on this front is this: https://governance.openstack.org/tc/reference/upstream-investment-opportunities/index.html (BTW we could use help in converting the outdated Help Most Wanted entries to this format. Volunteers welcome.) cheers, Zane. > In general I see requirements > for annual cash expenditure to the Foundation, as for membership in any > joint commercial enterprise, but little that ensures the availability of > skilled labor for ongoing maintenance of our projects. > > -- Tom Barron > >> >> I agree on ttx proposal to reduce the TC number to 9 or 7, I do not >> think this will make any >> difference or slow down on any of the TC activity. 9 or 7 members are >> enough in TC. >> >> As long as we get PTL(even without an election) we are in a good >> position. This time only >> 7 leaderless projects (6 actually with Cyborg PTL missing to propose >> nomination in election repo and only on ML) are >> not so bad number. But yes this is a sign of taking action before it >> goes into more worst situation. >> >> -gmann >> >> > >> > -- >> > Chris Dent                       ٩◔̯◔۶           https://anticdent.org/ >> > freenode: cdent >> >> > From ianyrchoi at gmail.com Fri Sep 6 17:55:23 2019 From: ianyrchoi at gmail.com (Ian Y. Choi) Date: Sat, 7 Sep 2019 02:55:23 +0900 Subject: [all][tc] PDF Community Goal Update In-Reply-To: References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> <9430fe6726ca53328abb588b21c1823055cdaca3.camel@redhat.com> <160D24A7-DE66-45DA-BBB8-AFD916D00004@doughellmann.com> <7a4f103390cb2b9e4ec107b94f2e1e0dd2c500f0.camel@redhat.com> <6C2701AC-6305-45C6-A62D-7FF0B43DD0F2@doughellmann.com> <878ebb98-3204-7ce3-8ca6-b516ae7921a2@gmail.com> Message-ID: <900c9ec1-fade-05ee-cdff-4f6e9edb00e8@gmail.com> Akihiro Motoki wrote on 9/4/2019 11:06 PM: > On Wed, Sep 4, 2019 at 12:43 AM Ian Y. Choi wrote: >> Akihiro Motoki wrote on 9/3/2019 11:12 PM: >>> On Tue, Sep 3, 2019 at 10:18 PM Doug Hellmann wrote: >>>> >>>>> On Sep 3, 2019, at 9:04 AM, Stephen Finucane wrote: >>>>> >>>>> On Tue, 2019-09-03 at 08:42 -0400, Doug Hellmann wrote: >>>>>>> On Sep 3, 2019, at 5:54 AM, Stephen Finucane wrote: >>>>>>> >>>>>>> On Mon, 2019-09-02 at 15:31 -0400, Doug Hellmann wrote: >>>>>>>>> On Sep 2, 2019, at 3:07 AM, Akihiro Motoki wrote: >>>>>>> [snip] >>>>>>> >>>>>>>>> When the goal is defined the docs team thought the doc gate job can >>>>>>>>> handle the PDF build >>>>>>>>> without extra tox env and zuul job configuration. However, during >>>>>>>>> implementing the zuul job support >>>>>>>>> it turns out at least a new tox env or an extra zuul job configuration >>>>>>>>> is required in each project >>>>>>>>> to make the docs job fail when PDF build failure is detected. As a >>>>>>>>> result, we changes the approach >>>>>>>>> and the new tox target is now required in each project repo. >>>>>>>> The whole point of structuring the goal the way we did was that we do >>>>>>>> not want to update every single repo this cycle so we could roll out >>>>>>>> PDF building transparently. We said we would allow the job to pass >>>>>>>> even if the PDF build failed, because this was phase 1 of making all >>>>>>>> of this work. >>>>>>>> >>>>>>>> The plan was to 1. extend the current job to make PDF building >>>>>>>> optional; 2. 
examine the results to see how many repos need >>>>>>>> significant work; 3. add a feature flag via a setting somewhere in >>>>>>>> the repo to control whether the job fails if PDFs cannot be built. >>>>>>>> That avoids a second doc job running in parallel, and still allows us >>>>>>>> to roll out the PDF build requirement over time when we have enough >>>>>>>> information to do so. >>>>>>> Unfortunately when we tried to implement this we found that virtually >>>>>>> every project we looked at required _some_ amount of tweaks just to >>>>>>> build, let alone look sensible. This was certainly true of the big >>>>>>> service projects (nova, neutron, cinder, ...) which all ran afoul of a >>>>>>> bug [1] in the Sphinx LaTeX builder. Given the issues with previous >>>>>>> approach, such as the inability to easily reproduce locally and the >>>>>>> general "hackiness" of the thing, along with the fact that we now had >>>>>>> to submit changes against projects anyway, a collective decision was >>>>>>> made [2] to drop that plan and persue the 'pdfdocs' tox target >>>>>>> approach. >>>>>> We wanted to avoid making a bunch of the same changes to projects just to >>>>>> add the PDF building instructions. If the *content* of a project’s documentation >>>>>> needs work, that’s different. We should make those changes. >>>>> I thought the only reason to hack the docs venv in a Zuul job was to >>>>> avoid having to mass patch projects to add tox configuration? As such, >>>>> if we're already having to mass patch projects because they don't build >>>>> otherwise, why wouldn't we add the tox configuration? Was there another >>>>> reason to pursue the zuul-only approach that I've forgotten about/never >>>>> knew? >>>> I expected to need to fix formatting (even up to the point of commenting things >>>> out, like we found with the giant config sample files). Those are content changes, >>>> and would be mostly unique across projects. >>>> >>>> I wanted to avoid a large number of roughly identical changes to add tox environments, >>>> zuul jobs, etc. because having a lot of patches like that across all the repos makes >>>> extra work for small gain, especially when we can get the same results with a small >>>> number of changes in one repository. >>>> >>>> The approach we discussed was to update the docs job to run some extra steps using >>>> scripts that lived in the openstackdocstheme repository. That shouldn’t require >>>> adding any extra software or otherwise modifying the tox environments. Did that approach >>>> not work out? >>> We explored ways only to update the docs job to run extra commands to >>> build PDF docs, >>> but there is one problem that the job cannot know whether PDF build is >>> ready or not. >>> If we ignore an error from PDF build, it works for repositories which >>> are not ready for PDF build, >>> but we cannot prevent PDF build failure again for repositories ready >>> for PDF build >>> As my project team hat of neutron team, we don't want to have PDF >>> build failure again >>> once the PDF build starts to work. >>> To avoid this, stephenfin, asettle, AJaeger and I agree that some flag >>> to determine if the PDF build >>> is ready or not is needed. As of now, "pdf-docs" tox env is used as the flag. >>> Another way we considered is a variable in openstack-tox-docs job, but >>> we cannot pass a variable >>> to zuul project template, so we didn't use this way. >>> If there is a more efficient way, I am happy to use it. 
>>> >>> Thanks, >>> Akihiro >>> >> Hello, >> >> >> Sorry for joining in this thread late, but to I first would like to try >> to figure out the current status regarding the current discussion on the >> thread: >> >> - openstackdocstheme has docstheme-build-pdf script [1] >> >> - build-pdf-docs Zuul job in openstack-zuul-jobs pre-installs all >> required packages [2] >> >> - Current guidance for project repos is that 1) is to just add to >> latex_documents settings [3] and add pdf-docs environment for trigger [4] >> >> - Project repos additionally need to change more for successful PDF >> builds like adding more options on conf.py [5] and changing more on rst >> files to explictly options like [6] . > Thanks Ian. > > Your understanding on the current situations is correct. Good summary, thanks. > >> >> Now my questions from comments are: >> >> a) How about checking an option in somewhere else like .zuul.yaml or >> using grep in docs env part, not doing grep to check the existance of >> "pdf-docs" tox env [3]? > I am not sure how your suggestion works more efficiently than the > current pdf-docs tox env approach. > We explored an option to introduce a flag variable to the > openstack-tox-docs job but we use > a zuul project-template which wraps openstack-tox-docs job and another job. > The current zuul project-template does not accept a variable and > projects who want to specify > a flag explicitly needs to copy the content of the project-template. > Considering this we gave up this route. > Regarding "using grep in docs env part", I haven't understood what you think, > but it looks similar to the current approach. > >> b) Can we call docstheme-build-pdf in openstackdocstheme [1] instead of >> direct Sphinx & make commands in "pdf-docs" environment [4]? > It can, but I am not sure whether we need to update the current > proposed patches. > The only advantage of using docstheme-build-pdf is that we don't need to change > project repositories when we update the command lines in future, but > it sounds a matter of taste. > >> c) Ultimately, would executing docstheme-build-pdf command in >> build-pdf-docs Zuul job with another kind of trigger like bullet a) be >> feasible and/or be implemented by the end of this cycle? > We can, but again it is a matter of taste to me > and most important thing is how we handle a flag to enable PDF build. > > Thanks, > Akihiro Thank you for sharing your opinion, and I agree that it can be the matter of taste. I wanted to emphasize that the changes to project repositories are rather so small, and have tried to explore which ways can more minimize the changes to project repositories (e.g., without any change on tox.ini in project repositories). By the way, is it possible to centralize such flags into a common repository such as a repo related with build-pdf-docs Zuul job like [1] and [2] (I took examples from I18n team)? I am asking since I also agree that it would be the best if the same changes to all repos' tox.ini and other files could be minimized. If it isn't possible, than I think there would be no alternatives. Note that my asking assumes that current PDF community goal well reflects what I previously discussed with Doug [3]. 
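For readers who have not looked at the proposed patches yet, the per-repo change being discussed here is quite small - roughly a tox environment of the following shape (just a sketch based on the examples in the goal etherpad, assuming the repo already has a [testenv:docs] environment whose deps pull in sphinx and openstackdocstheme; the exact deps, basepython and sphinx options vary per project):

    [testenv:pdf-docs]
    basepython = python3
    deps = {[testenv:docs]deps}
    whitelist_externals =
      make
    commands =
      sphinx-build -W -b latex doc/source doc/build/pdf
      make -C doc/build/pdf

So the question above is really about whether even this small, repeated block could live in one central place instead of being copied into every repository.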
With many thanks, /Ian [1] https://review.opendev.org/#/c/525028/1/zuul.d/projects.yaml [2] https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/roles/prepare-zanata-client/files/common_translation_update.sh#L39 [3] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134609.html > >> >> >> With many thanks, >> >> >> /Ian >> >> >> [1] https://review.opendev.org/#/c/665163/ >> >> [2] >> https://review.opendev.org/#/c/664555/25/roles/prepare-build-pdf-docs/tasks/main.yaml at 3 >> >> [3] https://review.opendev.org/#/c/678393/4/doc/source/conf.py >> >> [4] https://review.opendev.org/#/c/678393/4/tox.ini >> >> [5] https://review.opendev.org/#/c/678747/1/doc/source/conf.py at 270 >> >> [6] https://review.opendev.org/#/c/678747/1/doc/source/index.rst at 13 >> From farida.elzanaty at mail.mcgill.ca Fri Sep 6 18:16:39 2019 From: farida.elzanaty at mail.mcgill.ca (Farida El Zanaty) Date: Fri, 6 Sep 2019 18:16:39 +0000 Subject: [all][research] Survey for Openstack developers =) Message-ID: Hi!I am Farida from McGill University. I am trying to learn more about code reviews in the Openstack community, as I have been studying Openstack projects for a while. Please help me understand your perspective on design discussions during code reviews by filling up this 10-minute survey: https://forms.gle/Hhn191f6cxF5hVgG8 Survey participants will also be entered into a raffle for a $50 Amazon gift card. A little bit of context: Under the supervision of Prof. Shane McIntosh, my research aims to study design discussions that occur between developers during code reviews. Last year, we published a study about the frequency and types of such discussions that occur in OpenStack Nova and Neutron (http://rebels.ece.mcgill.ca/papers/esem2018_elzanaty.pdf).We are reaching out to Openstack developers to better understand their perspectives on design discussions during code reviews. Survey: https://forms.gle/Hhn191f6cxF5hVgG8Thanks for your time, Farida El-Zanaty =) -------------- next part -------------- An HTML attachment was scrubbed... URL: From tpb at dyncloud.net Fri Sep 6 18:20:20 2019 From: tpb at dyncloud.net (Tom Barron) Date: Fri, 6 Sep 2019 14:20:20 -0400 Subject: [tc] Campaign Question: Treating the Problem, not just the symptoms- Burnout, No Polling, etc In-Reply-To: <20190906131053.rofnz7zeoudctoif@yuggoth.org> References: <99048F8B-EE87-4A3A-A689-8F05F8EBDBBE@doughellmann.com> <20190906131053.rofnz7zeoudctoif@yuggoth.org> Message-ID: <20190906182020.7gr2uckoqdj7ycwn@barron.net> On 06/09/19 13:10 +0000, Jeremy Stanley wrote: >On 2019-09-06 13:27:53 +0100 (+0100), Chris Dent wrote: >[...] >> Most people work on OpenStack because it is their job or is closely >> related to their job. But because it is "open source" and "a >> community" and "collaborative" doing what people ask for and helping >> others achieve what they need is but one small piece of the >> motivation and action calculus. >[...] > >I don't know that this captures my motivation, at least. I chose my >job so that I could assist in the creation and maintenance of >OpenStack and similar free software, not the other way around. Maybe >I'm in a minority within the community, but I suspect there are more >folks than just me who feel the same. > Me too, though I'm fortunate enough to have an employer who genuinely values open source work, including building and fostering open source communities. 
I've worked for others where open source work was always only an instrumental goal, not an end in itself -- indeed I think it was sometimes considered a necessary evil. -- Tom From miguel at mlavalle.com Fri Sep 6 18:24:44 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Fri, 6 Sep 2019 13:24:44 -0500 Subject: [infra][neutron] Requesting help to remove feature branches Message-ID: Dear Infra Team, We have decided to remove from the Neutron repo the following feature branches: feature/graphql feature/lbaasv2 feature/pecan feature/qos We don't need to preserve any state from these branches. In the case of the first one, no code was merged. The work in the other three branches is already merged into master. Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Fri Sep 6 18:35:29 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 06 Sep 2019 11:35:29 -0700 Subject: [infra][neutron] Requesting help to remove feature branches In-Reply-To: References: Message-ID: On Fri, Sep 6, 2019, at 11:24 AM, Miguel Lavalle wrote: > Dear Infra Team, > > We have decided to remove from the Neutron repo the following feature branches: > > feature/graphql > feature/lbaasv2 > feature/pecan > feature/qos > > We don't need to preserve any state from these branches. In the case of > the first one, no code was merged. The work in the other three branches > is already merged into master. I forgot to mention that we need to close all the open changes proposed to these branches before we can delete the branch in Gerrit. feature/graphql appears to have some open changes, but the others are fine. Can you abandon those changes then we can delete the branch. Thanks, Clark From zbitter at redhat.com Fri Sep 6 19:26:10 2019 From: zbitter at redhat.com (Zane Bitter) Date: Fri, 6 Sep 2019 15:26:10 -0400 Subject: [heat] Resource handling in Heat stacks In-Reply-To: <0f3f727581dc68f4f1ab26ed2ef47686811dbe07.camel@florath.net> References: <0f3f727581dc68f4f1ab26ed2ef47686811dbe07.camel@florath.net> Message-ID: On 4/09/19 3:51 AM, Andreas Florath wrote: > Many thanks! Works like a charm! > > Suggestion: document default value of 'delete_on_termination'. 😉 Patches accepted 😉 > Kind regards > > Andre > > > On Wed, 2019-09-04 at 12:04 +0530, Rabi Mishra wrote: >> On Wed, Sep 4, 2019 at 11:41 AM Andreas Florath > > wrote: >>> Hello! >>> >>> >>> Can please anybody tell me, if all resources which are created >>> within a Heat stack belong to the stack in the way that >>> all the resources are freed / deleted when the stack is deleted? >>> >>> IMHO all resources which are created during the initial creation or >>> update of a stack, even if they are ephemeral or only internal >>> created, must be deleted when the stack is deleted by OpenStack Heat >>> itself. Correct? >>> >>> My question might see obvious, but I did not find an explicit hint in >>> the documentation stating this. >>> >>> >>> The reason for my question: I have a Heat template which uses two >>> images to create a server (using block_device_mapping_v2). Every time >>> I run an 'openstack stack create' and 'openstack stack delete' cycle >>> one ephemeral volume is left over / gets not deleted. >>> >> I think it's due toe delete_on_termination[1] property of bdmv2 which >> is interpreted as 'False', if not specified. You can set it to 'True' >> to delete the volumes along with server. I've not checked if it's >> different from how nova api behaves though. 
>> >> [1] >> https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Nova::Server-prop-block_device_mapping_v2-*-delete_on_termination >> >>> For me this sounds like a problem in OpenStack (Heat). >>> (It looks that this is at least similar to >>> https://review.opendev.org/#/c/341008/ >>> which never made it into master.) >>> >>> >>> Kind regards >>> >>> Andre >>> >>> >>> >> >> From fungi at yuggoth.org Fri Sep 6 19:54:33 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 6 Sep 2019 19:54:33 +0000 Subject: [tc][neutron] Supported Linux distributions and their kernel In-Reply-To: References: <5e84afec-ca3b-4a9f-969a-69f4f748c893@www.fastmail.com> Message-ID: <20190906195433.iv5xtixqbsvdwd4h@yuggoth.org> On 2019-09-06 18:29:18 +0100 (+0100), Sean Mooney wrote: [...] > for Ussuri i hope we will have move to centos 8 and python 3 only. [...] In that case, you'll probably want to keep an eye on https://review.opendev.org/679798 as things unfold. Right now, though, it looks likely you'll get your wish. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From amy at demarco.com Fri Sep 6 20:17:53 2019 From: amy at demarco.com (Amy Marrich) Date: Fri, 6 Sep 2019 15:17:53 -0500 Subject: [Horizon] Help making custom theme - resend as still looking:) In-Reply-To: References: Message-ID: Just thought I'd resend this out to see if someone could help:) For the Grace Hopper Conference's Open Source Day we're doing a Horizon based workshop for OpenStack (running Devstack Pike). The end goal is to have the attendee teams create their own OpenStack theme supporting a humanitarian effort of their choice in a few hours. I've tried modifying the material theme thinking it would be the easiest route to go but that might not be the best way to go about this.:) I've been getting some assistance from e0ne in the Horizon channel and my logo now shows up on the login page, and I had already gotten the SITE_BRAND attributes and the theme itself to show up after changing the local_settings.py. If anyone has some tips or a tutorial somewhere it would be greatly appreciated and I will gladly put together a tutorial for the repo when done. Thanks! Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Fri Sep 6 20:44:26 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Fri, 6 Sep 2019 15:44:26 -0500 Subject: [infra][neutron] Requesting help to remove feature branches In-Reply-To: References: Message-ID: Hi Clark, Thanks for the quick respond. Done: https://review.opendev.org/#/q/project:openstack/neutron+branch:feature/graphql Regards On Fri, Sep 6, 2019 at 1:36 PM Clark Boylan wrote: > On Fri, Sep 6, 2019, at 11:24 AM, Miguel Lavalle wrote: > > Dear Infra Team, > > > > We have decided to remove from the Neutron repo the following feature > branches: > > > > feature/graphql > > feature/lbaasv2 > > feature/pecan > > feature/qos > > > > We don't need to preserve any state from these branches. In the case of > > the first one, no code was merged. The work in the other three branches > > is already merged into master. > > I forgot to mention that we need to close all the open changes proposed to > these branches before we can delete the branch in Gerrit. feature/graphql > appears to have some open changes, but the others are fine. 
> > Can you abandon those changes then we can delete the branch. > > Thanks, > Clark > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Sep 6 22:57:51 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 6 Sep 2019 22:57:51 +0000 Subject: [infra][neutron] Requesting help to remove feature branches In-Reply-To: References: Message-ID: <20190906225750.tgbnwz6wu5gdfezo@yuggoth.org> On 2019-09-06 13:24:44 -0500 (-0500), Miguel Lavalle wrote: > We have decided to remove from the Neutron repo the following feature > branches: > > feature/graphql > feature/lbaasv2 > feature/pecan > feature/qos > > We don't need to preserve any state from these branches. In the case of the > first one, no code was merged. The work in the other three branches is > already merged into master. Sanity-checking feature/lbaasv2, `git merge-base` between it and master suggest cc400e2 is the closest common ancestor. There are 4 potentially substantive commits on feature/lbaasv2 past that point which do not seem to appear in the master branch history: 7147389 Implement Jinja templates for haproxy config cfa4a86 Tests for extension, db and plugin for LBaaS V2 02c01a3 Plugin/DB additions for version 2 of LBaaS API 4ed8862 New extension for version 2 of LBaaS API Do you happen to know whether these need to be preserved (or what happened with them)? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cboylan at sapwetik.org Fri Sep 6 23:17:54 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 06 Sep 2019 16:17:54 -0700 Subject: [infra][neutron] Requesting help to remove feature branches In-Reply-To: References: Message-ID: On Fri, Sep 6, 2019, at 1:44 PM, Miguel Lavalle wrote: > Hi Clark, > > Thanks for the quick respond. Done: > https://review.opendev.org/#/q/project:openstack/neutron+branch:feature/graphql > And now the branches are gone. For historical papertrails here are the branches and their heads: feature/graphql ab371ffcc69ab93d1046932297f7029bf7f184e5 feature/lbaasv2 0eed081ad9ef516f0207f179643781aad5b85b8e feature/pecan f747c35b1c1b8371de399c8239699cb89455c6e6 feature/qos 28a4c0aa69924e28f2e302acb9a8313fb310d5bf Clark From melwittt at gmail.com Fri Sep 6 23:59:48 2019 From: melwittt at gmail.com (melanie witt) Date: Fri, 6 Sep 2019 16:59:48 -0700 Subject: [nova][telemetry] does Telemetry still use the Nova server usage audit log API? Message-ID: <2c376a85-1dc0-03cc-bdb4-ba8b9f4edb70@gmail.com> Howdy all, TL;DR: I have a question, does the Telemetry service (or any other service) still make use of the server usage audit log API in Nova [1]? Recently I was investigating customer issues where the nova.task_log database table grows infinitely and is never cleaned up [2]. I asked about it today in #openstack-nova [3] and Matt Riedemann explained that the API is toggled via config option [4] and that the Telemetry service is/was the consumer of the API. I found through code inspection that there are no methods for deleting nova.task_log records and am trying to determine what is the best way forward for handling cleanup. Matt mentioned the possibility of deprecating the server usage audit log API altogether, which we might be able to do if no one is using it anymore. 
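For reference, the toggle in question is this nova.conf setting - a sketch of the relevant options, see the config reference linked at [4] for the authoritative defaults:

    [DEFAULT]
    # off by default; when enabled, a periodic task on each compute host
    # writes task_log records for every audit period
    instance_usage_audit = True
    # audit period: hour, day, month or year (month is the default)
    instance_usage_audit_period = month

And "manual cleanup" today effectively means operators deleting old rows from the task_log table in each cell database themselves, since there is no tooling for it.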
So, I was thinking: * If Telemetry is no longer using the server usage audit log API, we deprecate it in Nova and notify deployment tools to stop setting [DEFAULT]/instance_usage_audit = true to prevent further creation of nova.task_log records and recommend manual cleanup by users or * If Telemetry is still using the server usage audit log API, we create a new 'nova-manage db purge_task_log --before ' (or similar) command that will hard delete nova.task_log records before a specified date or all if --before is not specified Can anyone shed any light on whether Telemetry, or any other service, still uses the server usage audit log API in Nova? Would we be able to deprecate it? If we can't, what do you think of the nova-manage command idea? I would appreciate hearing your thoughts about it. Cheers, -melanie [1] https://docs.openstack.org/api-ref/compute/#server-usage-audit-log-os-instance-usage-audit-log [2] https://bugzilla.redhat.com/show_bug.cgi?id=1726256 [3] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2019-09-06.log.html#t2019-09-06T14:10:38 [4] https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.instance_usage_audit From corvus at inaugust.com Fri Sep 6 16:19:58 2019 From: corvus at inaugust.com (James E. Blair) Date: Fri, 06 Sep 2019 09:19:58 -0700 Subject: [all][tc] PDF Community Goal Update In-Reply-To: (Akihiro Motoki's message of "Tue, 3 Sep 2019 23:12:30 +0900") References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> <9430fe6726ca53328abb588b21c1823055cdaca3.camel@redhat.com> <160D24A7-DE66-45DA-BBB8-AFD916D00004@doughellmann.com> <7a4f103390cb2b9e4ec107b94f2e1e0dd2c500f0.camel@redhat.com> <6C2701AC-6305-45C6-A62D-7FF0B43DD0F2@doughellmann.com> Message-ID: <874l1p9z2p.fsf@meyer.lemoncheese.net> Akihiro Motoki writes: > To avoid this, stephenfin, asettle, AJaeger and I agree that some flag > to determine if the PDF build > is ready or not is needed. As of now, "pdf-docs" tox env is used as the flag. > Another way we considered is a variable in openstack-tox-docs job, but > we cannot pass a variable > to zuul project template, so we didn't use this way. You can't pass a variable to a project-template, but you can set a variable on a project: https://zuul-ci.org/docs/zuul/user/config.html#attr-project.vars -Jim From anmar.salih1 at gmail.com Sat Sep 7 02:51:23 2019 From: anmar.salih1 at gmail.com (Anmar Salih) Date: Fri, 6 Sep 2019 22:51:23 -0400 Subject: Need help trigger aodh alarm - All the steps I went through by details. In-Reply-To: References: Message-ID: Dear Lingxian, I cloud't find aodh log file. Also I did 'ps -ef | grep aodh' and here is the response. Best regards. On Thu, Sep 5, 2019 at 6:56 PM Lingxian Kong wrote: > Hi Anmar, > > Please see my comments in-line below. > > - > Best regards, > Lingxian Kong > Catalyst Cloud > > > On Wed, Sep 4, 2019 at 2:51 PM Anmar Salih wrote: > >> Hi Lingxian, >> >> First of all, I would like to apologize because the email is pretty long. >> I listed all the steps I went through just to make sure that I did >> everything correctly. >> > > No need to apologize, more information is always helpful to solve the > problem. > > >> 4- Creating the webhook for the function by: openstack webhook create >> --function 07edc434-a4b8-424a-8d3a-af253aa31bf8 . Here is a screen >> capture for the response. I tried to copy >> and paste the webhook_url " >> http://192.168.1.155:7070/v1/webhooks/c5608648-bd73-478f-b452-ad1eabf93328/invoke" into >> my internet browser, so I got 404 not found. 
I am not sure if this is >> normal response or I have something wrong here. >> > > Like Gaetan said, the webhook is supposed to be invoked by http POST. > > 9- Checking aodh alarm history by aodh alarm-history show >> ea16edb9-2000-471b-88e5-46f54208995e -f yaml . So I got this response >> >> >> 10- Last step is to check the function execution in qinling and here is >> the response . (empty bracket). I am not sure >> what is the problem. >> > > Yeah, from the output of alarm history, the alarm is not triggered, as a > result, there won't be execution created by the webhook. > > Seems like the aodh-listener didn't receive the message or the message was > ignored. Could you paste the aodh-listener log but make sure: > > 1. `debug = True` in /etc/aodh/aodh.conf > 2. Trigger the python script again > >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From anmar.salih1 at gmail.com Sat Sep 7 02:52:08 2019 From: anmar.salih1 at gmail.com (Anmar Salih) Date: Fri, 6 Sep 2019 22:52:08 -0400 Subject: Need help trigger aodh alarm - All the steps I went through by details. In-Reply-To: <56d312af-2b52-49e4-afbc-446162cb08c8@email.android.com> References: <56d312af-2b52-49e4-afbc-446162cb08c8@email.android.com> Message-ID: Dear Gaetan. Thank you for responding to my question. I will check it out. Best regards. Anmar Salih On Wed, Sep 4, 2019 at 9:27 AM Gaëtan Trellu wrote: > Hi Anmar, > > About your 404 when try to use the webhook, I guess this is because you > are not doing a POST but a GET. > > Try to use curl or postman with POST method to validate your webhook. > > Gaetan (goldyfruit) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From colleen at gazlene.net Sat Sep 7 04:57:15 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 06 Sep 2019 21:57:15 -0700 Subject: [keystone] Pre-feature-freeze update Message-ID: <5bac8e07-f63a-4bf9-82c1-fa0470a14b0e@www.fastmail.com> I won't be writing a team report since I'm still figuring out which way is up after a week in the desert, but with feature freeze next week I wanted to give a status update on all the in-flight work that is due next week: * System Scope and Default Roles All documented scope[1] and role[2] migrations are in progress. Some are closer to done than others. Since enforce_scope cannot be set to true in keystone.conf until all of them are completed, and since leaving deprecation warnings in the logs for more than two cycles is a very undesirable operator experience, it's essential we complete these by next week. * Application Credential Access Rules This implementation[3] for keystone has been completed for months but the last few patches in the stack are lacking reviews. Client support has been proposed but with the final client release happening next week we will likely not land it until next cycle. * Resource Options and Immutable Resources Resource options[4] and immutable resources[5] are intertwined and the finishing touches are still being applied. Hope to have this completed early next week. * Federated Attributes for Users Support for federated attributes for users[6] is passing CI but needs reviews, it's unclear to me how much has changed since those patches were originally proposed two years ago so it's unfortunate that we're only left with a week to look at them. * Expiring Group Membership There is only a partial implementation proposed for expiring group membership[7] and neither patch is passing CI. 
This seems to have effectively missed the feature proposal freeze deadline which was a few weeks ago and will not likely make it in this cycle. * CI After skimming the meeting logs I saw the unit test timeout problem was discussed and a temporary workaround was proposed[8]. This sounded like a great idea but it seems that no one implemented it, so I did[9]. Unfortunately this will conflict with all the system-scope/default-roles patches in flight. With how many changes need to go in and how slow it will be with all of them needing to be rechecked and continually making the problem even worse, I propose we go ahead and merge the workaround ASAP and update all the in-flight changes to move the protection tests to the new location. It also appears that the non-voting federation CI broke recently, this will hopefully be fixed by updating the opensuse nodeset[10]. [1] https://bugs.launchpad.net/keystone/+bugs?field.tag=system-scope [2] https://bugs.launchpad.net/keystone/+bugs?field.tag=default-roles [3] https://review.opendev.org/#/q/topic:bp/whitelist-extension-for-app-creds [4] https://review.opendev.org/678322 [5] https://review.opendev.org/#/q/topic:immutable-resources [6] https://review.opendev.org/#/q/topic:bp/support-federated-attr [7] https://review.opendev.org/#/q/topic:bug/1809116 [8] http://eavesdrop.openstack.org/meetings/keystone/2019/keystone.2019-08-27-16.01.log.html#l-84 [9]https://review.opendev.org/680788 [10] https://review.opendev.org/680799 Colleen From skaplons at redhat.com Sat Sep 7 08:08:51 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Sat, 7 Sep 2019 10:08:51 +0200 Subject: [neutron] CI issues In-Reply-To: <2BBD3139-A073-42D1-8A2A-A4847F9CBA4D@redhat.com> References: <2BBD3139-A073-42D1-8A2A-A4847F9CBA4D@redhat.com> Message-ID: Hi, Patch https://review.opendev.org/#/c/680001/ is merged now. It addresses both issues which we have with neutron-functional tests currently. So Neutron's gate should be in better condition now :) > On 4 Sep 2019, at 16:37, Slawek Kaplonski wrote: > > Hi neutrinos, > > We are currently having some issues in our gate. Please see [1], [2] and [3] for details. > If Your Neutron patch failed on neutron-functional, neutron-functional-python27 or networking-ovn-tempest-dsvm-ovs-release jobs, please don’t recheck before all those issues will be solved. Recheck will not help and You will only use infra resources. > > [1] https://bugs.launchpad.net/neutron/+bug/1842659 > [2] https://bugs.launchpad.net/neutron/+bug/1842482 > [3] https://bugs.launchpad.net/bugs/1842657 > > — > Slawek Kaplonski > Senior software engineer > Red Hat > — Slawek Kaplonski Senior software engineer Red Hat From antonio.ojea.garcia at gmail.com Sat Sep 7 09:49:57 2019 From: antonio.ojea.garcia at gmail.com (Antonio Ojea) Date: Sat, 7 Sep 2019 11:49:57 +0200 Subject: [tc] Campaign Question: Treating the Problem, not just the symptoms- Burnout, No Polling, etc In-Reply-To: <20190906131053.rofnz7zeoudctoif@yuggoth.org> References: <99048F8B-EE87-4A3A-A689-8F05F8EBDBBE@doughellmann.com> <20190906131053.rofnz7zeoudctoif@yuggoth.org> Message-ID: On Fri, 6 Sep 2019 at 15:14, Jeremy Stanley wrote: > > On 2019-09-06 13:27:53 +0100 (+0100), Chris Dent wrote: > [...] > > Most people work on OpenStack because it is their job or is closely > > related to their job. But because it is "open source" and "a > > community" and "collaborative" doing what people ask for and helping > > others achieve what they need is but one small piece of the > > motivation and action calculus. 
> [...] > > I don't know that this captures my motivation, at least. I chose my > job so that I could assist in the creation and maintenance of > OpenStack and similar free software, not the other way around. Maybe > I'm in a minority within the community, but I suspect there are more > folks than just me who feel the same. > I think that the reality is that not everybody can "chose" his job. Maybe the foundation can start to employ people to take care of the projects with the money received from the sponsors, I'm sure that a lot of folks will step in, not having to take time from his family life and able to dedicate their full time to the project. From fungi at yuggoth.org Sat Sep 7 12:51:36 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 7 Sep 2019 12:51:36 +0000 Subject: [tc] Campaign Question: Treating the Problem, not just the symptoms- Burnout, No Polling, etc In-Reply-To: References: <99048F8B-EE87-4A3A-A689-8F05F8EBDBBE@doughellmann.com> <20190906131053.rofnz7zeoudctoif@yuggoth.org> Message-ID: <20190907125136.4ame3so6xowu42ck@yuggoth.org> On 2019-09-07 11:49:57 +0200 (+0200), Antonio Ojea wrote: [...] > I think that the reality is that not everybody can "chose" his > job. That's a fair point. I've had the luxury of turning down much higher-paying jobs to accept one at a non-profit organization aligned with my ideals. I definitely understand that not everyone can afford to do that. On the other hand, I wonder how many folks who work on OpenStack because their employer tells them they have to, not because they're inspired by the project's goals, are compelled (through the sense of community Chris mentioned in his post) to spend extra unpaid time helping with commons tasks and assisting others... to the point that they're burned out on these activities and decide to go work on something else instead. I don't doubt that there are at least some, but perhaps no more than those who took their jobs because they wanted to help the cause. I do feel for the part-time/volunteer contributors in our community, particularly since I've spent much of my life as a part-time/volunteer contributor in a number of other free/libre open-source communities myself. I continue trying to find ways to make such "casual" contribution easier, and to see it eventually play a much more influential role in the future of OpenStack. On the other hand, OpenStack is *very* large (the third-most-active open-source project of all time, depending on how you measure that), and whether we like it or not, full-time contributors are responsible for the bulk of what we've built so far. That reality creates processes and bureaucratic structure to streamline efficiency for high-volume contribution, with a trade-off of making "casual" contribution more challenging. > Maybe the foundation can start to employ people to take care of > the projects with the money received from the sponsors, I'm sure > that a lot of folks will step in, not having to take time from his > family life and able to dedicate their full time to the project. The OSF *does* employ people to help take care of projects with the money received from corporate memberships. If you think the proportion of its funds spent on staff to handle project commons tasks which otherwise go untended is insufficient, please find time to discuss it with your elected Individual Member representatives on the board of directors and convince them to argue for a different balance in the OSF budget. 
The total budget of the OSF could, however, be compared to that of one small/medium-sized department at a typical member company, so it lacks the capacity to do much on its own and the staff dedicated to this are already spread quite thin as a result. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mriedemos at gmail.com Sat Sep 7 13:09:13 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 7 Sep 2019 08:09:13 -0500 Subject: [nova][telemetry] does Telemetry still use the Nova server usage audit log API? In-Reply-To: <2c376a85-1dc0-03cc-bdb4-ba8b9f4edb70@gmail.com> References: <2c376a85-1dc0-03cc-bdb4-ba8b9f4edb70@gmail.com> Message-ID: On 9/6/2019 6:59 PM, melanie witt wrote: > > * If Telemetry is no longer using the server usage audit log API, we > deprecate it in Nova and notify deployment tools to stop setting > [DEFAULT]/instance_usage_audit = true to prevent further creation of > nova.task_log records and recommend manual cleanup by users Deprecating the API would just be a signal to not develop new tools based on it since it's effectively unmaintained but that doesn't mean we can remove it since there could be non-Telemtry tools in the wild using it that we'd never hear about. You might not be suggesting an eventual path to removal of the API, I'm just bringing that part up since I'm sure people are thinking it. I'm also assuming that API isn't multi-cell aware, meaning it won't traverse cells pulling records like listing servers or migration resources. As for the config option to run the periodic task that creates these records, that's disabled by default so deployment tools shouldn't be enabling it by default - but maybe some do if they are configured to deploy ceilometer. > > or > > * If Telemetry is still using the server usage audit log API, we create > a new 'nova-manage db purge_task_log --before ' (or similar) > command that will hard delete nova.task_log records before a specified > date or all if --before is not specified If you can't remove the API then this is probably something that needs to happen regardless, though we likely won't know if anyone uses it. I'd consider it pretty low priority given how extremely latent this is and would expect anyone that's been running with this enabled in production has developed DB purge scripts for this table long ago. -- Thanks, Matt From mriedemos at gmail.com Sat Sep 7 13:18:33 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 7 Sep 2019 08:18:33 -0500 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <918e56aa-d9c3-88e9-22fc-c7da12990f97@nemebean.com> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> <20190905223137.i72s7n4tibkgypqf@bishop> <0bbb4765-3e57-b7dc-11ef-50ed639ea5c0@openstack.org> <918e56aa-d9c3-88e9-22fc-c7da12990f97@nemebean.com> Message-ID: On 9/6/2019 10:01 AM, Ben Nemec wrote: > I'll also say that for me specifically, having the PTL title gives me a > lever to use downstream. People don't generally question you spending > time on a project you're leading. The same isn't necessarily true of > being a core to whom PTL duties were delegated. Yuuuup. 
My last stint as nova PTL while at IBM was so I could keep working upstream on OpenStack despite my internal management and rest of my team having moved on to other things. And then eventually moving to another company to continue working on OpenStack. -- Thanks, Matt From mriedemos at gmail.com Sat Sep 7 13:23:26 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 7 Sep 2019 08:23:26 -0500 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <38c6f889-2b82-1a59-f00d-699fb04df6f3@gmail.com> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> <38c6f889-2b82-1a59-f00d-699fb04df6f3@gmail.com> Message-ID: On 9/6/2019 9:37 AM, Jay Bryant wrote: > As has been touched upon in other discussions, I think we have a culture > where it is difficult for them to say no to things. Welcome to the club. Nova has been harangued for years for saying no to so many things and maybe now people are starting to see why. It's not because saying no is fun. -- Thanks, Matt From Prabhjit.Singh22 at T-Mobile.com Fri Sep 6 20:42:29 2019 From: Prabhjit.Singh22 at T-Mobile.com (Singh, Prabhjit) Date: Fri, 6 Sep 2019 20:42:29 +0000 Subject: [Octavia]-Seeking some high points on using Octavia Message-ID: Hi Michael, I have been trying to get Octavia LbaaS up and running and get performance tested. It has taken me some time to get quite a few things working. While I continue to invest time in using Octavia and stay excited on some of the upcoming features. I have been asked the following questions by my leadership to which I do not have any direct answers. 1. What is the adoption of Octavia, are major organizations looking to adopt and invest in it. Can you provide some numbers 2. Roadmap wise is the Open community committed to investing in Octavia and why 3. Per your suggestion I tried to look up who are the primary companies using Octavia and haven't found a clear indication, any insight would be great. 4. Would features from haproxy 2.0 be included in Octavia 5. There are some open solutions from haproxy, Envoy, consul. How would Octavia compare. 6. Lastly, do you have enough encouragement to keep the project going, I guess I am looking for some motivation for continuing to choose to use Octavia when there are several turnkey solutions ( though offered at a price ). Currently I have been working with Redhat to answer the following questions, these are not for the community, hopefully Redhat will be able to pursue with your team. 1. How to offload logs to an external log/metrics collector 2. How to turn off logs during performance testing, I honestly do not want to do this because the performance tester is really generating live traffic which mimics a real time scenario. 3. How to set cron for rotating logs, I would think that this should be automatic. Would I need to do this everytime? 4. Do you have any way to increase performance of the amphora, my take is haproxy can handle several thousands of concurrent connections but in our case seems like we hit a threshold at 3500 - 4500 connections and then it starts to either send resets or the connections stay open for a long time. Thanks & Regards Prabhjit -----Original Message----- From: Singh, Prabhjit Sent: Tuesday, July 23, 2019 9:45 AM To: Michael Johnson Cc: openstack-discuss at lists.openstack.org Subject: RE: [Octavia]-Seeking performance numbers on Octavia Thanks so much for the valuable insights Michael! 
Appreciate it and keep up the good work, as I ramp up with more dev know how hopefully I would start making contributions and can maybe convince my team to start as well. Thanks & Regards Prabhjit Singh -----Original Message----- From: Michael Johnson Sent: Monday, July 22, 2019 5:48 PM To: Singh, Prabhjit Cc: openstack-discuss at lists.openstack.org Subject: Re: [Octavia]-Seeking performance numbers on Octavia [External] Hi Prabhjit, Comments in-line below. Michael On Sun, Jul 21, 2019 at 5:24 PM Singh, Prabhjit wrote: > > Hi Michael, > > Thanks for taking the time out to send me your inputs and valuable suggestions. I do remember meeting you at the Denver Summit and hearing to a couple of your sessions. > If you wouldn't mind, I do have a few more questions and your answers would help me understand that should I continue to invest in having Octavia as one of our available LBs. > > 1. Based on your response and the amount of time you are investing in > supporting Octavia, what are some of the use cases, like for e.g. if load balancing web traffic how many transactions/connections minimum can be expected. I do understand you mentioned that it's hard to performance test Octavia but some real time situations from your testing and how customers have adopted Octavia would help me level set some expectations. This is really cloud and application specific. I would recommend you fire up an Octavia install and use your preferred tool to measure it. Some good tools are tsung, weighttp, and iperf3. > 2. We are thinking of Octavia as one of the offerings, that offers a self-serve type model. Do you know of any customers who have been able to use Octavia as one of their primary load balancers and any encouraging feedback you have gotten on Octavia. There are examples of organizations using Octavia available if you google Octavia. > 3. You suggested increasing the Ram size, I could go about making a whole new Flavor. Yes, to increase the allocated RAM for a load balancer, you would create an additional nova flavor with the specifications you would like. You can then either set this as the default nova flavor for amphora (amp_flavor_id is the setting) or you can create an Octavia flavor that specifies the nova compute flavor to use (See https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.openstack.org%2Foctavia%2Flatest%2Fadmin%2Fflavors.html&data=02%7C01%7CPrabhjit.Singh22%40t-mobile.com%7Cfb41388d6020453d92c908d70eee4a72%7Cbe0f980bdd994b19bd7bbc71a09b026c%7C0%7C0%7C636994288931593870&sdata=FDlAK3%2FKh0DNo%2BMSJQ8kJ8lSnn01TJXASS6AHd1kRoA%3D&reserved=0 for more information on Octavia flavors). > 4. I also noticed on the haproxy.conf the maxconns is set to 2000, should I increase this, does this affect the connection per server, which you said 64000 conns per server, so if I have 10 servers can I expect somewhere close to 640000 sessions? I think you are looking at the haproxy.conf file provided by your operating system package. Octavia does not use this file, it creates it's own HAProxy configuration files as needed under /var/lib/octavia inside the amphora. The default, if the user does not specify one at listener creation, is 1,000,000. > 5. Based on some of the limitations and the dev work in progress, I think the most important feature that would make Octavia a real solid offering would be the Active-Active and Autoscaling feature. 
I brought this up with you in our brief conversation at the summit, and you did mention that its not a top priority at this time and you are looking for some help. I have noticed a lot of documentation has been updated on this feature, do you think with the available document and progress I could spin up a distributor and manage sessions between Amphora or it's not complete yet. Active/Active is still on our roadmap, but unfortunately the people that were working on it had to stop for personal reasons. There may be some folks picking up this work again soon. At this point the Active/Active patches up for review are non-functional and still a work in progress. > 6. We have a Triple O setup, do you think I can make the above tweaks with the Triple O setup. I think you are able to make various adjustments to Octavia with Triple O, but I do not have specifics on that. > Thanks & Regards > > Prabhjit Singh > Systems Design and Strategy - Magentabox > | O: (973) 397-4819 | M: (973) 563-4445 > > > > -----Original Message----- > From: Michael Johnson > Sent: Friday, July 19, 2019 6:00 PM > To: Singh, Prabhjit > Cc: openstack-discuss at lists.openstack.org > Subject: Re: [Octavia]-Seeking performance numbers on Octavia > > [External] > > > Hi Prabhjit, > > As you have mentioned, it is very challenging to get accurate performance results in cloud environments. There are a large number(very large in fact) of factors that can impact the overall performance of OpenStack and Octavia. > > In our OpenDev testing environment, we only have software emulation virtual machines available (Qemu running with the TCG engine) which performs extremely poorly. This means that the testing environment does not reflect how the software is used in real world deployments. > An example of this is simply booting a VM can take up to ten minutes on Qemu with TCG when it takes about twenty seconds on a real OpenStack deployment. > With this resource limitation, we cannot effectively run performance benchmarking test jobs on the OpenDev environment. > > Because of this, we don't publish performance numbers as they will not reflect what you can achieve in your environment. > > Let me try to speak to your bullet points: > 1. The Octavia team has never (to my knowledge) claimed the Amphora driver is "carrier grade". We do consider the Amphora driver to be "operator grade", which speaks to a cloud operator's perspective versus the previous offering that did not support high availability, have appropriate maintenance tooling, upgrade paths, performance, etc. > To me, "carrier grade" has an additional level of requirements including performance, latency, scale, and availability SLAs. This is not what the Octavia Amphora driver is currently ready for. That said, third party provider drivers for Octavia may be able to provide a "carrier grade" level of load balancing for OpenStack. > 2. As for performance tuning, much of this is either automatically handled by Octavia or are dependent on the application you are load balancing and your cloud deployment. For example we have many configuration settings to tune how many retries we attempt when interacting with other services. In performing and stable clouds, these can be tuned down, in others the defaults may be appropriate. If you would like faster failover, at the expense of slightly more network traffic, you can tune the health monitoring and keepalived_vrrp settings. We do not currently have a performance tuning guide for Octavia but would support someone authoring one. > 3. 
We do not currently have a guide for this. I will say with the version of HAproxy currently being shipped with the distributions, going beyond the 1vCPU per amphora does not gain you much. With the release of HAProxy 2.0 this has changed and we expect to be adding support for vertically scaling the Amphora in future releases. Disk space is only necessary if you are storing the flow logs locally, which I would not recommend for a performance load balancer (See the notes in the log offloading guide: > https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.openstack.org%2Foctavia%2Flatest%2Fadmin%2Flog-offloading.html&data=02%7C01%7CPrabhjit.Singh22%40t-mobile.com%7Cfb41388d6020453d92c908d70eee4a72%7Cbe0f980bdd994b19bd7bbc71a09b026c%7C0%7C0%7C636994288931593870&sdata=qyX1BM6wR6v804WCYB2HY6IRmDfeQS1zi38FS34kB1U%3D&reserved=0). > Finally, the RAM usage is a factor of the number of concurrent connections and if you are enabling TLS on the load balancer. For typical load balancing loads, the default is typically fine. However, if you have high connection counts and/or TLS offloading, you may want to experiment with increasing the available RAM. > 4. The source IP issue is a known issue > (https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fstoryboard.openstack.org%2F%23!%2Fstory%2F1629066&data=02%7C01%7CPrabhjit.Singh22%40t-mobile.com%7Cfb41388d6020453d92c908d70eee4a72%7Cbe0f980bdd994b19bd7bbc71a09b026c%7C0%7C0%7C636994288931593870&sdata=GkTPXRmOfjpMYXDYZ9t5xH1aEq0E%2BWDZRhK8ux%2FnrUQ%3D&reserved=0). We have not prioritized addressing this as we have not had anyone come forward that they needed this in their deployment. If this is an issue impacting your use case, please comment on the story to that effect and provide a use case. This will help the team prioritize this work. > Also, patches are welcome! If you are interested in working on this issue, I can help you with information about how this could be added. > It should also be noted that it is a limitation of 64,000 connections per-backend server, not per load balancer. > 5. The team uses the #openstack-lbaas IRC channel on freenode and is happy to answer questions, etc. > > To date, we have had limited resources (people and equipment) available to do performance evaluation and tuning. There are definitely kernel and HAProxy tuning settings we have evaluated and added to the Amphora driver, but I know there is more work that can be done. If you are interested in help us with this work, please let us know. > > Michael > > P.S. 
Here are just a few considerations that can/will impact the performance of an Octavia Amphora load balancer: > > Hardware used for the compute nodes > Network Interface Cards (NICs) used in the compute nodes Number of > network ports enabled on the compute hosts Network switch > configurations (Jumbo frames, and so on) Cloud network topology > (leaf‐spine, fat‐tree, and so on) The OpenStack Neutron networking > configuration (ML2 and ML3 drivers) Tenant networking configuration > (VXLAN, VLANS, GRE, and so on) Colocation of applications and Octavia > amphorae Over subscription of the compute and networking resources > Protocols being load balanced Configuration settings used when > creating the load balancer (connection limits, and so on) Version of > OpenStack services (nova, neutron, and so on) Version of OpenStack > Octavia Flavor of the OpenStack Octavia load balancer OS and > hypervisor versions used Deployed security mitigations (Spectre, > Meltdown, and so on) Customer application performance Health of the > customer application > > On Fri, Jul 19, 2019 at 8:52 AM Singh, Prabhjit wrote: > > > > Hi > > > > > > > > I have been trying to test Octavia with some traffic generators and > > my tests are inconclusive. Appreciate your inputs on the following > > > > > > > > It would be really nice to have some performance numbers that you guys have been able to achieve for this to be termed as carrier grade. > > Would also appreciate if you could share any inputs on performance > > tuning Octavia Any recommended flavor sizes for spinning up Amphorae, the default size of 1 core, 2 Gb disk and 1 Gig RAM does not seem enough. > > Also I noticed when the Amphorae are spun up, at one time only one > > master is talking to the backend servers and has one IP that its > > using, it has to run out of ports after 64000 TCP concurrent > > sessions, id there a way to add more IPs or is this the limitation > > If I needed some help with Octavia and some guidance around > > performance tuning can someone from the community help > > > > > > > > Thanks & Regards > > > > > > > > Prabhjit Singh > > > > > > > > > > > > From tim.bell at cern.ch Sat Sep 7 15:16:13 2019 From: tim.bell at cern.ch (Tim Bell) Date: Sat, 7 Sep 2019 17:16:13 +0200 Subject: [nova][telemetry] does Telemetry still use the Nova server usage audit log API? In-Reply-To: References: <2c376a85-1dc0-03cc-bdb4-ba8b9f4edb70@gmail.com> Message-ID: On 9/7/19 3:09 PM, Matt Riedemann wrote: > On 9/6/2019 6:59 PM, melanie witt wrote: >> >> * If Telemetry is no longer using the server usage audit log API, we >> deprecate it in Nova and notify deployment tools to stop setting >> [DEFAULT]/instance_usage_audit = true to prevent further creation of >> nova.task_log records and recommend manual cleanup by users > > Deprecating the API would just be a signal to not develop new tools > based on it since it's effectively unmaintained but that doesn't mean > we can remove it since there could be non-Telemtry tools in the wild > using it that we'd never hear about. You might not be suggesting an > eventual path to removal of the API, I'm just bringing that part up > since I'm sure people are thinking it. > Tools like cASO (https://github.com/IFCA/caso) use this API. 
This is used by many of the EGI Federated Cloud sites to do accounting per VM (https://egi-federated-cloud-integration.readthedocs.io/en/latest/openstack.html) > I'm also assuming that API isn't multi-cell aware, meaning it won't > traverse cells pulling records like listing servers or migration > resources. Given scaling issues with the current Telemetry implementation, I suspect alternative approaches have had to be developed in any case. CERN uses libvirt data extraction. > > As for the config option to run the periodic task that creates these > records, that's disabled by default so deployment tools shouldn't be > enabling it by default - but maybe some do if they are configured to > deploy ceilometer. > >> >> or >> >> * If Telemetry is still using the server usage audit log API, we >> create a new 'nova-manage db purge_task_log --before ' (or >> similar) command that will hard delete nova.task_log records before a >> specified date or all if --before is not specified > > If you can't remove the API then this is probably something that needs > to happen regardless, though we likely won't know if anyone uses it. > I'd consider it pretty low priority given how extremely latent this is > and would expect anyone that's been running with this enabled in > production has developed DB purge scripts for this table long ago. > From johnsomor at gmail.com Sat Sep 7 20:21:59 2019 From: johnsomor at gmail.com (Michael Johnson) Date: Sat, 7 Sep 2019 13:21:59 -0700 Subject: [Octavia]-Seeking some high points on using Octavia In-Reply-To: References: Message-ID: Hi Prabhjit, Answers to the questions I can answer below. I hope you continue to work with your support contact to resolve the issues you are experiencing. Here I can only speak with my OpenStack community hat on. Michael On Fri, Sep 6, 2019 at 1:42 PM Singh, Prabhjit wrote: > > Hi Michael, > > I have been trying to get Octavia LbaaS up and running and get performance tested. It has taken me some time to get quite a few things working. > > While I continue to invest time in using Octavia and stay excited on some of the upcoming features. I have been asked the following questions by my leadership to which I do not have any direct answers. > > 1. What is the adoption of Octavia, are major organizations looking to adopt and invest in it. Can you provide some numbers I don't have much I can share here. You can look at the OpenStack user survey information: https://www.openstack.org/analytics though some of that is still fragmented as Octavia was part of neutron in some older releases. In the 2016 and 2017 survey, "Software load balancing" was the #1 neutron feature "actively used, interested in, or planned for use." Page 53: https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/6e4619c416ff4bd19e1c087f27a43eea/www-assets-prod/survey/April-2016-User-Survey-Report.pdf Page 60: https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/6e4619c416ff4bd19e1c087f27a43eea/www-assets-prod/survey/April2017SurveyReport.pdf You may also find interest in which companies have contributed to the project by looking at Stackalytics: https://www.stackalytics.com/?module=octavia-group > 2. Roadmap wise is the Open community committed to investing in Octavia and why We do maintain a roadmap for longer-term goals: https://wiki.openstack.org/wiki/Octavia/Roadmap Beyond that, as OpenStack is an open community of many contributors I cannot speculate commitment. > 3. 
Per your suggestion I tried to look up who are the primary companies using Octavia and haven't found a clear indication, any insight would be great. That is really all I can share. > 4. Would features from haproxy 2.0 be included in Octavia Yes, it is on the roadmap. We have been waiting for 2.0.x to stabilize. The release timing of HAProxy 2.0 means that most of the major Linux distributions are not yet shipping it. This makes it a bit tricky for the OpenStack team as our testing standard is tied to these releases: https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions There is a chance that the OpenStack team will start adding features that need HAProxy 2.0.x in the Ussuri release cycle. > 5. There are some open solutions from haproxy, Envoy, consul. How would Octavia compare. There are many, many different load balancing options available. As you know Octavia supports provider drivers, so that alternate technologies can be plugged in. For the reference amphora driver (the one we use for OpenStack testing), HAProxy was selected for its stability and wide support. > 6. Lastly, do you have enough encouragement to keep the project going, I guess I am looking for some motivation for continuing to choose to use Octavia when there are several turnkey solutions ( though offered at a price ). Well, personally I plan to keep working on Octavia. I am not the project team lead for the Train or Ussuri releases, but I am still an active core member. I am a "right tool for the right job" kind of person, so it really is up to you and your needs to balance the decision of which load balancing option to select. > Currently I have been working with Redhat to answer the following questions, these are not for the community, hopefully Redhat will be able to pursue with your team. With my OpenStack hat on and not speaking for Red Hat: > 1. How to offload logs to an external log/metrics collector This was a new feature for the Train release: https://docs.openstack.org/octavia/latest/admin/log-offloading.html > 2. How to turn off logs during performance testing, I honestly do not want to do this because the performance tester is really generating live traffic which mimics a real time scenario. https://docs.openstack.org/octavia/latest/configuration/configref.html#haproxy_amphora.connection_logging > 3. How to set cron for rotating logs, I would think that this should be automatic. Would I need to do this everytime? Logs are already being rotated inside the amphora. > 4. Do you have any way to increase performance of the amphora, my take is haproxy can handle several thousands of concurrent connections but in our case seems like we hit a threshold at 3500 - 4500 connections and then it starts to either send resets or the connections stay open for a long time. Yes, I have had amphora do many more connections per second than that. There is some issue in your environment that is limiting it. > Thanks & Regards > > Prabhjit > > > > > -----Original Message----- > From: Singh, Prabhjit > Sent: Tuesday, July 23, 2019 9:45 AM > To: Michael Johnson > Cc: openstack-discuss at lists.openstack.org > Subject: RE: [Octavia]-Seeking performance numbers on Octavia > > Thanks so much for the valuable insights Michael! Appreciate it and keep up the good work, as I ramp up with more dev know how hopefully I would start making contributions and can maybe convince my team to start as well. 
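As a concrete illustration of the two knobs discussed in the answers above — this is only a rough sketch, the flavor name, sizes and placeholder UUID are invented for the example, and the octavia.conf section names assume a stock layout:

    # octavia.conf: stop writing per-connection (tenant flow) logs in new amphorae
    [haproxy_amphora]
    connection_logging = False

    # give amphorae more RAM by creating a bigger Nova flavor and making it the default
    $ openstack flavor create --private --ram 4096 --vcpus 1 --disk 3 amphora-4g
    # then point Octavia at it in octavia.conf:
    [controller_worker]
    amp_flavor_id = <UUID of the amphora-4g flavor>

Whether these values make sense is entirely deployment specific, so treat them as a starting point for experimentation rather than a recommendation.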
> > Thanks & Regards
> > Prabhjit Singh

From hongbin034 at gmail.com Sat Sep 7 21:22:25 2019
From: hongbin034 at gmail.com (Hongbin Lu)
Date: Sat, 7 Sep 2019 17:22:25 -0400
Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results
In-Reply-To: <7cdee1c1-3541-17cf-5a9b-05a6f872c134@redhat.com>
References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> <16d00fc100d.104db03dc225299.3598510759501367665@ghanshyammann.com> <20190905113636.qwxa4fjxnju7tmip@barron.net> <7cdee1c1-3541-17cf-5a9b-05a6f872c134@redhat.com>
Message-ID: 

On Fri, Sep 6, 2019 at 1:49 PM Zane Bitter wrote: > On 5/09/19 7:36 AM, Tom Barron wrote: > > On 05/09/19 19:33 +0900, Ghanshyam Mann wrote: > >> ---- On Thu, 05 Sep 2019 19:04:39 +0900 Chris Dent > >> wrote ---- > >> > On Thu, 5 Sep 2019, Thierry Carrez wrote: > >> > > >> > > So maybe we still have the same expectations, but we are > >> definitely reducing > >> > > our velocity... Would you say we need to better align our > >> expectations with > >> > > our actual speed? Or that we should reduce our expectations > >> further, to drive > >> > > velocity further down? > >> > > >> > We should slow down enough that the vendors and enterprises start to > >> > suffer.
If they never notice, then it's clear we're trying too hard > >> > and can chill out. > >> > >> +1 on this but instead of slow down and make vendors suffer we need > >> the proper > >> way to notify or make them understand about the future cutoff effect > >> on OpenStack > >> as software. I know we have been trying every possible way but I am > >> sure there are > >> much more managerial steps can be taken. I expect Board of Director > >> to come forward > >> on this as an accountable entity. TC should raise this as high > >> priority issue to them (in meetings, > >> joint leadership meeting etc). > >> > >> I am sure this has been brought up before, can we make OpenStack > >> membership company > >> to have a minimum set of developers to maintain upstream. With the > >> current situation, I think > >> it make sense to ask them to contribute manpower also along with > >> membership fee. But again > >> this is more of BoD and foundation area. > > > > +1 > > > > IIUC Gold Membership in the Foundation provides voting privileges at a > > cost of $50-200K/year and Corporate Sponsorship provides these plus > > various marketing benefits at a cost of $10-25K/year. So far as I can > > tell there is not a requirement of a commitment of contributors and > > maintainers with the exception of the (currently closed) Platinum > > Membership, which costs $500K/year and requires at least 2 FTE > > equivalents contributing to OpenStack. > > Even this incredibly minimal requirement was famously not met for years > by one platinum member, and a (different) platinum member was accepted > without ever having contributed upstream in the past or apparently ever > intending to in the future. > > What I'm saying is that if this a the mechanism we want to use to drive > contributions, I can tell you now how it's gonna work out. > > The question we should be asking ourselves is why companies see value in > being sponsors of the foundation but not in contributing upstream, and > how we convince them of the value of the latter. > One of the reason could be the vendors have their own implementation of the OpenStack APIs instead of using the upstream implementation. Those vendors probably don't have much motivation on contributing upstream because they are not using the upstream code (except the APIs). A follow-up question is why those vendors chose to re-implement OpenStack instead of using the upstream one. This would be an interesting question to ask. > > One initiative the TC started on this front is this: > > > https://governance.openstack.org/tc/reference/upstream-investment-opportunities/index.html > > (BTW we could use help in converting the outdated Help Most Wanted > entries to this format. Volunteers welcome.) > > cheers, > Zane. > > > In general I see requirements > > for annual cash expenditure to the Foundation, as for membership in any > > joint commercial enterprise, but little that ensures the availability of > > skilled labor for ongoing maintenance of our projects. > > > > -- Tom Barron > > > >> > >> I agree on ttx proposal to reduce the TC number to 9 or 7, I do not > >> think this will make any > >> difference or slow down on any of the TC activity. 9 or 7 members are > >> enough in TC. > >> > >> As long as we get PTL(even without an election) we are in a good > >> position. This time only > >> 7 leaderless projects (6 actually with Cyborg PTL missing to propose > >> nomination in election repo and only on ML) are > >> not so bad number. 
But yes this is a sign of taking action before it > >> goes into more worst situation. > >> > >> -gmann > >> > >> > > >> > -- > >> > Chris Dent ٩◔̯◔۶ > https://anticdent.org/ > >> > freenode: cdent > >> > >> > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Sat Sep 7 22:21:36 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Sat, 7 Sep 2019 17:21:36 -0500 Subject: [infra][neutron] Requesting help to remove feature branches In-Reply-To: <20190906225750.tgbnwz6wu5gdfezo@yuggoth.org> References: <20190906225750.tgbnwz6wu5gdfezo@yuggoth.org> Message-ID: Hi, So we all stay on the same page, the four branches were removed by the infra team. Thanks!. This is the conversation we had in regards to the feature/lbaasv2 branch, where we agreed tht it was not necessary to save any state: http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2019-09-06.log.html#t2019-09-06T23:07:35 Cheers On Fri, Sep 6, 2019 at 5:58 PM Jeremy Stanley wrote: > On 2019-09-06 13:24:44 -0500 (-0500), Miguel Lavalle wrote: > > We have decided to remove from the Neutron repo the following feature > > branches: > > > > feature/graphql > > feature/lbaasv2 > > feature/pecan > > feature/qos > > > > We don't need to preserve any state from these branches. In the case of > the > > first one, no code was merged. The work in the other three branches is > > already merged into master. > > Sanity-checking feature/lbaasv2, `git merge-base` between it and > master suggest cc400e2 is the closest common ancestor. There are 4 > potentially substantive commits on feature/lbaasv2 past that point > which do not seem to appear in the master branch history: > > 7147389 Implement Jinja templates for haproxy config > cfa4a86 Tests for extension, db and plugin for LBaaS V2 > 02c01a3 Plugin/DB additions for version 2 of LBaaS API > 4ed8862 New extension for version 2 of LBaaS API > > Do you happen to know whether these need to be preserved (or what > happened with them)? > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anlin.kong at gmail.com Sat Sep 7 22:28:16 2019 From: anlin.kong at gmail.com (Lingxian Kong) Date: Sun, 8 Sep 2019 10:28:16 +1200 Subject: Need help trigger aodh alarm - All the steps I went through by details. In-Reply-To: References: Message-ID: OpenStack services in DevStack are managed by systemd, so you can check aodh-listener log by running `sudo journalctl -u devstack at aodh-listener.service | less` - Best regards, Lingxian Kong Catalyst Cloud On Sat, Sep 7, 2019 at 2:51 PM Anmar Salih wrote: > > Dear Lingxian, > > I cloud't find aodh log file. > > Also I did 'ps -ef | grep aodh' and here is > the response. > > Best regards. > > > On Thu, Sep 5, 2019 at 6:56 PM Lingxian Kong wrote: > >> Hi Anmar, >> >> Please see my comments in-line below. >> >> - >> Best regards, >> Lingxian Kong >> Catalyst Cloud >> >> >> On Wed, Sep 4, 2019 at 2:51 PM Anmar Salih >> wrote: >> >>> Hi Lingxian, >>> >>> First of all, I would like to apologize because the email is pretty >>> long. I listed all the steps I went through just to make sure that I did >>> everything correctly. >>> >> >> No need to apologize, more information is always helpful to solve the >> problem. >> >> >>> 4- Creating the webhook for the function by: openstack webhook create >>> --function 07edc434-a4b8-424a-8d3a-af253aa31bf8 . Here is a screen >>> capture for the response. 
I tried to copy >>> and paste the webhook_url " >>> http://192.168.1.155:7070/v1/webhooks/c5608648-bd73-478f-b452-ad1eabf93328/invoke" into >>> my internet browser, so I got 404 not found. I am not sure if this is >>> normal response or I have something wrong here. >>> >> >> Like Gaetan said, the webhook is supposed to be invoked by http POST. >> >> 9- Checking aodh alarm history by aodh alarm-history show >>> ea16edb9-2000-471b-88e5-46f54208995e -f yaml . So I got this response >>> >>> >>> 10- Last step is to check the function execution in qinling and here is >>> the response . (empty bracket). I am not >>> sure what is the problem. >>> >> >> Yeah, from the output of alarm history, the alarm is not triggered, as a >> result, there won't be execution created by the webhook. >> >> Seems like the aodh-listener didn't receive the message or the message >> was ignored. Could you paste the aodh-listener log but make sure: >> >> 1. `debug = True` in /etc/aodh/aodh.conf >> 2. Trigger the python script again >> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From anmar.salih1 at gmail.com Sat Sep 7 23:29:43 2019 From: anmar.salih1 at gmail.com (Anmar Salih) Date: Sat, 7 Sep 2019 19:29:43 -0400 Subject: Need help trigger aodh alarm - All the steps I went through by details. In-Reply-To: References: Message-ID: Dear Lingxian, I executed 'sudo journalctl -u devstack at aodh-listener.service | less' and got this response . Thank you. On Sat, Sep 7, 2019 at 6:28 PM Lingxian Kong wrote: > OpenStack services in DevStack are managed by systemd, so you can check > aodh-listener log by running `sudo journalctl -u > devstack at aodh-listener.service | less` > > - > Best regards, > Lingxian Kong > Catalyst Cloud > > > On Sat, Sep 7, 2019 at 2:51 PM Anmar Salih wrote: > >> >> Dear Lingxian, >> >> I cloud't find aodh log file. >> >> Also I did 'ps -ef | grep aodh' and here >> is the response. >> >> Best regards. >> >> >> On Thu, Sep 5, 2019 at 6:56 PM Lingxian Kong >> wrote: >> >>> Hi Anmar, >>> >>> Please see my comments in-line below. >>> >>> - >>> Best regards, >>> Lingxian Kong >>> Catalyst Cloud >>> >>> >>> On Wed, Sep 4, 2019 at 2:51 PM Anmar Salih >>> wrote: >>> >>>> Hi Lingxian, >>>> >>>> First of all, I would like to apologize because the email is pretty >>>> long. I listed all the steps I went through just to make sure that I did >>>> everything correctly. >>>> >>> >>> No need to apologize, more information is always helpful to solve the >>> problem. >>> >>> >>>> 4- Creating the webhook for the function by: openstack webhook create >>>> --function 07edc434-a4b8-424a-8d3a-af253aa31bf8 . Here is a screen >>>> capture for the response. I tried to copy >>>> and paste the webhook_url " >>>> http://192.168.1.155:7070/v1/webhooks/c5608648-bd73-478f-b452-ad1eabf93328/invoke" into >>>> my internet browser, so I got 404 not found. I am not sure if this is >>>> normal response or I have something wrong here. >>>> >>> >>> Like Gaetan said, the webhook is supposed to be invoked by http POST. >>> >>> 9- Checking aodh alarm history by aodh alarm-history show >>>> ea16edb9-2000-471b-88e5-46f54208995e -f yaml . So I got this response >>>> >>>> >>>> 10- Last step is to check the function execution in qinling and here is >>>> the response . (empty bracket). I am not >>>> sure what is the problem. >>>> >>> >>> Yeah, from the output of alarm history, the alarm is not triggered, as a >>> result, there won't be execution created by the webhook. 
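Since the webhook endpoint is only meant to be invoked with an HTTP POST (the 404 seen in the browser is expected, because a browser issues a GET), one quick check — purely illustrative, reusing the webhook URL from the step above — is to trigger it by hand:

    curl -X POST http://192.168.1.155:7070/v1/webhooks/c5608648-bd73-478f-b452-ad1eabf93328/invoke

If that manual POST produces a new execution on the Qinling side, the webhook and function are fine and the remaining problem is on the Aodh side (the alarm never transitioning, so the webhook is never called).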
>>> >>> Seems like the aodh-listener didn't receive the message or the message >>> was ignored. Could you paste the aodh-listener log but make sure: >>> >>> 1. `debug = True` in /etc/aodh/aodh.conf >>> 2. Trigger the python script again >>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sun Sep 8 11:11:31 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 08 Sep 2019 20:11:31 +0900 Subject: [placement][ptl][tc] Call for Placement PTL position In-Reply-To: References: <16d0205c0b1.b18639584545.7154839133743907603@ghanshyammann.com> <1567771216.28660.0@smtp.office365.com> Message-ID: <16d10925fe1.1021dc3419292.6668714441615096551@ghanshyammann.com> ---- On Fri, 06 Sep 2019 22:34:41 +0900 Mohammed Naser wrote ---- > On Fri, Sep 6, 2019 at 8:04 AM Balázs Gibizer wrote: > > > > > > > > On Thu, Sep 5, 2019 at 6:20 PM, Chris Dent wrote: > > > > On Fri, 6 Sep 2019, Ghanshyam Mann wrote: > > > > With Ussuri Cycle PTL election completed, we left with Placement project as leaderless[1]. In today TC meeting[2], we discussed the few possibilities and decided to reach out to the eligible candidates to serve the PTL position. > > > > Thanks for being concerned about this, but it would have been useful if you included me (as the current PTL) and the rest of the Placement team in the discussion or at least confirmed plans with me before starting this seek-volunteers process. There are a few open questions we are still trying to resolve before we should jump to any decisions: * We are currently waiting to see if Tetsuro is available (he's been away for a few days). If he is, he'll be great, but we don't know yet if he can or wants to. * We've started, informally, discussing the option of pioneering the option of leaderless projects within Placement (we pioneer many other things there, may as well add that to the list) but without more discussion from the whole team (which can't happen because we don't have quorum of the actively involved people) and the TC it's premature. Leaderless would essentially mean consensually designating release liaisons and similar roles but no specific PTL. I think this is easily possible in a small in number, focused, and small feature-queue [1] group like Placement but would much harder in one of the larger groups like Nova. * We have several reluctant people who _can_ do it, but don't want to. Once we've explored the other ideas here and any others we can come up with, we can dredge one of those people up as a stand-in PTL, keeping the slot open. Because of [1] there's not much on the agenda for U. > > > > > > I guess I'm one of the reluctant people. I think technically I can do it but I don't want to commit to work when I don't see that I will have enough time to do it well. For me this is all about priorities and the amount of work I'm already commited to at the moment. Still I'm open to get tasks delegated to me, like doing the project update in Sanghai. > > If it's okay with you, would you like to share what are some of the > priorities and work that you feel is placed on a PTL which makes you > reluctant? > > PS, by no means I am trying to push for you to be PTL if you're not > currently interested, but I want to hear some of the community > thoughts about this (and feel free to reply privately) This is really important point. 
I can agree that the PTL responsibilities for big projects with a very high traffic of work (reviews + feature requests + discussions etc.) are more time consuming, but for other projects it should not be so bad. My personal experience as QA PTL (where you have a lot of responsibility during release time, stable branches for devstack and other QA tools, stable testing jobs etc.) is really good and does not consume much of my time (when I separate my PTL time from my QA core developer time). Listing the items and responsibilities which make the PTL job very hard would be a great way to improve it. -gmann > > > Cheers, > > gibi > > > > Since the Placement team is not planning to have an active presence at the PTG, nor planning to have much of a pre-PTG (as no one has stepped up with any feature ideas) we have some days or even weeks before it matters who the next PTL (if any) is, so if possible, let's not rush this. [1] It's been a design goal of mine from the start that Placement would quickly reach a position of stability and maturity that I liked to call "being done". By the end of Train we are expecting to be feature complete for any features that have been actively discussed in the recent past [2]. The main tasks in U will be responding to bug fixes and requests-for-explanations for the features that already exist (because people asked for them) but are not being used yet and getting the osc-placement client caught up. [2] The biggest thing that has been discussed as a "maybe we should do" for which there are no immediate plans is "resource provider sharding" or "one placement, many clouds". That's a thing we imagined people might ask for, but haven't yet, so there's little point doing it. > > -- > > Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent > > > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com > > From adrianc at mellanox.com Sun Sep 8 12:37:15 2019 From: adrianc at mellanox.com (Adrian Chiris) Date: Sun, 8 Sep 2019 12:37:15 +0000 Subject: [tc][neutron] Supported Linux distributions and their kernel In-Reply-To: References: <5e84afec-ca3b-4a9f-969a-69f4f748c893@www.fastmail.com> Message-ID: Thanks for the inputs, Supporting the last point release makes sense to me; however, the current policy is a bit vague on that. Updating the doc would certainly help (if indeed the intention is to support the latest minor release). For my particular issue, it seems that the CentOS major will likely be bumped for the U release, as stated by Sean. So, worst case is pushing to master after the Train release. Thanks, Adrian. > -----Original Message----- > From: Sean Mooney > Sent: Friday, September 6, 2019 8:29 PM > To: Clark Boylan ; openstack- > discuss at lists.openstack.org > Subject: Re: [tc][neutron] Supported Linux distributions and their kernel > > On Thu, 2019-09-05 at 08:20 -0700, Clark Boylan wrote: > > On Thu, Sep 5, 2019, at 8:10 AM, Adrian Chiris wrote: > > > > > > Greetings, > > > > > > I was wondering what is the guideline in regards to which kernels > > > are supported by OpenStack in the various Linux distributions.
> > > > > > > > > Looking at [1], Taking for example latest CentOS major (7): > > > > > > Every “minor” version is released with a different kernel version, > > > > > > the oldest being released in 2014 (CentOS 7.0, kernel 3.10.0-123) > > > and the newest released in 2018 (CentOS 7.6, kernel 3.10.0-957) > > > > > > > > > While I understand that OpenStack projects are expected to support > > > all CentOS 7.x releases. > > > > It is my understanding that CentOS (and RHEL?) only support the > current/latest point release of their distro [3]. > yes so each rhedhat openstack plathform (OSP) z stream (x.y.z) release is > tested and packaged only for the latest point release of rhel. we support > customer on older .z release if they are also on the version of rhel it was > tested with but we do expect customer to upgrage to the new rhel minor > version when they update there openstack to a newer .z relese. > this is becasue we update qemu and other products as part of the minor > release of rhel and we need to ensure that nova works with that qemu and > the kvm it was tested with. > > > We only test against that current point release. I don't expect we > > can be expected to support a distro release which the distro doesn't even > support. > ya i think that is sane. also if we are being totally honest old kernels have bug > many of which are security bugs so anyone running the original kernel any os > shipped with is deploying a vulnerable cloud. > > > > All that to say I would only worry about the most recent point release. > we might want to update the doc to that effect. > it currently say latest Centos Major > https://eur03.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgover > nance.openstack.org%2Ftc%2Freference%2Fproject-testing- > interface.html%23linux- > distributions&data=02%7C01%7Cadrianc%40mellanox.com%7C88d2a34c > 865d4c43a8d708d732f02cd3%7Ca652971c7d2e4d9ba6a4d149256f461b%7C0% > 7C0%7C637033879438364542&sdata=m7NmJgCGZ00hiseoZo5uqTc0xKyE > ro29acCKKaUsQhU%3D&reserved=0 > perhaps it should be lates centos point/minor release since that is what we > actully test with. > also centos 8 is apprently complete the RC work so hopfully we will see a > release soon. > https://eur03.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwiki.c > entos.org%2FAbout%2FBuilding_8&data=02%7C01%7Cadrianc%40mella > nox.com%7C88d2a34c865d4c43a8d708d732f02cd3%7Ca652971c7d2e4d9ba6a > 4d149256f461b%7C0%7C0%7C637033879438364542&sdata=Qzpuz408idk > D0v21Z0a1xdlfqnSbhGzjz7ygTFmLXc8%3D&reserved=0 > i have 0 info on centos but for Ussuri i hope we will have move to centos 8 > and python 3 only. > > > > > > > > Does the same applies for the kernels they _originally_ came out with? > > > > > > > > > The reason I’m asking, is because I was working on doing some > > > cleanup in neutron [2] for a workaround introduced because of an old > > > kernel bug, > > > > > > It is unclear to me if it is safe to introduce this change. > > > > > > > > > [1] > > > > https://eur03.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgo > > > vernance.openstack.org%2Ftc%2Freference%2Fproject-testing- > interface. 
> > > html%23linux- > distributions&data=02%7C01%7Cadrianc%40mellanox.com > > > > %7C88d2a34c865d4c43a8d708d732f02cd3%7Ca652971c7d2e4d9ba6a4d149256 > f46 > > > > 1b%7C0%7C0%7C637033879438364542&sdata=m7NmJgCGZ00hiseoZo5u > qTc0xK > > > yEro29acCKKaUsQhU%3D&reserved=0 > > > > > > [2] > > > https://eur03.safelinks.protection.outlook.com/?url=https%3A%2F%2Fre > > > > view.opendev.org%2F%23%2Fc%2F677095%2F&data=02%7C01%7Cadri > anc%40 > > > > mellanox.com%7C88d2a34c865d4c43a8d708d732f02cd3%7Ca652971c7d2e4d9 > ba6 > > > > a4d149256f461b%7C0%7C0%7C637033879438364542&sdata=ShNrkEaJQ > XBgin > > > rzET4YKXf06%2Bd6GL8CuOX5mByuGCA%3D&reserved=0 > > > > [3] > > https://eur03.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwiki > > .centos.org%2FFAQ%2FGeneral%23head- > dcca41e9a3d5ac4c6d900a991990fd11930867d6&data=02%7C01%7Cadria > nc%40mellanox.com%7C88d2a34c865d4c43a8d708d732f02cd3%7Ca652971c7 > d2e4d9ba6a4d149256f461b%7C0%7C0%7C637033879438364542&sdata= > du%2BagCLSO%2FQoPIq%2FKVYY8bmE4uM9op2b%2BgFL6QfSlcc%3D&r > eserved=0 > > > From tpb at dyncloud.net Sun Sep 8 16:33:52 2019 From: tpb at dyncloud.net (Tom Barron) Date: Sun, 8 Sep 2019 12:33:52 -0400 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <7cdee1c1-3541-17cf-5a9b-05a6f872c134@redhat.com> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> <16d00fc100d.104db03dc225299.3598510759501367665@ghanshyammann.com> <20190905113636.qwxa4fjxnju7tmip@barron.net> <7cdee1c1-3541-17cf-5a9b-05a6f872c134@redhat.com> Message-ID: <20190908163352.2autwoapaid6vim5@barron.net> On 06/09/19 13:44 -0400, Zane Bitter wrote: >On 5/09/19 7:36 AM, Tom Barron wrote: >>IIUC Gold Membership in the Foundation provides voting privileges at >>a cost of $50-200K/year and Corporate Sponsorship provides these >>plus various marketing benefits at a cost of $10-25K/year.  So far >>as I can tell there is not a requirement of a commitment of >>contributors and maintainers with the exception of the (currently >>closed) Platinum Membership, which costs $500K/year and requires at >>least 2 FTE equivalents contributing to OpenStack. > >Even this incredibly minimal requirement was famously not met for >years by one platinum member, and a (different) platinum member was >accepted without ever having contributed upstream in the past or >apparently ever intending to in the future. > >What I'm saying is that if this a the mechanism we want to use to >drive contributions, I can tell you now how it's gonna work out. I expect that you are right but if anyone has references to past communications between TC and Foundation about participation requirements or expectations for Members and Sponsors I'd appreciate pointers to these. (By analogy, it's helpful to know who has made commitments to the Paris Agreement [1], who has not, and actual track records even if one is not convinced that the agreement is going to work out.) [1] https://en.wikipedia.org/wiki/Paris_Agreement > >The question we should be asking ourselves is why companies see value >in being sponsors of the foundation but not in contributing upstream, >and how we convince them of the value of the latter. Participating companies are complex organizations whose decision makers have a mix of motives and goals, but functionally I think the classic tragedy of the commons model fits pretty well. 
It may be worth $50-500/K per year to foster the perception that one is a supporter or contributor to OpenStack, and to get the various marketing advantages that come along, even if one doesn't actively contribute to or maintain the software or community beyond that. > >One initiative the TC started on this front is this: > >https://governance.openstack.org/tc/reference/upstream-investment-opportunities/index.html > >(BTW we could use help in converting the outdated Help Most Wanted >entries to this format. Volunteers welcome.) Reframing "Help Wanted" as "Investment Opportunities" is IMO a great idea. There were seven entries for 2018 and there is one for 2019. Did the other six get done or does the help solicited amount to submitting governance reviews like the one you did for Glance [2] for the remaining 2018 items? [2] https://review.opendev.org/#/c/668054/ From mthode at mthode.org Sun Sep 8 18:21:57 2019 From: mthode at mthode.org (Matthew Thode) Date: Sun, 8 Sep 2019 13:21:57 -0500 Subject: [tc][neutron] Supported Linux distributions and their kernel In-Reply-To: References: Message-ID: <20190908182157.2bf7gbdxifzj4zew@mthode.org> On 19-09-05 15:10:17, Adrian Chiris wrote: > Greetings, > I was wondering what is the guideline in regards to which kernels are supported by OpenStack in the various Linux distributions. > > Looking at [1], Taking for example latest CentOS major (7): > Every "minor" version is released with a different kernel version, > the oldest being released in 2014 (CentOS 7.0, kernel 3.10.0-123) and the newest released in 2018 (CentOS 7.6, kernel 3.10.0-957) > > While I understand that OpenStack projects are expected to support all CentOS 7.x releases. > Does the same applies for the kernels they originally came out with? > > The reason I'm asking, is because I was working on doing some cleanup in neutron [2] for a workaround introduced because of an old kernel bug, > It is unclear to me if it is safe to introduce this change. > > [1] https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions > [2] https://review.opendev.org/#/c/677095/ > > Thanks, > Adrian. > For kernel support the way we (gentoo) do it (downstream) is to have checks to make sure the running kernel has the needed modules enabled (either statically or as a module). See the linked ebuild for our syntax (it basically checks /proc/config.gz though). https://github.com/gentoo/gentoo/blob/master/net-misc/openvswitch/openvswitch-2.11.1-r1.ebuild#L39-L54 -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Sun Sep 8 22:00:30 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 8 Sep 2019 22:00:30 +0000 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <20190908163352.2autwoapaid6vim5@barron.net> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> <16d00fc100d.104db03dc225299.3598510759501367665@ghanshyammann.com> <20190905113636.qwxa4fjxnju7tmip@barron.net> <7cdee1c1-3541-17cf-5a9b-05a6f872c134@redhat.com> <20190908163352.2autwoapaid6vim5@barron.net> Message-ID: <20190908220029.wx7jaot6rnutmok2@yuggoth.org> On 2019-09-08 12:33:52 -0400 (-0400), Tom Barron wrote: [...] > There were seven entries for 2018 and there is one for 2019. 
Did > the other six get done Not that, unfortunately, as far as I know. > or does the help solicited amount to submitting governance reviews > like the one you did for Glance [2] for the remaining 2018 items? > > [2] https://review.opendev.org/#/c/668054/ Yes, I believe that's what's still needed. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From Arkady.Kanevsky at dell.com Mon Sep 9 01:52:28 2019 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Mon, 9 Sep 2019 01:52:28 +0000 Subject: Thank you Stackers for five amazing years! In-Reply-To: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> References: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> Message-ID: <9bf6497d2e65429bbe83220a28a3c146@AUSX13MPS308.AMER.DELL.COM> Chris, Thank you so much for all the great help. Thanks, Arkady -----Original Message----- From: Chris Hoge Sent: Wednesday, September 4, 2019 11:24 AM To: OpenStack Discuss Subject: Thank you Stackers for five amazing years! [EXTERNAL EMAIL] Hi everyone, After more than nine years working in cloud computing and on OpenStack, I've decided that it is time for a change and will be moving on from the OpenStack Foundation. For the last five years I've had the honor of helping to support this vibrant community, and I'm going to deeply miss being a part of it. OpenStack has been a central part of my life for so long that it's hard to imagine a work life without it. I'm proud to have helped in some small way to create a lasting project and community that has, and will continue to, transform how infrastructure is managed. September 12 will officially be my last day with the OpenStack Foundation. As I make the move away from my responsibilities, I'll be working with community members to help ensure continuity of my efforts. Thank you to everyone for building such an incredible community filled with talented, smart, funny, and kind people. You've built something special here, and we're all better for it. I'll still be involved with open source. If you ever want to get in touch, be it with questions about work I've been involved with or to talk about some exciting new tech or to just catch up over a tasty meal, I'm just a message away in all the usual places. Sincerely, Chris chris at hogepodge.com Twitter/IRC/everywhere else: @hogepodge From andre at florath.net Mon Sep 9 05:40:43 2019 From: andre at florath.net (Andreas Florath) Date: Mon, 09 Sep 2019 07:40:43 +0200 Subject: [heat] Resource handling in Heat stacks In-Reply-To: References: <0f3f727581dc68f4f1ab26ed2ef47686811dbe07.camel@florath.net> Message-ID: <942fbd4b9e95cfa7049b61b2530265a2efa17a4a.camel@florath.net> On Fri, 2019-09-06 at 15:26 -0400, Zane Bitter wrote: > On 4/09/19 3:51 AM, Andreas Florath wrote: > > Many thanks! Works like a charm! > > > > Suggestion: document default value of 'delete_on_termination'. 😉 > > Patches accepted 😉 https://review.opendev.org/#/c/680912/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykarel at redhat.com Mon Sep 9 07:23:26 2019 From: ykarel at redhat.com (Yatin Karel) Date: Mon, 9 Sep 2019 12:53:26 +0530 Subject: [infra] Re: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? 
In-Reply-To: <20190903192248.b2mqozqobsxqgj7e@yuggoth.org> References: <20190814192440.GA3048@sm-workstation> <20190903190337.GA14785@sm-workstation> <20190903192248.b2mqozqobsxqgj7e@yuggoth.org> Message-ID: On Wed, Sep 4, 2019 at 12:57 AM Jeremy Stanley wrote: > > On 2019-09-03 14:03:37 -0500 (-0500), Sean McGinnis wrote: > [...] > > The release automation can only create branches, not remove them. > > That is something the infra team would need to do. > > > > I can't recall how this was handled in the past. Maybe someone > > from infra can shed some light on how EOL'ing stable branches > > should be handled for the no longer needed stable/* branches. > > We've done it different ways. Sometimes it's been someone from the > OpenDev/Infra sysadmins who volunteers to just delete the list of > branches requested, but more recently for large batches related to > EOL work we've temporarily elevated permissions for a member of the > Stable Branch (now Extended Maintenance SIG?) or Release teams. > -- Thanks Jeremy, Sean for all the information. Can someone from Release or Infra Team can do the needful of removing stable/ocata and stable/pike branch for TripleO projects being EOLed for pike/ocata in https://review.opendev.org/#/c/677478/ and https://review.opendev.org/#/c/678154/. > Jeremy Stanley Thanks and Regards Yatin Karel From renat.akhmerov at gmail.com Mon Sep 9 07:53:45 2019 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Mon, 9 Sep 2019 14:53:45 +0700 Subject: [mistral] Publish field in workflow tasks In-Reply-To: References: Message-ID: <91b6f2db-9b82-4c6c-8dde-e4f6519cc08d@Spark> Ali, I’m for the option 2.a because it’s not so difficult to implement but it’ll be the best effort to handle a situation more gracefully if someone puts “publish” in both places (old syntax and advanced syntax). Over time we’ll deprecate the old “publish” completely though. Thanks Renat Akhmerov @Nokia On 28 Aug 2019, 15:37 +0700, Ali Abdelal , wrote: > Hello, > > Currently, there are two "publish" fields, one in the task(regular "publish")-the scope is branch and not global, > and another under "on-success", “on-error” or “on-complete”. > > In the current behavior, regular "publish" is ignored if there is "publish" under "on-success", “on-error” or “on-complete” [1]. > > For example:- > (a) > version: '2.0' > wf1: >     tasks: >       t1: >         publish: >           res_x1: 1 >         on-success: >           publish: >             branch: >               res_x2: 2 > > (b) > version: '2.0' > wf2: >     tasks: >       t1: >         publish: >           res_x1: 1 > > "res_x1" won't be published in (a), but it will in (b). > > > We can either:- > > 1) Invalidate such syntax. > 2) Merge the two publishes together and if there are duplicate keys, there are two options:- >    a) What takes priority is what's in publish under "on-success" or “on-error” or “on-complete. >    b) Not allow having a duplicate. > > > What is your opinion? > And please tell us if you have other suggestions. > > [1] https://bugs.launchpad.net/mistral/+bug/1791449 -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack-dev at storpool.com Mon Sep 9 08:22:58 2019 From: openstack-dev at storpool.com (Peter Penchev) Date: Mon, 9 Sep 2019 11:22:58 +0300 Subject: [devstack][qa][python3] "also install the Python 2 dev library" - still needed? 
Message-ID: Hi, When devstack's `setup_dev_lib` function is invoked and USE_PYTHON3 has been specified, this function tries to also install the development library for Python 2.x, I guess just in case some package has not declared proper Python 3 support or something. It then proceeds to install the Python 3 version of the library and all its dependencies. Unfortunately there is a problem with that, and specifically with script files installed in the system's executable files directory, e.g. /usr/local/bin. The problem appears when some Python library has already been installed for Python 3 (and has installed its script files), but is now installed for Python 2 (overwriting the script files) and is then not forcefully reinstalled for Python 3, since it is already present. Thus, the script files are last modified by the Python 2 library installation and they have a hashbang line saying `python2.x` - so if something then tries to execute them, they will run and use modules and libraries for Python 2 only. We experienced this problem when running the cinderlib tests from Cinder's `playbooks/cinderlib-run.yaml` file - it finds a unit2 executable (installed by the unittest2 library) and runs it, hoping that unit2 will be able to discover and collect the cinderlib tests and load the cinderlib modules. However, since unittest2 has last been installed as a Python 2 library, unit2 runs with Python 2 and fails to locate the cinderlib modules. (Yes, we know that there are other ways to run the cinderlib tests; this message is about the problem exposed by this way of running them) The obvious solution would be to instruct the Python 2 pip to not install script (or other shared) files at all; unfortunately, https://github.com/pypa/pip/issues/3980 ("Option to exclude scripts on install"), detailing a very similar use case ("need it installed for Python 2, but want to use it with Python 3") has been open for almost exactly three years now with no progress. I wonder if I could try to help, but even if this issue is resolved, there will be some time before OpenStack can actually depend on a recent enough version of pip. A horrible workaround would be to find the binary directory before installing the Python 2 library (using something like `pip3.7 show somepackage` and then running some heuristics on the "Location" field), tar'ing it up and then restoring it... but I don't know if I even want to think about this. Another possible way forward would be to consider whether we still want the Python 2 libraries installed - is OpenStack's Python 3 transition reached a far enough stage to assume that any projects that still require Python 2 *and* fail to declare their Python 2 dependencies properly are buggy? To be honest, this seems the most reasonable path for me - drop the "also install the Python 2 libs" code and see what happens. I could try to make this change in a couple of test runs in our third-party Cinder CI system and see if something breaks. Here is a breakdown of what happens, with links to the log of the StorPool third-party CI system for Cinder: https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_43_55_691087 `stack.sh` invokes `pip_install` for `os-testr` https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_43_56_030839 `pip_install` sees that we want a Python 3 installation and invokes `pip3.7` to install os-testr. 
https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_43_59_869198 `pip3.7` wants to install `unittest2` https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_44_15_851337 `pip3.7` has installed `unittest2` - now `/usr/local/bin/unit2` has a hashbang line saying `python3.7` Now this is where it gets, uhm, interesting: https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_45_59_708737 `setup_dev_lib` is invoked for `os-brick` https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_45_59_723318 `setup_dev_lib`, seeing that we really want a Python 3 installation, decides to install `os-brick` for Python 2 just in case. https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_46_00_661346 `pip2.7` is invoked to install `os-brick` and its dependencies. https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_46_25_209365 `pip2.7` decides it wants to install `unittest2`, too. https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_47_20_924559 `pip2.7` has installed `unittest2`, and now `/usr/local/bin/unit2` has a hasbang line saying `python2.7` https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_47_21_591114 `setup_dev_lib` turns the Python 3 flag back on. https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_47_22_659564 `pip3.7` is invoked to install `os-brick` https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_47_36_759583 `pip3.7` decides (correctly) that it has already installed `unittest2`, so (only partially correctly) it does not need to install it again. Thus `/usr/local/bin/unit2` is left with a hashbang line saying `python2.7`. Thanks for reading this far, I guess :) G'luck, Peter -------------- next part -------------- An HTML attachment was scrubbed... URL: From dougal at redhat.com Mon Sep 9 08:32:26 2019 From: dougal at redhat.com (Dougal Matthews) Date: Mon, 9 Sep 2019 09:32:26 +0100 Subject: Invite Oleg Ovcharuk to join the Mistral Core Team In-Reply-To: References: Message-ID: +1, seems like a good addition to the team! On Thu, 5 Sep 2019 at 05:35, Renat Akhmerov wrote: > Andras, > > You just went one step ahead of me! I was going to promote Oleg in the end > of this week :) I’m glad that we coincided at this. Thanks! I’m for it with > my both hands! > > > Renat Akhmerov > @Nokia > On 4 Sep 2019, 17:33 +0700, András Kövi , wrote: > > I would like to invite Oleg Ovcharuk to join the > Mistral Core Team. Oleg has been a very active and enthusiastic contributor > to the project. He has definitely earned his way into our community. > > Thank you, > Andras > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From merlin.blom at bertelsmann.de Mon Sep 9 09:32:22 2019 From: merlin.blom at bertelsmann.de (Blom, Merlin, NMU-OI) Date: Mon, 9 Sep 2019 09:32:22 +0000 Subject: AW: [metrics] [telemetry] [stein] cpu_util In-Reply-To: <9058e09f-a5ce-4db9-5077-1217ece1695a@gmail.com> References: <9058e09f-a5ce-4db9-5077-1217ece1695a@gmail.com> Message-ID: >From Witek Bedyk on Re: [aodh] [heat] Stein: How to create alarms based on rate metrics like CPU utilization? Fr 16.08.2019 17:11 ' Hi all, You can also collect `cpu.utilization_perc` metric with Monasca and trigger Heat auto-scaling as we demonstrated in the hands-on workshop at the last Summit in Denver. Here the Heat template we've used [1]. You can find the workshop material here [2]. Cheers Witek [1] https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_sjamgade_monasca-2Dautoscaling_blob_master_final_autoscaling.yaml&d=DwICaQ&c=vo2ie5TPcLdcgWuLVH4y8lsbGPqIayH3XbK3gK82Oco&r=hTUN4-Trlb-8Fh11dR6m5VD1uYA15z7v9WL8kYigkr8&m=KDzBi0a41i4kfZG7LrvMjx6tKJCAZHM71I9snAHtDbU&s=wZLSXjvqYiPmMVbz8fgezCE1iwxZcQXRe3zZZW1JBFo&e= [2] https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_sjamgade_monasca-2Dautoscaling&d=DwICaQ&c=vo2ie5TPcLdcgWuLVH4y8lsbGPqIayH3XbK3gK82Oco&r=hTUN4-Trlb-8Fh11dR6m5VD1uYA15z7v9WL8kYigkr8&m=KDzBi0a41i4kfZG7LrvMjx6tKJCAZHM71I9snAHtDbU&s=M1D9BENrKX7HD43HfcYFuB8vdP9fKgAuGOTXtRq5aZI&e= ' Cheers Merlin -----Ursprüngliche Nachricht----- Von: Budai Laszlo Gesendet: Freitag, 16. August 2019 18:10 An: OpenStack Discuss Betreff: [metrics] [telemetry] [stein] cpu_util Hello all, the release release announce of ceilometer rocky is deprecating the cpu_util and *.rate metrics "* cpu_util and *.rate meters are deprecated and will be removed in future release in favor of the Gnocchi rate calculation equivalent." so we don't have them in Stein. Can you direct me to some document that describes how to achieve these with Gnocchi rate calculation? Thank you, Laszlo -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5195 bytes Desc: not available URL: From cdent+os at anticdent.org Mon Sep 9 09:44:39 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 9 Sep 2019 10:44:39 +0100 (BST) Subject: [placement] "now" worklist Message-ID: As we near the end of a cycle it can be a bit unclear what tasks are relevant or a priority for the placement project. I've made a worklist in storyboard https://storyboard.openstack.org/#!/worklist/754 called "placement now". It gathers stories from the placement group (placement, osc-placement, os-resource-classes, os-traits) that I've tagged with 'pnow' to mean "these are the things we should be concerned with in the near future". This helps to take off the radar anything from the following groups: * Features that will not be considered this cycle. * Anything related to osc-placement (which has already seen its likely last release for this cycle) This leaves placement (the service) bug fixes, and docs. Not yet there is an item for "documenting the new nested provider features", mostly because the story for that has not solidified. Anything that is currently on that list we should finish before the end of the cycle. I hope that having a focused list can help drive that. Thanks. 
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From tobias.urdin at binero.se Mon Sep 9 10:06:30 2019 From: tobias.urdin at binero.se (Tobias Urdin) Date: Mon, 9 Sep 2019 12:06:30 +0200 Subject: [nova][telemetry] does Telemetry still use the Nova server usage audit log API? In-Reply-To: References: <2c376a85-1dc0-03cc-bdb4-ba8b9f4edb70@gmail.com> Message-ID: I don't think ceilometer uses the compute.instance.exists event by default somewhere or atleast I cannot find a reference to it. What I do know however is that we have a billing system that polls the os-simple-tenant-usage API so if that is unaffected by the possible deprecation of instance_usage_audit then I don't think we use it. Best regards Tobias On 9/7/19 5:20 PM, Tim Bell wrote: > On 9/7/19 3:09 PM, Matt Riedemann wrote: >> On 9/6/2019 6:59 PM, melanie witt wrote: >>> * If Telemetry is no longer using the server usage audit log API, we >>> deprecate it in Nova and notify deployment tools to stop setting >>> [DEFAULT]/instance_usage_audit = true to prevent further creation of >>> nova.task_log records and recommend manual cleanup by users >> Deprecating the API would just be a signal to not develop new tools >> based on it since it's effectively unmaintained but that doesn't mean >> we can remove it since there could be non-Telemtry tools in the wild >> using it that we'd never hear about. You might not be suggesting an >> eventual path to removal of the API, I'm just bringing that part up >> since I'm sure people are thinking it. >> > Tools like cASO (https://github.com/IFCA/caso) use this API. This is > used by many of the EGI Federated Cloud sites to do accounting per VM > (https://egi-federated-cloud-integration.readthedocs.io/en/latest/openstack.html) > > >> I'm also assuming that API isn't multi-cell aware, meaning it won't >> traverse cells pulling records like listing servers or migration >> resources. > Given scaling issues with the current Telemetry implementation, I > suspect alternative approaches have had to be developed in any case. > CERN uses libvirt data extraction. >> As for the config option to run the periodic task that creates these >> records, that's disabled by default so deployment tools shouldn't be >> enabling it by default - but maybe some do if they are configured to >> deploy ceilometer. >> >>> or >>> >>> * If Telemetry is still using the server usage audit log API, we >>> create a new 'nova-manage db purge_task_log --before ' (or >>> similar) command that will hard delete nova.task_log records before a >>> specified date or all if --before is not specified >> If you can't remove the API then this is probably something that needs >> to happen regardless, though we likely won't know if anyone uses it. >> I'd consider it pretty low priority given how extremely latent this is >> and would expect anyone that's been running with this enabled in >> production has developed DB purge scripts for this table long ago. >> > From tobias.urdin at binero.se Mon Sep 9 10:15:00 2019 From: tobias.urdin at binero.se (Tobias Urdin) Date: Mon, 9 Sep 2019 12:15:00 +0200 Subject: AW: [metrics] [telemetry] [stein] cpu_util In-Reply-To: References: <9058e09f-a5ce-4db9-5077-1217ece1695a@gmail.com> Message-ID: The cpu_util is a pain-point for us as well, we will unfortunately need to add that metric back to keep backward compatibility to our customers. 
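For reference, the Gnocchi-side equivalent is a rate aggregation over the raw
cumulative `cpu` metric rather than a stored cpu_util value. A rough sketch
(option names quoted from memory -- please double-check them against the
Gnocchi documentation for your release; <cpu-metric-uuid> is a placeholder):

  # `cpu` is cumulative CPU time in nanoseconds; its rate over wall-clock
  # time, divided by 1e9 and by the number of vCPUs, then multiplied by 100,
  # is roughly the old cpu_util percentage. The metric's archive policy has
  # to include the rate:mean aggregate for this to work.
  gnocchi measures show --aggregation rate:mean <cpu-metric-uuid>
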
Best regards Tobias On 9/9/19 11:37 AM, Blom, Merlin, NMU-OI wrote: > From Witek Bedyk on Re: [aodh] [heat] Stein: How to create alarms based on rate metrics like CPU utilization? > Fr 16.08.2019 17:11 > ' > Hi all, > > You can also collect `cpu.utilization_perc` metric with Monasca and trigger Heat auto-scaling as we demonstrated in the hands-on workshop at the last Summit in Denver. > > Here the Heat template we've used [1]. > You can find the workshop material here [2]. > > Cheers > Witek > > [1] > https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_sjamgade_monasca-2Dautoscaling_blob_master_final_autoscaling.yaml&d=DwICaQ&c=vo2ie5TPcLdcgWuLVH4y8lsbGPqIayH3XbK3gK82Oco&r=hTUN4-Trlb-8Fh11dR6m5VD1uYA15z7v9WL8kYigkr8&m=KDzBi0a41i4kfZG7LrvMjx6tKJCAZHM71I9snAHtDbU&s=wZLSXjvqYiPmMVbz8fgezCE1iwxZcQXRe3zZZW1JBFo&e= > [2] https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_sjamgade_monasca-2Dautoscaling&d=DwICaQ&c=vo2ie5TPcLdcgWuLVH4y8lsbGPqIayH3XbK3gK82Oco&r=hTUN4-Trlb-8Fh11dR6m5VD1uYA15z7v9WL8kYigkr8&m=KDzBi0a41i4kfZG7LrvMjx6tKJCAZHM71I9snAHtDbU&s=M1D9BENrKX7HD43HfcYFuB8vdP9fKgAuGOTXtRq5aZI&e= > ' > > Cheers > Merlin > > -----Ursprüngliche Nachricht----- > Von: Budai Laszlo > Gesendet: Freitag, 16. August 2019 18:10 > An: OpenStack Discuss > Betreff: [metrics] [telemetry] [stein] cpu_util > > Hello all, > > the release release announce of ceilometer rocky is deprecating the cpu_util and *.rate metrics > "* cpu_util and *.rate meters are deprecated and will be removed in > future release in favor of the Gnocchi rate calculation equivalent." > > so we don't have them in Stein. Can you direct me to some document that describes how to achieve these with Gnocchi rate calculation? > > Thank you, > Laszlo > From thierry at openstack.org Mon Sep 9 10:30:56 2019 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 9 Sep 2019 12:30:56 +0200 Subject: [i18n][tc] The future of I18n In-Reply-To: <817c9cf8-ca12-146b-af49-3f4345402888@gmail.com> References: <0ffa02d3-fef5-8fc3-1925-5c663b6c967d@openstack.org> <20190906133759.obgszlvqexgam5n3@csail.mit.edu> <817c9cf8-ca12-146b-af49-3f4345402888@gmail.com> Message-ID: <462cad35-832e-c5b0-8971-a97f386f78e0@openstack.org> Ian Y. Choi wrote: >> On Fri, Sep 06, 2019 at 11:36:38AM +0200, Thierry Carrez wrote: >> :The I18n project team had no PTL candidates for Ussuri, so the TC needs to >> :decide what to do with it. It just happens that Ian kindly volunteered to be >> :an election official, and therefore could not technically run for I18n PTL. >> :So if Ian is still up for taking it, we could just go and appoint him. > > I love I18n, and I could not imagine OpenStack world without I18n - I > would like to take I18n PTL role for Ussuari cycle if there is no > objection. Great! I posted a review to suggest that the TC appoints you at: https://review.opendev.org/680968 >> :That said, I18n evolved a lot, to the point where it might fit the SIG >> :profile better than the project team profile. >> [...] > > IMHO, since it seems that I18n team's release activities [5] are rather > stable, from the perspective, I think staying I18n team as SIG makes > sense, but please kindly consider the followings: > > - Translators who have contributed translations to official OpenStack > projects are currendly regarded as ATC and APC of the I18n project. >   It would be great if OpenStack TC and official project teams regard > those translation contribution as ATC and APC of corresponding official > projects, if I18n team stays as SIG. 
Note that SIG members are considered ATCs (just like project team members) and can vote in the TC election... so there would be no difference really (except I18n SIG members would no longer have to formally vote for a PTL). > [...] > - Another my brief understanding on the difference between as an > official team and as SIG from the perspective of Four Opens is that SIGs > and working groups seems that they have some flexibility using > non-opensource tools for communication. >   For example, me, as PTL currently encourage all the translators to > come to the tools official teams use such as IRC, mailing lists, and > Launchpad (note: I18n team has not migrated from Launchpad to > Storyboard) - I like to use them and >   I strongly believe that using such tools can assure that the team is > following Four Opens well. But sometimes I encounter some reality - > local language teams prefer to use their preferred communication protocols. >   I might need to think more how I18n team as SIG communicates well > with members, but I think the team members might want to more find out > how to better communicate with language teams (e.g., using Hangout, > Slack, and so on from the feedback) >   , and try to use better communication tools which might be > comfortable to translators who have little background on development. Yes, it's true that SIGs have more freedom in how they operate, and so the diversity of communication tools used by the translators might be another reason the I18n team fits the SIG profile at this point better than the Project Team profile. > Note that I have not discussed the details with team members - I am > still open with my thoughts, would like to more listen to opinions from > the team members, and originally wanted to expand the discussion with > such perspective during upcoming PTG > in Shanghai with more Chinese translators. > And dear OpenStackers including I18n team members & translators: please > kindly share your sincere thoughts. Certainly, the idea is not to rush anything -- the team will continue to operate as a project team for the time being. But if the team agrees, transitioning to a SIG is pretty cheap, and I feel like the SIG format fits the group better at this point (and gives extra flexibility)... so it is one thing to consider :) -- Thierry Carrez (ttx) From smooney at redhat.com Mon Sep 9 11:54:58 2019 From: smooney at redhat.com (Sean Mooney) Date: Mon, 09 Sep 2019 12:54:58 +0100 Subject: [devstack][qa][python3] "also install the Python 2 dev library" - still needed? In-Reply-To: References: Message-ID: <4402fa3186eff76382fa0b9171c4096db1d94d94.camel@redhat.com> On Mon, 2019-09-09 at 11:22 +0300, Peter Penchev wrote: > Hi, > > When devstack's `setup_dev_lib` function is invoked and USE_PYTHON3 has > been specified, this function tries to also install the development library > for Python 2.x, I guess just in case some package has not declared proper > Python 3 support or something. It then proceeds to install the Python 3 > version of the library and all its dependencies. > > Unfortunately there is a problem with that, and specifically with script > files installed in the system's executable files directory, e.g. > /usr/local/bin. The problem appears when some Python library has already > been installed for Python 3 (and has installed its script files), but is > now installed for Python 2 (overwriting the script files) and is then not > forcefully reinstalled for Python 3, since it is already present. 
Thus, the
> script files are last modified by the Python 2 library installation and
> they have a hashbang line saying `python2.x` - so if something then tries
> to execute them, they will run and use modules and libraries for Python 2
> only.
Yes, this is a long-standing issue. We discovered it a year ago but it was never fixed.
In Ussuri I guess one of the first changes to devstack to make it Python 3 only
will be to change that behavior. I'm not sure if we will be able to change it before then.

Whenever you use LIBS_FROM_GIT in your local.conf on a Python 3 install it will install
them twice, both with Python 2 and Python 3. I hope more distros elect to symlink /usr/bin/python to Python 3;
some distros have chosen to do that on systems that are Python 3 only and I believe that is the correct
approach.
When I encountered this it was always resulting in the script header being #!/usr/bin/python with no
version suffix; I guess on a system where that points to Python 3 the Python 2.7 install might write python2.7
there instead?
>
> We experienced this problem when running the cinderlib tests from Cinder's
> `playbooks/cinderlib-run.yaml` file - it finds a unit2 executable
> (installed by the unittest2 library) and runs it, hoping that unit2 will be
> able to discover and collect the cinderlib tests and load the cinderlib
> modules. However, since unittest2 has last been installed as a Python 2
> library, unit2 runs with Python 2 and fails to locate the cinderlib
> modules. (Yes, we know that there are other ways to run the cinderlib
> tests; this message is about the problem exposed by this way of running
> them)
>
> The obvious solution would be to instruct the Python 2 pip to not install
> script (or other shared) files at all; unfortunately,
> https://github.com/pypa/pip/issues/3980 ("Option to exclude scripts on
> install"), detailing a very similar use case ("need it installed for Python
> 2, but want to use it with Python 3") has been open for almost exactly
> three years now with no progress. I wonder if I could try to help, but even
> if this issue is resolved, there will be some time before OpenStack can
> actually depend on a recent enough version of pip.
Well, the obvious solution is to stop doing this entirely.
It was added as a hack to ensure that if you use LIBS_FROM_GIT in your local.conf those
libs would always be installed from the git checkout that you specified in your local.conf.
For Train we are technically requiring all projects to run under Python 3,
so we could remove
the fallback mechanism of installing under Python 2. It was there in case a service installed
under Python 2, to ensure it used the same version of the lib and did not use a version from
PyPI instead. I wanted to stop doing this last year but we could not because not all projects
could run under Python 3. But now that they should be able to, we don't need this hack anymore.
We should change it to respect the Python version you have selected. That will speed
up stacking as we won't have to install everything twice and fix the issue you have encountered.
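As a side note for anyone bitten by this today, a crude manual check/repair
(just a local sketch, nothing devstack does for you) is to look at the shebang
of the affected console script and force-reinstall the Python 3 copy so it
gets rewritten:

  # see which interpreter the console script was last installed for
  head -n1 /usr/local/bin/unit2

  # if it says python2.7, reinstalling the py3 package rewrites the shebang;
  # --no-deps keeps pip from touching anything else
  sudo python3 -m pip install --force-reinstall --no-deps unittest2
  head -n1 /usr/local/bin/unit2   # should now point at python3
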
> > Another possible way forward would be to consider whether we still want the > Python 2 libraries installed - is OpenStack's Python 3 transition reached a > far enough stage to assume that any projects that still require Python 2 > *and* fail to declare their Python 2 dependencies properly are buggy? To be > honest, this seems the most reasonable path for me - drop the "also install > the Python 2 libs" code and see what happens. I could try to make this > change in a couple of test runs in our third-party Cinder CI system and see > if something breaks. > > Here is a breakdown of what happens, with links to the log of the StorPool > third-party CI system for Cinder: > > https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_43_55_691087 > `stack.sh` invokes `pip_install` for `os-testr` > > https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_43_56_030839 > `pip_install` sees that we want a Python 3 installation and invokes > `pip3.7` to install os-testr. > > https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_43_59_869198 > `pip3.7` wants to install `unittest2` > > https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_44_15_851337 > `pip3.7` has installed `unittest2` - now `/usr/local/bin/unit2` has a > hashbang line saying `python3.7` > > Now this is where it gets, uhm, interesting: > > https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_45_59_708737 > `setup_dev_lib` is invoked for `os-brick` > > https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_45_59_723318 > `setup_dev_lib`, seeing that we really want a Python 3 installation, > decides to install `os-brick` for Python 2 just in case. > > https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_46_00_661346 > `pip2.7` is invoked to install `os-brick` and its dependencies. > > https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_46_25_209365 > `pip2.7` decides it wants to install `unittest2`, too. > > https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_47_20_924559 > `pip2.7` has installed `unittest2`, and now `/usr/local/bin/unit2` has a > hasbang line saying `python2.7` > > https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_47_21_591114 > `setup_dev_lib` turns the Python 3 flag back on. > > https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_47_22_659564 > `pip3.7` is invoked to install `os-brick` > > https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_47_36_759583 > `pip3.7` decides (correctly) that it has already installed `unittest2`, so > (only partially correctly) it does not need to install it again. > > Thus `/usr/local/bin/unit2` is left with a hashbang line saying `python2.7`. 
> > Thanks for reading this far, I guess :) > > G'luck, > Peter From mnaser at vexxhost.com Mon Sep 9 12:05:15 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 9 Sep 2019 08:05:15 -0400 Subject: [winstackers][powervmstackers][tc] removing winstackers and PowerVMStackers from TC governance In-Reply-To: References: <0CCB5020-D524-4304-8682-A015AEDB7C50@doughellmann.com> <466A5D87-5936-4F05-91D9-36ACD680FFA4@doughellmann.com> <31bc5922-3480-2fb6-dade-f76dab1e9013@fried.cc> Message-ID: On Fri, Sep 6, 2019 at 5:10 AM Thierry Carrez wrote: > > Divya K Konoor wrote: > > Missing the deadline for a PTL nomination cannot be the reason for > > removing governance. > > I agree with that, but missing the deadline twice in a row is certainly > a sign of some disconnect with the rest of the OpenStack community. > Project teams require a minimal amount of reactivity and presence, so it > is fair to question whether PowerVMStackers should continue as a project > team in the future. > > > PowerVMStackers continue to be an active project > > and would want to be continued to be governed under OpenStack. For PTL, > > an eligible candidate can still be appointed . > > There is another option, to stay under OpenStack governance but without > the constraints of a full project team: PowerVMStackers could be made an > OpenStack SIG. > > I already proposed that 6 months ago (last time there was no PTL nominee > for the team), on the grounds that interest in PowerVM was clearly a > special interest, and a SIG might be a better way to regroup people > interested in supporting PowerVM in OpenStack. > > The objection back then was that PowerVMStackers maintained a number of > PowerVM-related code, plugins and drivers that should ideally be adopted > by their consuming project teams (nova, neutron, ceilometer), and that > making it a SIG would endanger that adoption process. > > I still think it makes sense to consider PowerVMStackers as a Special > Interest Group. As long as the PowerVM-related code is not adopted by > the consuming projects, it is arguably a special interest, and not a > completely-integrated part of OpenStack components. > > The only difference in being a SIG (compared to being a project team) > would be to reduce the amount of mandatory tasks (like designating a PTL > every 6 months). You would still be able to own repositories, get room > at OpenStack events, vote on TC election... > > It would seem to be the best solution in your case. I echo all of this and I think at this point, it's better for the deliverables to be within a SIG. > -- > Thierry Carrez (ttx) > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From mnaser at vexxhost.com Mon Sep 9 12:08:01 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 9 Sep 2019 08:08:01 -0400 Subject: [all][ptl][tc][docs] Develope a code-review practices document In-Reply-To: References: Message-ID: On Fri, Sep 6, 2019 at 12:11 AM Trinh Nguyen wrote: > > Hi all, > > I find it's hard sometimes to handle situations in code-review, something likes solving conflicts while not upsetting developers, or suggesting a change to a patchset while still encouraging the committer, etc. I know there are already documents that guide us on how to do a code-review [2] and even projects develope their own procedures but I find they're more about technical issues rather than human communication. 
Currently reading Google's code-review practices [1] give me some inspiration to develop more human-centric code-review guidelines for OpenStack projects. IMO, it could be a great way to help project teams develop stronger relationship as well as encouraging newcomers. When the document is finalized, I then encourage PTLs to refer to that document in the project's docs. > > Let me know what you think and I will put a patchset after one or two weeks. I am very supportive of this and I agree with you on this. I'd be happy to see and go over what you are looking to propose! > [1] https://google.github.io/eng-practices/review/ > [2] https://docs.openstack.org/project-team-guide/review-the-openstack-way.html > [3] https://docs.openstack.org/doc-contrib-guide/docs-review.html > [4] https://docs.openstack.org/nova/rocky/contributor/code-review.html > [5] https://docs.openstack.org/neutron/pike/contributor/policies/code-reviews.html > > > Bests, > > -- > Trinh Nguyen > www.edlab.xyz > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From mnaser at vexxhost.com Mon Sep 9 12:09:23 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 9 Sep 2019 08:09:23 -0400 Subject: [ansible-sig] weekly meetings In-Reply-To: References: <7922685b-b7dd-3599-1fec-01c3cb4ce9bc@googlemail.com> Message-ID: Hi all, Sorry about the lack of details :) It will be held in #openstack-ansible-sig on Freenode. Thanks, Mohammed On Wed, Sep 4, 2019 at 9:24 PM Carter, Kevin wrote: > > Thanks Mohammed, I've added it to my calendar and look forward to getting started. > > -- > > Kevin Carter > IRC: Cloudnull > > > On Wed, Sep 4, 2019 at 8:17 PM Wesley Peng wrote: >> >> Hi >> >> on 2019/9/5 0:20, Mohammed Naser wrote: >> > For those interested in getting involved, the ansible-sig meetings >> > will be held weekly on Fridays at 2:00 pm UTC starting next week (13 >> > September 2019). >> > >> > Looking forward to discussing details and ideas with all of you! >> >> Is it a onsite meeting? where is the location? > > > This is a good question, I assume the meeting will be on IRC, on freenode, but what channel will we be using? #openstack-ansible-sig ? > >> >> >> thanks. >> -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From mriedemos at gmail.com Mon Sep 9 13:17:46 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 9 Sep 2019 08:17:46 -0500 Subject: [nova][telemetry] does Telemetry still use the Nova server usage audit log API? In-Reply-To: References: <2c376a85-1dc0-03cc-bdb4-ba8b9f4edb70@gmail.com> Message-ID: On 9/9/2019 5:06 AM, Tobias Urdin wrote: > What I do know however is that we have a billing system that polls the > os-simple-tenant-usage API so > if that is unaffected by the possible deprecation of > instance_usage_audit then I don't think we use it. Different APIs [1][2] so it's not a problem. 
[1] https://docs.openstack.org/api-ref/compute/#usage-reports-os-simple-tenant-usage [2] https://docs.openstack.org/api-ref/compute/#server-usage-audit-log-os-instance-usage-audit-log -- Thanks, Matt From balazs.gibizer at est.tech Mon Sep 9 13:23:19 2019 From: balazs.gibizer at est.tech (=?iso-8859-1?Q?Bal=E1zs_Gibizer?=) Date: Mon, 9 Sep 2019 13:23:19 +0000 Subject: [placement][ptl][tc] Call for Placement PTL position In-Reply-To: References: <16d0205c0b1.b18639584545.7154839133743907603@ghanshyammann.com> <1567771216.28660.0@smtp.office365.com> Message-ID: <1568035395.12646.1@smtp.office365.com> On Fri, Sep 6, 2019 at 3:34 PM, Mohammed Naser wrote: On Fri, Sep 6, 2019 at 8:04 AM Balázs Gibizer > wrote: I guess I'm one of the reluctant people. I think technically I can do it but I don't want to commit to work when I don't see that I will have enough time to do it well. For me this is all about priorities and the amount of work I'm already commited to at the moment. Still I'm open to get tasks delegated to me, like doing the project update in Sanghai. If it's okay with you, would you like to share what are some of the priorities and work that you feel is placed on a PTL which makes you reluctant? PS, by no means I am trying to push for you to be PTL if you're not currently interested, but I want to hear some of the community thoughts about this (and feel free to reply privately) I preceive the PTL role as a person who oversees the project and follows the status of the ongoing features and high severity bugs. A person who organizes Forum and PTG discussions and ensures that the results are documented. A person who tries to improve the overal collaboration in the given project. And I guess there are things on the PTL's plate that I'm not even aware of. This needs time and it needs commitment to have that time available during the whole cycle. I'm in a situation where I constantly feel the lack of time to do my current commitments (e.g. be a good Nova core, be a good Placement core, finish the feature I promised both to the community and internally to my employer.) I think it won't be fair from me to commit to the PTL role when I already see I would not have time to do it properly. On the personal side I guess I also affraid of not having enough skill to delegeta the above PTL related tasks to others. Based on the above my constructive suggestion is to try out that the Placement core team together try to fulfill the PTL's role. I know that for the TC it creates some extra pain as there would be no single point of contact for the Placement project. Cheers, gibi -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Sep 9 14:27:19 2019 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 9 Sep 2019 16:27:19 +0200 Subject: [release][cyborg] os-acc status Message-ID: Hi Cyborgs, One of your deliverables is the os-acc library. It has seen no change over this development cycle and therefore was not released at all in train. We have several options for this library now: 1- It's still very much alive and desired and just has exceptionally not seen much activity during this cycle. We should just cut a stable/train branch from the last release available (0.2.0) and continue in ussuri. 2- It's a valuable library, it just changes extremely rarely. We should make it independent from the release cycle and have it release at its own rhythm. 3- Development has stopped on this, and the library is not useful right now. 
We should retire this deliverable so that we do not build wrong expectations for our users. Please let us know which option fits the current status of os-acc. -- Thierry Carrez (ttx) From fungi at yuggoth.org Mon Sep 9 14:32:58 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 9 Sep 2019 14:32:58 +0000 Subject: [devstack][qa][python3] "also install the Python 2 dev library" - still needed? In-Reply-To: <4402fa3186eff76382fa0b9171c4096db1d94d94.camel@redhat.com> References: <4402fa3186eff76382fa0b9171c4096db1d94d94.camel@redhat.com> Message-ID: <20190909143258.4f32wsvamj666y2m@yuggoth.org> On 2019-09-09 12:54:58 +0100 (+0100), Sean Mooney wrote: [...] > i hope more distros elect to symlink /usr/bin/python to python 3 > some distros have chosen to do that on systems that are python > only and i believe that is the correct approch. I personally hope they don't, and at least my preferred Linux distro is not planning to do that any time in the foreseeable future (if ever). I see python and python3 as distinct programming languages with their own interpreters, and so any distro which by default pretends that its python3 interpreter is a python interpreter (by claiming the unversioned "python" executable name in the system context search path) is simply broken. > when i encountered this it was always resuliting on the script > header being #!/usr/bin/python with no version suffix i gues on a > system where that points to python 3 the python 2.7 install might > write python2.7 there instead? Yes, the correct solution is to update those to #!/usr/bin/python3 because at least some distros are going to cease providing a /usr/bin/python executable at all when they drop their 2.7 packages. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cdent+os at anticdent.org Mon Sep 9 14:33:37 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 9 Sep 2019 15:33:37 +0100 (BST) Subject: [tc] Campaign Question: Treating the Problem, not just the symptoms- Burnout, No Polling, etc In-Reply-To: <20190909132120.aqbv3plus2hp7q6j@pacific.linksys.moosehall> References: <99048F8B-EE87-4A3A-A689-8F05F8EBDBBE@doughellmann.com> <20190906131053.rofnz7zeoudctoif@yuggoth.org> <20190909132120.aqbv3plus2hp7q6j@pacific.linksys.moosehall> Message-ID: On Mon, 9 Sep 2019, Adam Spiers wrote: > Chris Dent wrote: >> On Fri, 6 Sep 2019, Jeremy Stanley wrote: >> >>> I'm disappointed that you don't think the software you're making is >>> open source. I think the software I'm making is open source, and if >>> I didn't I wouldn't be here. > > I wouldn't either. I'd be very worried to live in a world where there > was no serious open source rival to AWS, Azure, GCE etc. (One possible tl;dr of the below is: For at least some people working on OpenStack the more direct and immediate cause and result of their work (whatever their intent) is the enablement of corporate profit (through sales and support) not individual humans using the software.) >From some standpoints I would guess that OpenStack looks and behaves like open source: people work on it collaboratively and the code is available for anyone to change. And I would agree that from that standpoint it is open source, and the four opens are good both in letter and in spirit. I would also agree that the academic and other non-profit use of a OpenStack that Jeremy is very compelling and motivating. 
But the context of much of this thread has been about the experience of
the developers making OpenStack. How they come to be in this situation,
how they manage their work, who they work with, what drives decisions,
etc.

(What follows is a ramble. I apologize for not being able to write
less. But you did ask so here it goes.)

In the context of daily developer experience, things are less clear.
They would be more clear and would feel more like open source if I was
more frequently collaborating in the creation of code with people who
were using OpenStack. But I don't. Most frequently I'm collaborating
with people who instead of using OpenStack are helping to make
something for other people (with whom they have infrequent
collaborative contact) to use OpenStack.

For some people this is not the case. For example, many of the people
who have been deeply involved with OpenStack infra use OpenStack all
the time and also work hard to improve the code of OpenStack. But on a
daily basis that isn't my experience. Nor does it feel like the
experience of most of the people I tend to collaborate with.

Yes, sometimes I will collaborate with someone from CERN to create a
feature, but this is rare. Usually I collaborate with people from
Intel, VMware, Red Hat, and a variety of Telco vendors. Doing a thing
to help an existing customer or hoped for notional customer, both of
whom are abstractions at a distance, not humans.

This isn't a bad thing. Organizations collaborating in any way is
great. But it doesn't _feel_ like "open source" to me. And that feeling
is an important factor (I think) in analyzing the motivations people
experience when working on OpenStack and the choices they make with
regard to how they act in the environment.

As someone who has done what could be called open source since long
before the term was invented, the common failure of corporate patrons
to give maintainability and quality (of product and (critically) the
experience of creating it) sufficient attention is a source of a great
deal of resentment and internal conflict. I am far too conscious of the
necessity to compensate for that failure if I want to feel a sense of
well being with what I'm helping to create (both in terms of product
and the environment it is being created in). That is: I care enough to
try to do what I think is right.

In this thread, and the one that started it, we've put forward the
"maybe we should just chill" as a bit of an antidote to burnout and
overcommitment. While I rationally think that's the right idea,
emotionally it is very hard to do and the source of that difficulty is
this:

OpenStack has constituted itself over the years as the domain of
contributing corporations. Many paid contributors for whom working on
OpenStack is their job. At the same time we have also been very vocal
about being not just open source, but a source of good wisdom (the four
opens) on how to do open source well.

The latter creates a community I want to believe in. A source of pride.
The former creates a conflict of interest, a frequent inability to do
the actually right thing for the long term health of the community. A
source of shame.

Continued pleas to get the corporates to do "open source" well -- that
is with correct attention to:

* developer experience
* maintainability
* architectural integrity
* deeper/closer ties to user engagement and their satisfaction

and thus something akin to "actually open source" -- have fallen on what, if
actions speak louder than words, are deaf ears. This creates a
conundrum.
I've tried a variety of ways out of it. One I'm experimenting with now is realizing that OpenStack really isn't, now, proper open source. And if it is not, then I don't have to care because they don't. > Again I'd be very interested to learn more about your take on what we > can do better. There are two directions to go: Maintain the mode of corporate-contribution-driven development. If this is to be healthy then the corps doing that contribution need to invest far more heavily in general, but especially in the items I've listed above at "correct attention". This would grant the community sufficient resources to evolve out of its aging models for development and governance. You have to have some free space to have the head space to get to new spaces. Start breaking down the corporate-contribution-driven development. Encourage professional openstack devs (like me) to age out of the system and discourage new ones coming in. Encourage feature development from and via users. Feature velocity might drop drastically but they might be features individuals actually use within a few weeks of their release rather than a few years. Some of this latter is already happening. Especially in what some people call the non-core projects; things associated with deployment for example. But in projects like nova we're heavily driven by trying to create a feature base which is predicted to drive sales, either directly or indirectly. And, though opinions and experiences differ, my opinion and experience is that driving sales as a direct factor is anathema to "open source". Indirect? Sure, whatever, if that floats your boat. The proper direct factor is humans. There's a lot more to this than I've stated here, but I hope that gives at least something in answer to the question. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From smooney at redhat.com Mon Sep 9 14:41:54 2019 From: smooney at redhat.com (Sean Mooney) Date: Mon, 09 Sep 2019 15:41:54 +0100 Subject: [release][cyborg] os-acc status In-Reply-To: References: Message-ID: <5ec9441fa8fed052bd958cf005a08ab18b88f91c.camel@redhat.com> On Mon, 2019-09-09 at 16:27 +0200, Thierry Carrez wrote: > Hi Cyborgs, > > One of your deliverables is the os-acc library. It has seen no change > over this development cycle and therefore was not released at all in train. > > We have several options for this library now: > > 1- It's still very much alive and desired and just has exceptionally not > seen much activity during this cycle. We should just cut a stable/train > branch from the last release available (0.2.0) and continue in ussuri. > > 2- It's a valuable library, it just changes extremely rarely. We should > make it independent from the release cycle and have it release at its > own rhythm. > > 3- Development has stopped on this, and the library is not useful right > now. We should retire this deliverable so that we do not build wrong > expectations for our users. i think ^ is the case. i dont activly work on cyborg but i belive os-acc is no longer planned to be used or developed. they can correct me if that is wrong but i think it can be removed as a deliverable. > > Please let us know which option fits the current status of os-acc. > From openstack-dev at storpool.com Mon Sep 9 14:47:13 2019 From: openstack-dev at storpool.com (Peter Penchev) Date: Mon, 9 Sep 2019 17:47:13 +0300 Subject: [devstack][qa][python3] "also install the Python 2 dev library" - still needed? 
In-Reply-To: References: <4402fa3186eff76382fa0b9171c4096db1d94d94.camel@redhat.com> Message-ID: On Mon, Sep 9, 2019 at 5:40 PM Peter Penchev wrote: > On Mon, Sep 9, 2019 at 2:55 PM Sean Mooney wrote: > >> On Mon, 2019-09-09 at 11:22 +0300, Peter Penchev wrote: >> > Hi, >> > >> > When devstack's `setup_dev_lib` function is invoked and USE_PYTHON3 has >> > been specified, this function tries to also install the development >> library >> > for Python 2.x, I guess just in case some package has not declared >> proper >> > Python 3 support or something. It then proceeds to install the Python 3 >> > version of the library and all its dependencies. >> > >> > Unfortunately there is a problem with that, and specifically with script >> > files installed in the system's executable files directory, e.g. >> > /usr/local/bin. The problem appears when some Python library has already >> > been installed for Python 3 (and has installed its script files), but is >> > now installed for Python 2 (overwriting the script files) and is then >> not >> > forcefully reinstalled for Python 3, since it is already present. Thus, >> the >> > script files are last modified by the Python 2 library installation and >> > they have a hashbang line saying `python2.x` - so if something then >> tries >> > to execute them, they will run and use modules and libraries for Python >> 2 >> > only. >> yes this is a long standing issue. we discovered it a year ago but it was >> never fix. >> >> in Ussrui i guess one of the first changes to devstack to make it python >> 3 only >> will be to chagne that behavior. im not sure if we will be able to change >> it before then. >> >> whenever you us libs_from_git in your local.conf on a python 3 install it >> will install >> them twice both with python 2 and python 3. i hope more distros elect to >> symlink /usr/bin/python to python 3 >> some distros have chosen to do that on systems that are python only and i >> believe that is the correct >> approch. >> >> when i encountered this it was always resuliting on the script header >> being #!/usr/bin/python with no >> version suffix i >> gues on a system where that points to python 3 the python 2.7 install >> might write python2.7 >> there instead? >> > > It depends on what version of pip is invoked; I think that the way > devstack invokes it nowadays it will always provide a version on the > shebang line. > > >> >> > >> > We experienced this problem when running the cinderlib tests from >> Cinder's >> > `playbooks/cinderlib-run.yaml` file - it finds a unit2 executable >> > (installed by the unittest2 library) and runs it, hoping that unit2 >> will be >> > able to discover and collect the cinderlib tests and load the cinderlib >> > modules. However, since unittest2 has last been installed as a Python 2 >> > library, unit2 runs with Python 2 and fails to locate the cinderlib >> > modules. (Yes, we know that there are other ways to run the cinderlib >> > tests; this message is about the problem exposed by this way of running >> > them) >> > >> > The obvious solution would be to instruct the Python 2 pip to not >> install >> > script (or other shared) files at all; unfortunately, >> > https://github.com/pypa/pip/issues/3980 ("Option to exclude scripts on >> > install"), detailing a very similar use case ("need it installed for >> Python >> > 2, but want to use it with Python 3") has been open for almost exactly >> > three years now with no progress. 
I wonder if I could try to help, but >> even >> > if this issue is resolved, there will be some time before OpenStack can >> > actually depend on a recent enough version of pip. >> well the obvious solution is to stop doing this entirly. >> it was added as a hack to ensure if you use LIB_FROM_GIT in you >> local.conf that those >> libs would always be install from the git checkout that you specified in >> you local.conf >> for train we are technically requireing all project to run under python 3 >> so we could remove >> the fallback mechanium of in stalling under python 2. it was there incase >> a service installed >> under python 2 to ensure it used the same version of the lib and did not >> use a version form >> pypi instead. i wanted to stop doing this last year but we could not >> becase not all project >> could run under python 3. but now that they should be able to we dont >> need this hack anymore. >> we should change it to respec the python version you have selected. that >> will speed >> up stacking speed as we wont have to install everything twice and fix the >> issue you have encountered. >> > > Yeah, thanks for confirming my thoughts that this might be the right > solution. I've proposed https://review.opendev.org/681029/ (and set > workflow -1) to wait for the Ussuri cycle. > > >> > >> > A horrible workaround would be to find the binary directory before >> > installing the Python 2 library (using something like `pip3.7 show >> > somepackage` and then running some heuristics on the "Location" field), >> > tar'ing it up and then restoring it... but I don't know if I even want >> to >> > think about this. >> > >> > Another possible way forward would be to consider whether we still want >> the >> > Python 2 libraries installed - is OpenStack's Python 3 transition >> reached a >> > far enough stage to assume that any projects that still require Python 2 >> > *and* fail to declare their Python 2 dependencies properly are buggy? >> To be >> > honest, this seems the most reasonable path for me - drop the "also >> install >> > the Python 2 libs" code and see what happens. I could try to make this >> > change in a couple of test runs in our third-party Cinder CI system and >> see >> > if something breaks. >> > > G'luck, > Peter > > Argh, I sent this from the wrong account, did I not... G'luck, Peter -------------- next part -------------- An HTML attachment was scrubbed... URL: From farida.elzanaty at mail.mcgill.ca Mon Sep 9 15:02:01 2019 From: farida.elzanaty at mail.mcgill.ca (Farida El Zanaty) Date: Mon, 9 Sep 2019 15:02:01 +0000 Subject: [nova][neutron][all][openstack-devs] studying and analysing Openstack developers Message-ID: Hi! I am Farida El-Zanaty from McGill University. Under the supervision of Prof. Shane McIntosh, my research aims to study design discussions that occur between developers during code reviews. Last year, we published a study about the frequency and types of such discussions that occur in OpenStack Nova and Neutron (http://rebels.ece.mcgill.ca/papers/esem2018_elzanaty.pdf). We are reaching out to OpenStack developers to better understand their perspectives on design discussions during code reviews. Those who are interested can start by participating in our 10-minute survey about their experiences as both the code reviewer and author. Survey participants will be entered into a raffle for a $50 Amazon gift card. 
Survey: https://forms.gle/Hhn191f6cxF5hVgG8 Thanks for your time, Farida El-Zanaty -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Mon Sep 9 15:33:47 2019 From: melwittt at gmail.com (melanie witt) Date: Mon, 9 Sep 2019 08:33:47 -0700 Subject: [nova][telemetry] does Telemetry still use the Nova server usage audit log API? In-Reply-To: References: <2c376a85-1dc0-03cc-bdb4-ba8b9f4edb70@gmail.com> Message-ID: <91d2a3b7-cbbe-4e89-50a9-a3f12cc92e43@gmail.com> On 9/7/19 6:09 AM, Matt Riedemann wrote: > On 9/6/2019 6:59 PM, melanie witt wrote: >> >> * If Telemetry is no longer using the server usage audit log API, we >> deprecate it in Nova and notify deployment tools to stop setting >> [DEFAULT]/instance_usage_audit = true to prevent further creation of >> nova.task_log records and recommend manual cleanup by users > > Deprecating the API would just be a signal to not develop new tools > based on it since it's effectively unmaintained but that doesn't mean we > can remove it since there could be non-Telemtry tools in the wild using > it that we'd never hear about. You might not be suggesting an eventual > path to removal of the API, I'm just bringing that part up since I'm > sure people are thinking it. > > I'm also assuming that API isn't multi-cell aware, meaning it won't > traverse cells pulling records like listing servers or migration resources. > > As for the config option to run the periodic task that creates these > records, that's disabled by default so deployment tools shouldn't be > enabling it by default - but maybe some do if they are configured to > deploy ceilometer. Indeed, tripleo enables the periodic task when deploying Telemetry, which is how we have customers hitting the unbounded nova.task_log table growth problem. >> >> or >> >> * If Telemetry is still using the server usage audit log API, we >> create a new 'nova-manage db purge_task_log --before ' (or >> similar) command that will hard delete nova.task_log records before a >> specified date or all if --before is not specified > > If you can't remove the API then this is probably something that needs > to happen regardless, though we likely won't know if anyone uses it. I'd > consider it pretty low priority given how extremely latent this is and > would expect anyone that's been running with this enabled in production > has developed DB purge scripts for this table long ago. Yeah, based on Tim Bell's reply later in this thread, we can't remove the API (tools in the wild using it). So, I'll propose a new nova-manage command because we don't appear to have a standard way of cleaning up nova.task_log records for customers either, yet. -melanie From francois.scheurer at everyware.ch Mon Sep 9 15:36:34 2019 From: francois.scheurer at everyware.ch (Francois Scheurer) Date: Mon, 9 Sep 2019 17:36:34 +0200 Subject: [keystone] cannot use 'openstack trust list' without admin role In-Reply-To: <29841c08-d255-2ee4-346a-bcce04b7f4ad@everyware.ch> References: <29841c08-d255-2ee4-346a-bcce04b7f4ad@everyware.ch> Message-ID: Hello I think this old link is explaining the reason behind this "inconsistency" with the policy.json rules: https://bugs.launchpad.net/keystone/+bug/1373599 So to summarize, the RBAC is allowing identity:list_trusts for a non admin user (cf. policy.json) but then hard coded policies deny the request if non admin. Quote: The policies in policy.json can make these operations more restricted, but not less restricted than the hard-coded restrictions. 
We can't simply remove these settings from policy.json, as that would cause the "default" rule to be used which makes trusts unusable in the case of the default "default" rule of "admin_required". Cheers Francois On 9/9/19 1:57 PM, Francois Scheurer wrote: > > Hi All > > > I found an answer here > > https://bugs.launchpad.net/keystone/+bug/1373599 > > On 9/6/19 5:59 PM, Francois Scheurer wrote: > Dear Keystone Experts, > I have an issue with the openstack client in stage (using Rocky), using a user 'fsc' without 'admin' role and with password auth. > 'openstack trust create/show' works. > 'openstack trust list' is denied. > But keystone policy.json says: >     "identity:create_trust": "user_id:%(trust.trustor_user_id)s", >     "identity:list_trusts": "", >     "identity:list_roles_for_trust": "", >     "identity:get_role_for_trust": "", >     "identity:delete_trust": "", >     "identity:get_trust": "", > So "openstack list trusts" is always allowed. > In keystone log (I replaced the uid's by names in the ouput below) I see that 'identity:list_trusts()' was actually granted > but just after that a_*admin_required()*_ is getting checked and fails... I wonder why... > There is also a flag*is_admin_project=True* in the rbac creds for some reason... > > Any clue? Many thanks in advance! > > > Cheers > Francois > > > #openstack --os-cloud stage-fsc trust create --project fscproject --role creator fsc fsc > #=> fail because of the names and policy rules, but using uid's it works > openstack --os-cloud stage-fsc trust create --project aeac4b07d8b144178c43c65f29fa9dac --role 085180eeaf354426b01908cca8e82792 3e9b1a4fe95048a3b98fb5abebd44f6c 3e9b1a4fe95048a3b98fb5abebd44f6c > +--------------------+----------------------------------+ > | Field              | Value                            | > +--------------------+----------------------------------+ > | deleted_at         | None                             | > | expires_at         | None                             | > | id                 | e74bcdf125e049c69c2e0ab1b182df5b | > | impersonation      | False                            | > | project_id         | fscproject | > | redelegation_count | 0                                | > | remaining_uses     | None                             | > | roles              | creator                          | > | trustee_user_id    | fsc | > | trustor_user_id    | fsc | > +--------------------+----------------------------------+ > > openstack --os-cloud stage-fsc trust show e74bcdf125e049c69c2e0ab1b182df5b > +--------------------+----------------------------------+ > | Field              | Value                            | > +--------------------+----------------------------------+ > | deleted_at         | None                             | > | expires_at         | None                             | > | id                 | e74bcdf125e049c69c2e0ab1b182df5b | > | impersonation      | False                            | > | project_id         | fscproject | > | redelegation_count | 0                                | > | remaining_uses     | None                             | > | roles              | creator                          | > | trustee_user_id    | fsc | > | trustor_user_id    | fsc | > +--------------------+----------------------------------+ > > #this fails: > openstack --os-cloud stage-fsc trust list > *You are not authorized to perform the requested action: > admin_required. 
(HTTP 403)* > > > > > > > > -- EveryWare AG François Scheurer Senior Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: francois.scheurer at everyware.ch web: http://www.everyware.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From yongle.li at gmail.com Mon Sep 9 15:56:26 2019 From: yongle.li at gmail.com (Fred Li) Date: Mon, 9 Sep 2019 23:56:26 +0800 Subject: Thank you Stackers for five amazing years! In-Reply-To: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> References: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> Message-ID: Chris, Thank you for your help and fruitful work in interoperability working group, as well as many other work in OpenStack community. We will miss you and see you later somewhere in the world. On Thu, Sep 5, 2019 at 12:27 AM Chris Hoge wrote: > Hi everyone, > > After more than nine years working in cloud computing and on OpenStack, > I've > decided that it is time for a change and will be moving on from the > OpenStack > Foundation. For the last five years I've had the honor of helping to > support > this vibrant community, and I'm going to deeply miss being a part of it. > OpenStack has been a central part of my life for so long that it's hard to > imagine a work life without it. I'm proud to have helped in some small way > to > create a lasting project and community that has, and will continue to, > transform how infrastructure is managed. > > September 12 will officially be my last day with the OpenStack Foundation. > As I > make the move away from my responsibilities, I'll be working with community > members to help ensure continuity of my efforts. > > Thank you to everyone for building such an incredible community filled with > talented, smart, funny, and kind people. You've built something special > here, > and we're all better for it. I'll still be involved with open source. If > you > ever want to get in touch, be it with questions about work I've been > involved > with or to talk about some exciting new tech or to just catch up over a > tasty > meal, I'm just a message away in all the usual places. > > Sincerely, > Chris > > chris at hogepodge.com > Twitter/IRC/everywhere else: @hogepodge > -- Regards Fred Li (李永乐) -------------- next part -------------- An HTML attachment was scrubbed... URL: From sundar.nadathur at intel.com Mon Sep 9 15:57:37 2019 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Mon, 9 Sep 2019 15:57:37 +0000 Subject: [release][cyborg] os-acc status In-Reply-To: <5ec9441fa8fed052bd958cf005a08ab18b88f91c.camel@redhat.com> References: <5ec9441fa8fed052bd958cf005a08ab18b88f91c.camel@redhat.com> Message-ID: <1CC272501B5BC543A05DB90AA509DED52760B6EC@fmsmsx122.amr.corp.intel.com> Hi Thierry and all, Os-acc is not relevant and will be discontinued. This was communicated in [1]. A patch has been filed for the same [2]. I will start the work after Train-3 milestone. That was also mentioned in [3]. 
[1] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008473.html [2] https://review.opendev.org/#/c/676331/ [3] https://review.opendev.org/#/c/680091/ Regards, Sundar > -----Original Message----- > From: Sean Mooney > Sent: Monday, September 9, 2019 7:42 AM > To: Thierry Carrez ; openstack- > discuss at lists.openstack.org > Subject: Re: [release][cyborg] os-acc status > > On Mon, 2019-09-09 at 16:27 +0200, Thierry Carrez wrote: > > Hi Cyborgs, > > > > One of your deliverables is the os-acc library. It has seen no change > > over this development cycle and therefore was not released at all in train. > > > > We have several options for this library now: > > > > 1- It's still very much alive and desired and just has exceptionally > > not seen much activity during this cycle. We should just cut a > > stable/train branch from the last release available (0.2.0) and continue in > ussuri. > > > > 2- It's a valuable library, it just changes extremely rarely. We > > should make it independent from the release cycle and have it release > > at its own rhythm. > > > > 3- Development has stopped on this, and the library is not useful > > right now. We should retire this deliverable so that we do not build > > wrong expectations for our users. > i think ^ is the case. > i dont activly work on cyborg but i belive os-acc is no longer planned to be > used or developed. they can correct me if that is wrong but i think it can be > removed as a deliverable. > > > > Please let us know which option fits the current status of os-acc. > > > From colleen at gazlene.net Mon Sep 9 15:57:53 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Mon, 09 Sep 2019 08:57:53 -0700 Subject: [keystone] Pre-feature-freeze update In-Reply-To: <5bac8e07-f63a-4bf9-82c1-fa0470a14b0e@www.fastmail.com> References: <5bac8e07-f63a-4bf9-82c1-fa0470a14b0e@www.fastmail.com> Message-ID: <5729a3c4-ecf1-40ef-9d12-3d640e8661bc@www.fastmail.com> On Fri, Sep 6, 2019, at 21:57, Colleen Murphy wrote: [snipped] > > * CI > > After skimming the meeting logs I saw the unit test timeout problem was > discussed and a temporary workaround was proposed[8]. This sounded like > a great idea but it seems that no one implemented it, so I did[9]. > Unfortunately this will conflict with all the > system-scope/default-roles patches in flight. With how many changes > need to go in and how slow it will be with all of them needing to be > rechecked and continually making the problem even worse, I propose we > go ahead and merge the workaround ASAP and update all the in-flight > changes to move the protection tests to the new location. > Alternatively, we can raise the timeouts temporarily as proposed here[11], then merge all the policy changes, then merge the protection test split. [snipped] > [8] > http://eavesdrop.openstack.org/meetings/keystone/2019/keystone.2019-08-27-16.01.log.html#l-84 > [9]https://review.opendev.org/680788 [11] https://review.opendev.org/680798 > > Colleen > > From francois.scheurer at everyware.ch Mon Sep 9 16:23:08 2019 From: francois.scheurer at everyware.ch (Francois Scheurer) Date: Mon, 9 Sep 2019 18:23:08 +0200 Subject: [mistral] cron triggers execution fails on identity:validate_token with non-admin users Message-ID: <241f5d5e-8b21-9081-c1d1-66e908047335@everyware.ch> Dear All We are using Mistral 7.0.1.1 with  Openstack Rocky. 
(with federated users) We can create and execute a workflow via horizon, but cron triggers always fail with this error:

    {
        "result":
            "The action raised an exception [ action_ex_id=ef878c48-d0ad-4564-9b7e-a06f07a70ded,
                    action_cls='',
                    attributes='{u'client_method_name': u'servers.find'}',
                    params='{
                        u'action_region': u'ch-zh1',
                        u'name': u'42724489-1912-44d1-9a59-6c7a4bebebfa'
                    }'
                ]
                \n NovaAction.servers.find failed: You are not authorized to perform the requested action: identity:validate_token. (HTTP 403) (Request-ID: req-ec1aea36-c198-4307-bf01-58aca74fad33)
            "
    }

Adding the role *admin* or *service* to the user logged in to horizon "fixes" the issue, meaning that the cron trigger then works as expected, but it would obviously be a bad idea to do this for all normal users ;-)

So my question: is it a config problem on our side? Is it a known bug? Or is it intended behaviour, in the sense that cron triggers are simply not meant for normal users?

After digging in the keystone debug logs (see at the end below), I found that it is the RBAC check for identity:validate_token that denies the authorization. But according to the policy.json (in keystone and in horizon), rule:owner should be enough to grant it...:

            "identity:validate_token": "rule:service_admin_or_owner",
                "service_admin_or_owner": "rule:service_or_admin or rule:owner",
                    "service_or_admin": "rule:admin_required or rule:service_role",
                        "service_role": "role:service",
                    "owner": "user_id:%(user_id)s or user_id:%(target.token.user_id)s",

Thank you in advance for your help.
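As a quick sanity check, here is a minimal sketch (not part of the original report): it just re-evaluates the rules quoted above with oslo.policy, using the user and roles that appear in the keystone debug log further below; everything else, including the simplified admin_required rule, is illustrative only.

    # Minimal sketch: evaluate the quoted policy.json rules in isolation.
    from oslo_config import cfg
    from oslo_policy import policy

    cfg.CONF([], project="policy-sketch")          # empty config, no policy file needed
    enforcer = policy.Enforcer(cfg.CONF, use_conf=False)
    enforcer.set_rules(policy.Rules.from_dict({
        "admin_required": "role:admin",            # simplified for the sketch
        "service_role": "role:service",
        "service_or_admin": "rule:admin_required or rule:service_role",
        "owner": "user_id:%(user_id)s or user_id:%(target.token.user_id)s",
        "service_admin_or_owner": "rule:service_or_admin or rule:owner",
        "identity:validate_token": "rule:service_admin_or_owner",
    }))

    creds = {"user_id": "fsc", "roles": ["member", "creator", "reader"]}
    target = {"user_id": "fsc"}                    # the validated token belongs to the same user

    print(enforcer.enforce("identity:validate_token", target, creds))  # prints True

If this prints True while the live request still gets a 403, the denial is not coming from these rules as written, which narrows the question down to what credentials and target keystone actually passes to the check for the trust-scoped cron-trigger request.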
Best Regards Francois Scheurer Keystone logs:         2019-09-05 09:38:00.902 29 DEBUG keystone.policy.backends.rules [req-1a276b9d-8276-4ec3-b516-f51f86cd1df6 fsc fscproject - testdom testdom]             enforce identity:validate_token:             {                'service_project_id':None,                'service_user_id':None,                'service_user_domain_id':None,                'service_project_domain_id':None,                'trustor_id':None,                'user_domain_id':u'testdom',                'domain_id':None,                'trust_id':u'mytrustid',                'project_domain_id':u'testdom',                'service_roles':[],                'group_ids':[],                'user_id':u'fsc',                'roles':[                   u'_member_',                   u'creator',                   u'reader',                   u'heat_stack_owner',                   u'member',                   u'load-balancer_member'],                'system_scope':None,                'trustee_id':None,                'domain_name':None,                'is_admin_project':True,                'token':,                'project_id':u'fscproject'             } enforce /var/lib/kolla/venv/local/lib/python2.7/site-packages/keystone/policy/backends/rules.py:33         2019-09-05 09:38:00.920 29 WARNING keystone.common.wsgi [req-1a276b9d-8276-4ec3-b516-f51f86cd1df6 fsc fscproject - testdom testdom]             You are not authorized to perform the requested action: identity:validate_token.: *ForbiddenAction: You are not authorized to perform the requested action: identity:validate_token.* -- EveryWare AG François Scheurer Senior Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: francois.scheurer at everyware.ch web: http://www.everyware.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From chris at openstack.org Mon Sep 9 16:38:09 2019 From: chris at openstack.org (Chris Hoge) Date: Mon, 9 Sep 2019 09:38:09 -0700 Subject: [oslo][nova] Nova causes MySQL timeouts In-Reply-To: <02fa1644-34a1-0fdf-9048-a668ae86de76@nemebean.com> References: <02fa1644-34a1-0fdf-9048-a668ae86de76@nemebean.com> Message-ID: In my personal experience, running Nova on a four core machine without limiting the number of database connections will easily exhaust the available connections to MySQL/MariaDB. Keep in mind that the limit applies to every instance of a service, so if Nova starts 'm' services replicated for 'n' cores with 'd' possible connections you'll be up to ‘m x n x d' connections. It gets big fast. The default setting of '0' (that is, unlimited) does not make for a good first-run experience, IMO. This issue comes up every few years or so, and the consensus previously is that 200-2000 connections is recommended based on your needs. Your database has to be configured to handle the load and looking at the configuration value across all your services and setting them consistently and appropriately is important. http://lists.openstack.org/pipermail/openstack-dev/2015-April/061808.html > On Sep 6, 2019, at 7:34 AM, Ben Nemec wrote: > > Tagging with oslo as this sounds related to oslo.db. 
> > On 9/5/19 7:37 PM, Albert Braden wrote: >> After more googling it appears that max_pool_size is a maximum limit on the number of connections that can stay open, and max_overflow is a maximum limit on the number of connections that can be temporarily opened when the pool has been consumed. It looks like the defaults are 5 and 10 which would keep 5 connections open all the time and allow 10 temp. >> Do I need to set max_pool_size to 0 and max_overflow to the number of connections that I want to allow? Is that a reasonable and correct configuration? Intuitively that doesn't seem right, to have a pool size of 0, but if the "pool" is a group of connections that will remain open until they time out, then maybe 0 is correct? > > I don't think so. According to [0] and [1], a pool_size of 0 means unlimited. You could probably set it to 1 to minimize the number of connections kept open, but then I expect you'll have overhead from having to re-open connections frequently. > > It sounds like you could use a NullPool to eliminate connection pooling entirely, but I don't think we support that in oslo.db. Based on the error message you're seeing, I would take a look at connection_recycle_time[2]. I seem to recall seeing a comment that the recycle time needs to be shorter than any of the timeouts in the path between the service and the db (so anything like haproxy or mysql itself). Shortening that, or lengthening intervening timeouts, might get rid of these disconnection messages. > > 0: https://docs.openstack.org/oslo.db/stein/reference/opts.html#database.max_pool_size > 1: https://docs.sqlalchemy.org/en/13/core/pooling.html#sqlalchemy.pool.QueuePool.__init__ > 2: https://docs.openstack.org/oslo.db/stein/reference/opts.html#database.connection_recycle_time > >> *From:* Albert Braden >> *Sent:* Wednesday, September 4, 2019 10:19 AM >> *To:* openstack-discuss at lists.openstack.org >> *Cc:* Gaëtan Trellu >> *Subject:* RE: Nova causes MySQL timeouts >> We’re not setting max_pool_size nor max_overflow option presently. I googled around and found this document: >> https://docs.openstack.org/keystone/stein/configuration/config-options.html >> Document says: >> [api_database] >> connection_recycle_time = 3600 (Integer) Timeout before idle SQL connections are reaped. >> max_overflow = None (Integer) If set, use this value for max_overflow with SQLAlchemy. >> max_pool_size = None (Integer) Maximum number of SQL connections to keep open in a pool. >> [database] >> connection_recycle_time = 3600 (Integer) Timeout before idle SQL connections are reaped. >> min_pool_size = 1 (Integer) Minimum number of SQL connections to keep open in a pool. >> max_overflow = 50 (Integer) If set, use this value for max_overflow with SQLAlchemy. >> max_pool_size = None (Integer) Maximum number of SQL connections to keep open in a pool. >> If min_pool_size is >0, would that cause at least 1 connection to remain open until it times out? What are the recommended values for these, to allow unused connections to close before they time out? Is “min_pool_size = 0” an acceptable setting? >> My settings are default: >> [api_database]: >> #connection_recycle_time = 3600 >> #max_overflow = >> #max_pool_size = >> [database]: >> #connection_recycle_time = 3600 >> #min_pool_size = 1 >> #max_overflow = 50 >> #max_pool_size = 5 >> It’s not obvious what max_overflow does. Where can I find a document that explains more about these settings? 
>> *From:* Gaëtan Trellu > >> *Sent:* Tuesday, September 3, 2019 1:37 PM >> *To:* Albert Braden > >> *Cc:* openstack-discuss at lists.openstack.org >> *Subject:* Re: Nova causes MySQL timeouts >> Hi Albert, >> It is a configuration issue, have a look to max_pool_size and max_overflow options under [database] section. >> Keep in mind than more workers you will have more connections will be opened on the database. >> Gaetan (goldyfruit) >> On Sep 3, 2019 4:31 PM, Albert Braden > wrote: >> It looks like nova is keeping mysql connections open until they time >> out. How are others responding to this issue? Do you just ignore the >> mysql errors, or is it possible to change configuration so that nova >> closes and reopens connections before they time out? Or is there a >> way to stop mysql from logging these aborted connections without >> hiding real issues? >> Aborted connection 10726 to db: 'nova' user: 'nova' host: 'asdf' >> (Got timeout reading communication packets) > From cboylan at sapwetik.org Mon Sep 9 16:41:02 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 09 Sep 2019 09:41:02 -0700 Subject: =?UTF-8?Q?Re:_[devstack][qa][python3]_"also_install_the_Python_2_dev_lib?= =?UTF-8?Q?rary"_-_still_needed=3F?= In-Reply-To: References: Message-ID: On Mon, Sep 9, 2019, at 1:22 AM, Peter Penchev wrote: > Hi, > > When devstack's `setup_dev_lib` function is invoked and USE_PYTHON3 has > been specified, this function tries to also install the development > library for Python 2.x, I guess just in case some package has not > declared proper Python 3 support or something. It then proceeds to > install the Python 3 version of the library and all its dependencies. > > Unfortunately there is a problem with that, and specifically with > script files installed in the system's executable files directory, e.g. > /usr/local/bin. The problem appears when some Python library has > already been installed for Python 3 (and has installed its script > files), but is now installed for Python 2 (overwriting the script > files) and is then not forcefully reinstalled for Python 3, since it is > already present. Thus, the script files are last modified by the Python > 2 library installation and they have a hashbang line saying `python2.x` > - so if something then tries to execute them, they will run and use > modules and libraries for Python 2 only. > > We experienced this problem when running the cinderlib tests from > Cinder's `playbooks/cinderlib-run.yaml` file - it finds a unit2 > executable (installed by the unittest2 library) and runs it, hoping > that unit2 will be able to discover and collect the cinderlib tests and > load the cinderlib modules. However, since unittest2 has last been > installed as a Python 2 library, unit2 runs with Python 2 and fails to > locate the cinderlib modules. (Yes, we know that there are other ways > to run the cinderlib tests; this message is about the problem exposed > by this way of running them) One option here is to explicitly run the file under the python version you want. I do this with `pbr freeze` frequently to ensure I'm looking at the correct version of software for the correct version of python. For example: python3 /usr/local/bin/pbr freeze | grep $packagename python2 /usr/local/bin/pbr freeze | grep $packagename Then as long as you have installed the utility (in my case pbr) under both python versions it should just work assuming they don't write different files for different versions of python at install time. 
> > The obvious solution would be to instruct the Python 2 pip to not > install script (or other shared) files at all; unfortunately, > https://github.com/pypa/pip/issues/3980 ("Option to exclude scripts on > install"), detailing a very similar use case ("need it installed for > Python 2, but want to use it with Python 3") has been open for almost > exactly three years now with no progress. I wonder if I could try to > help, but even if this issue is resolved, there will be some time > before OpenStack can actually depend on a recent enough version of pip. Note OpenStack tests with, and as a result possibly requires, the latest version of pip. Fixing this in pip shouldn't be a problem as long as they make a release not long after. > > A horrible workaround would be to find the binary directory before > installing the Python 2 library (using something like `pip3.7 show > somepackage` and then running some heuristics on the "Location" field), > tar'ing it up and then restoring it... but I don't know if I even want > to think about this. > > Another possible way forward would be to consider whether we still want > the Python 2 libraries installed - is OpenStack's Python 3 transition > reached a far enough stage to assume that any projects that still > require Python 2 *and* fail to declare their Python 2 dependencies > properly are buggy? To be honest, this seems the most reasonable path > for me - drop the "also install the Python 2 libs" code and see what > happens. I could try to make this change in a couple of test runs in > our third-party Cinder CI system and see if something breaks. > snip Hope this helps, Clark From openstack at nemebean.com Mon Sep 9 16:49:53 2019 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 9 Sep 2019 11:49:53 -0500 Subject: [oslo][nova] Nova causes MySQL timeouts In-Reply-To: References: <02fa1644-34a1-0fdf-9048-a668ae86de76@nemebean.com> Message-ID: On 9/9/19 11:38 AM, Chris Hoge wrote: > In my personal experience, running Nova on a four core machine without > limiting the number of database connections will easily exhaust the > available connections to MySQL/MariaDB. Keep in mind that the limit > applies to every instance of a service, so if Nova starts 'm' services > replicated for 'n' cores with 'd' possible connections you'll be up to > ‘m x n x d' connections. It gets big fast. > > The default setting of '0' (that is, unlimited) does not make for a good > first-run experience, IMO. We don't default to 0. We default to 5: https://docs.openstack.org/oslo.db/stein/reference/opts.html#database.max_pool_size > > This issue comes up every few years or so, and the consensus previously > is that 200-2000 connections is recommended based on your needs. Your > database has to be configured to handle the load and looking at the > configuration value across all your services and setting them > consistently and appropriately is important. > > http://lists.openstack.org/pipermail/openstack-dev/2015-April/061808.html Thanks, I did not recall that discussion. If I'm reading it correctly, Jay is suggesting that for MySQL we should just disable connection pooling. As I noted earlier, I don't think we expose the ability to do that in oslo.db (patches welcome!), but setting max_pool_size to 1 would get you pretty close. Maybe we should add that to the help text for the option in oslo.db? > >> On Sep 6, 2019, at 7:34 AM, Ben Nemec wrote: >> >> Tagging with oslo as this sounds related to oslo.db. 
>> >> On 9/5/19 7:37 PM, Albert Braden wrote: >>> After more googling it appears that max_pool_size is a maximum limit on the number of connections that can stay open, and max_overflow is a maximum limit on the number of connections that can be temporarily opened when the pool has been consumed. It looks like the defaults are 5 and 10 which would keep 5 connections open all the time and allow 10 temp. >>> Do I need to set max_pool_size to 0 and max_overflow to the number of connections that I want to allow? Is that a reasonable and correct configuration? Intuitively that doesn't seem right, to have a pool size of 0, but if the "pool" is a group of connections that will remain open until they time out, then maybe 0 is correct? >> >> I don't think so. According to [0] and [1], a pool_size of 0 means unlimited. You could probably set it to 1 to minimize the number of connections kept open, but then I expect you'll have overhead from having to re-open connections frequently. >> >> It sounds like you could use a NullPool to eliminate connection pooling entirely, but I don't think we support that in oslo.db. Based on the error message you're seeing, I would take a look at connection_recycle_time[2]. I seem to recall seeing a comment that the recycle time needs to be shorter than any of the timeouts in the path between the service and the db (so anything like haproxy or mysql itself). Shortening that, or lengthening intervening timeouts, might get rid of these disconnection messages. >> >> 0: https://docs.openstack.org/oslo.db/stein/reference/opts.html#database.max_pool_size >> 1: https://docs.sqlalchemy.org/en/13/core/pooling.html#sqlalchemy.pool.QueuePool.__init__ >> 2: https://docs.openstack.org/oslo.db/stein/reference/opts.html#database.connection_recycle_time >> >>> *From:* Albert Braden >>> *Sent:* Wednesday, September 4, 2019 10:19 AM >>> *To:* openstack-discuss at lists.openstack.org >>> *Cc:* Gaëtan Trellu >>> *Subject:* RE: Nova causes MySQL timeouts >>> We’re not setting max_pool_size nor max_overflow option presently. I googled around and found this document: >>> https://docs.openstack.org/keystone/stein/configuration/config-options.html >>> Document says: >>> [api_database] >>> connection_recycle_time = 3600 (Integer) Timeout before idle SQL connections are reaped. >>> max_overflow = None (Integer) If set, use this value for max_overflow with SQLAlchemy. >>> max_pool_size = None (Integer) Maximum number of SQL connections to keep open in a pool. >>> [database] >>> connection_recycle_time = 3600 (Integer) Timeout before idle SQL connections are reaped. >>> min_pool_size = 1 (Integer) Minimum number of SQL connections to keep open in a pool. >>> max_overflow = 50 (Integer) If set, use this value for max_overflow with SQLAlchemy. >>> max_pool_size = None (Integer) Maximum number of SQL connections to keep open in a pool. >>> If min_pool_size is >0, would that cause at least 1 connection to remain open until it times out? What are the recommended values for these, to allow unused connections to close before they time out? Is “min_pool_size = 0” an acceptable setting? >>> My settings are default: >>> [api_database]: >>> #connection_recycle_time = 3600 >>> #max_overflow = >>> #max_pool_size = >>> [database]: >>> #connection_recycle_time = 3600 >>> #min_pool_size = 1 >>> #max_overflow = 50 >>> #max_pool_size = 5 >>> It’s not obvious what max_overflow does. Where can I find a document that explains more about these settings? 
>>> *From:* Gaëtan Trellu > >>> *Sent:* Tuesday, September 3, 2019 1:37 PM >>> *To:* Albert Braden > >>> *Cc:* openstack-discuss at lists.openstack.org >>> *Subject:* Re: Nova causes MySQL timeouts >>> Hi Albert, >>> It is a configuration issue, have a look to max_pool_size and max_overflow options under [database] section. >>> Keep in mind than more workers you will have more connections will be opened on the database. >>> Gaetan (goldyfruit) >>> On Sep 3, 2019 4:31 PM, Albert Braden > wrote: >>> It looks like nova is keeping mysql connections open until they time >>> out. How are others responding to this issue? Do you just ignore the >>> mysql errors, or is it possible to change configuration so that nova >>> closes and reopens connections before they time out? Or is there a >>> way to stop mysql from logging these aborted connections without >>> hiding real issues? >>> Aborted connection 10726 to db: 'nova' user: 'nova' host: 'asdf' >>> (Got timeout reading communication packets) >> > > From openstack at nemebean.com Mon Sep 9 16:51:34 2019 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 9 Sep 2019 11:51:34 -0500 Subject: [keystone] Pre-feature-freeze update In-Reply-To: <5729a3c4-ecf1-40ef-9d12-3d640e8661bc@www.fastmail.com> References: <5bac8e07-f63a-4bf9-82c1-fa0470a14b0e@www.fastmail.com> <5729a3c4-ecf1-40ef-9d12-3d640e8661bc@www.fastmail.com> Message-ID: <6b51299f-5b0d-f8fe-e1a2-cff029903aa5@nemebean.com> On 9/9/19 10:57 AM, Colleen Murphy wrote: > On Fri, Sep 6, 2019, at 21:57, Colleen Murphy wrote: > > [snipped] > >> >> * CI >> >> After skimming the meeting logs I saw the unit test timeout problem was >> discussed and a temporary workaround was proposed[8]. This sounded like >> a great idea but it seems that no one implemented it, so I did[9]. >> Unfortunately this will conflict with all the >> system-scope/default-roles patches in flight. With how many changes >> need to go in and how slow it will be with all of them needing to be >> rechecked and continually making the problem even worse, I propose we >> go ahead and merge the workaround ASAP and update all the in-flight >> changes to move the protection tests to the new location. >> > > Alternatively, we can raise the timeouts temporarily as proposed here[11], then merge all the policy changes, then merge the protection test split. Seems prudent (one rebase vs. many rebases), assuming the "merge all the policy changes" step can be done in a reasonable amount of time. > > [snipped] > >> [8] >> http://eavesdrop.openstack.org/meetings/keystone/2019/keystone.2019-08-27-16.01.log.html#l-84 >> [9]https://review.opendev.org/680788 > > [11] https://review.opendev.org/680798 > >> >> Colleen >> >> > From tpb at dyncloud.net Mon Sep 9 18:05:20 2019 From: tpb at dyncloud.net (Tom Barron) Date: Mon, 9 Sep 2019 14:05:20 -0400 Subject: [manila][ops] Shanghai Forum - Manila Topic Planning Message-ID: <20190909180520.fu5f6gi4edarvl65@barron.net> As mentioned several times in the Manila Community Meeting, we have posted an etherpad to brainstorm and gauge interest in topics for the Forum at the upcoming OpenInfra Summit in Shanghai. The point of the Forum sesssions is to get feedback from operators and users on things that need fixing, improvements and enhancements, and more generally about the strategic direction for Manila. So please take a look and update this etherpad with topic ideas and indicate your interest in topics already present if you have an interest in Manila. 
It doesn't matter whether you contribute to the project or not, or whether you will yourself be attending the Forum: https://etherpad.openstack.org/p/manila-shanghai-forum-brainstorming We will review this etherpad in our community meeting at 1500 UTC on 19 September in #openstack-meeting-alt on Freenode, one day before the Forum proposal deadline. Please feel free to join that meeting to discuss, and in any case please add to the brainstorming deadline before then. Cheers, -- Tom Barron From premdeep.xion at gmail.com Mon Sep 9 18:05:37 2019 From: premdeep.xion at gmail.com (Premdeep S) Date: Mon, 9 Sep 2019 23:35:37 +0530 Subject: [nova] Offline Installation of Openstack Message-ID: Hi Team, Requesting your help on below. We have a requirement to setup Openstack in an isolated infra. We will not be provided with Internet. How can we set it up? 1. Can we have a local repository (Rocky, Universal, etc)created? If so how do we manage it? 2. We have noticed lot of package dependencies while setting up Openstack Infra, so will creating a local repository help in an implementation when we do not have an internet. What is the success rate? Thanks Prem -------------- next part -------------- An HTML attachment was scrubbed... URL: From premdeep.xion at gmail.com Mon Sep 9 18:07:04 2019 From: premdeep.xion at gmail.com (Premdeep S) Date: Mon, 9 Sep 2019 23:37:04 +0530 Subject: [nova] Offline Installation of Openstack In-Reply-To: References: Message-ID: Additionally we would like to set up in ubuntu 18.04, Rocky version On Mon, Sep 9, 2019 at 11:35 PM Premdeep S wrote: > Hi Team, > > Requesting your help on below. > > We have a requirement to setup Openstack in an isolated infra. We will not > be provided with Internet. How can we set it up? > > 1. Can we have a local repository (Rocky, Universal, etc)created? If so > how do we manage it? > 2. We have noticed lot of package dependencies while setting up Openstack > Infra, so will creating a local repository help in an implementation when > we do not have an internet. What is the success rate? > > Thanks > Prem > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Sep 9 18:16:58 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 9 Sep 2019 18:16:58 +0000 Subject: [nova] Offline Installation of Openstack In-Reply-To: References: Message-ID: <20190909181657.aa3rti6ulflg7rbf@yuggoth.org> On 2019-09-09 23:35:37 +0530 (+0530), Premdeep S wrote: [...] > We have a requirement to setup Openstack in an isolated infra. We > will not be provided with Internet. How can we set it up? > > 1. Can we have a local repository (Rocky, Universal, etc)created? > If so how do we manage it? > > 2. We have noticed lot of package dependencies while setting up > Openstack Infra, so will creating a local repository help in an > implementation when we do not have an internet. What is the > success rate? I know Debian provides complete installation image sets for CD, DVD and Blu-ray you can use offline, and these incorporate all packages in their archive (including OpenStack): https://www.debian.org/releases/buster/debian-installer/ In your follow-up E-mail you mentioned Ubuntu specifically... I don't know whether they maintain similar installation media images, but if they do that may be a good solution. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From premdeep.xion at gmail.com Mon Sep 9 18:18:55 2019 From: premdeep.xion at gmail.com (Premdeep S) Date: Mon, 9 Sep 2019 23:48:55 +0530 Subject: [ceph][nova][DR] Openstack DR Setup Message-ID: Hi Team, We are looking to build a DR infrastructure. Our existing DC setup consists of multiple node Controller, Compute and Ceph nodes as the storage backend. We are using ubuntu 18.04 and Rocky version. Can someone please share any document or guide us on how we can build a DR infra for the existing DC? 1. Do we need to have the storage shared across (Ceph)? 2. What are the dependencies? 3. Is there a guide for the same Thanks Prem -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Mon Sep 9 19:00:12 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 9 Sep 2019 21:00:12 +0200 Subject: [edge] Edge whitepaper and tutorial authors needed Message-ID: <20B34733-8B9A-421C-BCCB-2BEF1D87BB27@gmail.com> Hi, I’m reaching out to point you to two mail threads on the edge-computing mailing list. The edge working group is looking into writing up a second whitepaper with a few detailed use cases and information about the reference architecture work the group has been doing. If you are interested in this work please __reach out to me or check out this mail thread__: http://lists.openstack.org/pipermail/edge-computing/2019-September/000632.html The other work item is writing up edge tutorials about frameworks to a German magazine. __This is a short deadline activity, please reach out to me if you are interested in participating.__ For further information please see this mail thread: http://lists.openstack.org/pipermail/edge-computing/2019-September/000633.html Please let me know if you have questions to any of the above. Thanks, Ildikó From nate.johnston at redhat.com Mon Sep 9 19:18:52 2019 From: nate.johnston at redhat.com (Nate Johnston) Date: Mon, 9 Sep 2019 15:18:52 -0400 Subject: Bug Deputy September 2 - September 9 Message-ID: <20190909191758.5rhxosto6mumldiy@bishop> Neutrinos, Here is the bug deputy report for this past week. It was an exciting week of CI problems; thanks to everyone who pitched in to get us past those thorny issues. I would call out specifically the two bugs rated as High that have no assignee yet; they are both gate related - one for Neutron, and the other affecting the Patrole project. There is also one bug left Untriaged because I was unable to validate the bug; I would appreciate a triaging look if you can. Thanks! 
Nate ---- Critical - https://bugs.launchpad.net/bugs/1842482 "test_get_devices_info_veth_different_namespaces" fails because veth1_1 interface has a link device in the same namespace Status: Fix Committed (slaweq) https://review.opendev.org/680001 - https://bugs.launchpad.net/bugs/1842517 neutron-sanity-check command fails if netdev datapath is used Status: In Progress (deepak.tiwari) No change registered in LP bug - https://bugs.launchpad.net/bugs/1842657 Job networking-ovn-tempest-dsvm-ovs-release is failing 100% times Status: Fix Committed (maciej.josefczyk) https://review.opendev.org/661065 - https://bugs.launchpad.net/bugs/1842659 Funtional tests of start and restart services failing 100% times Status: Fix Committed (slaweq) https://review.opendev.org/680001 High - https://bugs.launchpad.net/bugs/1843285 Trunk scenario test test_subport_connectivity failing with iptables_hybrid fw driver Status: Unassigned - https://bugs.launchpad.net/bugs/1842666 Bulk port creation with supplied security group also adds default security group Status: In Progress (njohnston) https://review.opendev.org/679852 - https://bugs.launchpad.net/bugs/1843025 FWaaS v2 fails to add ICMPv6 rules via horizon Status: In Progress (haleyb) https://review.opendev.org/680753 - https://bugs.launchpad.net/bugs/1843282 Rally CI not working since jsonschema version bump Status: Fix Committed (ralonsoh) https://review.opendev.org/681001 - https://bugs.launchpad.net/bugs/1843290 Remove network flavor profile fails Status: Unassigned Note: Currently breaking the gate for the Patrole project Medium - https://bugs.launchpad.net/bugs/1842327 Report in logs when FIP associate and disassociate Status: In progress (ralonsoh) https://review.opendev.org/680976 Low - https://bugs.launchpad.net/bugs/1842934 multicast scenario test failing when guest image don't have python3 installed Status: In Progress (slaweq) https://review.opendev.org/680428 - https://bugs.launchpad.net/bugs/1842937 Some ports assigned to routers don't have the correspondent routerport register Status: In Progress (ralonsoh) No change registered in LP bug - https://bugs.launchpad.net/bugs/1843269 Nova notifier called even if set to False Status: In Progress (haleyb) https://review.opendev.org/681016 RFE - https://bugs.launchpad.net/bugs/1843218 allow to create record on default zone from tenants Untriaged - https://bugs.launchpad.net/bugs/1843211 network-ip-availabilities' result is not correct when the subnet has no allocation-pool Status: Unassigned From colleen at gazlene.net Mon Sep 9 19:19:19 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Mon, 09 Sep 2019 12:19:19 -0700 Subject: [keystone] cannot use 'openstack trust list' without admin role In-Reply-To: References: <29841c08-d255-2ee4-346a-bcce04b7f4ad@everyware.ch> Message-ID: <720f0763-efaa-4b1b-b3bf-0befec246c7c@www.fastmail.com> Hi François, On Mon, Sep 9, 2019, at 08:36, Francois Scheurer wrote: > Hello > > > > I think this old link is explaining the reason behind this > "inconsistency" with the policy.json rules: > > https://bugs.launchpad.net/keystone/+bug/1373599 > > So to summarize, the RBAC is allowing identity:list_trusts for a non > admin user (cf. policy.json) but then hard coded policies deny the > request if non admin. > > Quote: > > The policies in policy.json can make these operations more restricted, > but not less restricted than the hard-coded restrictions. 
We can't > simply remove these settings from policy.json, as that would cause the > "default" rule to be used which makes trusts unusable in the case of > the default "default" rule of "admin_required". I wish I had known about this bug, as I would have reopened and closed it. You're correct that the trusts API was doing some unusal RBAC hardcoding, which we have just addressed by moving that logic into policy and then updating the policy defaults to be more sensible: https://review.opendev.org/#/q/topic:trust-policies That series is making its way through CI now and so will be available in the Train release. Unfortunately I don't think we can backport any of it because it introduces new functionality in the policies. Colleen > > > > Cheers > > Francois > > > > On 9/9/19 1:57 PM, Francois Scheurer wrote: > > Hi All > > > > > > I found an answer here > > > https://bugs.launchpad.net/keystone/+bug/1373599 > > > > > On 9/6/19 5:59 PM, Francois Scheurer wrote: > > Dear Keystone Experts, I have an issue with the openstack client in stage (using Rocky), using a user 'fsc' without 'admin' role and with password auth. 'openstack trust create/show' works. 'openstack trust list' is denied. But keystone policy.json says: > >     "identity:create_trust": "user_id:%(trust.trustor_user_id)s", >     "identity:list_trusts": "", >     "identity:list_roles_for_trust": "", >     "identity:get_role_for_trust": "", >     "identity:delete_trust": "", >     "identity:get_trust": "", > > So "openstack list trusts" is always allowed. In keystone log (I > replaced the uid's by names in the ouput below) I see that > 'identity:list_trusts()' was actually granted > but just after that a _*admin_required()*_ is getting checked and > fails... I wonder why... > > There is also a flag* is_admin_project=True* in the rbac creds for some reason... > > Any clue? Many thanks in advance! 
> > > Cheers > Francois > > > > #openstack --os-cloud stage-fsc trust create --project fscproject > --role creator fsc fsc > #=> fail because of the names and policy rules, but using uid's it works > openstack --os-cloud stage-fsc trust create --project > aeac4b07d8b144178c43c65f29fa9dac --role > 085180eeaf354426b01908cca8e82792 3e9b1a4fe95048a3b98fb5abebd44f6c > 3e9b1a4fe95048a3b98fb5abebd44f6c > +--------------------+----------------------------------+ > | Field              | Value                            | > +--------------------+----------------------------------+ > | deleted_at         | None                             | > | expires_at         | None                             | > | id                 | e74bcdf125e049c69c2e0ab1b182df5b | > | impersonation      | False                            | > | project_id         | fscproject | > | redelegation_count | 0                                | > | remaining_uses     | None                             | > | roles              | creator                          | > | trustee_user_id    | fsc | > | trustor_user_id    | fsc | > +--------------------+----------------------------------+ > > openstack --os-cloud stage-fsc trust show e74bcdf125e049c69c2e0ab1b182df5b > +--------------------+----------------------------------+ > | Field              | Value                            | > +--------------------+----------------------------------+ > | deleted_at         | None                             | > | expires_at         | None                             | > | id                 | e74bcdf125e049c69c2e0ab1b182df5b | > | impersonation      | False                            | > | project_id         | fscproject | > | redelegation_count | 0                                | > | remaining_uses     | None                             | > | roles              | creator                          | > | trustee_user_id    | fsc | > | trustor_user_id    | fsc | > +--------------------+----------------------------------+ > > #this fails: > openstack --os-cloud stage-fsc trust list > > *You are not authorized to perform the requested action: admin_required. (HTTP 403)* > > > > > > > >  -- > > > EveryWare AG > François Scheurer > Senior Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: francois.scheurer at everyware.ch > web: http://www.everyware.ch > Attachments: > * smime.p7s From nate.johnston at redhat.com Mon Sep 9 19:53:48 2019 From: nate.johnston at redhat.com (Nate Johnston) Date: Mon, 9 Sep 2019 15:53:48 -0400 Subject: [neutron] Bug Deputy September 2 - September 9 In-Reply-To: <20190909191758.5rhxosto6mumldiy@bishop> References: <20190909191758.5rhxosto6mumldiy@bishop> Message-ID: <20190909193358.3jalu6t7pbwxwwib@bishop> Apologies, omitted the "[neutron]" subject tag. On Mon, Sep 09, 2019 at 03:18:52PM -0400, Nate Johnston wrote: > Neutrinos, > > Here is the bug deputy report for this past week. It was an exciting week of > CI problems; thanks to everyone who pitched in to get us past those thorny > issues. I would call out specifically the two bugs rated as High that have no > assignee yet; they are both gate related - one for Neutron, and the other > affecting the Patrole project. There is also one bug left Untriaged because I > was unable to validate the bug; I would appreciate a triaging look if you can. > > Thanks! 
> > Nate > > ---- > > Critical > > - https://bugs.launchpad.net/bugs/1842482 > "test_get_devices_info_veth_different_namespaces" fails because veth1_1 interface has a link device in the same namespace > Status: Fix Committed (slaweq) https://review.opendev.org/680001 > > - https://bugs.launchpad.net/bugs/1842517 > neutron-sanity-check command fails if netdev datapath is used > Status: In Progress (deepak.tiwari) No change registered in LP bug > > - https://bugs.launchpad.net/bugs/1842657 > Job networking-ovn-tempest-dsvm-ovs-release is failing 100% times > Status: Fix Committed (maciej.josefczyk) https://review.opendev.org/661065 > > - https://bugs.launchpad.net/bugs/1842659 > Funtional tests of start and restart services failing 100% times > Status: Fix Committed (slaweq) https://review.opendev.org/680001 > > High > > - https://bugs.launchpad.net/bugs/1843285 > Trunk scenario test test_subport_connectivity failing with iptables_hybrid fw driver > Status: Unassigned > > - https://bugs.launchpad.net/bugs/1842666 > Bulk port creation with supplied security group also adds default security group > Status: In Progress (njohnston) https://review.opendev.org/679852 > > - https://bugs.launchpad.net/bugs/1843025 > FWaaS v2 fails to add ICMPv6 rules via horizon > Status: In Progress (haleyb) https://review.opendev.org/680753 > > - https://bugs.launchpad.net/bugs/1843282 > Rally CI not working since jsonschema version bump > Status: Fix Committed (ralonsoh) https://review.opendev.org/681001 > > - https://bugs.launchpad.net/bugs/1843290 > Remove network flavor profile fails > Status: Unassigned > Note: Currently breaking the gate for the Patrole project > > Medium > > - https://bugs.launchpad.net/bugs/1842327 > Report in logs when FIP associate and disassociate > Status: In progress (ralonsoh) https://review.opendev.org/680976 > > Low > > - https://bugs.launchpad.net/bugs/1842934 > multicast scenario test failing when guest image don't have python3 installed > Status: In Progress (slaweq) https://review.opendev.org/680428 > > - https://bugs.launchpad.net/bugs/1842937 > Some ports assigned to routers don't have the correspondent routerport register > Status: In Progress (ralonsoh) No change registered in LP bug > > - https://bugs.launchpad.net/bugs/1843269 > Nova notifier called even if set to False > Status: In Progress (haleyb) https://review.opendev.org/681016 > > RFE > > - https://bugs.launchpad.net/bugs/1843218 > allow to create record on default zone from tenants > > Untriaged > > - https://bugs.launchpad.net/bugs/1843211 > network-ip-availabilities' result is not correct when the subnet has no allocation-pool > Status: Unassigned From openstack-dev at storpool.com Mon Sep 9 22:23:04 2019 From: openstack-dev at storpool.com (Peter Penchev) Date: Tue, 10 Sep 2019 01:23:04 +0300 Subject: [devstack][qa][python3] "also install the Python 2 dev library" - still needed? In-Reply-To: References: Message-ID: On Mon, Sep 9, 2019 at 7:42 PM Clark Boylan wrote: > On Mon, Sep 9, 2019, at 1:22 AM, Peter Penchev wrote: > > Hi, > > > > When devstack's `setup_dev_lib` function is invoked and USE_PYTHON3 has > > been specified, this function tries to also install the development > > library for Python 2.x, I guess just in case some package has not > > declared proper Python 3 support or something. It then proceeds to > > install the Python 3 version of the library and all its dependencies. 
> > > > Unfortunately there is a problem with that, and specifically with > > script files installed in the system's executable files directory, e.g. > > /usr/local/bin. The problem appears when some Python library has > > already been installed for Python 3 (and has installed its script > > files), but is now installed for Python 2 (overwriting the script > > files) and is then not forcefully reinstalled for Python 3, since it is > > already present. Thus, the script files are last modified by the Python > > 2 library installation and they have a hashbang line saying `python2.x` > > - so if something then tries to execute them, they will run and use > > modules and libraries for Python 2 only. > > > > We experienced this problem when running the cinderlib tests from > > Cinder's `playbooks/cinderlib-run.yaml` file - it finds a unit2 > > executable (installed by the unittest2 library) and runs it, hoping > > that unit2 will be able to discover and collect the cinderlib tests and > > load the cinderlib modules. However, since unittest2 has last been > > installed as a Python 2 library, unit2 runs with Python 2 and fails to > > locate the cinderlib modules. (Yes, we know that there are other ways > > to run the cinderlib tests; this message is about the problem exposed > > by this way of running them) > > One option here is to explicitly run the file under the python version you > want. I do this with `pbr freeze` frequently to ensure I'm looking at the > correct version of software for the correct version of python. For example: > > python3 /usr/local/bin/pbr freeze | grep $packagename > python2 /usr/local/bin/pbr freeze | grep $packagename > > Then as long as you have installed the utility (in my case pbr) under both > python versions it should just work assuming they don't write different > files for different versions of python at install time. > This is what we ended up doing (sorry, I might have mentioned that in the original message; it was a solved problem for our CI) - we modified the Ansible job to explicitly run "python3.7 unit2". So, yeah, my message was more to point out the general problem than to ask for help for our specific case, but still, yeah, thanks, that's exactly what we did. > > > The obvious solution would be to instruct the Python 2 pip to not > > install script (or other shared) files at all; unfortunately, > > https://github.com/pypa/pip/issues/3980 ("Option to exclude scripts on > > install"), detailing a very similar use case ("need it installed for > > Python 2, but want to use it with Python 3") has been open for almost > > exactly three years now with no progress. I wonder if I could try to > > help, but even if this issue is resolved, there will be some time > > before OpenStack can actually depend on a recent enough version of pip. > > Note OpenStack tests with, and as a result possibly requires, the latest > version of pip. Fixing this in pip shouldn't be a problem as long as they > make a release not long after. > Right, I did briefly wonder whether this was true while writing my mail, I should have taken the time to check and see that devstack actually installs its own version of pip and removes any versions installed by OS packages. Hm, I just might try my hand at that in the coming days or weeks, but I can't really make any promises. 
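As an aside, a small diagnostic sketch (not part of the original mails; the script names are only examples) for spotting console scripts whose shebang was overwritten by a later Python 2 install:

    # Print the interpreter each console script would actually run under.
    from pathlib import Path

    for name in ("unit2", "pbr"):              # example entry points to inspect
        script = Path("/usr/local/bin") / name
        if script.exists():
            shebang = script.read_text().splitlines()[0]
            print(f"{script}: {shebang}")      # a python2.x shebang flags the problem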
> > > > A horrible workaround would be to find the binary directory before > > installing the Python 2 library (using something like `pip3.7 show > > somepackage` and then running some heuristics on the "Location" field), > > tar'ing it up and then restoring it... but I don't know if I even want > > to think about this. > > > > Another possible way forward would be to consider whether we still want > > the Python 2 libraries installed - is OpenStack's Python 3 transition > > reached a far enough stage to assume that any projects that still > > require Python 2 *and* fail to declare their Python 2 dependencies > > properly are buggy? To be honest, this seems the most reasonable path > > for me - drop the "also install the Python 2 libs" code and see what > > happens. I could try to make this change in a couple of test runs in > > our third-party Cinder CI system and see if something breaks. > > > > snip > > Hope this helps, > Sure, thanks! Still, would you agree that for Ussuri this ought to be solved by ripping out the "also install a Python 2 version" part? G'luck, Peter -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Sep 9 22:29:37 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 9 Sep 2019 22:29:37 +0000 Subject: [devstack][qa][python3] "also install the Python 2 dev library" - still needed? In-Reply-To: References: Message-ID: <20190909222936.nodrldkoc6ksmb2u@yuggoth.org> On 2019-09-10 01:23:04 +0300 (+0300), Peter Penchev wrote: [...] > would you agree that for Ussuri this ought to be solved by ripping > out the "also install a Python 2 version" part? At the very least, we ought to hide that functionality behind a config option so it's disabled by default. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mriedemos at gmail.com Mon Sep 9 22:49:55 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 9 Sep 2019 17:49:55 -0500 Subject: [oslo][nova] Nova causes MySQL timeouts In-Reply-To: References: <02fa1644-34a1-0fdf-9048-a668ae86de76@nemebean.com> Message-ID: <45a75ea5-d3c7-d0db-673d-69bba219e805@gmail.com> On 9/9/2019 11:49 AM, Ben Nemec wrote: > Maybe we should add that to the help text for the option in oslo.db? I was going to reply to Chris's email with something like this - sounds like the config option help could use some more details around how to calculate the value that's appropriate, what to look out for when it's miscalculated, things to try, etc. Lots of the DB tuning options suffer from the same kind of lack of info. I know I know patches welcome, I'm not helping by piling on, but I'm also not deep in this area. -- Thanks, Matt From cboylan at sapwetik.org Tue Sep 10 00:08:32 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 09 Sep 2019 17:08:32 -0700 Subject: [ironic] Ironic tempest jobs hitting retry_limit failures Message-ID: Hello Ironic, We've noticed that your tempest jobs have been hitting retry_limit failure recently. What this means is we attempted to run the job 3 times but each time the job failed due to "network" problems and Zuul eventually gave up. On further investigation I found that this is happening because the ironic tempest jobs are filling the root disk on rackspace nodes (which have a smaller root / + ephemeral drive mounted at /opt) with libvirt qcow2 images. 
This seems to cause ansible to fail to operate because it needs to write to /tmp and it thinks there is a "network" error. I've thrown my investigation into a bug for you [0]. It would be great if you could take a look at this as we are effectively spinning our wheels for about 9 hours every time this happens. I did hold the node I used to investigate. If you'd like to dig in yourselves just ask the infra team for access to nodepool node ubuntu-bionic-rax-ord-0011007873. Finally, to help debug these issues in the future I've started adding a cleanup-run playbook [1] which should give us network and disk info (can be expanded if necessary too) for every job when it is done running. Even if the disk is full. [0] https://storyboard.openstack.org/#!/story/2006520 [1] https://review.opendev.org/#/c/681100/ Clark From renat.akhmerov at gmail.com Tue Sep 10 04:59:08 2019 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Tue, 10 Sep 2019 11:59:08 +0700 Subject: Invite Oleg Ovcharuk to join the Mistral Core Team In-Reply-To: References: Message-ID: <4df13713-5db7-407b-b902-a52ca1f5cddd@Spark> Oleg, congrats! Welcome to the core team ) Thanks Renat Akhmerov @Nokia On 9 Sep 2019, 15:33 +0700, Dougal Matthews , wrote: > +1, seems like a good addition to the team! > > > On Thu, 5 Sep 2019 at 05:35, Renat Akhmerov wrote: > > > Andras, > > > > > > You just went one step ahead of me! I was going to promote Oleg in the end of this week :) I’m glad that we coincided at this. Thanks! I’m for it with my both hands! > > > > > > > > > Renat Akhmerov > > > @Nokia > > > On 4 Sep 2019, 17:33 +0700, András Kövi , wrote: > > > > I would like to invite Oleg Ovcharuk to join the Mistral Core Team. Oleg has been a very active and enthusiastic contributor to the project. He has definitely earned his way into our community. > > > > > > > > Thank you, > > > > Andras -------------- next part -------------- An HTML attachment was scrubbed... URL: From francois.scheurer at everyware.ch Tue Sep 10 08:27:14 2019 From: francois.scheurer at everyware.ch (=?iso-8859-1?Q?Scheurer_Fran=E7ois?=) Date: Tue, 10 Sep 2019 08:27:14 +0000 Subject: [keystone] cannot use 'openstack trust list' without admin role In-Reply-To: <720f0763-efaa-4b1b-b3bf-0befec246c7c@www.fastmail.com> References: <29841c08-d255-2ee4-346a-bcce04b7f4ad@everyware.ch> , <720f0763-efaa-4b1b-b3bf-0befec246c7c@www.fastmail.com> Message-ID: <1568104034539.98290@everyware.ch> Hi Colleen Thank you for your message. They also mentioned this in the patch proposal: https://review.opendev.org/#/c/123862/4/doc/source/configuration.rst : " I initially had the same reaction, but it arguably is desired to have hard-coded restrictions in some cases. The hard-coded restrictions prevent one from making a mistake in the policy file that opens up access to something that should never be authorized." So one should also take this into account. Best Regards Francois ________________________________________ From: Colleen Murphy Sent: Monday, September 9, 2019 9:19 PM To: openstack-discuss at lists.openstack.org Subject: Re: [keystone] cannot use 'openstack trust list' without admin role Hi François, On Mon, Sep 9, 2019, at 08:36, Francois Scheurer wrote: > Hello > > > > I think this old link is explaining the reason behind this > "inconsistency" with the policy.json rules: > > https://bugs.launchpad.net/keystone/+bug/1373599 > > So to summarize, the RBAC is allowing identity:list_trusts for a non > admin user (cf. 
policy.json) but then hard coded policies deny the > request if non admin. > > Quote: > > The policies in policy.json can make these operations more restricted, > but not less restricted than the hard-coded restrictions. We can't > simply remove these settings from policy.json, as that would cause the > "default" rule to be used which makes trusts unusable in the case of > the default "default" rule of "admin_required". I wish I had known about this bug, as I would have reopened and closed it. You're correct that the trusts API was doing some unusal RBAC hardcoding, which we have just addressed by moving that logic into policy and then updating the policy defaults to be more sensible: https://review.opendev.org/#/q/topic:trust-policies That series is making its way through CI now and so will be available in the Train release. Unfortunately I don't think we can backport any of it because it introduces new functionality in the policies. Colleen > > > > Cheers > > Francois > > > > On 9/9/19 1:57 PM, Francois Scheurer wrote: > > Hi All > > > > > > I found an answer here > > > https://bugs.launchpad.net/keystone/+bug/1373599 > > > > > On 9/6/19 5:59 PM, Francois Scheurer wrote: > > Dear Keystone Experts, I have an issue with the openstack client in stage (using Rocky), using a user 'fsc' without 'admin' role and with password auth. 'openstack trust create/show' works. 'openstack trust list' is denied. But keystone policy.json says: > > "identity:create_trust": "user_id:%(trust.trustor_user_id)s", > "identity:list_trusts": "", > "identity:list_roles_for_trust": "", > "identity:get_role_for_trust": "", > "identity:delete_trust": "", > "identity:get_trust": "", > > So "openstack list trusts" is always allowed. In keystone log (I > replaced the uid's by names in the ouput below) I see that > 'identity:list_trusts()' was actually granted > but just after that a _*admin_required()*_ is getting checked and > fails... I wonder why... > > There is also a flag* is_admin_project=True* in the rbac creds for some reason... > > Any clue? Many thanks in advance! 
> > > Cheers > Francois > > > > #openstack --os-cloud stage-fsc trust create --project fscproject > --role creator fsc fsc > #=> fail because of the names and policy rules, but using uid's it works > openstack --os-cloud stage-fsc trust create --project > aeac4b07d8b144178c43c65f29fa9dac --role > 085180eeaf354426b01908cca8e82792 3e9b1a4fe95048a3b98fb5abebd44f6c > 3e9b1a4fe95048a3b98fb5abebd44f6c > +--------------------+----------------------------------+ > | Field | Value | > +--------------------+----------------------------------+ > | deleted_at | None | > | expires_at | None | > | id | e74bcdf125e049c69c2e0ab1b182df5b | > | impersonation | False | > | project_id | fscproject | > | redelegation_count | 0 | > | remaining_uses | None | > | roles | creator | > | trustee_user_id | fsc | > | trustor_user_id | fsc | > +--------------------+----------------------------------+ > > openstack --os-cloud stage-fsc trust show e74bcdf125e049c69c2e0ab1b182df5b > +--------------------+----------------------------------+ > | Field | Value | > +--------------------+----------------------------------+ > | deleted_at | None | > | expires_at | None | > | id | e74bcdf125e049c69c2e0ab1b182df5b | > | impersonation | False | > | project_id | fscproject | > | redelegation_count | 0 | > | remaining_uses | None | > | roles | creator | > | trustee_user_id | fsc | > | trustor_user_id | fsc | > +--------------------+----------------------------------+ > > #this fails: > openstack --os-cloud stage-fsc trust list > > *You are not authorized to perform the requested action: admin_required. (HTTP 403)* > > > > > > > > -- > > > EveryWare AG > François Scheurer > Senior Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: francois.scheurer at everyware.ch > web: http://www.everyware.ch > Attachments: > * smime.p7s -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From chkumar246 at gmail.com Tue Sep 10 08:27:50 2019 From: chkumar246 at gmail.com (Chandan kumar) Date: Tue, 10 Sep 2019 13:57:50 +0530 Subject: Thank you Stackers for five amazing years! In-Reply-To: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> References: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> Message-ID: On Wed, Sep 4, 2019 at 9:57 PM Chris Hoge wrote: > > Hi everyone, > > After more than nine years working in cloud computing and on OpenStack, I've > decided that it is time for a change and will be moving on from the OpenStack > Foundation. For the last five years I've had the honor of helping to support > this vibrant community, and I'm going to deeply miss being a part of it. > OpenStack has been a central part of my life for so long that it's hard to > imagine a work life without it. I'm proud to have helped in some small way to > create a lasting project and community that has, and will continue to, > transform how infrastructure is managed. > > September 12 will officially be my last day with the OpenStack Foundation. As I > make the move away from my responsibilities, I'll be working with community > members to help ensure continuity of my efforts. > > Thank you to everyone for building such an incredible community filled with > talented, smart, funny, and kind people. You've built something special here, > and we're all better for it. I'll still be involved with open source. 
If you > ever want to get in touch, be it with questions about work I've been involved > with or to talk about some exciting new tech or to just catch up over a tasty > meal, I'm just a message away in all the usual places. > Thank you for all the amazing work you have done in OpenStack. Sad to see you leaving. All the best for your future adventures. :-) Thanks, Chandan Kumar
From ionut at fleio.com Tue Sep 10 10:38:50 2019 From: ionut at fleio.com (Ionut Biru) Date: Tue, 10 Sep 2019 13:38:50 +0300 Subject: [neutron][vmware][vsphere] integration Message-ID: Hello guys, I'm trying to integrate OpenStack Stein with an already running VMware vSphere cluster. All the documentation that I found explains how to do it with distributed switches or port groups, but currently in my setup VMware is using standard networking. OpenStack Stein was deployed using OSA, neutron was configured using ovs, and I configured the integrated_bridge to br-int. I first tried to deploy using linux-bridge, but when I tried to deploy an instance, neutron returned that only the ovs or dvs method is supported. Now, with ovs, when I'm deploying an instance with a network, nova returns an error message: 2019-09-10 10:15:19.010 22443 ERROR nova.compute.manager [instance: d8d1cbb8-5c1c-4b98-9739-bea0668cfaa5] VimFaultException: An error occurred during host configuration. 2019-09-10 10:15:19.010 22443 ERROR nova.compute.manager [instance: d8d1cbb8-5c1c-4b98-9739-bea0668cfaa5] Faults: ['PlatformConfigFault'] How do you guys integrate neutron with VMware vSphere using standard networking? Is there a driver that I need to use? -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL:
From grant at civo.com Tue Sep 10 10:40:09 2019 From: grant at civo.com (Grant Morley) Date: Tue, 10 Sep 2019 11:40:09 +0100 Subject: OSA upgrading Xenial Queens to Bionic Rocky Message-ID: Hi all, I was wondering if there was a guide for upgrading OpenStack Ansible from Ubuntu 16.04 Queens to Ubuntu 18.04 Rocky? I remember a long time ago there was an etherpad set up for upgrading from 14.04 -> 16.04 but I can't seem to find anything similar for going to 18.04. Annoyingly as we don't have lots of hardware, we are going to have to upgrade in place. If there are any guides that would be much appreciated. Many thanks. -- Grant Morley Cloud Lead, Civo Ltd www.civo.com | Signup for an account! -------------- next part -------------- An HTML attachment was scrubbed... URL:
From jonathan.rosser at rd.bbc.co.uk Tue Sep 10 11:17:00 2019 From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser) Date: Tue, 10 Sep 2019 12:17:00 +0100 Subject: [openstack-ansible] OSA upgrading Xenial Queens to Bionic Rocky In-Reply-To: References: Message-ID: Hi Grant, You need to upgrade Queens to Rocky first on your 16.04 hosts. Rocky is the OSA transitional release which supports both 16.04 and 18.04. At that point you can choose to do an in-place operating system upgrade, or reinstall the hosts from fresh one by one. Either way you should not need any additional hardware as long as you have multiple controller nodes already. Drop into #openstack-ansible IRC and we can help you out. Regards, Jonathan. On 10/09/2019 11:40, Grant Morley wrote: > Hi all, > > I was wondering if there was a guide for upgrading OpenStack Ansible > from Ubuntu 16.04 Queens to Ubuntu 18.04 Rocky? I remember a long time > ago there was an etherpad set up for upgrading from 14.04 -> 16.04 but I > can't seem to find anything similar for going to 18.04.
> > Annoyingly as we don't have lots of hardware, we are going to have to > upgrade in place. > > If there are any guides that would be much appreciated. > > Many thanks. > From tnakamura.openstack at gmail.com Tue Sep 10 11:23:59 2019 From: tnakamura.openstack at gmail.com (Tetsuro Nakamura) Date: Tue, 10 Sep 2019 20:23:59 +0900 Subject: [placement][ptl][tc] Call for Placement PTL position In-Reply-To: <16d0205c0b1.b18639584545.7154839133743907603@ghanshyammann.com> References: <16d0205c0b1.b18639584545.7154839133743907603@ghanshyammann.com> Message-ID: Sorry for the late response. I was on a business trip in Southeast Asia, and needed time to get internal permission, but finally I’d like to announce my candidacy for the PTL role of Placement for the Ussuri cycle. I’ve been involved with Placement since Queens cycle. I helped to develop new features, to keep refactoring for better performance, and to help other projects to use it (like Blazar to meet NFV requirements using placement). In the U cycle, having new features on server side, I’d like to focus on client side: * Improve usability of osc-placement and catch up the latest microversion * Commonize client code that helps other projects to use placement easily and intuitively That would help us to get more projects to use it and to get more use cases, such as reservations for ironic standalone nodes. Thanks! 2019年9月6日(金) 0:26 Ghanshyam Mann : > Hello Everyone, > > With Ussuri Cycle PTL election completed, we left with Placement project > as leaderless[1]. > In today TC meeting[2], we discussed the few possibilities and decided to > reach out to the > eligible candidates to serve the PTL position. > > We would like to know if anyone from Placement core team, Nova core team > or PTL (as placement > main consumer) of any other interested/related developer is interested to > take the PTL position? > > [1] https://governance.openstack.org/election/results/ussuri/ptl.html > [2] > http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-09-05-14.00.log.html#l-250 > > -TC (gmann) > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Sep 10 11:37:48 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 10 Sep 2019 20:37:48 +0900 Subject: [placement][ptl][tc] Call for Placement PTL position In-Reply-To: References: <16d0205c0b1.b18639584545.7154839133743907603@ghanshyammann.com> Message-ID: <16d1af7275c.1049b688929860.3445344073479624095@ghanshyammann.com> ---- On Tue, 10 Sep 2019 20:23:59 +0900 Tetsuro Nakamura wrote ---- > Sorry for the late response. > I was on a business trip in Southeast Asia, and needed time to get internal permission, > but finally I’d like to announce my candidacy for the PTL role of Placement for the Ussuri cycle. > I’ve been involved with Placement since Queens cycle. > I helped to develop new features, to keep refactoring for better performance, > and to help other projects to use it (like Blazar to meet NFV requirements using placement). > In the U cycle, having new features on server side, I’d like to focus on client side: > * Improve usability of osc-placement and catch up the latest microversion > * Commonize client code that helps other projects to use placement easily and intuitively > That would help us to get more projects to use it and to get more use cases, > such as reservations for ironic standalone nodes. > Thanks! Thanks Tetsuro. 
I have proposed the governance patch for that- https://review.opendev.org/#/c/681226/ -gmann > > 2019年9月6日(金) 0:26 Ghanshyam Mann : > Hello Everyone, > > With Ussuri Cycle PTL election completed, we left with Placement project as leaderless[1]. > In today TC meeting[2], we discussed the few possibilities and decided to reach out to the > eligible candidates to serve the PTL position. > > We would like to know if anyone from Placement core team, Nova core team or PTL (as placement > main consumer) of any other interested/related developer is interested to take the PTL position? > > [1] https://governance.openstack.org/election/results/ussuri/ptl.html > [2] http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-09-05-14.00.log.html#l-250 > > -TC (gmann) > > > From thierry at openstack.org Tue Sep 10 12:27:37 2019 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 10 Sep 2019 14:27:37 +0200 Subject: [release][cyborg] os-acc status In-Reply-To: <1CC272501B5BC543A05DB90AA509DED52760B6EC@fmsmsx122.amr.corp.intel.com> References: <5ec9441fa8fed052bd958cf005a08ab18b88f91c.camel@redhat.com> <1CC272501B5BC543A05DB90AA509DED52760B6EC@fmsmsx122.amr.corp.intel.com> Message-ID: <57c5857e-ec1c-4685-03a9-ee890b3394eb@openstack.org> Nadathur, Sundar wrote: > Hi Thierry and all, > Os-acc is not relevant and will be discontinued. This was communicated in [1]. A patch has been filed for the same [2]. > > I will start the work after Train-3 milestone. That was also mentioned in [3]. > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008473.html > [2] https://review.opendev.org/#/c/676331/ > [3] https://review.opendev.org/#/c/680091/ Ah! I did not remember that when I spotted the absence of changes on that repository. Sorry for the false alarm! Regards, -- Thierry From sundar.nadathur at intel.com Tue Sep 10 13:05:37 2019 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Tue, 10 Sep 2019 13:05:37 +0000 Subject: [release][cyborg] os-acc status In-Reply-To: <57c5857e-ec1c-4685-03a9-ee890b3394eb@openstack.org> References: <5ec9441fa8fed052bd958cf005a08ab18b88f91c.camel@redhat.com> <1CC272501B5BC543A05DB90AA509DED52760B6EC@fmsmsx122.amr.corp.intel.com> <57c5857e-ec1c-4685-03a9-ee890b3394eb@openstack.org> Message-ID: <1CC272501B5BC543A05DB90AA509DED52760BEC3@fmsmsx122.amr.corp.intel.com> NP, Thierry. Thanks for keeping tabs. Regards, Sundar > -----Original Message----- > From: Thierry Carrez > Sent: Tuesday, September 10, 2019 5:28 AM > To: openstack-discuss at lists.openstack.org > Subject: Re: [release][cyborg] os-acc status > > Nadathur, Sundar wrote: > > Hi Thierry and all, > > Os-acc is not relevant and will be discontinued. This was communicated in > [1]. A patch has been filed for the same [2]. > > > > I will start the work after Train-3 milestone. That was also mentioned in [3]. > > > > [1] > > http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008 > > 473.html [2] https://review.opendev.org/#/c/676331/ > > [3] https://review.opendev.org/#/c/680091/ > > Ah! I did not remember that when I spotted the absence of changes on that > repository. Sorry for the false alarm! 
> > Regards, > > -- > Thierry From camille.rodriguez at canonical.com Tue Sep 10 13:07:19 2019 From: camille.rodriguez at canonical.com (Camille Rodriguez) Date: Tue, 10 Sep 2019 09:07:19 -0400 Subject: [Horizon] Help making custom theme - resend as still looking:) In-Reply-To: References: Message-ID: Hi Amy, I have done something similar with the charm-openstack-dashboard and Juju tools from Canonical previously. I also have some experience developing a Django website. I would be happy to help by testing your tutorial and provide feedback if you would like. I am also attending the GHC in October. Kind regards, Camille Rodriguez On Fri, Sep 6, 2019 at 4:23 PM Amy Marrich wrote: > > Just thought I'd resend this out to see if someone could help:) > > For the Grace Hopper Conference's Open Source Day we're doing a Horizon > based workshop for OpenStack (running Devstack Pike). The end goal is to > have the attendee teams create their own OpenStack theme supporting a > humanitarian effort of their choice in a few hours. I've tried modifying > the material theme thinking it would be the easiest route to go but that > might not be the best way to go about this.:) > > I've been getting some assistance from e0ne in the Horizon channel and my > logo now shows up on the login page, and I had already gotten the > SITE_BRAND attributes and the theme itself to show up after changing the > local_settings.py. > > If anyone has some tips or a tutorial somewhere it would be greatly > appreciated and I will gladly put together a tutorial for the repo when > done. > > Thanks! > > Amy (spotz) > -- Camille Rodriguez, Field Software Engineer Canonical -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey.bryant at canonical.com Tue Sep 10 13:42:12 2019 From: corey.bryant at canonical.com (Corey Bryant) Date: Tue, 10 Sep 2019 09:42:12 -0400 Subject: [charms] Retiring charm-neutron-api-genericswitch Message-ID: Hi All, I'm going to retire charm-neutron-api-genericswitch today as it is currently not maintained. I've already discussed and received approval to do so from the original author and the current charms PTL, so this serves as a more broad announcement. Thanks, Corey -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Tue Sep 10 14:07:00 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 10 Sep 2019 10:07:00 -0400 Subject: [tc] weekly update Message-ID: Hi everyone, Here’s the update for what happened in the OpenStack TC this week. You can get more information by checking for changes in openstack/governance repository. # New Projects - os_murano (under openstack-ansible) # General Changes - We made a few improvements while reviewing the separation of goal definition from goal selection: https://review.opendev.org/#/c/677938/ Thanks! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From vgvoleg at gmail.com Tue Sep 10 14:08:56 2019 From: vgvoleg at gmail.com (Oleg Ovcharuk) Date: Tue, 10 Sep 2019 17:08:56 +0300 Subject: Invite Oleg Ovcharuk to join the Mistral Core Team In-Reply-To: <4df13713-5db7-407b-b902-a52ca1f5cddd@Spark> References: <4df13713-5db7-407b-b902-a52ca1f5cddd@Spark> Message-ID: Wow! Good news! Thank for your trust guys! Hope I will be useful :) > 10 сент. 2019 г., в 7:59, Renat Akhmerov написал(а): > > Oleg, congrats! 
Welcome to the core team ) > > > Thanks > > Renat Akhmerov > @Nokia > On 9 Sep 2019, 15:33 +0700, Dougal Matthews , wrote: >> +1, seems like a good addition to the team! >> >> On Thu, 5 Sep 2019 at 05:35, Renat Akhmerov > wrote: >> Andras, >> >> You just went one step ahead of me! I was going to promote Oleg in the end of this week :) I’m glad that we coincided at this. Thanks! I’m for it with my both hands! >> >> >> Renat Akhmerov >> @Nokia >> On 4 Sep 2019, 17:33 +0700, András Kövi >, wrote: >>> I would like to invite Oleg Ovcharuk > to join the Mistral Core Team. Oleg has been a very active and enthusiastic contributor to the project. He has definitely earned his way into our community. >>> >>> Thank you, >>> Andras -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Tue Sep 10 14:14:13 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 10 Sep 2019 10:14:13 -0400 Subject: [tc] monthly meeting summary Message-ID: Hi everyone, The TC held it’s monthly meeting on the 5th of September 2019 and this email provides a summary of that meeting. We contacted Alan to mention that TC will have some presence at the Shanghai leadership meeting alongside other OSF projects on November 3rd, so the day before the summit. Rico is currently working on an etherpad to update SIG guidelines to simplify the process for new SIGs. Once the draft version is done, they will ask SIG chairs to join in the editing part. We still have to contact interested parties for a new ‘large scale’ SIG so we will follow up again on that action item in the next meeting. Graham is currently in the process of testing the code in order to make the proposal bot for propose project-template patches for specific releases. We’re working on adding some forum sessions ideas for the TC and we’ve got volunteers in the forum selection committee. Thierry finished making goal selection a two-step process and it's been merged. There are a few projects that lacked a PTL elected, we’ve discussed the following points for each: - Cyborg: Sundar self-nominated but only on the mailing list, therefore we will appoint them. - Designate: The developers who have expressed interest didn’t have commits, so Graham will sync with both of them to see how if can make it work. - OpenstackSDK: Monty might have missed the notice since they were traveling so Thierry will reach out to him to see if they want to take it again or has suggestions. - I18n: Ian Y. Choi expressed interest but couldn’t run because they were an election official. - {PowerVM,Win}stackers: The code and review activity was quiet and they missed election twice in a row so we proposed to remove them from project teams list and if they want to continue they can as a SIG. - Placement: we are trying to find someone to volunteer by reaching out to Placement and Nova team. We started discussing a change in the process for release names and the rest of that discussion carried over into office hours. I hope that I covered most of what we discussed, for the full meeting logs, you can find them here: http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-09-05-14.00.log.html Thanks! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. 
http://vexxhost.com From mthode at mthode.org Tue Sep 10 15:23:08 2019 From: mthode at mthode.org (Matthew Thode) Date: Tue, 10 Sep 2019 10:23:08 -0500 Subject: [nova][keystone][neutron][kuryr][requirements] breaking tests with new library versions In-Reply-To: <16cbe2acb1d.ce031552275757.8109746026654681476@ghanshyammann.com> References: <20190818161611.6ira6oezdat4alke@mthode.org> <20190819145437.GA29162@zeong> <16cb3b24926.f85727e0206810.691322353108028475@ghanshyammann.com> <20190821194144.GA1844@zeong> <16cbe2acb1d.ce031552275757.8109746026654681476@ghanshyammann.com> Message-ID: <20190910152308.oe75ltvwtdlnsynm@mthode.org> On 19-08-23 20:09:31, Ghanshyam Mann wrote: > ---- On Thu, 22 Aug 2019 04:41:44 +0900 Matthew Treinish wrote ---- > > On Wed, Aug 21, 2019 at 07:21:41PM +0900, Ghanshyam Mann wrote: > > > ---- On Mon, 19 Aug 2019 23:54:37 +0900 Matthew Treinish wrote ---- > > > > On Sun, Aug 18, 2019 at 11:16:11AM -0500, Matthew Thode wrote: > > > > > NOVA: > > > > > lxml===4.4.1 nova tests fail https://bugs.launchpad.net/nova/+bug/1838666 > > > > > websockify===0.9.0 tempest test failing > > > > > > > > > > KEYSTONE: > > > > > oauthlib===3.1.0 keystone https://bugs.launchpad.net/keystone/+bug/1839393 > > > > > > > > > > NEUTRON: > > > > > tenacity===5.1.1 https://2c976b5e9e9a7bed9985-82d79a041e998664bd1d0bc4b6e78332.ssl.cf2.rackcdn.com/677052/5/check/cross-neutron-py27/a0a3c75/testr_results.html.gz > > > > > this could be caused by pytest===5.1.0 as well > > > > > > > > > > KURYR: > > > > > kubernetes===10.0.1 openshift PINS this, only kuryr-tempest-plugin deps on it > > > > > https://review.opendev.org/665352 > > > > > > > > > > MISC: > > > > > tornado===5.1.1 salt is cauing this, no eta on fix (same as the last year) > > > > > stestr===2.5.0 needs merged https://github.com/mtreinish/stestr/pull/265 > > > > > > > > This actually doesn't fix the underlying issue blocking it here. PR 265 is for > > > > fixing a compatibility issue with python 3.4, which we don't officially support > > > > in stestr but was a simple fix. The blocker is actually not an stestr issue, > > > > it's a testtools bug: > > > > > > > > https://github.com/testing-cabal/testtools/issues/272 > > > > > > > > Where this is coming into play here is that stestr 2.5.0 switched to using an > > > > internal test runner built off of stdlib unittest instead of testtools/subunit > > > > for python 3. This was done to fix a huge number of compatibility issues people > > > > had reported when trying to run stdlib unittest suites using stestr on > > > > python >= 3.5 (which were caused by unittest2 and testools). The complication > > > > for openstack (more specificially tempest) is that it's built off of testtools > > > > not stdlib unittest. So when tempest raises 'self.skipException' as part of > > > > it's class level skip checks testtools raises 'unittest2.case.SkipTest' instead > > > > of 'unittest.case.SkipTest'. stdlib unittest does not understand what that is > > > > and treats it as an unhandled exception which is a test failure, instead of the > > > > intended skip result. [1] This is actually a general bug and will come up whenever > > > > anyone tries to use stdlib unittest to run tempest. We need to come up with a > > > > fix for this problem in testtools [2] or just workaround it in tempest. 
> > > > > > > > [1] skip decorators typically aren't effected by this because they set an > > > > attribute that gets checked before the test method is executed instead of > > > > relying on an exception, which is why this is mostly only an issue for tempest > > > > because it does a lot of run time skips via exceptions. > > > > > > > > [2] testtools is mostly unmaintained at this point, I was recently granted > > > > merge access but haven't had much free time to actively maintain it > > > > > > Thanks matt for details. As you know, for Tempest where we need to support py2.7 > > > (including unitest2 use) for stable branches, we are going to use the specific stetsr > > > version/branch( > > is good option to me. I think your PR to remove the unittest2 use form testtools > > > make sense to me [1]. A workaround in Tempest can be last option for us. > > > > https://github.com/testing-cabal/testtools/pull/277 isn't a short term > > solution, unittest2 is still needed for python < 3.5 in testtools and > > testtools has not deprecated support for python 2.7 or 3.4 yet. I probably > > can rework that PR so that it's conditional and always uses stdlib unittest > > for python >= 3.5 but then testtools ends up maintaining two separate paths > > depending on python version. I'd like to continue thinking about that is as a > > long term solution because I don't know when I'll have the time to keep pushing > > that PR forward. > > Thanks for more details. I understand that might take time. I am in OpenInfra event and after that on vacation till > 29th Aug. I will be able to check the workaround on testtools or tempest side after that > only. I will check with Matthew about when is the plan to move the stestr to 2.5.0. > > -gmann > > > > > > > > > Till we fix it and to avoid gate break, can we cap stestr in g-r - stestr<2.5.0 ? I know that is > > > not the options you like. > > > > > > [1] https://github.com/mtreinish/testtools/commit/38fc9a9e302f68d471d7b097c7327b4ff7348790 > > > > > > -gmann > > > > > > > > > > > -Matt Treinish > > > > > > > > > jsonschema===3.0.2 see https://review.opendev.org/649789 > > > > > > > > > > I'm trying to get this in place as we are getting closer to the > > > > > requirements freeze (sept 9th-13th). Any help clearing up these bugs > > > > > would be appreciated. > > > > > > > > > > -- > > > > > Matthew Thode > > > > > > > > > > > > > > > > > > > > > > Any progress on this, at the moment only stestr-2.5.1 is being held back. https://review.opendev.org/680914 -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From gmann at ghanshyammann.com Tue Sep 10 15:42:03 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 11 Sep 2019 00:42:03 +0900 Subject: [nova][keystone][neutron][kuryr][requirements] breaking tests with new library versions In-Reply-To: <20190910152308.oe75ltvwtdlnsynm@mthode.org> References: <20190818161611.6ira6oezdat4alke@mthode.org> <20190819145437.GA29162@zeong> <16cb3b24926.f85727e0206810.691322353108028475@ghanshyammann.com> <20190821194144.GA1844@zeong> <16cbe2acb1d.ce031552275757.8109746026654681476@ghanshyammann.com> <20190910152308.oe75ltvwtdlnsynm@mthode.org> Message-ID: <16d1bd6c83c.b5b4707242927.3734104778674628098@ghanshyammann.com> ---- On Wed, 11 Sep 2019 00:23:08 +0900 Matthew Thode wrote ---- > On 19-08-23 20:09:31, Ghanshyam Mann wrote: > > ---- On Thu, 22 Aug 2019 04:41:44 +0900 Matthew Treinish wrote ---- > > > On Wed, Aug 21, 2019 at 07:21:41PM +0900, Ghanshyam Mann wrote: > > > > ---- On Mon, 19 Aug 2019 23:54:37 +0900 Matthew Treinish wrote ---- > > > > > On Sun, Aug 18, 2019 at 11:16:11AM -0500, Matthew Thode wrote: > > > > > > NOVA: > > > > > > lxml===4.4.1 nova tests fail https://bugs.launchpad.net/nova/+bug/1838666 > > > > > > websockify===0.9.0 tempest test failing > > > > > > > > > > > > KEYSTONE: > > > > > > oauthlib===3.1.0 keystone https://bugs.launchpad.net/keystone/+bug/1839393 > > > > > > > > > > > > NEUTRON: > > > > > > tenacity===5.1.1 https://2c976b5e9e9a7bed9985-82d79a041e998664bd1d0bc4b6e78332.ssl.cf2.rackcdn.com/677052/5/check/cross-neutron-py27/a0a3c75/testr_results.html.gz > > > > > > this could be caused by pytest===5.1.0 as well > > > > > > > > > > > > KURYR: > > > > > > kubernetes===10.0.1 openshift PINS this, only kuryr-tempest-plugin deps on it > > > > > > https://review.opendev.org/665352 > > > > > > > > > > > > MISC: > > > > > > tornado===5.1.1 salt is cauing this, no eta on fix (same as the last year) > > > > > > stestr===2.5.0 needs merged https://github.com/mtreinish/stestr/pull/265 > > > > > > > > > > This actually doesn't fix the underlying issue blocking it here. PR 265 is for > > > > > fixing a compatibility issue with python 3.4, which we don't officially support > > > > > in stestr but was a simple fix. The blocker is actually not an stestr issue, > > > > > it's a testtools bug: > > > > > > > > > > https://github.com/testing-cabal/testtools/issues/272 > > > > > > > > > > Where this is coming into play here is that stestr 2.5.0 switched to using an > > > > > internal test runner built off of stdlib unittest instead of testtools/subunit > > > > > for python 3. This was done to fix a huge number of compatibility issues people > > > > > had reported when trying to run stdlib unittest suites using stestr on > > > > > python >= 3.5 (which were caused by unittest2 and testools). The complication > > > > > for openstack (more specificially tempest) is that it's built off of testtools > > > > > not stdlib unittest. So when tempest raises 'self.skipException' as part of > > > > > it's class level skip checks testtools raises 'unittest2.case.SkipTest' instead > > > > > of 'unittest.case.SkipTest'. stdlib unittest does not understand what that is > > > > > and treats it as an unhandled exception which is a test failure, instead of the > > > > > intended skip result. [1] This is actually a general bug and will come up whenever > > > > > anyone tries to use stdlib unittest to run tempest. 
We need to come up with a > > > > > fix for this problem in testtools [2] or just workaround it in tempest. > > > > > > > > > > [1] skip decorators typically aren't effected by this because they set an > > > > > attribute that gets checked before the test method is executed instead of > > > > > relying on an exception, which is why this is mostly only an issue for tempest > > > > > because it does a lot of run time skips via exceptions. > > > > > > > > > > [2] testtools is mostly unmaintained at this point, I was recently granted > > > > > merge access but haven't had much free time to actively maintain it > > > > > > > > Thanks matt for details. As you know, for Tempest where we need to support py2.7 > > > > (including unitest2 use) for stable branches, we are going to use the specific stetsr > > > > version/branch( > > > is good option to me. I think your PR to remove the unittest2 use form testtools > > > > make sense to me [1]. A workaround in Tempest can be last option for us. > > > > > > https://github.com/testing-cabal/testtools/pull/277 isn't a short term > > > solution, unittest2 is still needed for python < 3.5 in testtools and > > > testtools has not deprecated support for python 2.7 or 3.4 yet. I probably > > > can rework that PR so that it's conditional and always uses stdlib unittest > > > for python >= 3.5 but then testtools ends up maintaining two separate paths > > > depending on python version. I'd like to continue thinking about that is as a > > > long term solution because I don't know when I'll have the time to keep pushing > > > that PR forward. > > > > Thanks for more details. I understand that might take time. I am in OpenInfra event and after that on vacation till > > 29th Aug. I will be able to check the workaround on testtools or tempest side after that > > only. I will check with Matthew about when is the plan to move the stestr to 2.5.0. > > > > -gmann > > > > > > > > > > > > > Till we fix it and to avoid gate break, can we cap stestr in g-r - stestr<2.5.0 ? I know that is > > > > not the options you like. > > > > > > > > [1] https://github.com/mtreinish/testtools/commit/38fc9a9e302f68d471d7b097c7327b4ff7348790 > > > > > > > > -gmann > > > > > > > > > > > > > > -Matt Treinish > > > > > > > > > > > jsonschema===3.0.2 see https://review.opendev.org/649789 > > > > > > > > > > > > I'm trying to get this in place as we are getting closer to the > > > > > > requirements freeze (sept 9th-13th). Any help clearing up these bugs > > > > > > would be appreciated. > > > > > > > > > > > > -- > > > > > > Matthew Thode > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Any progress on this, at the moment only stestr-2.5.1 is being held > back. > > https://review.opendev.org/680914 There is no progress on this yet. As unittest2 cannot be dropped from testtools, we need to get some workaround in Tempest. I need more time to try the failure and fix. 
-gmann > > -- > Matthew Thode > From gmann at ghanshyammann.com Wed Sep 11 03:25:04 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 11 Sep 2019 12:25:04 +0900 Subject: [nova][keystone][neutron][kuryr][requirements] breaking tests with new library versions In-Reply-To: <16d1bd6c83c.b5b4707242927.3734104778674628098@ghanshyammann.com> References: <20190818161611.6ira6oezdat4alke@mthode.org> <20190819145437.GA29162@zeong> <16cb3b24926.f85727e0206810.691322353108028475@ghanshyammann.com> <20190821194144.GA1844@zeong> <16cbe2acb1d.ce031552275757.8109746026654681476@ghanshyammann.com> <20190910152308.oe75ltvwtdlnsynm@mthode.org> <16d1bd6c83c.b5b4707242927.3734104778674628098@ghanshyammann.com> Message-ID: <16d1e5a672e.eff625a852394.455530881521903034@ghanshyammann.com> ---- On Wed, 11 Sep 2019 00:42:03 +0900 Ghanshyam Mann wrote ---- > ---- On Wed, 11 Sep 2019 00:23:08 +0900 Matthew Thode wrote ---- > > On 19-08-23 20:09:31, Ghanshyam Mann wrote: > > > ---- On Thu, 22 Aug 2019 04:41:44 +0900 Matthew Treinish wrote ---- > > > > On Wed, Aug 21, 2019 at 07:21:41PM +0900, Ghanshyam Mann wrote: > > > > > ---- On Mon, 19 Aug 2019 23:54:37 +0900 Matthew Treinish wrote ---- > > > > > > On Sun, Aug 18, 2019 at 11:16:11AM -0500, Matthew Thode wrote: > > > > > > > NOVA: > > > > > > > lxml===4.4.1 nova tests fail https://bugs.launchpad.net/nova/+bug/1838666 > > > > > > > websockify===0.9.0 tempest test failing > > > > > > > > > > > > > > KEYSTONE: > > > > > > > oauthlib===3.1.0 keystone https://bugs.launchpad.net/keystone/+bug/1839393 > > > > > > > > > > > > > > NEUTRON: > > > > > > > tenacity===5.1.1 https://2c976b5e9e9a7bed9985-82d79a041e998664bd1d0bc4b6e78332.ssl.cf2.rackcdn.com/677052/5/check/cross-neutron-py27/a0a3c75/testr_results.html.gz > > > > > > > this could be caused by pytest===5.1.0 as well > > > > > > > > > > > > > > KURYR: > > > > > > > kubernetes===10.0.1 openshift PINS this, only kuryr-tempest-plugin deps on it > > > > > > > https://review.opendev.org/665352 > > > > > > > > > > > > > > MISC: > > > > > > > tornado===5.1.1 salt is cauing this, no eta on fix (same as the last year) > > > > > > > stestr===2.5.0 needs merged https://github.com/mtreinish/stestr/pull/265 > > > > > > > > > > > > This actually doesn't fix the underlying issue blocking it here. PR 265 is for > > > > > > fixing a compatibility issue with python 3.4, which we don't officially support > > > > > > in stestr but was a simple fix. The blocker is actually not an stestr issue, > > > > > > it's a testtools bug: > > > > > > > > > > > > https://github.com/testing-cabal/testtools/issues/272 > > > > > > > > > > > > Where this is coming into play here is that stestr 2.5.0 switched to using an > > > > > > internal test runner built off of stdlib unittest instead of testtools/subunit > > > > > > for python 3. This was done to fix a huge number of compatibility issues people > > > > > > had reported when trying to run stdlib unittest suites using stestr on > > > > > > python >= 3.5 (which were caused by unittest2 and testools). The complication > > > > > > for openstack (more specificially tempest) is that it's built off of testtools > > > > > > not stdlib unittest. So when tempest raises 'self.skipException' as part of > > > > > > it's class level skip checks testtools raises 'unittest2.case.SkipTest' instead > > > > > > of 'unittest.case.SkipTest'. 
stdlib unittest does not understand what that is > > > > > > and treats it as an unhandled exception which is a test failure, instead of the > > > > > > intended skip result. [1] This is actually a general bug and will come up whenever > > > > > > anyone tries to use stdlib unittest to run tempest. We need to come up with a > > > > > > fix for this problem in testtools [2] or just workaround it in tempest. > > > > > > > > > > > > [1] skip decorators typically aren't effected by this because they set an > > > > > > attribute that gets checked before the test method is executed instead of > > > > > > relying on an exception, which is why this is mostly only an issue for tempest > > > > > > because it does a lot of run time skips via exceptions. > > > > > > > > > > > > [2] testtools is mostly unmaintained at this point, I was recently granted > > > > > > merge access but haven't had much free time to actively maintain it > > > > > > > > > > Thanks matt for details. As you know, for Tempest where we need to support py2.7 > > > > > (including unitest2 use) for stable branches, we are going to use the specific stetsr > > > > > version/branch( > > > > is good option to me. I think your PR to remove the unittest2 use form testtools > > > > > make sense to me [1]. A workaround in Tempest can be last option for us. > > > > > > > > https://github.com/testing-cabal/testtools/pull/277 isn't a short term > > > > solution, unittest2 is still needed for python < 3.5 in testtools and > > > > testtools has not deprecated support for python 2.7 or 3.4 yet. I probably > > > > can rework that PR so that it's conditional and always uses stdlib unittest > > > > for python >= 3.5 but then testtools ends up maintaining two separate paths > > > > depending on python version. I'd like to continue thinking about that is as a > > > > long term solution because I don't know when I'll have the time to keep pushing > > > > that PR forward. > > > > > > Thanks for more details. I understand that might take time. I am in OpenInfra event and after that on vacation till > > > 29th Aug. I will be able to check the workaround on testtools or tempest side after that > > > only. I will check with Matthew about when is the plan to move the stestr to 2.5.0. > > > > > > -gmann > > > > > > > > > > > > > > > > > Till we fix it and to avoid gate break, can we cap stestr in g-r - stestr<2.5.0 ? I know that is > > > > > not the options you like. > > > > > > > > > > [1] https://github.com/mtreinish/testtools/commit/38fc9a9e302f68d471d7b097c7327b4ff7348790 > > > > > > > > > > -gmann > > > > > > > > > > > > > > > > > -Matt Treinish > > > > > > > > > > > > > jsonschema===3.0.2 see https://review.opendev.org/649789 > > > > > > > > > > > > > > I'm trying to get this in place as we are getting closer to the > > > > > > > requirements freeze (sept 9th-13th). Any help clearing up these bugs > > > > > > > would be appreciated. > > > > > > > > > > > > > > -- > > > > > > > Matthew Thode > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Any progress on this, at the moment only stestr-2.5.1 is being held > > back. > > > > https://review.opendev.org/680914 > > There is no progress on this yet. As unittest2 cannot be dropped from testtools, > we need to get some workaround in Tempest. I need more time to try the failure and fix. 
Trying to workaround it in Tempest - https://review.opendev.org/#/c/681340/ but it seems it needs to handle too many cases in Tempest (with py and stestr versions ) Let's see if we can properly do it in Tempest. -gmann > > > -gmann > > > > > -- > > Matthew Thode > > > From li.canwei2 at zte.com.cn Wed Sep 11 06:01:07 2019 From: li.canwei2 at zte.com.cn (li.canwei2 at zte.com.cn) Date: Wed, 11 Sep 2019 14:01:07 +0800 (CST) Subject: =?UTF-8?B?W1dhdGNoZXJdIHRlYW0gbWVldGluZyBhdCAwODowMCBVVEMgdG9kYXk=?= Message-ID: <201909111401077845519@zte.com.cn> Hi team, Watcher team will have a meeting at 08:00 UTC today in the #openstack-meeting-alt channel. The agenda is available on https://wiki.openstack.org/wiki/Watcher_Meeting_Agenda feel free to add any additional items. Thanks! Canwei Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From jordan.ansell at catalyst.net.nz Wed Sep 11 07:14:28 2019 From: jordan.ansell at catalyst.net.nz (Jordan Ansell) Date: Wed, 11 Sep 2019 19:14:28 +1200 Subject: [nova][glance][entropy][database] update glance metadata for nova instance In-Reply-To: References: <668e2201-3dd3-17e0-ae6a-736f0b314996@catalyst.net.nz> <00606cba-2f08-df2c-4342-fc997ec87342@gmail.com> Message-ID: On 27/08/19 11:00 AM, Jordan Ansell wrote: > On 27/08/19 1:47 AM, Brian Rosmaita wrote: >> On 8/26/19 4:24 AM, Sean Mooney wrote: >>> On Mon, 2019-08-26 at 18:18 +1200, Jordan Ansell wrote: >>>> Hi Openstack Discuss, >>>> >>>> I have an issue with nova not synchronizing changes between a glance >>>> image and it's local image meta information in nova. >>>> >>>> I have updated a glance image with the property "hw_rng_model=virtio", >>>> and that successfully passes that to new instances created using the >>>> updated image. However existing instances do not receive this new property. >>>> >>>> I have located the image metadata within the nova database, in the >>>> **instance_system_metadata** table, and can see it's not updated for the >>>> existing instances, and only adding the relevant rows for instances that >>>> are created when that property is present. The key being >>>> "image_hw_rng_model" and "virtio" being the value. >>>> >>>> Is there a way to tell nova to update the table for existing instances, >>>> and synchronizing the two databases? Or is this the kind of thing that >>>> would need to be done *shudder* manually...? >>> this is idealy not something you would do at all. >>> nova create a local copy of the image metadata the instace was booted with >>> intionally to not pick up chagne you make to the image metadata after you boot >>> the instance. in some case those change could invalidate the host the image is on so >>> it in general in not considerd safe to just sync them >>> >>> for the random number generator it should be ok but if you were to add a trait requirement >>> of alter the numa topology then it could invalidate the host as a candiate for that instance. >>> so if you want to do this then you need to update it manually as nova is working as >>> intended by not syncing the data. >>>> If so, are there any >>>> experts out there who can point me to some documentation on doing this >>>> correctly before I go butcher a couple of dummy nova database? >>> there is no docs for doing this as it is not a supported feature. >>> you are circumventing a safty feature we have in nova to prevent change to running instances >>> after they are first booted by change to the flavor extra spec or image metadata. 
>>>> Regards, >>>> Jordan >>>> >>>> >> I agree with everything Sean says here. I just want to remind you that >> if you use the nova image-create action on an instance, the image >> properties put on the new image are pulled from the nova database. So >> if you do decide to update the DB manually (not that I am recommending >> that!), don't forget that any already existing snapshot images will have >> the "wrong" value for the property. (You can update them via the Images >> API.) >> > Thanks Sean and Brian..! > > I hadn't considered the snapshots.. that's a really good point! And > thank you for the warnings, I can see why this isn't something that's > synchronized automatically :S > > Regards, > Jordan > Hi all, I wanted to share a follow-up to this with two points: * We've found another way to "give" and existing instance entropy using the API following an update to flavor and image metadata. * The documentation on entropy rates **everywhere** seems to be incorrect and could do with some updating.. Instead of updating the nova database and re-scheduling an instance, one can create a snapshot, add the "hw_rng_model=virtio" property to the snapshot, then launch the instance from that image using a flavor with the entropy properties. And boom! We have a copy of an existing instance with the addition of entropy :). Not perfect, but potentially better than an unsupported and risky operation. With regard to the flavor documentation, it's written in the libvirt documentation [1] that the unit of the period attribute is *milliseconds* not seconds. However all documentation I came across for the "hw_rng:rate_period" of a flavor says this is in *seconds*. I've submitted bugs on the docs.openstack.org site, however if you are in charge of some other documentation please update your info :) There's a big difference between 100 bytes every millisecond and 100 bytes every 1000 milliseconds..! Regards, Jordan [1] https://libvirt.org/formatdomain.html#elementsRng [2] https://bugs.launchpad.net/nova/+bug/1843541 [3] https://bugs.launchpad.net/nova/+bug/1843542 From a.settle at outlook.com Wed Sep 11 08:39:15 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Wed, 11 Sep 2019 08:39:15 +0000 Subject: [all] [tc] [ptls] PDF Goal Update Message-ID: Hi all, According to the Train schedule, this week is the week of "Train Community Goals completed" [1]. In the last few weeks, we've been working hard on the goal to enable PDF support in the project docs. We have successfully completed... 1. Creating a workable solution [2] 2. Communicating this solution via ML [3] As far as the success of the goal is, that has to be measured by the individual teams. But it looks like we're all going really well at implementing the new changes [4]. Thanks to everyone who has jumped in from across the board to make this a success! I wanted to touch base with the teams and gather a status update from the PTLs or project liaisons on where they are at, what questions they may have, and how we (the docs team and TC) can help. Over the next week I will reach out to each team and gather a status update of sorts. 
Thanks, Alex -- Alexandra Settle IRC: asettle [1] https://releases.openstack.org/train/schedule.html#t-goals-complete [2] https://etherpad.openstack.org/p/train-pdf-support-goal [3] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/ 008503.html (amongst others) [4] https://review.opendev.org/#/q/topic:build-pdf-docs From n.sameshima at w.ntt.com Wed Sep 11 10:25:53 2019 From: n.sameshima at w.ntt.com (Naohiro Sameshima) Date: Wed, 11 Sep 2019 19:25:53 +0900 Subject: [dev] [glance] proposal for S3 store driver re-support as galnce_store backend Message-ID: Hi all, I know that glance_store had supported S3 backend until version OpenStack Mitaka, and it has already been removed due to lack of maintainers [1][2]. I started refactoring the S3 driver to work with version OpenStack Stein and recently completed it. (e.g. Add Multi Store Support, Using the latest AWS SDK) So, it would be great if glance_store could support the S3 driver again. However, I'm not familiar with the procedure for that. Would it be possible to discuss this? Thanks, Naohiro [1] https://docs.openstack.org/releasenotes/glance/newton.html [2] https://opendev.org/openstack/glance_store/src/branch/master/releasenotes/notes/remove-s3-driver-f432afa1f53ecdf8.yaml From stig.openstack at telfer.org Wed Sep 11 10:52:33 2019 From: stig.openstack at telfer.org (Stig Telfer) Date: Wed, 11 Sep 2019 11:52:33 +0100 Subject: [scientific-sig] No IRC meeting today Message-ID: <5F26878F-ED60-4F81-AA41-FEA3262A5F01@telfer.org> Hi all - Apologies, there will not be a Scientific SIG IRC meeting today, due to chair availability. Cheers, Stig From liam.young at canonical.com Wed Sep 11 10:56:45 2019 From: liam.young at canonical.com (Liam Young) Date: Wed, 11 Sep 2019 11:56:45 +0100 Subject: [masakari] Message-ID: Hi, I have a patch up for masakari and another for masakari-monitors: https://review.opendev.org/#/c/647756/ https://review.opendev.org/#/c/675734/ If any of the masakari devs have cycles I'd really love to get them landed. Thanks Liam -------------- next part -------------- An HTML attachment was scrubbed... URL: From tbechtold at suse.com Wed Sep 11 12:48:42 2019 From: tbechtold at suse.com (Thomas Bechtold) Date: Wed, 11 Sep 2019 14:48:42 +0200 Subject: [rpm-packaging] Proposing new core member Message-ID: <6b176899-15c3-b0c6-2c0b-8cbab05e844c@suse.com> Hi, I would like to nominate Ralf Haferkamp for rpm-packaging core. Ralf has be active in doing very valuable reviews since some time so I feel he would be a great addition to the team. Please give your +1/-1 in the next days. Cheers, Tom From tobias.rydberg at citynetwork.eu Wed Sep 11 13:59:50 2019 From: tobias.rydberg at citynetwork.eu (Tobias Rydberg) Date: Wed, 11 Sep 2019 15:59:50 +0200 Subject: [sigs][publiccloud][publiccloud-wg][publiccloud-sig] Bi-weekly meeting for the Public Cloud SIG tomorrow Message-ID: <30460365-552a-26ff-8d81-149243267a99@citynetwork.eu> Hi all, It is time for a new meeting for the Public Cloud SIG! Would love to see as many of you there as possible! Topics for the meeting includes Shanghai Forum topics and moving forward on the billing initiative. Time and place: Tomorrow, 12th September at 1400 UTC in #openstack-publiccloud! Agenda can be found at https://etherpad.openstack.org/p/publiccloud-sig Feel free to add topics to the agenda! 
Cheers, Tobias -- Tobias Rydberg Senior Developer Twitter & IRC: tobberydberg www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED From lpetrut at cloudbasesolutions.com Wed Sep 11 14:08:37 2019 From: lpetrut at cloudbasesolutions.com (Lucian Petrut) Date: Wed, 11 Sep 2019 14:08:37 +0000 Subject: [winstackers][powervmstackers][tc] removing winstackers and PowerVMStackers from TC governance Message-ID: <64050966FCE0B948BCE2B28DB6E0B7D557AA4A56@CBSEX1.cloudbase.local> Hi, I had a chat with my team and we think it would be best if we could keep Winstackers as a separate team. This is mostly because of the associated projects, which are essential for the Windows – Openstack integration effort. Other teams may not be interested in adopting those projects, which would be required if we chose the SIG route. Despite missing this election, I can assure you that we’re quite active in this endeavor. I’m willing to take the PTL role, offloading this task from Claudiu, whose time was quite limited recently. Regards, Lucian Petrut Cloudbase Solutions ________________________________________ From: Mohammed Naser [mnaser at vexxhost.com] Sent: Monday, September 09, 2019 3:05 PM To: Thierry Carrez Cc: OpenStack Discuss Subject: Re: [winstackers][powervmstackers][tc] removing winstackers and PowerVMStackers from TC governance On Fri, Sep 6, 2019 at 5:10 AM Thierry Carrez wrote: > > Divya K Konoor wrote: > > Missing the deadline for a PTL nomination cannot be the reason for > > removing governance. > > I agree with that, but missing the deadline twice in a row is certainly > a sign of some disconnect with the rest of the OpenStack community. > Project teams require a minimal amount of reactivity and presence, so it > is fair to question whether PowerVMStackers should continue as a project > team in the future. > > > PowerVMStackers continue to be an active project > > and would want to be continued to be governed under OpenStack. For PTL, > > an eligible candidate can still be appointed . > > There is another option, to stay under OpenStack governance but without > the constraints of a full project team: PowerVMStackers could be made an > OpenStack SIG. > > I already proposed that 6 months ago (last time there was no PTL nominee > for the team), on the grounds that interest in PowerVM was clearly a > special interest, and a SIG might be a better way to regroup people > interested in supporting PowerVM in OpenStack. > > The objection back then was that PowerVMStackers maintained a number of > PowerVM-related code, plugins and drivers that should ideally be adopted > by their consuming project teams (nova, neutron, ceilometer), and that > making it a SIG would endanger that adoption process. > > I still think it makes sense to consider PowerVMStackers as a Special > Interest Group. As long as the PowerVM-related code is not adopted by > the consuming projects, it is arguably a special interest, and not a > completely-integrated part of OpenStack components. > > The only difference in being a SIG (compared to being a project team) > would be to reduce the amount of mandatory tasks (like designating a PTL > every 6 months). You would still be able to own repositories, get room > at OpenStack events, vote on TC election... > > It would seem to be the best solution in your case. I echo all of this and I think at this point, it's better for the deliverables to be within a SIG. 
> -- > Thierry Carrez (ttx) > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From pabelanger at redhat.com Wed Sep 11 14:50:03 2019 From: pabelanger at redhat.com (Paul Belanger) Date: Wed, 11 Sep 2019 10:50:03 -0400 Subject: How to boot 2 VMs in openstack on same subnet? Message-ID: <20190911145003.GB31877@localhost.localdomain> Greetings, We use a few openstack public clouds for testing in the ansible project, specifically using nodepool. We have a use case, where we need to boot 2 VMs on the same public subnet for testing reasons. However, the majority of the clouds we are using, do not have a single subnet for their entire public IP range. Up until now, we boot the 2 VMs, then hope they land on the same subnet, but this isn't really efficient. Basically looking to see if there is a better way to handle this either via openstacksdk or some other configuration we need cloud side. Also note, we'd like to do this with public provider network (which we don't have control over) and avoid using private network for now. Paul From marek.lycka at ultimum.io Wed Sep 11 15:21:27 2019 From: marek.lycka at ultimum.io (=?UTF-8?B?TWFyZWsgTHnEjWth?=) Date: Wed, 11 Sep 2019 17:21:27 +0200 Subject: [Horizon] Paging and Angular... In-Reply-To: References: Message-ID: Hi all, > We can't review your patches, because we don't understand them. For the patches to be merged, we > need more than one person, so that they can review each other's patches. Well, yes. That's what I'm trying to address. Even if another person appeared to review javascript code, it wouldn't change anything unless he had +2 and +W rights though. And even then, it wouldn't be enough, because two +2 are currently expected for the CR process to go ahead. > JavaScript is fine. We all know how to write and how to review JavaScript code, and there doesn't > have to be much of it — Horizon is not the kind of tool that has to bee all shiny and animated. It's a tool > for getting work done. This isn't about being shiny and animated though. This is about basic functionality, usability and performance. I did some stress testing with large datasets [1], and the non-angularized versions of basic functionality like sorting, paging and filtering in table panels are either non-existent, not working at all or basically unusable (for a multitude of reasons). Removing them would force reimplementations in pure JQuery and I strongly suspect that those implementations would be much messier and cost a considerable amount of time and effort. >AngularJS is a problem, because you can't tell what the code does just by looking >at the code, and so you can neither review nor fix it. This is clearly a matter of opinion. I find Angular code easier to deal with than JQuery spaghetti. > There has been a lot of work put into mixing Horizon with Angular, but I disagree that it has solved problems, > and in fact it has introduced a lot of regressions. I'm not saying the NG implementations are perfect, but they mostly work where it counts and can be improved where they do not. > Just to take a simple example, the translations are currently broken for en.AU and en.GB languages, > and date display is not localized. And nobody cares. 
It's difficult for me to judge which features are broken in NG and how much interest there is in having them fixed, but they can be fixed once reported. What I can say for sure is that I keep hitting this issue because of actual feature requests from actual users. See [2] for an example. I'm not sure implementing that in pure JQuery would be nearly as simple as it was in Angular. > We had automated tests before Angular. There weren't many of them, because we also didn't have much > JavaScript code. If I remember correctly, those tests were ripped out during the Angularization. Fair enough. > Arguably, improvements are, on average, impossible to add to Angular I disagree. Yes, pure JQuery is probably easier when dealing with very simple things, but once feature complexity increases beyond the basics, you'll very quickly find the features offered by the framework relevant - things like MVC decoupling, browser-side templating, reusable components, functionality injection etc. Again, see [2] for an example. On a side note, some horizon plugins (such as octavia-dashboard) use Angular extensively. Removing it would at the very least break them. Whatever the community decision is though, I feel like it needs to be made so that related issues can be addressed with a reasonable expectation of being reviewed and merged. [1] Networks, Roles and Images in the low thousands [2] https://review.opendev.org/#/c/618173/ pá 6. 9. 2019 v 18:44 odesílatel Dale Bewley napsal: > As an uninformed user I would just like to say Horizon is seen _as_ > Openstack to new users and I appreciate ever effort to improve it. > > Without discounting past work, the Horizon experience leaves much to be > desired and it colors the perspective on the entire platform. > > On Fri, Sep 6, 2019 at 05:01 Radomir Dopieralski > wrote: > >> >> >> On Fri, Sep 6, 2019 at 11:33 AM Marek Lyčka >> wrote: >> >>> Hi, >>> >>> > we need people familiar with Angular and Horizon's ways of using >>> Angular (which seem to be very >>> > non-standard) that would be willing to write and review code. >>> Unfortunately the people who originally >>> > introduced Angular in Horizon and designed how it is used are no >>> longer interested in contributing, >>> > and there don't seem to be any new people able to handle this. >>> >>> I've been working with Horizon's Angular for quite some time and don't >>> mind keeping at it, but >>> it's useless unless I can get my code merged, hence my original message. >>> >>> As far as attracting new developers goes, I think that removing some >>> barriers to entry couldn't hurt - >>> seeing commits simply lost to time being one of them. I can see it as >>> being fairly demoralizing. >>> >> >> We can't review your patches, because we don't understand them. For the >> patches to be merged, we >> need more than one person, so that they can review each other's patches. >> >> >>> > Personally, I think that a better long-time strategy would be to >>> remove all >>> > Angular-based views from Horizon, and focus on maintaining one >>> language and one set of tools. >>> >>> Removing AngularJS wouldn't remove JavaScript from horizon. We'd still >>> be left with a home-brewish >>> framework (which is buggy as is). I don't think removing js completely >>> is realistic either: we'd lose >>> functionality and worsen user experience. 
I think that keeping Angular >>> is the better alternative: >>> >>> 1) A lot of work has already been put into Angularization, solving many >>> problems >>> 2) Unlike legacy js, Angular code is covered by automated tests >>> 3) Arguably, improvments are, on average, easier to add to Angular than >>> pure js implementations >>> >>> Whatever reservations there may be about the current implementation can >>> be identified and addressed, but >>> all in all, I think removing it at this point would be counterproductive. >>> >> >> JavaScript is fine. We all know how to write and how to review JavaScript >> code, and there doesn't >> have to be much of it — Horizon is not the kind of tool that has to bee >> all shiny and animated. It's a tool >> for getting work done. AngularJS is a problem, because you can't tell >> what the code does just by looking >> at the code, and so you can neither review nor fix it. >> >> There has been a lot of work put into mixing Horizon with Angular, but I >> disagree that it has solved problems, >> and in fact it has introduced a lot of regressions. Just to take a simple >> example, the translations are currently >> broken for en.AU and en.GB languages, and date display is not localized. >> And nobody cares. >> >> We had automated tests before Angular. There weren't many of them, >> because we also didn't have much JavaScript code. >> If I remember correctly, those tests were ripped out during the >> Angularization. >> >> Arguably, improvements are, on average, impossible to add to Angular, >> because the code makes no sense on its own. >> >> >> -- Marek Lyčka Linux Developer Ultimum Technologies s.r.o. Na Poříčí 1047/26, 11000 Praha 1 Czech Republic marek.lycka at ultimum.io *https://ultimum.io * -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeremyfreudberg at gmail.com Wed Sep 11 15:28:07 2019 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Wed, 11 Sep 2019 11:28:07 -0400 Subject: [sahara] Cancelling Sahara meeting September 12 Message-ID: Hi all, There will be no Sahara meeting 2019-09-12, the reason being that Luigi is not around and there is not much to discuss anyway. Holler if you need anything. Thanks, Jeremy From colleen at gazlene.net Wed Sep 11 16:06:13 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Wed, 11 Sep 2019 09:06:13 -0700 Subject: [rpm-packaging] Proposing new core member In-Reply-To: <6b176899-15c3-b0c6-2c0b-8cbab05e844c@suse.com> References: <6b176899-15c3-b0c6-2c0b-8cbab05e844c@suse.com> Message-ID: On Wed, Sep 11, 2019, at 05:48, Thomas Bechtold wrote: > Hi, > > I would like to nominate Ralf Haferkamp for rpm-packaging core. > Ralf has be active in doing very valuable reviews since some time so I > feel he would be a great addition to the team. > > Please give your +1/-1 in the next days. > > Cheers, > > Tom > > +1 will be great to have Ralf on board. Colleen From nicolas.bock at suse.com Wed Sep 11 16:18:04 2019 From: nicolas.bock at suse.com (Nicolas Bock) Date: Wed, 11 Sep 2019 10:18:04 -0600 Subject: [rpm-packaging] Proposing new core member In-Reply-To: <62caa0b06e184db0a92abf094aa43220@DM5PR1801MB2012.namprd18.prod.outlook.com> References: <62caa0b06e184db0a92abf094aa43220@DM5PR1801MB2012.namprd18.prod.outlook.com> Message-ID: <99f3435e-c2a5-dd3c-1d52-cda44ed178c6@suse.com> On 9/11/19 6:48 AM, Thomas Bechtold wrote: > Hi, > > I would like to nominate Ralf Haferkamp for rpm-packaging core. 
> Ralf has be active in doing very valuable reviews since some time so I > feel he would be a great addition to the team. > > Please give your +1/-1 in the next days. +1 > Cheers, > > Tom > > From gr at ham.ie Wed Sep 11 16:40:34 2019 From: gr at ham.ie (Graham Hayes) Date: Wed, 11 Sep 2019 17:40:34 +0100 Subject: [tc] TC Chair Nominations - closing soon Message-ID: Hello all new and returning TC members! Welcome (back) to the TC. We now have to do some of the standard post election paperwork / processes. One of the first things we need to do is elect a chair for this cycle! We currently have 2 nominations, and nominations will remain open until 23:59 UTC tomorrow 12-Sept-2019. At that point, we will start a CIVS election for the chair, and select them. To nominate yourself, just add a review to the governance repo like so : [1][2]. If you are interested in the chair, please do consider running - It is open to everyone, new and less new on the TC, and the job has been documented by previous chairs and TC members [3] If you have any questions - reply to this mail, ask in the #openstack-tc IRC room, reply to me and I will see who I can put you in contact with, who may know, or ping mnaser, who is the current chair. I propose the following timeline: Nominations Close: 2019-09-12 @ 23:59 UTC. Election created: Morning (EU timezone) of 13 Sept Election finish: Evening (EU timezone) of 18 Sept or when all TC members have voted. Thanks all, and please reach out with any questions! - Graham 1 - https://review.opendev.org/#/c/681285/2/reference/members.yaml 2 - https://review.opendev.org/#/c/680414/2/reference/members.yaml 3 - https://opendev.org/openstack/governance/src/branch/master/CHAIR.rst -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From smooney at redhat.com Wed Sep 11 16:52:31 2019 From: smooney at redhat.com (Sean Mooney) Date: Wed, 11 Sep 2019 17:52:31 +0100 Subject: How to boot 2 VMs in openstack on same subnet? In-Reply-To: <20190911145003.GB31877@localhost.localdomain> References: <20190911145003.GB31877@localhost.localdomain> Message-ID: <7b6f522e14f9aaa914d31f9ac7d2d3f2de555500.camel@redhat.com> On Wed, 2019-09-11 at 10:50 -0400, Paul Belanger wrote: > Greetings, > > We use a few openstack public clouds for testing in the ansible project, > specifically using nodepool. We have a use case, where we need to boot 2 > VMs on the same public subnet for testing reasons. However, the majority > of the clouds we are using, do not have a single subnet for their entire > public IP range. Up until now, we boot the 2 VMs, then hope they land > on the same subnet, but this isn't really efficient. You can just specify the subnet as part of the boot request, so if you know the subnet ahead of time it's pretty trivial to do this. I'm not sure if nodepool can do that, but it should not be hard to add since nova supports it. At the nodepool level you can specify the network at the pool or label level: https://zuul-ci.org/docs/nodepool/configuration.html#attr-providers.[openstack].pools.networks https://zuul-ci.org/docs/nodepool/configuration.html#attr-providers.[openstack].pools.labels.networks That could be extended to the subnet in theory. > > Basically looking to see if there is a better way to handle this either > via openstacksdk or some other configuration we need cloud side.
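[Editor's note: for readers who want the concrete commands, here is a minimal sketch of the port-based approach suggested later in this thread: pre-create ports on a known subnet, then hand those ports to nova so both servers are guaranteed to land on that subnet. The network, subnet, image and flavor names are assumptions for illustration, and whether a tenant may create ports on a given public provider network depends on that cloud's policy.

# create two ports that will get fixed IPs from the same subnet
$ openstack port create --network public --fixed-ip subnet=public-subnet-1 port-a
$ openstack port create --network public --fixed-ip subnet=public-subnet-1 port-b

# boot each VM against its pre-created port instead of letting nova pick a subnet
$ openstack server create --image bionic --flavor m1.small --port port-a vm-a
$ openstack server create --image bionic --flavor m1.small --port port-b vm-b

The same can be done with openstacksdk by creating the ports with a fixed_ips entry naming the subnet and passing the resulting port IDs in the server create request.]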
> > Also note, we'd like to do this with public provider network (which we > don't have control over) and avoid using private network for now. > > Paul > From smooney at redhat.com Wed Sep 11 16:56:15 2019 From: smooney at redhat.com (Sean Mooney) Date: Wed, 11 Sep 2019 17:56:15 +0100 Subject: How to boot 2 VMs in openstack on same subnet? In-Reply-To: <7b6f522e14f9aaa914d31f9ac7d2d3f2de555500.camel@redhat.com> References: <20190911145003.GB31877@localhost.localdomain> <7b6f522e14f9aaa914d31f9ac7d2d3f2de555500.camel@redhat.com> Message-ID: On Wed, 2019-09-11 at 17:52 +0100, Sean Mooney wrote: > On Wed, 2019-09-11 at 10:50 -0400, Paul Belanger wrote: > > Greetings, > > > > We use a few openstack public clouds for testing in the ansible project, > > specifically using nodepool. We have a use case, where we need to boot 2 > > VMs on the same public subnet for testing reasons. However, the majority > > of the clouds we are using, do not have a single subnet for their entire > > public IP range. Up until now, we boot the 2 VMs, then hope they land > > on the same subnet, but this isn't really efficient. > > you can just specify the subnet as part of the boot request. > so if you know the subnet ahead of time its pretty trivial to do this > im not sure if nodepool can do that but it should not be hard to > since nova supports it. > > at the nodepool leve you can specify the netwrok at teh pool or lable level > https://zuul-ci.org/docs/nodepool/configuration.html#attr-providers.[openstack].pools.networks > https://zuul-ci.org/docs/nodepool/configuration.html#attr-providers.[openstack].pools.labels.networks > that coudl be extended to the subnet in theory. Actually I am wrong: we can only specify the network. We can select a subnet if we pass fixed IPs on that network, but we can't pass the subnet UUID. > > > > > Basically looking to see if there is a better way to handle this either > > via openstacksdk or some other configuration we need cloud side. > > > > Also note, we'd like to do this with public provider network (which we > don't have control over) and avoid using private network for now. > > Paul > > > > From mriedemos at gmail.com Wed Sep 11 17:26:56 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 11 Sep 2019 12:26:56 -0500 Subject: How to boot 2 VMs in openstack on same subnet? In-Reply-To: References: <20190911145003.GB31877@localhost.localdomain> <7b6f522e14f9aaa914d31f9ac7d2d3f2de555500.camel@redhat.com> Message-ID: On 9/11/2019 11:56 AM, Sean Mooney wrote: > we can olny specify the network Or ports, so pre-create two ports on the same subnet and provide them to nova when creating the server. -- Thanks, Matt From ekcs.openstack at gmail.com Wed Sep 11 17:52:26 2019 From: ekcs.openstack at gmail.com (Eric K) Date: Wed, 11 Sep 2019 10:52:26 -0700 Subject: [self-healing][autohealing][PTG][Forum] brainstorming etherpads for Shanghai Message-ID: Hello healers, The brainstorming etherpads for Self-healing Forum and PTG sessions are up: https://etherpad.openstack.org/p/SHA-self-healing-SIG Please add your topics there. Looking forward to productive discussions in Shanghai! From fungi at yuggoth.org Wed Sep 11 18:57:16 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 11 Sep 2019 18:57:16 +0000 Subject: [infra] Re: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode?
In-Reply-To: References: <20190814192440.GA3048@sm-workstation> <20190903190337.GA14785@sm-workstation> <20190903192248.b2mqozqobsxqgj7e@yuggoth.org> Message-ID: <20190911185716.xhw2j2yn2pcjltpy@yuggoth.org> On 2019-09-09 12:53:26 +0530 (+0530), Yatin Karel wrote: [...] > Can someone from Release or Infra Team can do the needful of > removing stable/ocata and stable/pike branch for TripleO projects > being EOLed for pike/ocata in > https://review.opendev.org/#/c/677478/ and > https://review.opendev.org/#/c/678154/. I've attempted to extract the lists of projects from the changes you linked. I believe you're asking to have the stable/ocata branch deleted from these projects: openstack/instack-undercloud openstack/instack openstack/os-apply-config openstack/os-cloud-config openstack/os-collect-config openstack/os-net-config openstack/os-refresh-config openstack/puppet-tripleo openstack/python-tripleoclient openstack/tripleo-common openstack/tripleo-heat-templates openstack/tripleo-image-elements openstack/tripleo-puppet-elements openstack/tripleo-ui openstack/tripleo-validations And the stable/pike branch deleted from these projects: openstack/instack-undercloud openstack/instack openstack/os-apply-config openstack/os-collect-config openstack/os-net-config openstack/os-refresh-config openstack/paunch openstack/puppet-tripleo openstack/python-tripleoclient openstack/tripleo-common openstack/tripleo-heat-templates openstack/tripleo-image-elements openstack/tripleo-puppet-elements openstack/tripleo-ui openstack/tripleo-validations Can you confirm? Also, have you checked for and abandoned all open changes on the affected branches? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From kennelson11 at gmail.com Thu Sep 12 00:52:35 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 11 Sep 2019 17:52:35 -0700 Subject: [all][PTL] Call for Cycle Highlights for Train In-Reply-To: References: Message-ID: Reminder that cycle highlights are due the end of this week! -Kendall (diablo_rojo) On Thu, 5 Sep 2019, 11:48 am Kendall Nelson, wrote: > Hello Everyone! > > As you may or may not have read last week in the release update from Sean, > its time to call out 'cycle-highlights' in your deliverables! > > As PTLs, you probably get many pings towards the end of every release > cycle by various parties (marketing, management, journalists, etc) asking > for highlights of what is new and what significant changes are coming in > the new release. By putting them all in the same place it makes them easy > to reference because they get compiled into a pretty website like this from > Rocky[1] or this one for Stein[2]. > > We don't need a fully fledged marketing message, just a few highlights > (3-4 ideally), from each project team. > > *The deadline for cycle highlights is the end of the R-5 week [3] on Sept > 13th.* > > How To Reminder: > ------------------------- > > Simply add them to the deliverables/train/$PROJECT.yaml in the > openstack/releases repo similar to this: > > cycle-highlights: > - Introduced new service to use unused host to mine bitcoin. > > The formatting options for this tag are the same as what you are probably > used to with Reno release notes. 
> > Also, you can check on the formatting of the output by either running > locally: > > tox -e docs > > And then checking the resulting doc/build/html/train/highlights.html file > or the output of the build-openstack-sphinx-docs job under html/train/ > highlights.html. > > Thanks :) > -Kendall Nelson (diablo_rojo) > > [1] https://releases.openstack.org/rocky/highlights.html > [2] https://releases.openstack.org/stein/highlights.html > [3] https://releases.openstack.org/train/schedule.html > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From masayuki.igawa at gmail.com Thu Sep 12 02:01:01 2019 From: masayuki.igawa at gmail.com (Masayuki Igawa) Date: Thu, 12 Sep 2019 11:01:01 +0900 Subject: [qa] forum sessions brainstorming Message-ID: <84f255e0-7dd7-48c4-811d-635b77e7d4f7@www.fastmail.com> Hi All, I have created the below etherpad[0] to collect the forum ideas related to QA for Shanghai Summit. Please write up your ideas with your IRC name on the etherpad. [0] https://etherpad.openstack.org/p/PVG-forum-qa-brainstorming -- Masayuki From premdeep.xion at gmail.com Thu Sep 12 07:57:00 2019 From: premdeep.xion at gmail.com (Premdeep S) Date: Thu, 12 Sep 2019 13:27:00 +0530 Subject: [ceph][nova][DR] Openstack DR Setup In-Reply-To: References: Message-ID: Hi Team, Can anyone help on this please? On Mon, Sep 9, 2019, 11:48 PM Premdeep S wrote: > Hi Team, > > We are looking to build a DR infrastructure. Our existing DC setup > consists of multiple node Controller, Compute and Ceph nodes as the storage > backend. We are using ubuntu 18.04 and Rocky version. > > Can someone please share any document or guide us on how we can build a DR > infra for the existing DC? > > 1. Do we need to have the storage shared across (Ceph)? > 2. What are the dependencies? > 3. Is there a guide for the same > > Thanks > Prem > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rony.khan at brilliant.com.bd Thu Sep 12 09:35:05 2019 From: rony.khan at brilliant.com.bd (Md. Farhad Hasan Khan) Date: Thu, 12 Sep 2019 15:35:05 +0600 Subject: Rabbitmq error report Message-ID: <1e4601d5694d$5c674e10$1535ea30$@brilliant.com.bd> Hi, I'm getting this error continuously in rabbitmq log. Though all operation going normal, but slow. Sometimes taking long time to perform operation. Please help me to solve this. rabbitmq version: rabbitmq_server-3.6.16 =ERROR REPORT==== 12-Sep-2019::13:04:55 === Channel error on connection <0.8105.3> (192.168.21.56:60116 -> 192.168.21.11:5672, vhost: '/', user: 'openstack'), channel 1: operation queue.declare caused a channel exception not_found: failed to perform operation on queue 'versioned_notifications.info' in vhost '/' due to timeout =WARNING REPORT==== 12-Sep-2019::13:04:55 === closing AMQP connection <0.8105.3> (192.168.21.56:60116 -> 192.168.21.11:5672 - nova-compute:3493037:e6757c9b-1cdc-43cd-bfd3-dcb58aa4974a, vhost: '/', user: 'openstack'): client unexpectedly closed TCP connection Thanks & B'Rgds, Rony -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.settle at outlook.com Thu Sep 12 10:03:39 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Thu, 12 Sep 2019 10:03:39 +0000 Subject: [tc] TC Chair Nominations - closing soon In-Reply-To: References: Message-ID: On Wed, 2019-09-11 at 17:40 +0100, Graham Hayes wrote: > Hello all new and returning TC members! Hooray! > > Welcome (back) to the TC. We now have to do some of the standard post > election paperwork / processes. 
> > One of the first things we need to do is elect a chair for this > cycle! > > We currently have 2 nominations, and nominations will remain open > until > 23:59 UTC tomorrow 12-Sept-2019. At that point, we will start a CIVS > election for the chair, and select them. > > To nominate yourself, just add a review to the governance repo like > so : [1][2]. > > If you are interested in the chair, please do consider running - > It is open to everyone, new and less new on the TC, and the > job has been documented by previous chairs and TC members [3] > > If you have any questions - reply to this mail, ask in the > #openstack-tc > IRC room, reply to me and I will see who I can put you in contact > with, who may know, or ping mnaser, who is the current chair. Thanks for setting this up. As current vice-chair, if anyone's interested in that role - let me know and we can chat about what this entails. > > I propose the following timeline: > > Nominations Close: 2019-09-12 @ 23:59 UTC. > Election created: Morning (EU timezone) of 13 Sept > Election finish: Evening (EU timezone) of 18 Sept > or when all TC members have voted. Thanks mugsie! > > Thanks all, and please reach out with any questions! > > - Graham > > 1 - https://review.opendev.org/#/c/681285/2/reference/members.yaml > 2 - https://review.opendev.org/#/c/680414/2/reference/members.yaml > 3 - https://opendev.org/openstack/governance/src/branch/master/CHAIR. > rst > -- Alexandra Settle IRC: asettle From thierry at openstack.org Thu Sep 12 10:13:55 2019 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 12 Sep 2019 12:13:55 +0200 Subject: [release][freezer][karbor][kuryr][magnum][manila][monasca][neutron][senlin][tacker][winstackers] Missing releases for some deliverables Message-ID: Hi everyone, Quick reminder that we'll need a release very soon for a number of deliverables following a cycle-with-intermediary release model but which have not done *any* release yet in the Train cycle: - freezer and freezer-web-ui - karbor and karbor-dashboard - kuryr-kubernetes - magnum-ui - manila-ui - monasca-agent, monasca-api, monasca-ceilometer, monasca-events-api, monasca-log-api, monasca-notification, monasca-persister and monasca-transform - networking-hyperv - neutron-fwaas-dashboard and neutron-vpnaas-dashboard - senlin-dashboard - tacker-horizon Those should be released ASAP, and in all cases before September 26th, so that we have a release to include in the final Train release. Thanks in advance, -- Thierry Carrez (ttx) From lpetrut at cloudbasesolutions.com Thu Sep 12 10:43:29 2019 From: lpetrut at cloudbasesolutions.com (Lucian Petrut) Date: Thu, 12 Sep 2019 10:43:29 +0000 Subject: [release][freezer][karbor][kuryr][magnum][manila][monasca][neutron][senlin][tacker][winstackers] Missing releases for some deliverables In-Reply-To: References: Message-ID: <64050966FCE0B948BCE2B28DB6E0B7D557AAFFD5@CBSEX1.cloudbase.local> Hi, Thanks for the heads up! 
I’ve just requested a networking-hyperv release: https://review.opendev.org/#/c/681707/ Lucian Petrut From: Thierry Carrez Sent: Thursday, September 12, 2019 1:15 PM To: openstack-discuss at lists.openstack.org Subject: [release][freezer][karbor][kuryr][magnum][manila][monasca][neutron][senlin][tacker][winstackers] Missing releases for some deliverables Hi everyone, Quick reminder that we'll need a release very soon for a number of deliverables following a cycle-with-intermediary release model but which have not done *any* release yet in the Train cycle: - freezer and freezer-web-ui - karbor and karbor-dashboard - kuryr-kubernetes - magnum-ui - manila-ui - monasca-agent, monasca-api, monasca-ceilometer, monasca-events-api, monasca-log-api, monasca-notification, monasca-persister and monasca-transform - networking-hyperv - neutron-fwaas-dashboard and neutron-vpnaas-dashboard - senlin-dashboard - tacker-horizon Those should be released ASAP, and in all cases before September 26th, so that we have a release to include in the final Train release. Thanks in advance, -- Thierry Carrez (ttx) -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Thu Sep 12 13:36:21 2019 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 12 Sep 2019 08:36:21 -0500 Subject: Rabbitmq error report In-Reply-To: <1e4601d5694d$5c674e10$1535ea30$@brilliant.com.bd> References: <1e4601d5694d$5c674e10$1535ea30$@brilliant.com.bd> Message-ID: <2d2076f9-0eb1-98e8-f9e0-1067b4472f23@nemebean.com> Have you checked that your notification queues aren't filling up? It can cause performance problems in Rabbit if nothing is clearing out those queues. On 9/12/19 4:35 AM, Md. Farhad Hasan Khan wrote: > Hi, > > I’m getting this error continuously in rabbitmq log. Though all > operation going normal, but slow. Sometimes taking long time to perform > operation. Please help me to solve this. > > rabbitmq version: rabbitmq_server-3.6.16 > > =ERROR REPORT==== 12-Sep-2019::13:04:55 === > > Channel error on connection <0.8105.3> (192.168.21.56:60116 -> > 192.168.21.11:5672, vhost: '/', user: 'openstack'), channel 1: > > operation queue.declare caused a channel exception not_found: failed to > perform operation on queue 'versioned_notifications.info' in vhost '/' > due to timeout > > =WARNING REPORT==== 12-Sep-2019::13:04:55 === > > closing AMQP connection <0.8105.3> (192.168.21.56:60116 -> > 192.168.21.11:5672 - > nova-compute:3493037:e6757c9b-1cdc-43cd-bfd3-dcb58aa4974a, vhost: '/', > user: 'openstack'): > > client unexpectedly closed TCP connection > > Thanks & B’Rgds, > > Rony > From francois.scheurer at everyware.ch Thu Sep 12 14:41:21 2019 From: francois.scheurer at everyware.ch (Francois Scheurer) Date: Thu, 12 Sep 2019 16:41:21 +0200 Subject: [mistral] cron triggers execution fails on identity:validate_token with non-admin users In-Reply-To: <241f5d5e-8b21-9081-c1d1-66e908047335@everyware.ch> References: <241f5d5e-8b21-9081-c1d1-66e908047335@everyware.ch> Message-ID: Hello Apparently other people have the same issue and cannot use cron triggers anymore: https://bugs.launchpad.net/mistral/+bug/1843175 We also tried with following patch installed but the same error persists: https://opendev.org/openstack/mistral/commit/6102c5251e29c1efe73c92935a051feff0f649c7?style=split Cheers Francois On 9/9/19 6:23 PM, Francois Scheurer wrote: > > Dear All > > > We are using Mistral 7.0.1.1 with  Openstack Rocky. 
(with federated users) > > We can create and execute a workflow via horizon, but cron triggers > always fail with this error: > >     { >         "result": >             "The action raised an exception [ > action_ex_id=ef878c48-d0ad-4564-9b7e-a06f07a70ded, >                     action_cls=' 'mistral.actions.action_factory.NovaAction'>', >                     attributes='{u'client_method_name': > u'servers.find'}', >                     params='{ >                         u'action_region': u'ch-zh1', >                         u'name': u'42724489-1912-44d1-9a59-6c7a4bebebfa' >                     }' >                 ] >                 \n NovaAction.servers.find failed: You are not > authorized to perform the requested action: identity:validate_token. > (HTTP 403) (Request-ID: req-ec1aea36-c198-4307-bf01-58aca74fad33) >             " >     } > > Adding the role *admin* or *service* to the user logged in horizon is > "fixing" the issue, I mean that the cron trigger then works as expected, > > but it would be obviously a bad idea to do this for all normal users ;-) > > So my question: is it a config problem on our side ? is it a known > bug? or is it a feature in the sense that cron triggers are for normal > users? > > > After digging in the keystone debug logs (see at the end below), I > found that RBAC check identity:validate_token an deny the authorization. > > But according to the policy.json (in keystone and in horizon), > rule:owner should be enough to grant it...: > >             "identity:validate_token": "rule:service_admin_or_owner", >                 "service_admin_or_owner": "rule:service_or_admin or > rule:owner", >                     "service_or_admin": "rule:admin_required or > rule:service_role", >                         "service_role": "role:service", >                     "owner": "user_id:%(user_id)s or > user_id:%(target.token.user_id)s", > > Thank you in advance for your help. 
> > > Best Regards > > Francois Scheurer > > > > > Keystone logs: > >         2019-09-05 09:38:00.902 29 DEBUG > keystone.policy.backends.rules > [req-1a276b9d-8276-4ec3-b516-f51f86cd1df6 fsc fscproject - testdom > testdom] >             enforce identity:validate_token: >             { >                'service_project_id':None, >                'service_user_id':None, >                'service_user_domain_id':None, >                'service_project_domain_id':None, >                'trustor_id':None, >                'user_domain_id':u'testdom', >                'domain_id':None, >                'trust_id':u'mytrustid', >                'project_domain_id':u'testdom', >                'service_roles':[], >                'group_ids':[], >                'user_id':u'fsc', >                'roles':[ >                   u'_member_', >                   u'creator', >                   u'reader', >                   u'heat_stack_owner', >                   u'member', >                   u'load-balancer_member'], >                'system_scope':None, >                'trustee_id':None, >                'domain_name':None, >                'is_admin_project':True, >                'token': audit_chain_id=[u'0LAsW_0dQMWXh2cTZTLcWA']) at 0x7f208f4a3bd0>, >                'project_id':u'fscproject' >             } enforce > /var/lib/kolla/venv/local/lib/python2.7/site-packages/keystone/policy/backends/rules.py:33 >         2019-09-05 09:38:00.920 29 WARNING keystone.common.wsgi > [req-1a276b9d-8276-4ec3-b516-f51f86cd1df6 fsc fscproject - testdom > testdom] >             You are not authorized to perform the requested action: > identity:validate_token.: *ForbiddenAction: You are not authorized to > perform the requested action: identity:validate_token.* > > > -- > > > EveryWare AG > François Scheurer > Senior Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail:francois.scheurer at everyware.ch > web:http://www.everyware.ch -- EveryWare AG François Scheurer Senior Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: francois.scheurer at everyware.ch web: http://www.everyware.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From rico.lin.guanyu at gmail.com Thu Sep 12 15:39:27 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Thu, 12 Sep 2019 23:39:27 +0800 Subject: [tc][uc][meta-sig] How to help SIGs to have a better life? (Needs feedback for SIGs guideline) Message-ID: Hi all, The question I would like to ask is, what else we can do here to help SIGs to have a better life? What can we do from here? And is there any feedback regarding on experience of participate a SIG or chairing one? For some work in progress (or completed) actions: I'm working on a guideline for SIGs ( https://etherpad.openstack.org/p/SIGs-guideline ) because I believe it might provide some value for SIGs, especially new-formed SIGs. Please kindly provide your feedback on it. Will send a patch to update current document under governance-sigs once we got good enough confident on it. 
On the other hand, the reason I start this work is because we're thinking `How to help SIGs to have a better life?` There're some actions I can think of and the most easier answers are to get SIGs status, update guidelines and explain why we need SIG in general. So actions: I'm working on SIG guideline ( https://etherpad.openstack.org/p/SIGs-guideline ) and document `Comparison of Official Group Structures` ( https://review.opendev.org/#/c/668093/ ). Also, reach out to SIGs earlier this year to collect help most needed information for SIGs and WGs ( https://etherpad.openstack.org/p/DEN-help-most-needed-for-sigs-and-wgs ) Also, I know Belmiro Moreira (UC member) has reached out to SIGs too, so there are some up to date information. I will try to put all the above information together for share. And now, back to the question, what can we do from here? Or is there any other feedback? Before I start to disturb everyone with crazy ideas in my mind, would like to hear feedback from all of you. Finally, feedback on SIG guideline is desired. Thanks! -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Thu Sep 12 16:23:40 2019 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 12 Sep 2019 11:23:40 -0500 Subject: [oslo][release][requirements] FFE request for oslo.policy, privsep, and service Message-ID: <77fe5106-10ae-d735-32e5-42e01677e8ce@nemebean.com> Hi, As discussed in the release meeting today, I'm requesting an FFE for oslo.policy, oslo.privsep, and oslo.service. The latter two are only release notes for things that landed late in the cycle, and oslo.policy is a small bugfix in sample policy generation. These should all be backportable if necessary, but for convenience we'd like to get them out now. Thanks. -Ben From mnaser at vexxhost.com Thu Sep 12 17:04:09 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 12 Sep 2019 13:04:09 -0400 Subject: [openstack-ansible] office hours update Message-ID: Hi everyone, Here’s the update of what happened in this week’s OpenStack Ansible Office Hours. We finished installing the Python3 for ansible-runtime virtual environment. We discussed how Centos 7.7 wasn’t out yet but we want to move to Python 3 so we’ll start using the CR repository. The placement extract and upgrade jobs are in progress. We clarified the whole definition around freezing a milestone and features and what it implies. The roles for Train milestones were frozen. We’ll wait for Python3, placement and bind-to-mgmt before proposing the milestone. Galera is still having issues and we’re having trouble understanding and fixing them but it has something to do with listening to localhost. We tested Ansible 2.9 and are trying to figure out if we want to use it for Train. We’re having issues with bumping up os-vif for Stein because they seem to be only for testing according to OpenStack Requirements. We talked about maybe using the in-repository local constraints or creating a tag, but we don’t think they can be bumped. It seems later that it was clarified that we can do that, and os-vif made a new release today so we can check it out. Finally, we discussed a journal logging error on Stein. There’s a case of python-systemd missing for logging. Thanks! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. 
http://vexxhost.com From mthode at mthode.org Thu Sep 12 17:17:59 2019 From: mthode at mthode.org (Matthew Thode) Date: Thu, 12 Sep 2019 12:17:59 -0500 Subject: [oslo][release][requirements] FFE request for oslo.policy, privsep, and service In-Reply-To: <77fe5106-10ae-d735-32e5-42e01677e8ce@nemebean.com> References: <77fe5106-10ae-d735-32e5-42e01677e8ce@nemebean.com> Message-ID: <20190912171759.dxijyymox5vxnrbv@mthode.org> On 19-09-12 11:23:40, Ben Nemec wrote: > Hi, > > As discussed in the release meeting today, I'm requesting an FFE for > oslo.policy, oslo.privsep, and oslo.service. The latter two are only release > notes for things that landed late in the cycle, and oslo.policy is a small > bugfix in sample policy generation. > > These should all be backportable if necessary, but for convenience we'd like > to get them out now. > > Thanks. > > -Ben > Looks good to me, thanks for the email -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From James.Benson at utsa.edu Thu Sep 12 16:11:12 2019 From: James.Benson at utsa.edu (James Benson) Date: Thu, 12 Sep 2019 16:11:12 +0000 Subject: [nova] Deprecating the XenAPI driver In-Reply-To: <> Message-ID: Matt, I am currently working on trying to deploy Xen OpenStack. Currently I have been trying to get it working on Rocky with Xen6.0 and will code fix for Stein/Train as well if possible. Trying to get a working solution with Rocky then will patch up the line. I have reached out to the last person who submitted a bug fix in Xen (with no response), but I am actively trying to get this working. Unfortunately it is a one-man job, so it is taking a lot of time. Currently facing issues with Nova and Neutron. James -------------- next part -------------- An HTML attachment was scrubbed...
URL: From tim at swiftstack.com Thu Sep 12 19:23:55 2019 From: tim at swiftstack.com (Tim Burke) Date: Thu, 12 Sep 2019 12:23:55 -0700 Subject: [Openstack-stable-maint] Stable check of openstack/swift for ref refs/heads/stable/pike failed In-Reply-To: References: Message-ID: <6a93a7addb49268be191947deb1f06a239806af7.camel@swiftstack.com> Wrote up https://bugs.launchpad.net/swift/+bug/1843816 to describe the issue; tl;dr is that python's http.client/httplib got more picky about sending only RFC-compliant HTTP requests, but Swift's proxy was happy to accept non-compliant query strings and try to forward them on to backend servers. Fix for master is up at https://review.opendev.org/#/c/681875/, and a backport for pike is up at https://review.opendev.org/#/c/681879/. Once I see passing checks there, I'll propose backports for everyone in between, plus ocata. Tim On Wed, 2019-09-11 at 06:43 +0000, A mailing list for the OpenStack Stable Branch test reports. wrote: > Build failed. > > - build-openstack-sphinx-docs > https://zuul.opendev.org/t/openstack/build/d7030406f5224d78baebbe5dbe80b4d5 > : SUCCESS in 6m 39s > - openstack-tox-py27 > https://zuul.opendev.org/t/openstack/build/a4ee29fb61684505995fda21718fcd89 > : FAILURE in 7m 15s > > _______________________________________________ > Openstack-stable-maint mailing list > Openstack-stable-maint at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-stable-maint From lucioseki at gmail.com Thu Sep 12 21:49:26 2019 From: lucioseki at gmail.com (Lucio Seki) Date: Thu, 12 Sep 2019 18:49:26 -0300 Subject: [neutron] DevStack with IPv6 Message-ID: Hi folks, I'm having trouble using ping6 to reach a VM running on DevStack from its hypervisor. Could you please help me troubleshoot it? I deployed DevStack with NEUTRON_CREATE_INITIAL_NETWORKS=False, and manually created the networks, subnets and router.
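[Editor's note: for anyone trying to reproduce this setup, a minimal sketch of the kind of commands that create such a topology is below. Only the names and the fd12:67:1:1::/64 prefix are taken from this thread; the external network name and the SLAAC address modes are assumptions.

$ openstack network create private1
$ openstack subnet create --network private1 --ip-version 6 \
    --ipv6-ra-mode slaac --ipv6-address-mode slaac \
    --subnet-range fd12:67:1:1::/64 private1-v6
$ openstack router create router1
$ openstack router add subnet router1 private1-v6
# "public" stands for the external network backing br-ex
$ openstack router set --external-gateway public router1
]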
Following is my router: $ openstack router show router1 -c external_gateway_info -c interfaces_info +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | external_gateway_info | {"network_id": "b87048ed-1be9-4f31-8d7e-fe74921aeec4", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "28a00bc3-b30b-456f-b26a-44b50d37183f", "ip_address": "10.2.0.199"}, {"subnet_id": "a9729beb-b297-4fec-8ec3-7703f7f6f4bc", "ip_address": "fd12:67:1::3c"}]} | | interfaces_info | [{"subnet_id": "081e8508-4ceb-4aaf-bf91-36a1e22a768c", "ip_address": "fd12:67:1:1::1", "port_id": "75391abd-8ac8-41f8-acf8-3dfaf2a6b08f"}] | +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ I'm trying to ping6 the following VM: $ openstack server list +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ | 938854d0-80e9-45b2-bc29-8fe7651ffa93 | manila1 | ACTIVE | private1=fd12:67:1:1:f816:3eff:fe0e:17c3 | manila | manila | +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ I intend to reach it via br-ex interface of the hypervisor: $ ip a show dev br-ex 9: br-ex: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/ether 0e:82:a1:ba:77:4c brd ff:ff:ff:ff:ff:ff inet6 fd12:67:1::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::c82:a1ff:feba:774c/64 scope link valid_lft forever preferred_lft forever The hypervisor has the following routes: $ ip -6 route fd12:67:1:1::/64 via fd12:67:1::3c dev br-ex metric 1024 pref medium fe80::/64 dev ens3 proto kernel metric 256 pref medium fe80::/64 dev br-ex proto kernel metric 256 pref medium fe80::/64 dev br-int proto kernel metric 256 pref medium fe80::/64 dev tapa5cf4799-9f proto kernel metric 256 pref medium And within the VM has the following routes: root at ubuntu:~# ip -6 route root at ubuntu:~# ip -6 route fd12:67:1::/64 via fd12:67:1:1::1 dev ens3 metric 1024 pref medium fd12:67:1:1::/64 dev ens3 proto kernel metric 256 expires 86360sec pref medium fe80::/64 dev ens3 proto kernel metric 256 pref medium default via fe80::f816:3eff:feb3:bd56 dev ens3 proto ra metric 1024 expires 260sec hoplimit 64 pref medium Though the ping6 from VM to hypervisor doesn't work: root at ubuntu:~# ping6 fd12:67:1::1 -c4 PING fd12:67:1::1 (fd12:67:1::1): 56 data bytes --- fd12:67:1::1 ping statistics --- 4 packets transmitted, 0 packets received, 100% packet loss I'm able to tcpdump inside the router1 netns and see that request packet is passing there, but can't see any reply 
packets: $ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 tcpdump -l -i any icmp6 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes 21:29:29.351358 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, echo request, seq 0, length 64 21:29:30.033316 IP6 fe80::f816:3eff:feb3:bd56 > fe80::f816:3eff:fe0e:17c3: ICMP6, neighbor solicitation, who has fe80::f816:3eff:fe0e:17c3, length 32 21:29:30.035807 IP6 fe80::f816:3eff:fe0e:17c3 > fe80::f816:3eff:feb3:bd56: ICMP6, neighbor advertisement, tgt is fe80::f816:3eff:fe0e:17c3, length 24 21:29:30.353646 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, echo request, seq 1, length 64 21:29:31.355410 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, echo request, seq 2, length 64 21:29:32.357239 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, echo request, seq 3, length 64 The same happens from hypervisor to VM. I only acan see the request packets, but no reply packets. Thanks in advance, Lucio Seki -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Thu Sep 12 23:03:41 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 12 Sep 2019 18:03:41 -0500 Subject: [nova] Deprecating the XenAPI driver In-Reply-To: References: Message-ID: <2d8a9d19-4e73-ae53-7c04-760e5b904888@gmail.com> On 9/12/2019 11:11 AM, James Benson wrote: > I am currently working on trying to deploy Xen OpenStack.  Currently I > have been trying to get it working on Rocky with Xen6.0 and will code > fix for Stein/Train as well if possible. Trying to get a working > solution with Rocky then will patch up the line. I have reached out to > the last person who submitted a bug fix in Xen (with no response), but I > am actively trying to get this working.  Unfortunately it is a one-man > job, so it is taking a lot of time. Currently facing issues with Nova > and Neutron. Thanks for letting us know you're trying to get nova working with the xenapi driver James. The last time there was sustained effort on that driver was in Rocky so I would not be surprised if there are issues in Stein or Train. If you have fixes please contribute them upstream. However, I think we should still move forward with deprecation of the driver as a clear indication of the lack of maintainers on the driver. If that changes in the Ussuri release we have the option to undeprecate but I think it's important to clearly signal the state of maintenance for parts of nova so people don't start using them just to find out later they'll be in a bad state (which you might have already found out). -- Thanks, Matt From smooney at redhat.com Thu Sep 12 23:51:30 2019 From: smooney at redhat.com (Sean Mooney) Date: Fri, 13 Sep 2019 00:51:30 +0100 Subject: [nova] Deprecating the XenAPI driver In-Reply-To: <2d8a9d19-4e73-ae53-7c04-760e5b904888@gmail.com> References: <2d8a9d19-4e73-ae53-7c04-760e5b904888@gmail.com> Message-ID: On Thu, 2019-09-12 at 18:03 -0500, Matt Riedemann wrote: > On 9/12/2019 11:11 AM, James Benson wrote: > > I am currently working on trying to deploy Xen OpenStack. Currently I > > have been trying to get it working on Rocky with Xen6.0 and will code > > fix for Stein/Train as well if possible. Trying to get a working > > solution with Rocky then will patch up the line. 
I have reached out to > > the last person who submitted a bug fix in Xen (with no response), but I > > am actively trying to get this working. Unfortunately it is a one-man > > job, so it is taking a lot of time. Currently facing issues with Nova > > and Neutron. > > Thanks for letting us know you're trying to get nova working with the > xenapi driver James. The last time there was sustained effort on that > driver was in Rocky so I would not be surprised if there are issues in > Stein or Train. If you have fixes please contribute them upstream. > However, I think we should still move forward with deprecation of the > driver as a clear indication of the lack of maintainers on the driver. > If that changes in the Ussuri release we have the option to undeprecate > but I think it's important to clearly signal the state of maintenance > for parts of nova so people don't start using them just to find out > later they'll be in a bad state (which you might have already found out). i dont think this applies to libvirt + xen but i think the direct to xen server implmenation requires a specific version of like python 2.6 or an early version of 2.7 to work or put another it wont work with python 3. that it might have changed but i remember trying to help somomn debug the xenserver driver in kolla aboud a year ago and i dont think that any work has been done to make it python 3 compatiable. so if we are to keep it in Ussuri some heavy lifting would be needed to make it run python 3 only. > From miguel at mlavalle.com Fri Sep 13 01:10:29 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Thu, 12 Sep 2019 20:10:29 -0500 Subject: [openstack-dev] [neutron] Cancelling Neutron Drivers meeting on September 13th Message-ID: Dear Neutrinos, We don't have RFEs ready to be discussed during this week's drivers meeting. As a consequence, let's skip it. However, last week we discussed https://bugs.launchpad.net/neutron/+bug/1837847 and asked the submitter to write a spec, which he did: https://review.opendev.org/#/c/680990/. Please review it and let's be ready to go back the this RFE during the meeting on the 20th Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tushar.Patil at nttdata.com Fri Sep 13 08:54:34 2019 From: Tushar.Patil at nttdata.com (Patil, Tushar) Date: Fri, 13 Sep 2019 08:54:34 +0000 Subject: [tacker] Feature Freeze Exception Request - Add VNF packages support Message-ID: Hi Dharmendra and all Core reviewers In train cycle ,we are committed to implement spec “VNF packages support for VNF onboarding” [1]. All patches [2] are uploaded on the gerrit and code review is in progress but as we have dependency on tosca-parser library, patches are not yet merged. Now, tosca-parser library new version 1.6.0. is released but we are waiting for patch [3] to merge which will update the constraints of tosca-parser to 1.6.0 in requirements project. Once that happens, we will make changes to the tacker patch [4] to update the lower constraints of tosca-parser to 1.6.0 which will run all functional and unit tests added for this feature successfully on the CI job. I would like to request feature freeze exception for “VNF packages support for VNF onboarding” [1]. We will make sure all the review comments on the patches will be fixed promptly so that we can merge them as soon as possible. 
[1] : https://review.opendev.org/#/c/582930/ [2] : https://review.opendev.org/#/q/topic:bp/tosca-csar-mgmt-driver+(status:open+OR+status:merged) [3] : https://review.opendev.org/#/c/681819/ [4]: https://review.opendev.org/#/c/675600/ Thanks, tpatil Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. From thierry at openstack.org Fri Sep 13 09:04:05 2019 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 13 Sep 2019 11:04:05 +0200 Subject: [tc][uc][meta-sig] How to help SIGs to have a better life? (Needs feedback for SIGs guideline) In-Reply-To: References: Message-ID: <0e3947c8-8f4d-44de-b361-7ebb9e30fd84@openstack.org> Rico Lin wrote: > The question I would like to ask is, what else we can do here to help > SIGs to have a better life? What can we do from here? And is there any > feedback regarding on experience of participate a SIG or chairing one? I think the best thing we can do to help SIGs to have a better life is to make it as lightweight as possible to run one. > For some work in progress (or completed) actions: > I'm working on a guideline for SIGs ( > https://etherpad.openstack.org/p/SIGs-guideline ) because I believe it > might provide some value for SIGs, especially new-formed SIGs. Please > kindly provide your feedback on it. Will send a patch to update current > document under governance-sigs once we got good enough confident on it. In the spirit of keeping things lightweight, I feel like this document is already overwhelming. I understand it's meant as a resource guide in case SIGs need guidance, but as it stands it looks a bit intimidating, with its 7 bullet points for "Creating a SIG". Actually the only thing needed to create a SIG is the first bullet point (patch to governance-sigs), everything else is VERY optional. I wonder if this should not be made a SIG guide (under the model of the Project Team Guide), with: 1. When to create a SIG 1.1 What's a SIG 1.2 SIGs compared to other working groups in OpenStack 2. Process to create a SIG (file that patch, with name, lead(s) and scope) 3. Optional resources available to SIGs 3.1 Communications 3.2 Meetings (in person and online) 3.3 Documentation (wiki...) 3.4 Git Repositories 3.5 Task tracker 4. SIG lifecycle 4.1 Keeping SIG leads and URLs up to date 4.2 Marking SIGs inactive 4.3 Removing a SIG While it would make a larger document overall, it would IMHO make it clearer what's necessary and what's guidance / optional. I'm happy to help setting this up as a separate documentation repo, if that sounds like a good idea. -- Thierry Carrez (ttx) From dharmendra.kushwaha at india.nec.com Fri Sep 13 10:10:04 2019 From: dharmendra.kushwaha at india.nec.com (Dharmendra Kushwaha) Date: Fri, 13 Sep 2019 10:10:04 +0000 Subject: [tacker] Feature Freeze Exception Request - Add VNF packages support In-Reply-To: References: Message-ID: Hi Tushar, Thanks for your hard effort. I had released tosce-parser1.6.0 as in [1], and lets wait [2] to get merged. Regarding tackerclient code, we already have merged it, and will release tackerclient today. Tacker have cycle-with-rc release model, So ok, we can wait some time for this feature(server patches). 
We just needs to make sure that no broken code goes in the last movement and can be tested before rc release. [1]: https://review.opendev.org/#/c/681240 [2]: https://review.opendev.org/#/c/681819 Thanks & Regards Dharmendra Kushwaha ________________________________________ From: Patil, Tushar Sent: Friday, September 13, 2019 2:24 PM To: openstack-discuss at lists.openstack.org Subject: [tacker] Feature Freeze Exception Request - Add VNF packages support Hi Dharmendra and all Core reviewers In train cycle ,we are committed to implement spec “VNF packages support for VNF onboarding” [1]. All patches [2] are uploaded on the gerrit and code review is in progress but as we have dependency on tosca-parser library, patches are not yet merged. Now, tosca-parser library new version 1.6.0. is released but we are waiting for patch [3] to merge which will update the constraints of tosca-parser to 1.6.0 in requirements project. Once that happens, we will make changes to the tacker patch [4] to update the lower constraints of tosca-parser to 1.6.0 which will run all functional and unit tests added for this feature successfully on the CI job. I would like to request feature freeze exception for “VNF packages support for VNF onboarding” [1]. We will make sure all the review comments on the patches will be fixed promptly so that we can merge them as soon as possible. [1] : https://review.opendev.org/#/c/582930/ [2] : https://review.opendev.org/#/q/topic:bp/tosca-csar-mgmt-driver+(status:open+OR+status:merged) [3] : https://review.opendev.org/#/c/681819/ [4]: https://review.opendev.org/#/c/675600/ Thanks, tpatil Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. ________________________________ The contents of this e-mail and any attachment(s) are confidential and intended for the named recipient(s) only. It shall not attach any liability on the originator or NECTI or its affiliates. Any views or opinions presented in this email are solely those of the author and may not necessarily reflect the opinions of NECTI or its affiliates. Any form of reproduction, dissemination, copying, disclosure, modification, distribution and / or publication of this message without the prior written consent of the author of this e-mail is strictly prohibited. If you have received this email in error please delete it and notify the sender immediately. From cdent+os at anticdent.org Fri Sep 13 11:19:51 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 13 Sep 2019 12:19:51 +0100 (BST) Subject: [placement] update 19-35 Message-ID: HTML: https://anticdent.org/placement-update-19-36.html Here's placement update 19-36. There won't be one next week, I will be away. Because of my forthcoming "less time available for OpenStack" I will also be stopping these updates at some point in the next month or so so I can focus the limited time I will have on reviewing and coding. There will be at least one more. # Most Important The big news this week is that after returning from a trip (that meant he was away during the nomination period) Tetsuro has stepped up to be the PTL for placement in Ussuri. Thanks very much to him for taking this up, I'm sure he will be excellent. 
We need to work on useful documentation for the features developed this cycle. I've also made a [now worklist](https://storyboard.openstack.org/#!/worklist/754) in StoryBoard to draw attention to placement project stories that are relevant to the next few weeks, making it easier to ignore those that are not relevant now, but may be later. # Stories/Bugs (Numbers in () are the change since the last pupdate.) There are 23 (-1) stories in [the placement group](https://storyboard.openstack.org/#!/project_group/placement). 0 (0) are [untagged](https://storyboard.openstack.org/#!/worklist/580). 5 (0) are [bugs](https://storyboard.openstack.org/#!/worklist/574). 4 (0) are [cleanups](https://storyboard.openstack.org/#!/worklist/575). 10 (-1) are [rfes](https://storyboard.openstack.org/#!/worklist/594). 5 (1) are [docs](https://storyboard.openstack.org/#!/worklist/637). If you're interested in helping out with placement, those stories are good places to look. * Placement related nova [bugs not yet in progress](https://goo.gl/TgiPXb) on launchpad: 17 (0). * Placement related nova [in progress bugs](https://goo.gl/vzGGDQ) on launchpad: 6 (0). # osc-placement * Add support for multiple member_of. There's been some useful discussion about how to achieve this, and a consensus has emerged on how to get the best results. # Main Themes ## Consumer Types Adding a type to consumers will allow them to be grouped for various purposes, including quota accounting. * This has some good comments on it from melwitt. I'm going to be away next week, so if someone else would like to address them that would be great. If it is deemed fit to merge, we should, despite feature freeze passing, since we haven't had much churn lately. If it doesn't make it in Train, that's fine too. The goal is to have it ready for Nova in Ussuri as early as possible. ## Cleanup Cleanup is an overarching theme related to improving documentation, performance and the maintainability of the code. The changes we are making this cycle are fairly complex to use and are fairly complex to write, so it is good that we're going to have plenty of time to clean and clarify all these things. Performance related explorations continue: * Refactor initialization of research context. This puts the code that might cause an exit earlier in the process so we can avoid useless work. One outcome of the performance work needs to be something like a _Deployment Considerations_ document to help people choose how to tweak their placement deployment to match their needs. The simple answer is use more web servers and more database servers, but that's often very wasteful. * These are the patches for meeting the build pdf docs goal for the various placement projects. # Other Placement Miscellaneous changes can be found in [the usual place](https://review.opendev.org/#/q/project:openstack/placement+status:open). There are three [os-traits changes](https://review.opendev.org/#/q/project:openstack/os-traits+status:open) being discussed. And two [os-resource-classes changes](https://review.opendev.org/#/q/project:openstack/os-resource-classes+status:open). The latter are docs-related. # Other Service Users New reviews are added to the end of the list. Reviews that haven't had attention in a long time (boo!) or have merged or approved (yay!) are removed. 
* helm: add placement chart * Nova: WIP: Add a placement audit command * tempest: Add placement API methods for testing routed provider nets * Nova: cross cell resize * Nova: Scheduler translate properties to traits * Nova: single pass instance info fetch in host manager * Nova: using provider config file for custom resource providers * Nova: clean up some lingering placement stuff * OSA: Add nova placement to placement migration * Charms: Disable nova placement API in Train * Nova: stop using @safe_connect in report client # End 🐈 -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From donny at fortnebula.com Fri Sep 13 13:15:53 2019 From: donny at fortnebula.com (Donny Davis) Date: Fri, 13 Sep 2019 09:15:53 -0400 Subject: [neutron] DevStack with IPv6 In-Reply-To: References: Message-ID: Security group rules? Donny Davis c: 805 814 6800 On Thu, Sep 12, 2019, 5:53 PM Lucio Seki wrote: > Hi folks, I'm having troubles to ping6 a VM running over DevStack from its > hypervisor. > Could you please help me troubleshooting it? > > I deployed DevStack with NEUTRON_CREATE_INITIAL_NETWORKS=False, > and manually created the networks, subnets and router. Following is my > router: > > $ openstack router show router1 -c external_gateway_info -c interfaces_info > > +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | Field | Value > > > | > > +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | external_gateway_info | {"network_id": > "b87048ed-1be9-4f31-8d7e-fe74921aeec4", "enable_snat": true, > "external_fixed_ips": [{"subnet_id": > "28a00bc3-b30b-456f-b26a-44b50d37183f", "ip_address": "10.2.0.199"}, > {"subnet_id": "a9729beb-b297-4fec-8ec3-7703f7f6f4bc", "ip_address": > "fd12:67:1::3c"}]} | > | interfaces_info | [{"subnet_id": > "081e8508-4ceb-4aaf-bf91-36a1e22a768c", "ip_address": "fd12:67:1:1::1", > "port_id": "75391abd-8ac8-41f8-acf8-3dfaf2a6b08f"}] > > | > > +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > I'm trying to ping6 the following VM: > > $ openstack server list > > +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ > | ID | Name | Status | Networks > | Image | Flavor | > > +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ > | 938854d0-80e9-45b2-bc29-8fe7651ffa93 | manila1 | ACTIVE | > private1=fd12:67:1:1:f816:3eff:fe0e:17c3 | manila | manila | > > +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ > > I intend to reach it via br-ex interface of the hypervisor: > > $ ip a show dev br-ex > 9: br-ex: mtu 1500 qdisc noqueue state > UNKNOWN group default qlen 1000 > link/ether 0e:82:a1:ba:77:4c brd ff:ff:ff:ff:ff:ff > inet6 fd12:67:1::1/64 
scope global > valid_lft forever preferred_lft forever > inet6 fe80::c82:a1ff:feba:774c/64 scope link > valid_lft forever preferred_lft forever > > The hypervisor has the following routes: > > $ ip -6 route > fd12:67:1:1::/64 via fd12:67:1::3c dev br-ex metric 1024 pref medium > fe80::/64 dev ens3 proto kernel metric 256 pref medium > fe80::/64 dev br-ex proto kernel metric 256 pref medium > fe80::/64 dev br-int proto kernel metric 256 pref medium > fe80::/64 dev tapa5cf4799-9f proto kernel metric 256 pref medium > > And within the VM has the following routes: > > root at ubuntu:~# ip -6 route > root at ubuntu:~# ip -6 route > fd12:67:1::/64 via fd12:67:1:1::1 dev ens3 metric 1024 pref medium > fd12:67:1:1::/64 dev ens3 proto kernel metric 256 expires 86360sec pref > medium > fe80::/64 dev ens3 proto kernel metric 256 pref medium > default via fe80::f816:3eff:feb3:bd56 dev ens3 proto ra metric 1024 > expires 260sec hoplimit 64 pref medium > > Though the ping6 from VM to hypervisor doesn't work: > root at ubuntu:~# ping6 fd12:67:1::1 -c4 > PING fd12:67:1::1 (fd12:67:1::1): 56 data bytes > --- fd12:67:1::1 ping statistics --- > 4 packets transmitted, 0 packets received, 100% packet loss > > I'm able to tcpdump inside the router1 netns and see that request packet > is passing there, but can't see any reply packets: > > $ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 tcpdump > -l -i any icmp6 > tcpdump: verbose output suppressed, use -v or -vv for full protocol decode > listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 > bytes > 21:29:29.351358 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, > echo request, seq 0, length 64 > 21:29:30.033316 IP6 fe80::f816:3eff:feb3:bd56 > fe80::f816:3eff:fe0e:17c3: > ICMP6, neighbor solicitation, who has fe80::f816:3eff:fe0e:17c3, length 32 > 21:29:30.035807 IP6 fe80::f816:3eff:fe0e:17c3 > fe80::f816:3eff:feb3:bd56: > ICMP6, neighbor advertisement, tgt is fe80::f816:3eff:fe0e:17c3, length 24 > 21:29:30.353646 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, > echo request, seq 1, length 64 > 21:29:31.355410 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, > echo request, seq 2, length 64 > 21:29:32.357239 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, > echo request, seq 3, length 64 > > The same happens from hypervisor to VM. I only acan see the request > packets, but no reply packets. > > Thanks in advance, > Lucio Seki > -------------- next part -------------- An HTML attachment was scrubbed... URL: From saphi070 at gmail.com Fri Sep 13 13:23:12 2019 From: saphi070 at gmail.com (Sa Pham) Date: Fri, 13 Sep 2019 22:23:12 +0900 Subject: [mistral] cron triggers execution fails on identity:validate_token with non-admin users In-Reply-To: References: <241f5d5e-8b21-9081-c1d1-66e908047335@everyware.ch> Message-ID: Hi Francois, You can try this patch: https://review.opendev.org/#/c/680858/ Sa Pham On Thu, Sep 12, 2019 at 11:49 PM Francois Scheurer < francois.scheurer at everyware.ch> wrote: > Hello > > > > Apparently other people have the same issue and cannot use cron triggers > anymore: > > https://bugs.launchpad.net/mistral/+bug/1843175 > > > We also tried with following patch installed but the same error persists: > > > https://opendev.org/openstack/mistral/commit/6102c5251e29c1efe73c92935a051feff0f649c7?style=split > > > > Cheers > > Francois > > > > > On 9/9/19 6:23 PM, Francois Scheurer wrote: > > Dear All > > > We are using Mistral 7.0.1.1 with Openstack Rocky. 
(with federated users) > > We can create and execute a workflow via horizon, but cron triggers always > fail with this error: > > { > "result": > "The action raised an exception [ > action_ex_id=ef878c48-d0ad-4564-9b7e-a06f07a70ded, > action_cls=' 'mistral.actions.action_factory.NovaAction'>', > attributes='{u'client_method_name': u'servers.find'}', > params='{ > u'action_region': u'ch-zh1', > u'name': u'42724489-1912-44d1-9a59-6c7a4bebebfa' > }' > ] > \n NovaAction.servers.find failed: You are not authorized > to perform the requested action: identity:validate_token. (HTTP 403) > (Request-ID: req-ec1aea36-c198-4307-bf01-58aca74fad33) > " > } > > Adding the role *admin* or *service* to the user logged in horizon is > "fixing" the issue, I mean that the cron trigger then works as expected, > > but it would be obviously a bad idea to do this for all normal users ;-) > > So my question: is it a config problem on our side ? is it a known bug? or > is it a feature in the sense that cron triggers are for normal users? > > > After digging in the keystone debug logs (see at the end below), I found > that RBAC check identity:validate_token an deny the authorization. > > But according to the policy.json (in keystone and in horizon), rule:owner > should be enough to grant it...: > > "identity:validate_token": "rule:service_admin_or_owner", > "service_admin_or_owner": "rule:service_or_admin or > rule:owner", > "service_or_admin": "rule:admin_required or > rule:service_role", > "service_role": "role:service", > "owner": "user_id:%(user_id)s or > user_id:%(target.token.user_id)s", > > Thank you in advance for your help. > > > Best Regards > > Francois Scheurer > > > > > Keystone logs: > > 2019-09-05 09:38:00.902 29 DEBUG keystone.policy.backends.rules > [req-1a276b9d-8276-4ec3-b516-f51f86cd1df6 fsc fscproject - testdom testdom] > enforce identity:validate_token: > { > 'service_project_id':None, > 'service_user_id':None, > 'service_user_domain_id':None, > 'service_project_domain_id':None, > 'trustor_id':None, > 'user_domain_id':u'testdom', > 'domain_id':None, > 'trust_id':u'mytrustid', > 'project_domain_id':u'testdom', > 'service_roles':[], > 'group_ids':[], > 'user_id':u'fsc', > 'roles':[ > u'_member_', > u'creator', > u'reader', > u'heat_stack_owner', > u'member', > u'load-balancer_member'], > 'system_scope':None, > 'trustee_id':None, > 'domain_name':None, > 'is_admin_project':True, > 'token': audit_chain_id=[u'0LAsW_0dQMWXh2cTZTLcWA']) at 0x7f208f4a3bd0>, > 'project_id':u'fscproject' > } enforce > /var/lib/kolla/venv/local/lib/python2.7/site-packages/keystone/policy/backends/rules.py:33 > 2019-09-05 09:38:00.920 29 WARNING keystone.common.wsgi > [req-1a276b9d-8276-4ec3-b516-f51f86cd1df6 fsc fscproject - testdom testdom] > You are not authorized to perform the requested action: > identity:validate_token.: *ForbiddenAction: You are not authorized to > perform the requested action: identity:validate_token.* > > -- > > > EveryWare AG > François Scheurer > Senior Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: francois.scheurer at everyware.ch > web: http://www.everyware.ch > > -- > > > EveryWare AG > François Scheurer > Senior Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: francois.scheurer at everyware.ch > web: http://www.everyware.ch > > -- Sa Pham Dang Master Student - Soongsil University Kakaotalk: sapd95 Skype: great_bn -------------- next part -------------- An HTML 
attachment was scrubbed... URL: From francois.scheurer at everyware.ch Fri Sep 13 13:32:20 2019 From: francois.scheurer at everyware.ch (Francois Scheurer) Date: Fri, 13 Sep 2019 15:32:20 +0200 Subject: [mistral] cron triggers execution fails on identity:validate_token with non-admin users In-Reply-To: References: <241f5d5e-8b21-9081-c1d1-66e908047335@everyware.ch> Message-ID: Hi Sa Pham Yes this is the good one. Bo Tran pointed it to me yesterday as well and it fixed the issue. See also: https://bugs.launchpad.net/mistral/+bug/1843175 Many Thanks to both of you ! Best Regards Francois Scheurer On 9/13/19 3:23 PM, Sa Pham wrote: > Hi Francois, > > You can try this patch: https://review.opendev.org/#/c/680858/ > > Sa Pham > > On Thu, Sep 12, 2019 at 11:49 PM Francois Scheurer > > wrote: > > Hello > > > > Apparently other people have the same issue and cannot use cron > triggers anymore: > > https://bugs.launchpad.net/mistral/+bug/1843175 > > > We also tried with following patch installed but the same error > persists: > > https://opendev.org/openstack/mistral/commit/6102c5251e29c1efe73c92935a051feff0f649c7?style=split > > > > Cheers > > Francois > > > > > On 9/9/19 6:23 PM, Francois Scheurer wrote: >> >> Dear All >> >> >> We are using Mistral 7.0.1.1 with  Openstack Rocky. (with >> federated users) >> >> We can create and execute a workflow via horizon, but cron >> triggers always fail with this error: >> >>     { >>         "result": >>             "The action raised an exception [ >> action_ex_id=ef878c48-d0ad-4564-9b7e-a06f07a70ded, >>                     action_cls='> 'mistral.actions.action_factory.NovaAction'>', >>                     attributes='{u'client_method_name': >> u'servers.find'}', >>                     params='{ >>                         u'action_region': u'ch-zh1', >>                         u'name': >> u'42724489-1912-44d1-9a59-6c7a4bebebfa' >>                     }' >>                 ] >>                 \n NovaAction.servers.find failed: You are not >> authorized to perform the requested action: >> identity:validate_token. (HTTP 403) (Request-ID: >> req-ec1aea36-c198-4307-bf01-58aca74fad33) >>             " >>     } >> >> Adding the role *admin* or *service* to the user logged in >> horizon is "fixing" the issue, I mean that the cron trigger then >> works as expected, >> >> but it would be obviously a bad idea to do this for all normal >> users ;-) >> >> So my question: is it a config problem on our side ? is it a >> known bug? or is it a feature in the sense that cron triggers are >> for normal users? >> >> >> After digging in the keystone debug logs (see at the end below), >> I found that RBAC check identity:validate_token an deny the >> authorization. >> >> But according to the policy.json (in keystone and in horizon), >> rule:owner should be enough to grant it...: >> >>             "identity:validate_token": "rule:service_admin_or_owner", >>                 "service_admin_or_owner": "rule:service_or_admin >> or rule:owner", >>                     "service_or_admin": "rule:admin_required or >> rule:service_role", >>                         "service_role": "role:service", >>                     "owner": "user_id:%(user_id)s or >> user_id:%(target.token.user_id)s", >> >> Thank you in advance for your help. 
>> >> >> Best Regards >> >> Francois Scheurer >> >> >> >> >> Keystone logs: >> >>         2019-09-05 09:38:00.902 29 DEBUG >> keystone.policy.backends.rules >> [req-1a276b9d-8276-4ec3-b516-f51f86cd1df6 fsc fscproject - >> testdom testdom] >>             enforce identity:validate_token: >>             { >>                'service_project_id':None, >>                'service_user_id':None, >>                'service_user_domain_id':None, >>                'service_project_domain_id':None, >>                'trustor_id':None, >>                'user_domain_id':u'testdom', >>                'domain_id':None, >>                'trust_id':u'mytrustid', >>                'project_domain_id':u'testdom', >>                'service_roles':[], >>                'group_ids':[], >>                'user_id':u'fsc', >>                'roles':[ >>                   u'_member_', >>                   u'creator', >>                   u'reader', >>                   u'heat_stack_owner', >>                   u'member', >>                   u'load-balancer_member'], >>                'system_scope':None, >>                'trustee_id':None, >>                'domain_name':None, >>                'is_admin_project':True, >>                'token':> (audit_id=0LAsW_0dQMWXh2cTZTLcWA, >> audit_chain_id=[u'0LAsW_0dQMWXh2cTZTLcWA']) at 0x7f208f4a3bd0>, >>                'project_id':u'fscproject' >>             } enforce >> /var/lib/kolla/venv/local/lib/python2.7/site-packages/keystone/policy/backends/rules.py:33 >>         2019-09-05 09:38:00.920 29 WARNING keystone.common.wsgi >> [req-1a276b9d-8276-4ec3-b516-f51f86cd1df6 fsc fscproject - >> testdom testdom] >>             You are not authorized to perform the requested >> action: identity:validate_token.: *ForbiddenAction: You are not >> authorized to perform the requested action: identity:validate_token.* >> >> >> -- >> >> >> EveryWare AG >> François Scheurer >> Senior Systems Engineer >> Zurlindenstrasse 52a >> CH-8003 Zürich >> >> tel: +41 44 466 60 00 >> fax: +41 44 466 60 10 >> mail:francois.scheurer at everyware.ch >> web:http://www.everyware.ch > > -- > > > EveryWare AG > François Scheurer > Senior Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail:francois.scheurer at everyware.ch > web:http://www.everyware.ch > > > > -- > Sa Pham Dang > Master Student - Soongsil University > Kakaotalk: sapd95 > Skype: great_bn > > -- EveryWare AG François Scheurer Senior Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: francois.scheurer at everyware.ch web: http://www.everyware.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From corey.bryant at canonical.com Fri Sep 13 13:58:20 2019 From: corey.bryant at canonical.com (Corey Bryant) Date: Fri, 13 Sep 2019 09:58:20 -0400 Subject: [goal][python3] Train unit tests weekly update (goal-0) Message-ID: This is the goal-0 weekly update for the "Update Python 3 test runtimes for Train" goal [1]. Today is the final day for completion of Train community goals [2]. == How can you help? == If your project has failing tests please take a look and help fix. Python 3.7 unit tests will be self-testing in Zuul. 
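As a rough illustration (a minimal sketch, not any particular project's actual configuration), adopting the goal is usually a one-line template switch in the repository's .zuul.yaml (or zuul.d/project.yaml), along these lines:

    - project:
        templates:
          - openstack-python3-train-jobs

together with py36 and py37 environments in tox.ini; the py36/py37 unit test jobs should then run in check and gate.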
Failing patches: https://review.openstack.org/#/q/topic:python3-train+status:open+(+label:Verified-1+OR+label:Verified-2+) If your project has patches with successful tests please help get them merged. Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-train+is:open Patch automation scripts needing review: https://review.opendev.org/#/c/666934 == Ongoing Work == We're down to 3 projects with failing tests, and 2 projects with successful tests. Barbican and PowerVM are actively working on getting patches landed. I've not been successful in making contact with the Freezer PTL. Thank you to all who have contributed their time and fixes to enable patches to land! == Completed Work == All patches have been submitted to all applicable projects for this goal. Merged patches: https://review.openstack.org/#/q/topic:python3-train+is:merged == What's the Goal? == To ensure (in the Train cycle) that all official OpenStack repositories with Python 3 unit tests are exclusively using the 'openstack-python3-train-jobs' Zuul template or one of its variants (e.g. 'openstack-python3-train-jobs-neutron') to run unit tests, and that tests are passing. This will ensure that all official projects are running py36 and py37 unit tests in Train. For complete details please see [1]. == Reference Material == [1] Goal description: https://governance.openstack.org/tc/goals/train/python3-updates.html [2] Train release schedule: https://releases.openstack.org/train/schedule.html (see R-5 for "Train Community Goals Completed") Storyboard: https://storyboard.openstack.org/#!/story/2005924 Porting to Python 3.7: https://docs.python.org/3/whatsnew/3.7.html#porting-to-python-3-7 Python Update Process: https://opendev.org/openstack/governance/src/branch/master/resolutions/20181024-python-update-process.rst Train runtimes: https://opendev.org/openstack/governance/src/branch/master/reference/runtimes/train.rst Thanks, Corey -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Fri Sep 13 14:00:44 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 13 Sep 2019 09:00:44 -0500 Subject: [release] Release countdown for week R-4, September 16-20 Message-ID: <20190913140044.GA20572@sm-workstation> Development Focus ----------------- We just passed feature freeze! Until release branches are cut, you should stop accepting featureful changes to deliverables following the cycle-with-rc release model, or to libraries. Exceptions should be discussed on separate threads on the mailing-list, and approved by the team's PTL. Focus should be on finding and fixing release-critical bugs, so that release candidates and final versions of the Train deliverables can be proposed, well ahead of the final Train release date. General Information ------------------- We are still finishing up processing a few release requests, but the Train release requirements are now frozen. If new library releases are needed to fix release-critical bugs in Train, you must request a Feature Freeze Exception (FFE) from the requirements team before we can do a new release to avoid having something released in Train that is not actually usable. This is done by posting to the openstack-discuss mailing list with a subject line similar to: [$PROJECT][requirements] FFE requested for $PROJECT_LIB Include justification/reasoning for why a FFE is needed for this lib. If/when the requirements team OKs the post-freeze update, we can then process a new release.
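To make that concrete, a hypothetical request for an Oslo library might use a subject like "[oslo][requirements] FFE requested for oslo.privsep" (an invented example, not a real request), with the justification in the body of the message.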
A soft String freeze is now in effect, in order to let the I18N team do the translation work in good conditions. In Horizon and the various dashboard plugins, you should stop accepting changes that modify user-visible strings. Exceptions should be discussed on the mailing-list. By September 26 this will become a hard string freeze, with no changes in user-visible strings allowed. Actions --------- stable/train branches should be created soon for all not-already-branched libraries. You should expect 2-3 changes to be proposed for each: a .gitreview update, a reno update (skipped for projects not using reno), and a tox.ini constraints URL update. Please review those in priority so that the branch can be functional ASAP. The Prelude section of reno release notes is rendered as the top level overview for the release. Any important overall messaging for Train changes should be added there to make sure the consumers of your release notes see them. Finally, if you haven't proposed Train cycle-highlights yet, you are already late to the party. Please see http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009137.html for details. Upcoming Deadlines & Dates -------------------------- RC1 deadline: September 26 (R-3 week) Final RC deadline: October 10 (R-1 week) Final Train release: October 16 Forum+PTG at Shanghai summit: November 4 From haleyb.dev at gmail.com Fri Sep 13 14:10:23 2019 From: haleyb.dev at gmail.com (Brian Haley) Date: Fri, 13 Sep 2019 10:10:23 -0400 Subject: [neutron] DevStack with IPv6 In-Reply-To: References: Message-ID: <24283fad-a8b6-6672-549e-bd1d27a9747b@gmail.com> On 9/12/19 5:49 PM, Lucio Seki wrote: > Hi folks, I'm having troubles to ping6 a VM running over DevStack from > its hypervisor. > Could you please help me troubleshooting it? > > I deployed DevStack with NEUTRON_CREATE_INITIAL_NETWORKS=False, I think this is your problem. When this is set to True, create_neutron_initial_network() is called, which does a little "hacking" by bringing interfaces up, moving addresses and adding routes so that you can communicate with floating IP and IPv6 addresses. You would have to look at that code and do similar things manually. -Brian > and manually created the networks, subnets and router. 
Following is my > router: > > $ openstack router show router1 -c external_gateway_info -c interfaces_info > +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | Field                 | Value > > > >        | > +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | external_gateway_info | {"network_id": > "b87048ed-1be9-4f31-8d7e-fe74921aeec4", "enable_snat": true, > "external_fixed_ips": [{"subnet_id": > "28a00bc3-b30b-456f-b26a-44b50d37183f", "ip_address": "10.2.0.199"}, > {"subnet_id": "a9729beb-b297-4fec-8ec3-7703f7f6f4bc", "ip_address": > "fd12:67:1::3c"}]} | > | interfaces_info       | [{"subnet_id": > "081e8508-4ceb-4aaf-bf91-36a1e22a768c", "ip_address": "fd12:67:1:1::1", > "port_id": "75391abd-8ac8-41f8-acf8-3dfaf2a6b08f"}] > >                                       | > +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > I'm trying to ping6 the following VM: > > $ openstack server list > +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ > | ID                                   | Name    | Status | Networks >                             | Image  | Flavor | > +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ > | 938854d0-80e9-45b2-bc29-8fe7651ffa93 | manila1 | ACTIVE | > private1=fd12:67:1:1:f816:3eff:fe0e:17c3 | manila | manila | > +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ > > I intend to reach it via br-ex interface of the hypervisor: > > $ ip a show dev br-ex > 9: br-ex: mtu 1500 qdisc noqueue state > UNKNOWN group default qlen 1000 >     link/ether 0e:82:a1:ba:77:4c brd ff:ff:ff:ff:ff:ff >     inet6 fd12:67:1::1/64 scope global >        valid_lft forever preferred_lft forever >     inet6 fe80::c82:a1ff:feba:774c/64 scope link >        valid_lft forever preferred_lft forever > > The hypervisor has the following routes: > > $ ip -6 route > fd12:67:1:1::/64 via fd12:67:1::3c dev br-ex metric 1024 pref medium > fe80::/64 dev ens3 proto kernel metric 256 pref medium > fe80::/64 dev br-ex proto kernel metric 256 pref medium > fe80::/64 dev br-int proto kernel metric 256 pref medium > fe80::/64 dev tapa5cf4799-9f proto kernel metric 256 pref medium > > And within the VM has the following routes: > > root at ubuntu:~# ip -6 route > root at ubuntu:~# ip -6 route > fd12:67:1::/64 via fd12:67:1:1::1 dev ens3 metric 1024 pref medium > fd12:67:1:1::/64 dev ens3 proto kernel metric 256 expires 86360sec pref > medium > fe80::/64 dev ens3 proto kernel metric 256 pref medium > default via fe80::f816:3eff:feb3:bd56 dev ens3 proto ra metric 1024 > expires 260sec hoplimit 64 pref medium > > Though the ping6 from VM to hypervisor doesn't work: > root at 
ubuntu:~# ping6 fd12:67:1::1 -c4 > PING fd12:67:1::1 (fd12:67:1::1): 56 data bytes > --- fd12:67:1::1 ping statistics --- > 4 packets transmitted, 0 packets received, 100% packet loss > > I'm able to tcpdump inside the router1 netns and see that request packet > is passing there, but can't see any reply packets: > > $ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 > tcpdump -l -i any icmp6 > tcpdump: verbose output suppressed, use -v or -vv for full protocol decode > listening on any, link-type LINUX_SLL (Linux cooked), capture size > 262144 bytes > 21:29:29.351358 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: > ICMP6, echo request, seq 0, length 64 > 21:29:30.033316 IP6 fe80::f816:3eff:feb3:bd56 > > fe80::f816:3eff:fe0e:17c3: ICMP6, neighbor solicitation, who has > fe80::f816:3eff:fe0e:17c3, length 32 > 21:29:30.035807 IP6 fe80::f816:3eff:fe0e:17c3 > > fe80::f816:3eff:feb3:bd56: ICMP6, neighbor advertisement, tgt is > fe80::f816:3eff:fe0e:17c3, length 24 > 21:29:30.353646 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: > ICMP6, echo request, seq 1, length 64 > 21:29:31.355410 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: > ICMP6, echo request, seq 2, length 64 > 21:29:32.357239 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: > ICMP6, echo request, seq 3, length 64 > > The same happens from hypervisor to VM. I only acan see the request > packets, but no reply packets. > > Thanks in advance, > Lucio Seki From skaplons at redhat.com Fri Sep 13 14:45:33 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 13 Sep 2019 16:45:33 +0200 Subject: [neutron] DevStack with IPv6 In-Reply-To: <24283fad-a8b6-6672-549e-bd1d27a9747b@gmail.com> References: <24283fad-a8b6-6672-549e-bd1d27a9747b@gmail.com> Message-ID: <9205585D-DE05-4D02-947B-F2248F250004@redhat.com> Hi, > On 13 Sep 2019, at 16:10, Brian Haley wrote: > > On 9/12/19 5:49 PM, Lucio Seki wrote: >> Hi folks, I'm having troubles to ping6 a VM running over DevStack from its hypervisor. >> Could you please help me troubleshooting it? >> I deployed DevStack with NEUTRON_CREATE_INITIAL_NETWORKS=False, > > I think this is your problem. When this is set to True, create_neutron_initial_network() is called, which does a little "hacking" by bringing interfaces up, moving addresses and adding routes so that you can communicate with floating IP and IPv6 addresses. You would have to look at that code and do similar things manually. I agree with Brian. Probably You need to add IP address from same subnet to br-ex interface that Your floating IPs will be reachable via br-ex. That is the way how this is done by Devstack by default IIRC. > > -Brian > > >> and manually created the networks, subnets and router. 
Following is my router: >> $ openstack router show router1 -c external_gateway_info -c interfaces_info >> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> | Field | Value | >> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> | external_gateway_info | {"network_id": "b87048ed-1be9-4f31-8d7e-fe74921aeec4", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "28a00bc3-b30b-456f-b26a-44b50d37183f", "ip_address": "10.2.0.199"}, {"subnet_id": "a9729beb-b297-4fec-8ec3-7703f7f6f4bc", "ip_address": "fd12:67:1::3c"}]} | >> | interfaces_info | [{"subnet_id": "081e8508-4ceb-4aaf-bf91-36a1e22a768c", "ip_address": "fd12:67:1:1::1", "port_id": "75391abd-8ac8-41f8-acf8-3dfaf2a6b08f"}] | >> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> I'm trying to ping6 the following VM: >> $ openstack server list >> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >> | ID | Name | Status | Networks | Image | Flavor | >> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >> | 938854d0-80e9-45b2-bc29-8fe7651ffa93 | manila1 | ACTIVE | private1=fd12:67:1:1:f816:3eff:fe0e:17c3 | manila | manila | >> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >> I intend to reach it via br-ex interface of the hypervisor: >> $ ip a show dev br-ex >> 9: br-ex: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 >> link/ether 0e:82:a1:ba:77:4c brd ff:ff:ff:ff:ff:ff >> inet6 fd12:67:1::1/64 scope global >> valid_lft forever preferred_lft forever >> inet6 fe80::c82:a1ff:feba:774c/64 scope link >> valid_lft forever preferred_lft forever >> The hypervisor has the following routes: >> $ ip -6 route >> fd12:67:1:1::/64 via fd12:67:1::3c dev br-ex metric 1024 pref medium >> fe80::/64 dev ens3 proto kernel metric 256 pref medium >> fe80::/64 dev br-ex proto kernel metric 256 pref medium >> fe80::/64 dev br-int proto kernel metric 256 pref medium >> fe80::/64 dev tapa5cf4799-9f proto kernel metric 256 pref medium >> And within the VM has the following routes: >> root at ubuntu:~# ip -6 route >> root at ubuntu:~# ip -6 route >> fd12:67:1::/64 via fd12:67:1:1::1 dev ens3 metric 1024 pref medium >> fd12:67:1:1::/64 dev ens3 proto kernel metric 256 expires 86360sec pref medium >> fe80::/64 dev ens3 proto kernel metric 256 pref medium >> default via fe80::f816:3eff:feb3:bd56 dev ens3 proto ra metric 1024 expires 260sec hoplimit 64 pref medium >> Though the ping6 from VM to hypervisor doesn't work: >> root at ubuntu:~# ping6 fd12:67:1::1 -c4 >> PING fd12:67:1::1 (fd12:67:1::1): 56 data bytes >> --- fd12:67:1::1 ping statistics --- >> 4 packets transmitted, 0 packets received, 100% packet 
loss >> I'm able to tcpdump inside the router1 netns and see that request packet is passing there, but can't see any reply packets: >> $ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 tcpdump -l -i any icmp6 >> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode >> listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes >> 21:29:29.351358 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, echo request, seq 0, length 64 >> 21:29:30.033316 IP6 fe80::f816:3eff:feb3:bd56 > fe80::f816:3eff:fe0e:17c3: ICMP6, neighbor solicitation, who has fe80::f816:3eff:fe0e:17c3, length 32 >> 21:29:30.035807 IP6 fe80::f816:3eff:fe0e:17c3 > fe80::f816:3eff:feb3:bd56: ICMP6, neighbor advertisement, tgt is fe80::f816:3eff:fe0e:17c3, length 24 >> 21:29:30.353646 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, echo request, seq 1, length 64 >> 21:29:31.355410 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, echo request, seq 2, length 64 >> 21:29:32.357239 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, echo request, seq 3, length 64 >> The same happens from hypervisor to VM. I only acan see the request packets, but no reply packets. >> Thanks in advance, >> Lucio Seki > — Slawek Kaplonski Senior software engineer Red Hat From dmellado at redhat.com Fri Sep 13 14:57:09 2019 From: dmellado at redhat.com (Daniel Mellado) Date: Fri, 13 Sep 2019 16:57:09 +0200 Subject: [release][freezer][karbor][kuryr][magnum][manila][monasca][neutron][senlin][tacker][winstackers] Missing releases for some deliverables In-Reply-To: References: Message-ID: <2ce98e15-4e72-af49-6ef0-03a7539932fa@redhat.com> Hi Thierry, I've put https://review.opendev.org/#/c/682073/ for now, waiting on Michal review. Best! Daniel On 9/12/19 12:13 PM, Thierry Carrez wrote: > Hi everyone, > > Quick reminder that we'll need a release very soon for a number of > deliverables following a cycle-with-intermediary release model but which > have not done *any* release yet in the Train cycle: > > - freezer and freezer-web-ui > - karbor and karbor-dashboard > - kuryr-kubernetes > - magnum-ui > - manila-ui > - monasca-agent, monasca-api, monasca-ceilometer, monasca-events-api, > monasca-log-api, monasca-notification, monasca-persister and > monasca-transform > - networking-hyperv > - neutron-fwaas-dashboard and neutron-vpnaas-dashboard > - senlin-dashboard > - tacker-horizon > > Those should be released ASAP, and in all cases before September 26th, > so that we have a release to include in the final Train release. > > Thanks in advance, > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From rosmaita.fossdev at gmail.com Fri Sep 13 15:36:20 2019 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 13 Sep 2019 11:36:20 -0400 Subject: [dev] [glance] proposal for S3 store driver re-support as galnce_store backend In-Reply-To: References: Message-ID: <4f48c659-d216-31a7-f34f-c09e9d51f31d@gmail.com> On 9/11/19 6:25 AM, Naohiro Sameshima wrote: > Hi all, > > I know that glance_store had supported S3 backend until version > OpenStack Mitaka, > and it has already been removed due to lack of maintainers [1][2]. > > I started refactoring the S3 driver to work with version OpenStack Stein > and recently completed it. > (e.g. 
Add Multi Store Support, Using the latest AWS SDK) > > So, it would be great if glance_store could support the S3 driver again. > > However, I'm not familiar with the procedure for that. > > Would it be possible to discuss this? >From what I've heard, there's a revival of interest in the S3 driver, so it's great that you've decided to work on it. You've missed the Train for this cycle, however, (sorry, I couldn't resist) as the final release for nonclient libraries was last week. The easiest way to discuss getting S3 support into Usurri would be at the weekly Glance meeting on Thursdays at 1400 UTC. You can put an item on the agenda: https://etherpad.openstack.org/p/glance-team-meeting-agenda If that's not good for your time zone, you can continue the discussion with the Glance community on this mailing list. Basically, what will have to happen is you'll propose a spec or spec-lite for glance_store (see [0]; Abhishek can tell you which one he'll prefer). The key issues will be finding a committed maintainer (you?) and a testing strategy. Once that's figured out, it's just a matter of putting up a patch with your code and getting it reviewed and approved. (Just a quick reminder that one way to facilitate getting your code reviewed is to review other people's code.) cheers, brian [0] https://docs.openstack.org/glance/latest/contributor/blueprints.html > Thanks, > > Naohiro > > [1] https://docs.openstack.org/releasenotes/glance/newton.html > [2] https://opendev.org/openstack/glance_store/src/branch/master/releasenotes/notes/remove-s3-driver-f432afa1f53ecdf8.yaml > From lucioseki at gmail.com Fri Sep 13 13:24:36 2019 From: lucioseki at gmail.com (Lucio Seki) Date: Fri, 13 Sep 2019 10:24:36 -0300 Subject: [neutron] DevStack with IPv6 In-Reply-To: References: Message-ID: Hi Donny, following are the rules: $ openstack security group list --project admin +--------------------------------------+---------+------------------------+----------------------------------+------+ | ID | Name | Description | Project | Tags | +--------------------------------------+---------+------------------------+----------------------------------+------+ | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | default | Default security group | 68e3942285a24fb5bd1aed30e166aaee | [] | +--------------------------------------+---------+------------------------+----------------------------------+------+ $ openstack security group rule list d0136b0e-ee51-461c-afa0-c5adb88dd0dd +--------------------------------------+-------------+----------+------------+--------------------------------------+ | ID | IP Protocol | IP Range | Port Range | Remote Security Group | +--------------------------------------+-------------+----------+------------+--------------------------------------+ | 38394345-3e44-4284-a519-cdd8af020f30 | tcp | ::/0 | 22:22 | None | | 40881f76-c87f-4685-b3af-c3497dd44837 | None | None | | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | | 56d4ae52-195e-48df-871e-dc70b899b7ba | None | None | | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | | 759edd06-b698-45ca-94cd-44e0cc2cc848 | ipv6-icmp | None | | None | | 762effae-b8e5-42ac-ba99-e85a7bc42455 | tcp | ::/0 | 22:22 | None | | 81f3588d-4159-4af2-ad50-ff6b76add9cf | ipv6-icmp | None | | None | +--------------------------------------+-------------+----------+------------+--------------------------------------+ $ openstack security group rule show 759edd06-b698-45ca-94cd-44e0cc2cc848 
+-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | created_at | 2019-09-03T16:51:41Z | | description | | | direction | egress | | ether_type | IPv6 | | id | 759edd06-b698-45ca-94cd-44e0cc2cc848 | | location | Munch({'project': Munch({'domain_id': 'default', 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | | name | None | | port_range_max | None | | port_range_min | None | | project_id | 68e3942285a24fb5bd1aed30e166aaee | | protocol | ipv6-icmp | | remote_group_id | None | | remote_ip_prefix | None | | revision_number | 0 | | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | | tags | [] | | updated_at | 2019-09-03T16:51:41Z | +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ $ openstack security group rule show 81f3588d-4159-4af2-ad50-ff6b76add9cf +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | created_at | 2019-09-03T16:51:30Z | | description | | | direction | ingress | | ether_type | IPv6 | | id | 81f3588d-4159-4af2-ad50-ff6b76add9cf | | location | Munch({'project': Munch({'domain_id': 'default', 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | | name | None | | port_range_max | None | | port_range_min | None | | project_id | 68e3942285a24fb5bd1aed30e166aaee | | protocol | ipv6-icmp | | remote_group_id | None | | remote_ip_prefix | None | | revision_number | 0 | | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | | tags | [] | | updated_at | 2019-09-03T16:51:30Z | +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ On Fri, Sep 13, 2019 at 10:16 AM Donny Davis wrote: > Security group rules? > > Donny Davis > c: 805 814 6800 > > On Thu, Sep 12, 2019, 5:53 PM Lucio Seki wrote: > >> Hi folks, I'm having troubles to ping6 a VM running over DevStack from >> its hypervisor. >> Could you please help me troubleshooting it? >> >> I deployed DevStack with NEUTRON_CREATE_INITIAL_NETWORKS=False, >> and manually created the networks, subnets and router. 
Following is my >> router: >> >> $ openstack router show router1 -c external_gateway_info -c >> interfaces_info >> >> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> | Field | Value >> >> >> | >> >> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> | external_gateway_info | {"network_id": >> "b87048ed-1be9-4f31-8d7e-fe74921aeec4", "enable_snat": true, >> "external_fixed_ips": [{"subnet_id": >> "28a00bc3-b30b-456f-b26a-44b50d37183f", "ip_address": "10.2.0.199"}, >> {"subnet_id": "a9729beb-b297-4fec-8ec3-7703f7f6f4bc", "ip_address": >> "fd12:67:1::3c"}]} | >> | interfaces_info | [{"subnet_id": >> "081e8508-4ceb-4aaf-bf91-36a1e22a768c", "ip_address": "fd12:67:1:1::1", >> "port_id": "75391abd-8ac8-41f8-acf8-3dfaf2a6b08f"}] >> >> | >> >> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> >> I'm trying to ping6 the following VM: >> >> $ openstack server list >> >> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >> | ID | Name | Status | Networks >> | Image | Flavor | >> >> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >> | 938854d0-80e9-45b2-bc29-8fe7651ffa93 | manila1 | ACTIVE | >> private1=fd12:67:1:1:f816:3eff:fe0e:17c3 | manila | manila | >> >> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >> >> I intend to reach it via br-ex interface of the hypervisor: >> >> $ ip a show dev br-ex >> 9: br-ex: mtu 1500 qdisc noqueue state >> UNKNOWN group default qlen 1000 >> link/ether 0e:82:a1:ba:77:4c brd ff:ff:ff:ff:ff:ff >> inet6 fd12:67:1::1/64 scope global >> valid_lft forever preferred_lft forever >> inet6 fe80::c82:a1ff:feba:774c/64 scope link >> valid_lft forever preferred_lft forever >> >> The hypervisor has the following routes: >> >> $ ip -6 route >> fd12:67:1:1::/64 via fd12:67:1::3c dev br-ex metric 1024 pref medium >> fe80::/64 dev ens3 proto kernel metric 256 pref medium >> fe80::/64 dev br-ex proto kernel metric 256 pref medium >> fe80::/64 dev br-int proto kernel metric 256 pref medium >> fe80::/64 dev tapa5cf4799-9f proto kernel metric 256 pref medium >> >> And within the VM has the following routes: >> >> root at ubuntu:~# ip -6 route >> root at ubuntu:~# ip -6 route >> fd12:67:1::/64 via fd12:67:1:1::1 dev ens3 metric 1024 pref medium >> fd12:67:1:1::/64 dev ens3 proto kernel metric 256 expires 86360sec pref >> medium >> fe80::/64 dev ens3 proto kernel metric 256 pref medium >> default via fe80::f816:3eff:feb3:bd56 dev ens3 proto ra metric 1024 >> expires 260sec hoplimit 64 pref medium >> >> Though the ping6 from VM to hypervisor doesn't work: >> root at ubuntu:~# ping6 fd12:67:1::1 -c4 >> PING fd12:67:1::1 (fd12:67:1::1): 56 data 
bytes >> --- fd12:67:1::1 ping statistics --- >> 4 packets transmitted, 0 packets received, 100% packet loss >> >> I'm able to tcpdump inside the router1 netns and see that request packet >> is passing there, but can't see any reply packets: >> >> $ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 tcpdump >> -l -i any icmp6 >> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode >> listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 >> bytes >> 21:29:29.351358 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >> ICMP6, echo request, seq 0, length 64 >> 21:29:30.033316 IP6 fe80::f816:3eff:feb3:bd56 > >> fe80::f816:3eff:fe0e:17c3: ICMP6, neighbor solicitation, who has >> fe80::f816:3eff:fe0e:17c3, length 32 >> 21:29:30.035807 IP6 fe80::f816:3eff:fe0e:17c3 > >> fe80::f816:3eff:feb3:bd56: ICMP6, neighbor advertisement, tgt is >> fe80::f816:3eff:fe0e:17c3, length 24 >> 21:29:30.353646 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >> ICMP6, echo request, seq 1, length 64 >> 21:29:31.355410 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >> ICMP6, echo request, seq 2, length 64 >> 21:29:32.357239 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >> ICMP6, echo request, seq 3, length 64 >> >> The same happens from hypervisor to VM. I only acan see the request >> packets, but no reply packets. >> >> Thanks in advance, >> Lucio Seki >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Fri Sep 13 17:22:17 2019 From: donny at fortnebula.com (Donny Davis) Date: Fri, 13 Sep 2019 13:22:17 -0400 Subject: [neutron] DevStack with IPv6 In-Reply-To: References: Message-ID: Well here is the output from my rule list that is in prod right now with ipv6 +--------------------------------------+-------------+-----------+------------+-----------------------+ | ID | IP Protocol | IP Range | Port Range | Remote Security Group | +--------------------------------------+-------------+-----------+------------+-----------------------+ | 9ab00b6f-2bc2-4554-818d-eff6e0570943 | None | 0.0.0.0/0 | | None | | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa | icmp | ::/0 | | None | | e7fd4840-5fbd-4709-b918-f80eac5cb6da | None | ::/0 | | None | | e9968d53-7efe-4a9e-ad42-1092ffaf52e7 | None | None | | None | | ec1ea961-9025-4229-92cf-618026a1851b | None | None | | None | +--------------------------------------+-------------+-----------+------------+-----------------------+ +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | created_at | 2019-07-30T00:50:25Z | | description | | | direction | ingress | | ether_type | IPv6 | | id | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa | | location | Munch({'cloud': '', 'region_name': 'regionOne', 'zone': None, 'project': Munch({'id': 'e8fd161dc34c421a979a9e6421f823e9', 'name': 'openstackzuul', 'domain_id': None, 'domain_name': 'Default'})}) | | name | None | | port_range_max | None | | port_range_min | None | | project_id | e8fd161dc34c421a979a9e6421f823e9 | | protocol | icmp | | remote_group_id | None | | remote_ip_prefix | ::/0 | | revision_number | 0 | | security_group_id | 
bcedc0e0-e2e8-41fc-aeaa-afd2e10c7ab6 | | tags | [] | | updated_at | 2019-07-30T00:50:25Z | +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ On Fri, Sep 13, 2019 at 9:24 AM Lucio Seki wrote: > Hi Donny, following are the rules: > > $ openstack security group list --project admin > > +--------------------------------------+---------+------------------------+----------------------------------+------+ > | ID | Name | Description > | Project | Tags | > > +--------------------------------------+---------+------------------------+----------------------------------+------+ > | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | default | Default security group > | 68e3942285a24fb5bd1aed30e166aaee | [] | > > +--------------------------------------+---------+------------------------+----------------------------------+------+ > > $ openstack security group rule list d0136b0e-ee51-461c-afa0-c5adb88dd0dd > > +--------------------------------------+-------------+----------+------------+--------------------------------------+ > | ID | IP Protocol | IP Range | Port > Range | Remote Security Group | > > +--------------------------------------+-------------+----------+------------+--------------------------------------+ > | 38394345-3e44-4284-a519-cdd8af020f30 | tcp | ::/0 | 22:22 > | None | > | 40881f76-c87f-4685-b3af-c3497dd44837 | None | None | > | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | > | 56d4ae52-195e-48df-871e-dc70b899b7ba | None | None | > | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | > | 759edd06-b698-45ca-94cd-44e0cc2cc848 | ipv6-icmp | None | > | None | > | 762effae-b8e5-42ac-ba99-e85a7bc42455 | tcp | ::/0 | 22:22 > | None | > | 81f3588d-4159-4af2-ad50-ff6b76add9cf | ipv6-icmp | None | > | None | > > +--------------------------------------+-------------+----------+------------+--------------------------------------+ > > $ openstack security group rule show 759edd06-b698-45ca-94cd-44e0cc2cc848 > > +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | Field | Value > > | > > +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | created_at | 2019-09-03T16:51:41Z > > | > | description | > > | > | direction | egress > > | > | ether_type | IPv6 > > | > | id | 759edd06-b698-45ca-94cd-44e0cc2cc848 > > | > | location | Munch({'project': Munch({'domain_id': 'default', > 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': > None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | > | name | None > > | > | port_range_max | None > > | > | port_range_min | None > > | > | project_id | 68e3942285a24fb5bd1aed30e166aaee > > | > | protocol | ipv6-icmp > > | > | remote_group_id | None > > | > | remote_ip_prefix | None > > | > | revision_number | 0 > > | > | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd > > | > | tags | [] > > | > | updated_at | 2019-09-03T16:51:41Z > > | > > +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > $ openstack security 
group rule show 81f3588d-4159-4af2-ad50-ff6b76add9cf > > +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | Field | Value > > | > > +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | created_at | 2019-09-03T16:51:30Z > > | > | description | > > | > | direction | ingress > > | > | ether_type | IPv6 > > | > | id | 81f3588d-4159-4af2-ad50-ff6b76add9cf > > | > | location | Munch({'project': Munch({'domain_id': 'default', > 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': > None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | > | name | None > > | > | port_range_max | None > > | > | port_range_min | None > > | > | project_id | 68e3942285a24fb5bd1aed30e166aaee > > | > | protocol | ipv6-icmp > > | > | remote_group_id | None > > | > | remote_ip_prefix | None > > | > | revision_number | 0 > > | > | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd > > | > | tags | [] > > | > | updated_at | 2019-09-03T16:51:30Z > > | > > +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > > On Fri, Sep 13, 2019 at 10:16 AM Donny Davis wrote: > >> Security group rules? >> >> Donny Davis >> c: 805 814 6800 >> >> On Thu, Sep 12, 2019, 5:53 PM Lucio Seki wrote: >> >>> Hi folks, I'm having troubles to ping6 a VM running over DevStack from >>> its hypervisor. >>> Could you please help me troubleshooting it? >>> >>> I deployed DevStack with NEUTRON_CREATE_INITIAL_NETWORKS=False, >>> and manually created the networks, subnets and router. 
Following is my >>> router: >>> >>> $ openstack router show router1 -c external_gateway_info -c >>> interfaces_info >>> >>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> | Field | Value >>> >>> >>> | >>> >>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> | external_gateway_info | {"network_id": >>> "b87048ed-1be9-4f31-8d7e-fe74921aeec4", "enable_snat": true, >>> "external_fixed_ips": [{"subnet_id": >>> "28a00bc3-b30b-456f-b26a-44b50d37183f", "ip_address": "10.2.0.199"}, >>> {"subnet_id": "a9729beb-b297-4fec-8ec3-7703f7f6f4bc", "ip_address": >>> "fd12:67:1::3c"}]} | >>> | interfaces_info | [{"subnet_id": >>> "081e8508-4ceb-4aaf-bf91-36a1e22a768c", "ip_address": "fd12:67:1:1::1", >>> "port_id": "75391abd-8ac8-41f8-acf8-3dfaf2a6b08f"}] >>> >>> | >>> >>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> >>> I'm trying to ping6 the following VM: >>> >>> $ openstack server list >>> >>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>> | ID | Name | Status | Networks >>> | Image | Flavor | >>> >>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>> | 938854d0-80e9-45b2-bc29-8fe7651ffa93 | manila1 | ACTIVE | >>> private1=fd12:67:1:1:f816:3eff:fe0e:17c3 | manila | manila | >>> >>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>> >>> I intend to reach it via br-ex interface of the hypervisor: >>> >>> $ ip a show dev br-ex >>> 9: br-ex: mtu 1500 qdisc noqueue state >>> UNKNOWN group default qlen 1000 >>> link/ether 0e:82:a1:ba:77:4c brd ff:ff:ff:ff:ff:ff >>> inet6 fd12:67:1::1/64 scope global >>> valid_lft forever preferred_lft forever >>> inet6 fe80::c82:a1ff:feba:774c/64 scope link >>> valid_lft forever preferred_lft forever >>> >>> The hypervisor has the following routes: >>> >>> $ ip -6 route >>> fd12:67:1:1::/64 via fd12:67:1::3c dev br-ex metric 1024 pref medium >>> fe80::/64 dev ens3 proto kernel metric 256 pref medium >>> fe80::/64 dev br-ex proto kernel metric 256 pref medium >>> fe80::/64 dev br-int proto kernel metric 256 pref medium >>> fe80::/64 dev tapa5cf4799-9f proto kernel metric 256 pref medium >>> >>> And within the VM has the following routes: >>> >>> root at ubuntu:~# ip -6 route >>> root at ubuntu:~# ip -6 route >>> fd12:67:1::/64 via fd12:67:1:1::1 dev ens3 metric 1024 pref medium >>> fd12:67:1:1::/64 dev ens3 proto kernel metric 256 expires 86360sec pref >>> medium >>> fe80::/64 dev ens3 proto kernel metric 256 pref medium >>> default via fe80::f816:3eff:feb3:bd56 dev ens3 proto ra metric 1024 >>> expires 260sec hoplimit 64 pref medium >>> >>> Though the ping6 from VM to hypervisor doesn't work: >>> root at 
ubuntu:~# ping6 fd12:67:1::1 -c4 >>> PING fd12:67:1::1 (fd12:67:1::1): 56 data bytes >>> --- fd12:67:1::1 ping statistics --- >>> 4 packets transmitted, 0 packets received, 100% packet loss >>> >>> I'm able to tcpdump inside the router1 netns and see that request packet >>> is passing there, but can't see any reply packets: >>> >>> $ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 >>> tcpdump -l -i any icmp6 >>> tcpdump: verbose output suppressed, use -v or -vv for full protocol >>> decode >>> listening on any, link-type LINUX_SLL (Linux cooked), capture size >>> 262144 bytes >>> 21:29:29.351358 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>> ICMP6, echo request, seq 0, length 64 >>> 21:29:30.033316 IP6 fe80::f816:3eff:feb3:bd56 > >>> fe80::f816:3eff:fe0e:17c3: ICMP6, neighbor solicitation, who has >>> fe80::f816:3eff:fe0e:17c3, length 32 >>> 21:29:30.035807 IP6 fe80::f816:3eff:fe0e:17c3 > >>> fe80::f816:3eff:feb3:bd56: ICMP6, neighbor advertisement, tgt is >>> fe80::f816:3eff:fe0e:17c3, length 24 >>> 21:29:30.353646 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>> ICMP6, echo request, seq 1, length 64 >>> 21:29:31.355410 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>> ICMP6, echo request, seq 2, length 64 >>> 21:29:32.357239 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>> ICMP6, echo request, seq 3, length 64 >>> >>> The same happens from hypervisor to VM. I only acan see the request >>> packets, but no reply packets. >>> >>> Thanks in advance, >>> Lucio Seki >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Fri Sep 13 17:24:00 2019 From: donny at fortnebula.com (Donny Davis) Date: Fri, 13 Sep 2019 13:24:00 -0400 Subject: [neutron] DevStack with IPv6 In-Reply-To: References: Message-ID: Also I have no v6 address on my br-ex On Fri, Sep 13, 2019 at 1:22 PM Donny Davis wrote: > Well here is the output from my rule list that is in prod right now with > ipv6 > > +--------------------------------------+-------------+-----------+------------+-----------------------+ > | ID | IP Protocol | IP Range | Port > Range | Remote Security Group | > > +--------------------------------------+-------------+-----------+------------+-----------------------+ > | 9ab00b6f-2bc2-4554-818d-eff6e0570943 | None | 0.0.0.0/0 | > | None | > | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa | icmp | ::/0 | > | None | > | e7fd4840-5fbd-4709-b918-f80eac5cb6da | None | ::/0 | > | None | > | e9968d53-7efe-4a9e-ad42-1092ffaf52e7 | None | None | > | None | > | ec1ea961-9025-4229-92cf-618026a1851b | None | None | > | None | > > +--------------------------------------+-------------+-----------+------------+-----------------------+ > > > +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | Field | Value > > | > > +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | created_at | 2019-07-30T00:50:25Z > > | > | description | > > | > | direction | ingress > > | > | ether_type | IPv6 > > | > | id | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa > > | > | location | Munch({'cloud': '', 'region_name': 'regionOne', > 'zone': None, 'project': Munch({'id': 'e8fd161dc34c421a979a9e6421f823e9', > 'name': 
'openstackzuul', 'domain_id': None, 'domain_name': 'Default'})}) | > | name | None > > | > | port_range_max | None > > | > | port_range_min | None > > | > | project_id | e8fd161dc34c421a979a9e6421f823e9 > > | > | protocol | icmp > > | > | remote_group_id | None > > | > | remote_ip_prefix | ::/0 > > | > | revision_number | 0 > > | > | security_group_id | bcedc0e0-e2e8-41fc-aeaa-afd2e10c7ab6 > > | > | tags | [] > > | > | updated_at | 2019-07-30T00:50:25Z > > | > > +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > > > > > On Fri, Sep 13, 2019 at 9:24 AM Lucio Seki wrote: > >> Hi Donny, following are the rules: >> >> $ openstack security group list --project admin >> >> +--------------------------------------+---------+------------------------+----------------------------------+------+ >> | ID | Name | Description >> | Project | Tags | >> >> +--------------------------------------+---------+------------------------+----------------------------------+------+ >> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | default | Default security group >> | 68e3942285a24fb5bd1aed30e166aaee | [] | >> >> +--------------------------------------+---------+------------------------+----------------------------------+------+ >> >> $ openstack security group rule list d0136b0e-ee51-461c-afa0-c5adb88dd0dd >> >> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >> | ID | IP Protocol | IP Range | Port >> Range | Remote Security Group | >> >> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >> | 38394345-3e44-4284-a519-cdd8af020f30 | tcp | ::/0 | 22:22 >> | None | >> | 40881f76-c87f-4685-b3af-c3497dd44837 | None | None | >> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | >> | 56d4ae52-195e-48df-871e-dc70b899b7ba | None | None | >> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | >> | 759edd06-b698-45ca-94cd-44e0cc2cc848 | ipv6-icmp | None | >> | None | >> | 762effae-b8e5-42ac-ba99-e85a7bc42455 | tcp | ::/0 | 22:22 >> | None | >> | 81f3588d-4159-4af2-ad50-ff6b76add9cf | ipv6-icmp | None | >> | None | >> >> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >> >> $ openstack security group rule show 759edd06-b698-45ca-94cd-44e0cc2cc848 >> >> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> | Field | Value >> >> | >> >> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> | created_at | 2019-09-03T16:51:41Z >> >> | >> | description | >> >> | >> | direction | egress >> >> | >> | ether_type | IPv6 >> >> | >> | id | 759edd06-b698-45ca-94cd-44e0cc2cc848 >> >> | >> | location | Munch({'project': Munch({'domain_id': 'default', >> 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': >> None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | >> | name | None >> >> | >> | port_range_max | None >> >> | >> | port_range_min | None >> >> | >> | project_id | 68e3942285a24fb5bd1aed30e166aaee >> >> | >> | protocol | ipv6-icmp >> >> | 
>> | remote_group_id | None >> >> | >> | remote_ip_prefix | None >> >> | >> | revision_number | 0 >> >> | >> | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd >> >> | >> | tags | [] >> >> | >> | updated_at | 2019-09-03T16:51:41Z >> >> | >> >> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> >> $ openstack security group rule show 81f3588d-4159-4af2-ad50-ff6b76add9cf >> >> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> | Field | Value >> >> | >> >> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> | created_at | 2019-09-03T16:51:30Z >> >> | >> | description | >> >> | >> | direction | ingress >> >> | >> | ether_type | IPv6 >> >> | >> | id | 81f3588d-4159-4af2-ad50-ff6b76add9cf >> >> | >> | location | Munch({'project': Munch({'domain_id': 'default', >> 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': >> None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | >> | name | None >> >> | >> | port_range_max | None >> >> | >> | port_range_min | None >> >> | >> | project_id | 68e3942285a24fb5bd1aed30e166aaee >> >> | >> | protocol | ipv6-icmp >> >> | >> | remote_group_id | None >> >> | >> | remote_ip_prefix | None >> >> | >> | revision_number | 0 >> >> | >> | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd >> >> | >> | tags | [] >> >> | >> | updated_at | 2019-09-03T16:51:30Z >> >> | >> >> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> >> >> On Fri, Sep 13, 2019 at 10:16 AM Donny Davis >> wrote: >> >>> Security group rules? >>> >>> Donny Davis >>> c: 805 814 6800 >>> >>> On Thu, Sep 12, 2019, 5:53 PM Lucio Seki wrote: >>> >>>> Hi folks, I'm having troubles to ping6 a VM running over DevStack from >>>> its hypervisor. >>>> Could you please help me troubleshooting it? >>>> >>>> I deployed DevStack with NEUTRON_CREATE_INITIAL_NETWORKS=False, >>>> and manually created the networks, subnets and router. 
Following is my >>>> router: >>>> >>>> $ openstack router show router1 -c external_gateway_info -c >>>> interfaces_info >>>> >>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> | Field | Value >>>> >>>> >>>> | >>>> >>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> | external_gateway_info | {"network_id": >>>> "b87048ed-1be9-4f31-8d7e-fe74921aeec4", "enable_snat": true, >>>> "external_fixed_ips": [{"subnet_id": >>>> "28a00bc3-b30b-456f-b26a-44b50d37183f", "ip_address": "10.2.0.199"}, >>>> {"subnet_id": "a9729beb-b297-4fec-8ec3-7703f7f6f4bc", "ip_address": >>>> "fd12:67:1::3c"}]} | >>>> | interfaces_info | [{"subnet_id": >>>> "081e8508-4ceb-4aaf-bf91-36a1e22a768c", "ip_address": "fd12:67:1:1::1", >>>> "port_id": "75391abd-8ac8-41f8-acf8-3dfaf2a6b08f"}] >>>> >>>> | >>>> >>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> >>>> I'm trying to ping6 the following VM: >>>> >>>> $ openstack server list >>>> >>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>> | ID | Name | Status | Networks >>>> | Image | Flavor | >>>> >>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>> | 938854d0-80e9-45b2-bc29-8fe7651ffa93 | manila1 | ACTIVE | >>>> private1=fd12:67:1:1:f816:3eff:fe0e:17c3 | manila | manila | >>>> >>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>> >>>> I intend to reach it via br-ex interface of the hypervisor: >>>> >>>> $ ip a show dev br-ex >>>> 9: br-ex: mtu 1500 qdisc noqueue >>>> state UNKNOWN group default qlen 1000 >>>> link/ether 0e:82:a1:ba:77:4c brd ff:ff:ff:ff:ff:ff >>>> inet6 fd12:67:1::1/64 scope global >>>> valid_lft forever preferred_lft forever >>>> inet6 fe80::c82:a1ff:feba:774c/64 scope link >>>> valid_lft forever preferred_lft forever >>>> >>>> The hypervisor has the following routes: >>>> >>>> $ ip -6 route >>>> fd12:67:1:1::/64 via fd12:67:1::3c dev br-ex metric 1024 pref medium >>>> fe80::/64 dev ens3 proto kernel metric 256 pref medium >>>> fe80::/64 dev br-ex proto kernel metric 256 pref medium >>>> fe80::/64 dev br-int proto kernel metric 256 pref medium >>>> fe80::/64 dev tapa5cf4799-9f proto kernel metric 256 pref medium >>>> >>>> And within the VM has the following routes: >>>> >>>> root at ubuntu:~# ip -6 route >>>> root at ubuntu:~# ip -6 route >>>> fd12:67:1::/64 via fd12:67:1:1::1 dev ens3 metric 1024 pref medium >>>> fd12:67:1:1::/64 dev ens3 proto kernel metric 256 expires 86360sec pref >>>> medium >>>> fe80::/64 dev ens3 proto kernel metric 256 pref medium >>>> default via fe80::f816:3eff:feb3:bd56 dev ens3 proto ra metric 1024 >>>> expires 260sec hoplimit 64 pref medium >>>> 
>>>> Though the ping6 from VM to hypervisor doesn't work: >>>> root at ubuntu:~# ping6 fd12:67:1::1 -c4 >>>> PING fd12:67:1::1 (fd12:67:1::1): 56 data bytes >>>> --- fd12:67:1::1 ping statistics --- >>>> 4 packets transmitted, 0 packets received, 100% packet loss >>>> >>>> I'm able to tcpdump inside the router1 netns and see that request >>>> packet is passing there, but can't see any reply packets: >>>> >>>> $ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 >>>> tcpdump -l -i any icmp6 >>>> tcpdump: verbose output suppressed, use -v or -vv for full protocol >>>> decode >>>> listening on any, link-type LINUX_SLL (Linux cooked), capture size >>>> 262144 bytes >>>> 21:29:29.351358 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>> ICMP6, echo request, seq 0, length 64 >>>> 21:29:30.033316 IP6 fe80::f816:3eff:feb3:bd56 > >>>> fe80::f816:3eff:fe0e:17c3: ICMP6, neighbor solicitation, who has >>>> fe80::f816:3eff:fe0e:17c3, length 32 >>>> 21:29:30.035807 IP6 fe80::f816:3eff:fe0e:17c3 > >>>> fe80::f816:3eff:feb3:bd56: ICMP6, neighbor advertisement, tgt is >>>> fe80::f816:3eff:fe0e:17c3, length 24 >>>> 21:29:30.353646 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>> ICMP6, echo request, seq 1, length 64 >>>> 21:29:31.355410 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>> ICMP6, echo request, seq 2, length 64 >>>> 21:29:32.357239 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>> ICMP6, echo request, seq 3, length 64 >>>> >>>> The same happens from hypervisor to VM. I only acan see the request >>>> packets, but no reply packets. >>>> >>>> Thanks in advance, >>>> Lucio Seki >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Fri Sep 13 19:03:04 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 13 Sep 2019 12:03:04 -0700 Subject: Long, Slow Zuul Queues and Why They Happen Message-ID: <7fb77bf6-9c1d-4bba-87a6-41235e113009@www.fastmail.com> Hello, We've been fielding a fair bit of questions and suggestions around Zuul's long change (and job) queues over the last week or so. As a result I tried to put a quick FAQ type document [0] on how we schedule jobs, why we schedule that way, and how we can improve the long queues. Hoping that gives us all a better understanding of why were are in the current situation and ideas on how we can help to improve things. [0] https://docs.openstack.org/infra/manual/testing.html#why-are-jobs-for-changes-queued-for-a-long-time Thanks, Clark From mriedemos at gmail.com Fri Sep 13 19:44:19 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 13 Sep 2019 14:44:19 -0500 Subject: Long, Slow Zuul Queues and Why They Happen In-Reply-To: <7fb77bf6-9c1d-4bba-87a6-41235e113009@www.fastmail.com> References: <7fb77bf6-9c1d-4bba-87a6-41235e113009@www.fastmail.com> Message-ID: <9aaf8782-92d1-dae7-c3b1-1a1d720bdd7f@gmail.com> On 9/13/2019 2:03 PM, Clark Boylan wrote: > We've been fielding a fair bit of questions and suggestions around Zuul's long change (and job) queues over the last week or so. As a result I tried to put a quick FAQ type document [0] on how we schedule jobs, why we schedule that way, and how we can improve the long queues. > > Hoping that gives us all a better understanding of why were are in the current situation and ideas on how we can help to improve things. > > [0]https://docs.openstack.org/infra/manual/testing.html#why-are-jobs-for-changes-queued-for-a-long-time Thanks for writing this up Clark. 
As for the current status of the gate, several nova devs have been closely monitoring the gate since we have 3 fairly lengthy series of feature changes approved since yesterday and we're trying to shepherd those through but we're seeing failures and trying to react to them. Two issues of note this week: 1. http://status.openstack.org/elastic-recheck/index.html#1843615 I had pushed a fix for that one earlier in the week but there was a bug in my fix which Takashi has fixed: https://review.opendev.org/#/c/682025/ That was promoted to the gate earlier today but failed on... 2. http://status.openstack.org/elastic-recheck/index.html#1813147 We have a couple of patches up for that now which might get promoted once we are reasonably sure those are going to pass check (promote to gate means skipping check which is risky because if it fails in the gate we have to re-queue the gate as the doc above explains). As far as overall failure classifications we're pretty good there in elastic-recheck: http://status.openstack.org/elastic-recheck/data/integrated_gate.html Meaning for the most part we know what's failing, we just need to fix the bugs. One that continues to dog us (and by "us" I mean OpenStack, not just nova) is this one: http://status.openstack.org/elastic-recheck/gate.html#1686542 The QA team's work to split apart the big tempest full jobs into service-oriented jobs like tempest-integrated-compute should have helped here but we're still seeing there are lots of jobs timing out which likely means there are some really slow tests running in too many jobs and those require investigation. It could also be devstack setup that is taking a long time like Clark identified with OSC usage awhile back: http://lists.openstack.org/pipermail/openstack-discuss/2019-July/008071.html If you have questions about how elastic-recheck works or how to help investigate some of these failures, like with using logstash.openstack.org, please reach out to me (mriedem), clarkb and/or gmann in #openstack-qa. -- Thanks, Matt From lucioseki at gmail.com Fri Sep 13 18:48:32 2019 From: lucioseki at gmail.com (Lucio Seki) Date: Fri, 13 Sep 2019 15:48:32 -0300 Subject: [neutron] DevStack with IPv6 In-Reply-To: References: Message-ID: Hmm OK, I'll try to figure out what hacking create_neutron_initial_network does... BTW, I noticed that I can ping6 the router interface at private subnet from the DevStack host: $ ping6 fd12:67:1:1::1 PING fd12:67:1:1::1(fd12:67:1:1::1) 56 data bytes 64 bytes from fd12:67:1:1::1: icmp_seq=1 ttl=64 time=0.646 ms 64 bytes from fd12:67:1:1::1: icmp_seq=2 ttl=64 time=0.095 ms 64 bytes from fd12:67:1:1::1: icmp_seq=3 ttl=64 time=0.106 ms 64 bytes from fd12:67:1:1::1: icmp_seq=4 ttl=64 time=0.129 ms And also I can ping6 the public subnet interface from the VM: root at ubuntu:~# ping6 fd12:67:1::3c PING fd12:67:1::3c (fd12:67:1::3c): 56 data bytes ping: getnameinfo: Temporary failure in name resolution 64 bytes from unknown: icmp_seq=0 ttl=64 time=2.079 ms ping: getnameinfo: Temporary failure in name resolution 64 bytes from unknown: icmp_seq=1 ttl=64 time=1.385 ms ping: getnameinfo: Temporary failure in name resolution 64 bytes from unknown: icmp_seq=2 ttl=64 time=0.881 ms Not sure if it means that there's something missing within the router itself... 
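To narrow down where the echo reply actually gets dropped, I guess the next
step is to poke around inside the router namespace itself. A rough plan (the
qrouter netns name is the same one I used for tcpdump earlier; qr-XXXX is a
placeholder for the router's internal interface, to be read from ip a inside
the namespace first):

$ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 ip a
$ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 ip -6 route
$ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 sysctl net.ipv6.conf.all.forwarding
$ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 ip6tables -S
# watch one leg at a time while pinging, instead of -i any, to see on
# which interface the reply disappears
$ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 tcpdump -ni qr-XXXX icmp6
$ sudo tcpdump -ni br-ex icmp6
# and check neighbour discovery state on the hypervisor side
$ ip -6 neigh show dev br-ex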
On Fri, Sep 13, 2019 at 2:24 PM Donny Davis wrote: > Also I have no v6 address on my br-ex > > On Fri, Sep 13, 2019 at 1:22 PM Donny Davis wrote: > >> Well here is the output from my rule list that is in prod right now with >> ipv6 >> >> +--------------------------------------+-------------+-----------+------------+-----------------------+ >> | ID | IP Protocol | IP Range | Port >> Range | Remote Security Group | >> >> +--------------------------------------+-------------+-----------+------------+-----------------------+ >> | 9ab00b6f-2bc2-4554-818d-eff6e0570943 | None | 0.0.0.0/0 | >> | None | >> | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa | icmp | ::/0 | >> | None | >> | e7fd4840-5fbd-4709-b918-f80eac5cb6da | None | ::/0 | >> | None | >> | e9968d53-7efe-4a9e-ad42-1092ffaf52e7 | None | None | >> | None | >> | ec1ea961-9025-4229-92cf-618026a1851b | None | None | >> | None | >> >> +--------------------------------------+-------------+-----------+------------+-----------------------+ >> >> >> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> | Field | Value >> >> | >> >> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> | created_at | 2019-07-30T00:50:25Z >> >> | >> | description | >> >> | >> | direction | ingress >> >> | >> | ether_type | IPv6 >> >> | >> | id | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa >> >> | >> | location | Munch({'cloud': '', 'region_name': 'regionOne', >> 'zone': None, 'project': Munch({'id': 'e8fd161dc34c421a979a9e6421f823e9', >> 'name': 'openstackzuul', 'domain_id': None, 'domain_name': 'Default'})}) | >> | name | None >> >> | >> | port_range_max | None >> >> | >> | port_range_min | None >> >> | >> | project_id | e8fd161dc34c421a979a9e6421f823e9 >> >> | >> | protocol | icmp >> >> | >> | remote_group_id | None >> >> | >> | remote_ip_prefix | ::/0 >> >> | >> | revision_number | 0 >> >> | >> | security_group_id | bcedc0e0-e2e8-41fc-aeaa-afd2e10c7ab6 >> >> | >> | tags | [] >> >> | >> | updated_at | 2019-07-30T00:50:25Z >> >> | >> >> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> >> >> >> >> >> On Fri, Sep 13, 2019 at 9:24 AM Lucio Seki wrote: >> >>> Hi Donny, following are the rules: >>> >>> $ openstack security group list --project admin >>> >>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>> | ID | Name | Description >>> | Project | Tags | >>> >>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | default | Default security >>> group | 68e3942285a24fb5bd1aed30e166aaee | [] | >>> >>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>> >>> $ openstack security group rule list d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>> >>> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >>> | ID | IP Protocol | IP Range | Port >>> Range | Remote Security Group | >>> >>> 
+--------------------------------------+-------------+----------+------------+--------------------------------------+ >>> | 38394345-3e44-4284-a519-cdd8af020f30 | tcp | ::/0 | 22:22 >>> | None | >>> | 40881f76-c87f-4685-b3af-c3497dd44837 | None | None | >>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | >>> | 56d4ae52-195e-48df-871e-dc70b899b7ba | None | None | >>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | >>> | 759edd06-b698-45ca-94cd-44e0cc2cc848 | ipv6-icmp | None | >>> | None | >>> | 762effae-b8e5-42ac-ba99-e85a7bc42455 | tcp | ::/0 | 22:22 >>> | None | >>> | 81f3588d-4159-4af2-ad50-ff6b76add9cf | ipv6-icmp | None | >>> | None | >>> >>> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >>> >>> $ openstack security group rule show >>> 759edd06-b698-45ca-94cd-44e0cc2cc848 >>> >>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> | Field | Value >>> >>> | >>> >>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> | created_at | 2019-09-03T16:51:41Z >>> >>> | >>> | description | >>> >>> | >>> | direction | egress >>> >>> | >>> | ether_type | IPv6 >>> >>> | >>> | id | 759edd06-b698-45ca-94cd-44e0cc2cc848 >>> >>> | >>> | location | Munch({'project': Munch({'domain_id': 'default', >>> 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': >>> None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | >>> | name | None >>> >>> | >>> | port_range_max | None >>> >>> | >>> | port_range_min | None >>> >>> | >>> | project_id | 68e3942285a24fb5bd1aed30e166aaee >>> >>> | >>> | protocol | ipv6-icmp >>> >>> | >>> | remote_group_id | None >>> >>> | >>> | remote_ip_prefix | None >>> >>> | >>> | revision_number | 0 >>> >>> | >>> | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>> >>> | >>> | tags | [] >>> >>> | >>> | updated_at | 2019-09-03T16:51:41Z >>> >>> | >>> >>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> >>> $ openstack security group rule show 81f3588d-4159-4af2-ad50-ff6b76add9cf >>> >>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> | Field | Value >>> >>> | >>> >>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> | created_at | 2019-09-03T16:51:30Z >>> >>> | >>> | description | >>> >>> | >>> | direction | ingress >>> >>> | >>> | ether_type | IPv6 >>> >>> | >>> | id | 81f3588d-4159-4af2-ad50-ff6b76add9cf >>> >>> | >>> | location | Munch({'project': Munch({'domain_id': 'default', >>> 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': >>> None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | >>> | name | None >>> >>> | >>> | port_range_max | None >>> >>> | >>> | port_range_min | None >>> >>> | >>> | project_id | 
68e3942285a24fb5bd1aed30e166aaee >>> >>> | >>> | protocol | ipv6-icmp >>> >>> | >>> | remote_group_id | None >>> >>> | >>> | remote_ip_prefix | None >>> >>> | >>> | revision_number | 0 >>> >>> | >>> | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>> >>> | >>> | tags | [] >>> >>> | >>> | updated_at | 2019-09-03T16:51:30Z >>> >>> | >>> >>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> >>> >>> On Fri, Sep 13, 2019 at 10:16 AM Donny Davis >>> wrote: >>> >>>> Security group rules? >>>> >>>> Donny Davis >>>> c: 805 814 6800 >>>> >>>> On Thu, Sep 12, 2019, 5:53 PM Lucio Seki wrote: >>>> >>>>> Hi folks, I'm having troubles to ping6 a VM running over DevStack from >>>>> its hypervisor. >>>>> Could you please help me troubleshooting it? >>>>> >>>>> I deployed DevStack with NEUTRON_CREATE_INITIAL_NETWORKS=False, >>>>> and manually created the networks, subnets and router. Following is my >>>>> router: >>>>> >>>>> $ openstack router show router1 -c external_gateway_info -c >>>>> interfaces_info >>>>> >>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> | Field | Value >>>>> >>>>> >>>>> | >>>>> >>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> | external_gateway_info | {"network_id": >>>>> "b87048ed-1be9-4f31-8d7e-fe74921aeec4", "enable_snat": true, >>>>> "external_fixed_ips": [{"subnet_id": >>>>> "28a00bc3-b30b-456f-b26a-44b50d37183f", "ip_address": "10.2.0.199"}, >>>>> {"subnet_id": "a9729beb-b297-4fec-8ec3-7703f7f6f4bc", "ip_address": >>>>> "fd12:67:1::3c"}]} | >>>>> | interfaces_info | [{"subnet_id": >>>>> "081e8508-4ceb-4aaf-bf91-36a1e22a768c", "ip_address": "fd12:67:1:1::1", >>>>> "port_id": "75391abd-8ac8-41f8-acf8-3dfaf2a6b08f"}] >>>>> >>>>> | >>>>> >>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> >>>>> I'm trying to ping6 the following VM: >>>>> >>>>> $ openstack server list >>>>> >>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>> | ID | Name | Status | Networks >>>>> | Image | Flavor | >>>>> >>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>> | 938854d0-80e9-45b2-bc29-8fe7651ffa93 | manila1 | ACTIVE | >>>>> private1=fd12:67:1:1:f816:3eff:fe0e:17c3 | manila | manila | >>>>> >>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>> >>>>> I intend to reach it via br-ex interface of the hypervisor: >>>>> >>>>> $ ip a show dev br-ex >>>>> 9: br-ex: mtu 1500 qdisc noqueue >>>>> state UNKNOWN group default qlen 1000 >>>>> 
link/ether 0e:82:a1:ba:77:4c brd ff:ff:ff:ff:ff:ff >>>>> inet6 fd12:67:1::1/64 scope global >>>>> valid_lft forever preferred_lft forever >>>>> inet6 fe80::c82:a1ff:feba:774c/64 scope link >>>>> valid_lft forever preferred_lft forever >>>>> >>>>> The hypervisor has the following routes: >>>>> >>>>> $ ip -6 route >>>>> fd12:67:1:1::/64 via fd12:67:1::3c dev br-ex metric 1024 pref medium >>>>> fe80::/64 dev ens3 proto kernel metric 256 pref medium >>>>> fe80::/64 dev br-ex proto kernel metric 256 pref medium >>>>> fe80::/64 dev br-int proto kernel metric 256 pref medium >>>>> fe80::/64 dev tapa5cf4799-9f proto kernel metric 256 pref medium >>>>> >>>>> And within the VM has the following routes: >>>>> >>>>> root at ubuntu:~# ip -6 route >>>>> root at ubuntu:~# ip -6 route >>>>> fd12:67:1::/64 via fd12:67:1:1::1 dev ens3 metric 1024 pref medium >>>>> fd12:67:1:1::/64 dev ens3 proto kernel metric 256 expires 86360sec >>>>> pref medium >>>>> fe80::/64 dev ens3 proto kernel metric 256 pref medium >>>>> default via fe80::f816:3eff:feb3:bd56 dev ens3 proto ra metric 1024 >>>>> expires 260sec hoplimit 64 pref medium >>>>> >>>>> Though the ping6 from VM to hypervisor doesn't work: >>>>> root at ubuntu:~# ping6 fd12:67:1::1 -c4 >>>>> PING fd12:67:1::1 (fd12:67:1::1): 56 data bytes >>>>> --- fd12:67:1::1 ping statistics --- >>>>> 4 packets transmitted, 0 packets received, 100% packet loss >>>>> >>>>> I'm able to tcpdump inside the router1 netns and see that request >>>>> packet is passing there, but can't see any reply packets: >>>>> >>>>> $ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 >>>>> tcpdump -l -i any icmp6 >>>>> tcpdump: verbose output suppressed, use -v or -vv for full protocol >>>>> decode >>>>> listening on any, link-type LINUX_SLL (Linux cooked), capture size >>>>> 262144 bytes >>>>> 21:29:29.351358 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>> ICMP6, echo request, seq 0, length 64 >>>>> 21:29:30.033316 IP6 fe80::f816:3eff:feb3:bd56 > >>>>> fe80::f816:3eff:fe0e:17c3: ICMP6, neighbor solicitation, who has >>>>> fe80::f816:3eff:fe0e:17c3, length 32 >>>>> 21:29:30.035807 IP6 fe80::f816:3eff:fe0e:17c3 > >>>>> fe80::f816:3eff:feb3:bd56: ICMP6, neighbor advertisement, tgt is >>>>> fe80::f816:3eff:fe0e:17c3, length 24 >>>>> 21:29:30.353646 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>> ICMP6, echo request, seq 1, length 64 >>>>> 21:29:31.355410 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>> ICMP6, echo request, seq 2, length 64 >>>>> 21:29:32.357239 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>> ICMP6, echo request, seq 3, length 64 >>>>> >>>>> The same happens from hypervisor to VM. I only acan see the request >>>>> packets, but no reply packets. >>>>> >>>>> Thanks in advance, >>>>> Lucio Seki >>>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Fri Sep 13 18:55:20 2019 From: donny at fortnebula.com (Donny Davis) Date: Fri, 13 Sep 2019 14:55:20 -0400 Subject: [neutron] DevStack with IPv6 In-Reply-To: References: Message-ID: So outbound traffic works, but inbound traffic doesn't? Here is my icmp security group rule for ipv6. 
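The relevant bits in the output below are direction=ingress, ether_type=IPv6
and remote_ip_prefix=::/0. If your default group is missing an equivalent
rule, something along these lines should add one (the security group name/ID
being whatever your instance actually uses):

$ openstack security group rule create --ingress --ethertype IPv6 --protocol ipv6-icmp --remote-ip ::/0 <your-security-group>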
+-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | created_at | 2019-07-30T00:50:25Z | | description | | | direction | ingress | | ether_type | IPv6 | | id | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa | | location | Munch({'cloud': '', 'region_name': 'regionOne', 'zone': None, 'project': Munch({'id': 'e8fd161dc34c421a979a9e6421f823e9', 'name': 'openstackzuul', 'domain_id': None, 'domain_name': 'Default'})}) | | name | None | | port_range_max | None | | port_range_min | None | | project_id | e8fd161dc34c421a979a9e6421f823e9 | | protocol | icmp | | remote_group_id | None | | remote_ip_prefix | ::/0 | | revision_number | 0 | | security_group_id | bcedc0e0-e2e8-41fc-aeaa-afd2e10c7ab6 | | tags | [] | | updated_at | 2019-07-30T00:50:25Z | +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ On Fri, Sep 13, 2019 at 2:48 PM Lucio Seki wrote: > Hmm OK, I'll try to figure out what hacking create_neutron_initial_network > does... > > BTW, I noticed that I can ping6 the router interface at private subnet > from the DevStack host: > > $ ping6 fd12:67:1:1::1 > PING fd12:67:1:1::1(fd12:67:1:1::1) 56 data bytes > 64 bytes from fd12:67:1:1::1: icmp_seq=1 ttl=64 time=0.646 ms > 64 bytes from fd12:67:1:1::1: icmp_seq=2 ttl=64 time=0.095 ms > 64 bytes from fd12:67:1:1::1: icmp_seq=3 ttl=64 time=0.106 ms > 64 bytes from fd12:67:1:1::1: icmp_seq=4 ttl=64 time=0.129 ms > > And also I can ping6 the public subnet interface from the VM: > > root at ubuntu:~# ping6 fd12:67:1::3c > PING fd12:67:1::3c (fd12:67:1::3c): 56 data bytes > ping: getnameinfo: Temporary failure in name resolution > 64 bytes from unknown: icmp_seq=0 ttl=64 time=2.079 ms > ping: getnameinfo: Temporary failure in name resolution > 64 bytes from unknown: icmp_seq=1 ttl=64 time=1.385 ms > ping: getnameinfo: Temporary failure in name resolution > 64 bytes from unknown: icmp_seq=2 ttl=64 time=0.881 ms > > Not sure if it means that there's something missing within the router > itself... 
> > On Fri, Sep 13, 2019 at 2:24 PM Donny Davis wrote: > >> Also I have no v6 address on my br-ex >> >> On Fri, Sep 13, 2019 at 1:22 PM Donny Davis wrote: >> >>> Well here is the output from my rule list that is in prod right now with >>> ipv6 >>> >>> +--------------------------------------+-------------+-----------+------------+-----------------------+ >>> | ID | IP Protocol | IP Range | Port >>> Range | Remote Security Group | >>> >>> +--------------------------------------+-------------+-----------+------------+-----------------------+ >>> | 9ab00b6f-2bc2-4554-818d-eff6e0570943 | None | 0.0.0.0/0 | >>> | None | >>> | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa | icmp | ::/0 | >>> | None | >>> | e7fd4840-5fbd-4709-b918-f80eac5cb6da | None | ::/0 | >>> | None | >>> | e9968d53-7efe-4a9e-ad42-1092ffaf52e7 | None | None | >>> | None | >>> | ec1ea961-9025-4229-92cf-618026a1851b | None | None | >>> | None | >>> >>> +--------------------------------------+-------------+-----------+------------+-----------------------+ >>> >>> >>> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> | Field | Value >>> >>> | >>> >>> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> | created_at | 2019-07-30T00:50:25Z >>> >>> | >>> | description | >>> >>> | >>> | direction | ingress >>> >>> | >>> | ether_type | IPv6 >>> >>> | >>> | id | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa >>> >>> | >>> | location | Munch({'cloud': '', 'region_name': 'regionOne', >>> 'zone': None, 'project': Munch({'id': 'e8fd161dc34c421a979a9e6421f823e9', >>> 'name': 'openstackzuul', 'domain_id': None, 'domain_name': 'Default'})}) | >>> | name | None >>> >>> | >>> | port_range_max | None >>> >>> | >>> | port_range_min | None >>> >>> | >>> | project_id | e8fd161dc34c421a979a9e6421f823e9 >>> >>> | >>> | protocol | icmp >>> >>> | >>> | remote_group_id | None >>> >>> | >>> | remote_ip_prefix | ::/0 >>> >>> | >>> | revision_number | 0 >>> >>> | >>> | security_group_id | bcedc0e0-e2e8-41fc-aeaa-afd2e10c7ab6 >>> >>> | >>> | tags | [] >>> >>> | >>> | updated_at | 2019-07-30T00:50:25Z >>> >>> | >>> >>> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> >>> >>> >>> >>> >>> On Fri, Sep 13, 2019 at 9:24 AM Lucio Seki wrote: >>> >>>> Hi Donny, following are the rules: >>>> >>>> $ openstack security group list --project admin >>>> >>>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>>> | ID | Name | Description >>>> | Project | Tags | >>>> >>>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | default | Default security >>>> group | 68e3942285a24fb5bd1aed30e166aaee | [] | >>>> >>>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>>> >>>> $ openstack security group rule list >>>> d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>>> >>>> 
+--------------------------------------+-------------+----------+------------+--------------------------------------+ >>>> | ID | IP Protocol | IP Range | Port >>>> Range | Remote Security Group | >>>> >>>> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >>>> | 38394345-3e44-4284-a519-cdd8af020f30 | tcp | ::/0 | 22:22 >>>> | None | >>>> | 40881f76-c87f-4685-b3af-c3497dd44837 | None | None | >>>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | >>>> | 56d4ae52-195e-48df-871e-dc70b899b7ba | None | None | >>>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | >>>> | 759edd06-b698-45ca-94cd-44e0cc2cc848 | ipv6-icmp | None | >>>> | None | >>>> | 762effae-b8e5-42ac-ba99-e85a7bc42455 | tcp | ::/0 | 22:22 >>>> | None | >>>> | 81f3588d-4159-4af2-ad50-ff6b76add9cf | ipv6-icmp | None | >>>> | None | >>>> >>>> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >>>> >>>> $ openstack security group rule show >>>> 759edd06-b698-45ca-94cd-44e0cc2cc848 >>>> >>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> | Field | Value >>>> >>>> | >>>> >>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> | created_at | 2019-09-03T16:51:41Z >>>> >>>> | >>>> | description | >>>> >>>> | >>>> | direction | egress >>>> >>>> | >>>> | ether_type | IPv6 >>>> >>>> | >>>> | id | 759edd06-b698-45ca-94cd-44e0cc2cc848 >>>> >>>> | >>>> | location | Munch({'project': Munch({'domain_id': 'default', >>>> 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': >>>> None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | >>>> | name | None >>>> >>>> | >>>> | port_range_max | None >>>> >>>> | >>>> | port_range_min | None >>>> >>>> | >>>> | project_id | 68e3942285a24fb5bd1aed30e166aaee >>>> >>>> | >>>> | protocol | ipv6-icmp >>>> >>>> | >>>> | remote_group_id | None >>>> >>>> | >>>> | remote_ip_prefix | None >>>> >>>> | >>>> | revision_number | 0 >>>> >>>> | >>>> | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>>> >>>> | >>>> | tags | [] >>>> >>>> | >>>> | updated_at | 2019-09-03T16:51:41Z >>>> >>>> | >>>> >>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> >>>> $ openstack security group rule show >>>> 81f3588d-4159-4af2-ad50-ff6b76add9cf >>>> >>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> | Field | Value >>>> >>>> | >>>> >>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> | created_at | 2019-09-03T16:51:30Z >>>> >>>> | >>>> | description | >>>> >>>> | >>>> | direction | ingress >>>> >>>> | >>>> | ether_type | IPv6 >>>> >>>> | >>>> | id | 81f3588d-4159-4af2-ad50-ff6b76add9cf >>>> >>>> | >>>> | location | Munch({'project': Munch({'domain_id': 
'default', >>>> 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': >>>> None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | >>>> | name | None >>>> >>>> | >>>> | port_range_max | None >>>> >>>> | >>>> | port_range_min | None >>>> >>>> | >>>> | project_id | 68e3942285a24fb5bd1aed30e166aaee >>>> >>>> | >>>> | protocol | ipv6-icmp >>>> >>>> | >>>> | remote_group_id | None >>>> >>>> | >>>> | remote_ip_prefix | None >>>> >>>> | >>>> | revision_number | 0 >>>> >>>> | >>>> | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>>> >>>> | >>>> | tags | [] >>>> >>>> | >>>> | updated_at | 2019-09-03T16:51:30Z >>>> >>>> | >>>> >>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> >>>> >>>> On Fri, Sep 13, 2019 at 10:16 AM Donny Davis >>>> wrote: >>>> >>>>> Security group rules? >>>>> >>>>> Donny Davis >>>>> c: 805 814 6800 >>>>> >>>>> On Thu, Sep 12, 2019, 5:53 PM Lucio Seki wrote: >>>>> >>>>>> Hi folks, I'm having troubles to ping6 a VM running over DevStack >>>>>> from its hypervisor. >>>>>> Could you please help me troubleshooting it? >>>>>> >>>>>> I deployed DevStack with NEUTRON_CREATE_INITIAL_NETWORKS=False, >>>>>> and manually created the networks, subnets and router. Following is >>>>>> my router: >>>>>> >>>>>> $ openstack router show router1 -c external_gateway_info -c >>>>>> interfaces_info >>>>>> >>>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>> | Field | Value >>>>>> >>>>>> >>>>>> | >>>>>> >>>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>> | external_gateway_info | {"network_id": >>>>>> "b87048ed-1be9-4f31-8d7e-fe74921aeec4", "enable_snat": true, >>>>>> "external_fixed_ips": [{"subnet_id": >>>>>> "28a00bc3-b30b-456f-b26a-44b50d37183f", "ip_address": "10.2.0.199"}, >>>>>> {"subnet_id": "a9729beb-b297-4fec-8ec3-7703f7f6f4bc", "ip_address": >>>>>> "fd12:67:1::3c"}]} | >>>>>> | interfaces_info | [{"subnet_id": >>>>>> "081e8508-4ceb-4aaf-bf91-36a1e22a768c", "ip_address": "fd12:67:1:1::1", >>>>>> "port_id": "75391abd-8ac8-41f8-acf8-3dfaf2a6b08f"}] >>>>>> >>>>>> | >>>>>> >>>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>> >>>>>> I'm trying to ping6 the following VM: >>>>>> >>>>>> $ openstack server list >>>>>> >>>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>>> | ID | Name | Status | Networks >>>>>> | Image | Flavor | >>>>>> >>>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>>> | 938854d0-80e9-45b2-bc29-8fe7651ffa93 | manila1 | ACTIVE | >>>>>> 
private1=fd12:67:1:1:f816:3eff:fe0e:17c3 | manila | manila | >>>>>> >>>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>>> >>>>>> I intend to reach it via br-ex interface of the hypervisor: >>>>>> >>>>>> $ ip a show dev br-ex >>>>>> 9: br-ex: mtu 1500 qdisc noqueue >>>>>> state UNKNOWN group default qlen 1000 >>>>>> link/ether 0e:82:a1:ba:77:4c brd ff:ff:ff:ff:ff:ff >>>>>> inet6 fd12:67:1::1/64 scope global >>>>>> valid_lft forever preferred_lft forever >>>>>> inet6 fe80::c82:a1ff:feba:774c/64 scope link >>>>>> valid_lft forever preferred_lft forever >>>>>> >>>>>> The hypervisor has the following routes: >>>>>> >>>>>> $ ip -6 route >>>>>> fd12:67:1:1::/64 via fd12:67:1::3c dev br-ex metric 1024 pref medium >>>>>> fe80::/64 dev ens3 proto kernel metric 256 pref medium >>>>>> fe80::/64 dev br-ex proto kernel metric 256 pref medium >>>>>> fe80::/64 dev br-int proto kernel metric 256 pref medium >>>>>> fe80::/64 dev tapa5cf4799-9f proto kernel metric 256 pref medium >>>>>> >>>>>> And within the VM has the following routes: >>>>>> >>>>>> root at ubuntu:~# ip -6 route >>>>>> root at ubuntu:~# ip -6 route >>>>>> fd12:67:1::/64 via fd12:67:1:1::1 dev ens3 metric 1024 pref medium >>>>>> fd12:67:1:1::/64 dev ens3 proto kernel metric 256 expires 86360sec >>>>>> pref medium >>>>>> fe80::/64 dev ens3 proto kernel metric 256 pref medium >>>>>> default via fe80::f816:3eff:feb3:bd56 dev ens3 proto ra metric 1024 >>>>>> expires 260sec hoplimit 64 pref medium >>>>>> >>>>>> Though the ping6 from VM to hypervisor doesn't work: >>>>>> root at ubuntu:~# ping6 fd12:67:1::1 -c4 >>>>>> PING fd12:67:1::1 (fd12:67:1::1): 56 data bytes >>>>>> --- fd12:67:1::1 ping statistics --- >>>>>> 4 packets transmitted, 0 packets received, 100% packet loss >>>>>> >>>>>> I'm able to tcpdump inside the router1 netns and see that request >>>>>> packet is passing there, but can't see any reply packets: >>>>>> >>>>>> $ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 >>>>>> tcpdump -l -i any icmp6 >>>>>> tcpdump: verbose output suppressed, use -v or -vv for full protocol >>>>>> decode >>>>>> listening on any, link-type LINUX_SLL (Linux cooked), capture size >>>>>> 262144 bytes >>>>>> 21:29:29.351358 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>>> ICMP6, echo request, seq 0, length 64 >>>>>> 21:29:30.033316 IP6 fe80::f816:3eff:feb3:bd56 > >>>>>> fe80::f816:3eff:fe0e:17c3: ICMP6, neighbor solicitation, who has >>>>>> fe80::f816:3eff:fe0e:17c3, length 32 >>>>>> 21:29:30.035807 IP6 fe80::f816:3eff:fe0e:17c3 > >>>>>> fe80::f816:3eff:feb3:bd56: ICMP6, neighbor advertisement, tgt is >>>>>> fe80::f816:3eff:fe0e:17c3, length 24 >>>>>> 21:29:30.353646 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>>> ICMP6, echo request, seq 1, length 64 >>>>>> 21:29:31.355410 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>>> ICMP6, echo request, seq 2, length 64 >>>>>> 21:29:32.357239 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>>> ICMP6, echo request, seq 3, length 64 >>>>>> >>>>>> The same happens from hypervisor to VM. I only acan see the request >>>>>> packets, but no reply packets. >>>>>> >>>>>> Thanks in advance, >>>>>> Lucio Seki >>>>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lucioseki at gmail.com Fri Sep 13 19:38:45 2019 From: lucioseki at gmail.com (Lucio Seki) Date: Fri, 13 Sep 2019 16:38:45 -0300 Subject: [neutron] DevStack with IPv6 In-Reply-To: References: Message-ID: I drawed the environment I have [1]. Also attached it as an image. Currently I have the interfaces 1 pinging 3, and 4 pinging 2. When I attempt to make 1 ping 4, I can only see the request packets at 2. When I attempt to make 4 ping 1, I can only see the request packets at 3. [1] https://docs.google.com/drawings/d/1zhgN9TCINrVIlQpZT9hlCrHxWrQerjIo62oRmTGx0-c/edit?usp=sharing On Fri, Sep 13, 2019 at 3:55 PM Donny Davis wrote: > So outbound traffic works, but inbound traffic doesn't? > > Here is my icmp security group rule for ipv6. > > +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | Field | Value > > | > > +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | created_at | 2019-07-30T00:50:25Z > > | > | description | > > | > | direction | ingress > > | > | ether_type | IPv6 > > | > | id | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa > > | > | location | Munch({'cloud': '', 'region_name': 'regionOne', > 'zone': None, 'project': Munch({'id': 'e8fd161dc34c421a979a9e6421f823e9', > 'name': 'openstackzuul', 'domain_id': None, 'domain_name': 'Default'})}) | > | name | None > > | > | port_range_max | None > > | > | port_range_min | None > > | > | project_id | e8fd161dc34c421a979a9e6421f823e9 > > | > | protocol | icmp > > | > | remote_group_id | None > > | > | remote_ip_prefix | ::/0 > > | > | revision_number | 0 > > | > | security_group_id | bcedc0e0-e2e8-41fc-aeaa-afd2e10c7ab6 > > | > | tags | [] > > | > | updated_at | 2019-07-30T00:50:25Z > > | > > +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > > > On Fri, Sep 13, 2019 at 2:48 PM Lucio Seki wrote: > >> Hmm OK, I'll try to figure out what hacking >> create_neutron_initial_network does... >> >> BTW, I noticed that I can ping6 the router interface at private subnet >> from the DevStack host: >> >> $ ping6 fd12:67:1:1::1 >> PING fd12:67:1:1::1(fd12:67:1:1::1) 56 data bytes >> 64 bytes from fd12:67:1:1::1: icmp_seq=1 ttl=64 time=0.646 ms >> 64 bytes from fd12:67:1:1::1: icmp_seq=2 ttl=64 time=0.095 ms >> 64 bytes from fd12:67:1:1::1: icmp_seq=3 ttl=64 time=0.106 ms >> 64 bytes from fd12:67:1:1::1: icmp_seq=4 ttl=64 time=0.129 ms >> >> And also I can ping6 the public subnet interface from the VM: >> >> root at ubuntu:~# ping6 fd12:67:1::3c >> PING fd12:67:1::3c (fd12:67:1::3c): 56 data bytes >> ping: getnameinfo: Temporary failure in name resolution >> 64 bytes from unknown: icmp_seq=0 ttl=64 time=2.079 ms >> ping: getnameinfo: Temporary failure in name resolution >> 64 bytes from unknown: icmp_seq=1 ttl=64 time=1.385 ms >> ping: getnameinfo: Temporary failure in name resolution >> 64 bytes from unknown: icmp_seq=2 ttl=64 time=0.881 ms >> >> Not sure if it means that there's something missing within the router >> itself... 
>> >> On Fri, Sep 13, 2019 at 2:24 PM Donny Davis wrote: >> >>> Also I have no v6 address on my br-ex >>> >>> On Fri, Sep 13, 2019 at 1:22 PM Donny Davis >>> wrote: >>> >>>> Well here is the output from my rule list that is in prod right now >>>> with ipv6 >>>> >>>> +--------------------------------------+-------------+-----------+------------+-----------------------+ >>>> | ID | IP Protocol | IP Range | Port >>>> Range | Remote Security Group | >>>> >>>> +--------------------------------------+-------------+-----------+------------+-----------------------+ >>>> | 9ab00b6f-2bc2-4554-818d-eff6e0570943 | None | 0.0.0.0/0 | >>>> | None | >>>> | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa | icmp | ::/0 | >>>> | None | >>>> | e7fd4840-5fbd-4709-b918-f80eac5cb6da | None | ::/0 | >>>> | None | >>>> | e9968d53-7efe-4a9e-ad42-1092ffaf52e7 | None | None | >>>> | None | >>>> | ec1ea961-9025-4229-92cf-618026a1851b | None | None | >>>> | None | >>>> >>>> +--------------------------------------+-------------+-----------+------------+-----------------------+ >>>> >>>> >>>> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> | Field | Value >>>> >>>> | >>>> >>>> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> | created_at | 2019-07-30T00:50:25Z >>>> >>>> | >>>> | description | >>>> >>>> | >>>> | direction | ingress >>>> >>>> | >>>> | ether_type | IPv6 >>>> >>>> | >>>> | id | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa >>>> >>>> | >>>> | location | Munch({'cloud': '', 'region_name': 'regionOne', >>>> 'zone': None, 'project': Munch({'id': 'e8fd161dc34c421a979a9e6421f823e9', >>>> 'name': 'openstackzuul', 'domain_id': None, 'domain_name': 'Default'})}) | >>>> | name | None >>>> >>>> | >>>> | port_range_max | None >>>> >>>> | >>>> | port_range_min | None >>>> >>>> | >>>> | project_id | e8fd161dc34c421a979a9e6421f823e9 >>>> >>>> | >>>> | protocol | icmp >>>> >>>> | >>>> | remote_group_id | None >>>> >>>> | >>>> | remote_ip_prefix | ::/0 >>>> >>>> | >>>> | revision_number | 0 >>>> >>>> | >>>> | security_group_id | bcedc0e0-e2e8-41fc-aeaa-afd2e10c7ab6 >>>> >>>> | >>>> | tags | [] >>>> >>>> | >>>> | updated_at | 2019-07-30T00:50:25Z >>>> >>>> | >>>> >>>> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> >>>> >>>> >>>> >>>> >>>> On Fri, Sep 13, 2019 at 9:24 AM Lucio Seki wrote: >>>> >>>>> Hi Donny, following are the rules: >>>>> >>>>> $ openstack security group list --project admin >>>>> >>>>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>>>> | ID | Name | Description >>>>> | Project | Tags | >>>>> >>>>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>>>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | default | Default security >>>>> group | 68e3942285a24fb5bd1aed30e166aaee | [] | >>>>> >>>>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>>>> >>>>> $ openstack security group rule list >>>>> 
d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>>>> >>>>> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >>>>> | ID | IP Protocol | IP Range | Port >>>>> Range | Remote Security Group | >>>>> >>>>> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >>>>> | 38394345-3e44-4284-a519-cdd8af020f30 | tcp | ::/0 | >>>>> 22:22 | None | >>>>> | 40881f76-c87f-4685-b3af-c3497dd44837 | None | None | >>>>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | >>>>> | 56d4ae52-195e-48df-871e-dc70b899b7ba | None | None | >>>>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | >>>>> | 759edd06-b698-45ca-94cd-44e0cc2cc848 | ipv6-icmp | None | >>>>> | None | >>>>> | 762effae-b8e5-42ac-ba99-e85a7bc42455 | tcp | ::/0 | >>>>> 22:22 | None | >>>>> | 81f3588d-4159-4af2-ad50-ff6b76add9cf | ipv6-icmp | None | >>>>> | None | >>>>> >>>>> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >>>>> >>>>> $ openstack security group rule show >>>>> 759edd06-b698-45ca-94cd-44e0cc2cc848 >>>>> >>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> | Field | Value >>>>> >>>>> | >>>>> >>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> | created_at | 2019-09-03T16:51:41Z >>>>> >>>>> | >>>>> | description | >>>>> >>>>> | >>>>> | direction | egress >>>>> >>>>> | >>>>> | ether_type | IPv6 >>>>> >>>>> | >>>>> | id | 759edd06-b698-45ca-94cd-44e0cc2cc848 >>>>> >>>>> | >>>>> | location | Munch({'project': Munch({'domain_id': 'default', >>>>> 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': >>>>> None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | >>>>> | name | None >>>>> >>>>> | >>>>> | port_range_max | None >>>>> >>>>> | >>>>> | port_range_min | None >>>>> >>>>> | >>>>> | project_id | 68e3942285a24fb5bd1aed30e166aaee >>>>> >>>>> | >>>>> | protocol | ipv6-icmp >>>>> >>>>> | >>>>> | remote_group_id | None >>>>> >>>>> | >>>>> | remote_ip_prefix | None >>>>> >>>>> | >>>>> | revision_number | 0 >>>>> >>>>> | >>>>> | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>>>> >>>>> | >>>>> | tags | [] >>>>> >>>>> | >>>>> | updated_at | 2019-09-03T16:51:41Z >>>>> >>>>> | >>>>> >>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> >>>>> $ openstack security group rule show >>>>> 81f3588d-4159-4af2-ad50-ff6b76add9cf >>>>> >>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> | Field | Value >>>>> >>>>> | >>>>> >>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> | created_at | 2019-09-03T16:51:30Z >>>>> >>>>> | >>>>> | description | >>>>> >>>>> | >>>>> | direction | ingress >>>>> >>>>> | >>>>> 
| ether_type | IPv6 >>>>> >>>>> | >>>>> | id | 81f3588d-4159-4af2-ad50-ff6b76add9cf >>>>> >>>>> | >>>>> | location | Munch({'project': Munch({'domain_id': 'default', >>>>> 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': >>>>> None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | >>>>> | name | None >>>>> >>>>> | >>>>> | port_range_max | None >>>>> >>>>> | >>>>> | port_range_min | None >>>>> >>>>> | >>>>> | project_id | 68e3942285a24fb5bd1aed30e166aaee >>>>> >>>>> | >>>>> | protocol | ipv6-icmp >>>>> >>>>> | >>>>> | remote_group_id | None >>>>> >>>>> | >>>>> | remote_ip_prefix | None >>>>> >>>>> | >>>>> | revision_number | 0 >>>>> >>>>> | >>>>> | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>>>> >>>>> | >>>>> | tags | [] >>>>> >>>>> | >>>>> | updated_at | 2019-09-03T16:51:30Z >>>>> >>>>> | >>>>> >>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> >>>>> >>>>> On Fri, Sep 13, 2019 at 10:16 AM Donny Davis >>>>> wrote: >>>>> >>>>>> Security group rules? >>>>>> >>>>>> Donny Davis >>>>>> c: 805 814 6800 >>>>>> >>>>>> On Thu, Sep 12, 2019, 5:53 PM Lucio Seki wrote: >>>>>> >>>>>>> Hi folks, I'm having troubles to ping6 a VM running over DevStack >>>>>>> from its hypervisor. >>>>>>> Could you please help me troubleshooting it? >>>>>>> >>>>>>> I deployed DevStack with NEUTRON_CREATE_INITIAL_NETWORKS=False, >>>>>>> and manually created the networks, subnets and router. Following is >>>>>>> my router: >>>>>>> >>>>>>> $ openstack router show router1 -c external_gateway_info -c >>>>>>> interfaces_info >>>>>>> >>>>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>> | Field | Value >>>>>>> >>>>>>> >>>>>>> | >>>>>>> >>>>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>> | external_gateway_info | {"network_id": >>>>>>> "b87048ed-1be9-4f31-8d7e-fe74921aeec4", "enable_snat": true, >>>>>>> "external_fixed_ips": [{"subnet_id": >>>>>>> "28a00bc3-b30b-456f-b26a-44b50d37183f", "ip_address": "10.2.0.199"}, >>>>>>> {"subnet_id": "a9729beb-b297-4fec-8ec3-7703f7f6f4bc", "ip_address": >>>>>>> "fd12:67:1::3c"}]} | >>>>>>> | interfaces_info | [{"subnet_id": >>>>>>> "081e8508-4ceb-4aaf-bf91-36a1e22a768c", "ip_address": "fd12:67:1:1::1", >>>>>>> "port_id": "75391abd-8ac8-41f8-acf8-3dfaf2a6b08f"}] >>>>>>> >>>>>>> | >>>>>>> >>>>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>> >>>>>>> I'm trying to ping6 the following VM: >>>>>>> >>>>>>> $ openstack server list >>>>>>> >>>>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>>>> | ID | Name | Status | Networks >>>>>>> | 
Image | Flavor | >>>>>>> >>>>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>>>> | 938854d0-80e9-45b2-bc29-8fe7651ffa93 | manila1 | ACTIVE | >>>>>>> private1=fd12:67:1:1:f816:3eff:fe0e:17c3 | manila | manila | >>>>>>> >>>>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>>>> >>>>>>> I intend to reach it via br-ex interface of the hypervisor: >>>>>>> >>>>>>> $ ip a show dev br-ex >>>>>>> 9: br-ex: mtu 1500 qdisc noqueue >>>>>>> state UNKNOWN group default qlen 1000 >>>>>>> link/ether 0e:82:a1:ba:77:4c brd ff:ff:ff:ff:ff:ff >>>>>>> inet6 fd12:67:1::1/64 scope global >>>>>>> valid_lft forever preferred_lft forever >>>>>>> inet6 fe80::c82:a1ff:feba:774c/64 scope link >>>>>>> valid_lft forever preferred_lft forever >>>>>>> >>>>>>> The hypervisor has the following routes: >>>>>>> >>>>>>> $ ip -6 route >>>>>>> fd12:67:1:1::/64 via fd12:67:1::3c dev br-ex metric 1024 pref medium >>>>>>> fe80::/64 dev ens3 proto kernel metric 256 pref medium >>>>>>> fe80::/64 dev br-ex proto kernel metric 256 pref medium >>>>>>> fe80::/64 dev br-int proto kernel metric 256 pref medium >>>>>>> fe80::/64 dev tapa5cf4799-9f proto kernel metric 256 pref medium >>>>>>> >>>>>>> And within the VM has the following routes: >>>>>>> >>>>>>> root at ubuntu:~# ip -6 route >>>>>>> root at ubuntu:~# ip -6 route >>>>>>> fd12:67:1::/64 via fd12:67:1:1::1 dev ens3 metric 1024 pref medium >>>>>>> fd12:67:1:1::/64 dev ens3 proto kernel metric 256 expires 86360sec >>>>>>> pref medium >>>>>>> fe80::/64 dev ens3 proto kernel metric 256 pref medium >>>>>>> default via fe80::f816:3eff:feb3:bd56 dev ens3 proto ra metric 1024 >>>>>>> expires 260sec hoplimit 64 pref medium >>>>>>> >>>>>>> Though the ping6 from VM to hypervisor doesn't work: >>>>>>> root at ubuntu:~# ping6 fd12:67:1::1 -c4 >>>>>>> PING fd12:67:1::1 (fd12:67:1::1): 56 data bytes >>>>>>> --- fd12:67:1::1 ping statistics --- >>>>>>> 4 packets transmitted, 0 packets received, 100% packet loss >>>>>>> >>>>>>> I'm able to tcpdump inside the router1 netns and see that request >>>>>>> packet is passing there, but can't see any reply packets: >>>>>>> >>>>>>> $ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 >>>>>>> tcpdump -l -i any icmp6 >>>>>>> tcpdump: verbose output suppressed, use -v or -vv for full protocol >>>>>>> decode >>>>>>> listening on any, link-type LINUX_SLL (Linux cooked), capture size >>>>>>> 262144 bytes >>>>>>> 21:29:29.351358 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>>>> ICMP6, echo request, seq 0, length 64 >>>>>>> 21:29:30.033316 IP6 fe80::f816:3eff:feb3:bd56 > >>>>>>> fe80::f816:3eff:fe0e:17c3: ICMP6, neighbor solicitation, who has >>>>>>> fe80::f816:3eff:fe0e:17c3, length 32 >>>>>>> 21:29:30.035807 IP6 fe80::f816:3eff:fe0e:17c3 > >>>>>>> fe80::f816:3eff:feb3:bd56: ICMP6, neighbor advertisement, tgt is >>>>>>> fe80::f816:3eff:fe0e:17c3, length 24 >>>>>>> 21:29:30.353646 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>>>> ICMP6, echo request, seq 1, length 64 >>>>>>> 21:29:31.355410 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>>>> ICMP6, echo request, seq 2, length 64 >>>>>>> 21:29:32.357239 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>>>> ICMP6, echo request, seq 3, length 64 >>>>>>> >>>>>>> The same happens from hypervisor to VM. I only acan see the request >>>>>>> packets, but no reply packets. 
>>>>>>> >>>>>>> Thanks in advance, >>>>>>> Lucio Seki >>>>>>> >>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: DevStack IPv6.png Type: image/png Size: 29595 bytes Desc: not available URL:
From sean.mcginnis at gmx.com Fri Sep 13 19:54:01 2019
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Fri, 13 Sep 2019 14:54:01 -0500
Subject: [Release-job-failures] release-post job for openstack/releases for ref refs/heads/master failed
In-Reply-To:
References:
Message-ID: <20190913195401.GA10452@sm-workstation>

On Fri, Sep 13, 2019 at 04:07:54PM +0000, zuul at openstack.org wrote:
> Build failed.
>
> - tag-releases https://zuul.opendev.org/t/openstack/build/c95672e425294127821c55ddf1176218 : RETRY_LIMIT in 1m 18s
> - publish-tox-docs-static https://zuul.opendev.org/t/openstack/build/None : SKIPPED
>
> _______________________________________________
> Release-job-failures mailing list
> Release-job-failures at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures

Just to make sure there is a record of it - this was a temporary infrastructure sync issue that was quickly resolved. The job was re-enqueued and everything completed fine.

Looks like everything is now good and no further action is needed, but of course if anything odd is seen related to this, please let us know.

Sean

From emilien at redhat.com Fri Sep 13 22:00:30 2019
From: emilien at redhat.com (Emilien Macchi)
Date: Fri, 13 Sep 2019 18:00:30 -0400
Subject: [tripleo] Deprecating paunch CLI?
Message-ID:

With our long-term goal to simplify TripleO and focus on what people actually deploy and how they operate their clouds, it appears that the Paunch CLI hasn't been a critical piece of our project, so I propose that we deprecate it and create an Ansible module that calls Paunch as a library only.

I've been playing with it a little today:
https://review.opendev.org/#/c/682093/
https://review.opendev.org/#/c/682094/

Here is how you would call paunch:

- name: Start containers for step {{ step }}
  paunch:
    config: "/var/lib/tripleo-config/hashed-container-startup-config-step_{{ step }}.json"
    config_id: "tripleo_step{{ step }}"
    action: apply
    container_cli: "{{ container_cli }}"
    managed_by: "tripleo-{{ tripleo_role_name }}"

A few benefits:
- Deployment tasks in THT would call the new module instead of a shell command
- More Pythonic and cleaner for Ansible, since we can interact with the actual task during the run
- Removing some code from Paunch makes it easier for us to maintain

For now, the Ansible module only covers "paunch apply"; we will probably cover "delete" and "cleanup" eventually.

Please let me know if you have any questions or concerns,
--
Emilien Macchi
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From anmar.salih1 at gmail.com Fri Sep 13 22:29:38 2019
From: anmar.salih1 at gmail.com (Anmar Salih)
Date: Fri, 13 Sep 2019 18:29:38 -0400
Subject: Execute a script on every object upload event (Swift+aodh)
Message-ID:

Hey all,

Need help configuring swift and aodh. The idea is to trigger an aodh alarm on every object upload event on swift. Once the alarm is triggered, a small script should be executed. So the sequence of operations should be like this:
1- Object just uploaded to swift container
2- Alarm triggered by aodh
3- Once the alarm is triggered, execute a python script.

I am using the Devstack Stein release installed on VirtualBox.

Best regards.
Anmar Salih -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin at cloudnull.com Fri Sep 13 23:19:14 2019 From: kevin at cloudnull.com (Carter, Kevin) Date: Fri, 13 Sep 2019 18:19:14 -0500 Subject: [tripleo] Deprecating paunch CLI? In-Reply-To: References: Message-ID: +1 - I think this is a great idea and will help simplify quite a bit. -- Kevin Carter IRC: Cloudnull On Fri, Sep 13, 2019 at 5:07 PM Emilien Macchi wrote: > With our long-term goal to simplify TripleO and focus on what people > actually deploy and how they operate their clouds, it appears that the > Paunch CLI hasn't been a critical piece in our project and I propose that > we deprecate it; create an Ansible module to call Paunch as a library only. > > I've been playing with it a little today: > https://review.opendev.org/#/c/682093/ > https://review.opendev.org/#/c/682094/ > > Here is how you would call paunch: > - name: Start containers for step {{ step }} > paunch: > config: > "/var/lib/tripleo-config/hashed-container-startup-config-step_{{ step > }}.json" > config_id: "tripleo_step{{ step }}" > action: apply > container_cli: "{{ container_cli }}" > managed_by: "tripleo-{{ tripleo_role_name }}" > > A few benefits: > - Deployment tasks in THT would call the new module instead of a shell > command > - More Pythonic and clean for Ansible, to interact with the actual task > during the run > - Removing some code in Paunch, make it easier to maintain for us > > For now, the Ansible module only covers "paunch apply", we will probably > cover "delete" and "cleanup" eventually. > > Please let me know if you have any questions or concerns, > -- > Emilien Macchi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucioseki at gmail.com Fri Sep 13 20:23:24 2019 From: lucioseki at gmail.com (Lucio Seki) Date: Fri, 13 Sep 2019 17:23:24 -0300 Subject: [neutron] DevStack with IPv6 In-Reply-To: References: Message-ID: I recreated my security group rules, to set remote_ip_prefix to ::/0 instead of None as in Donny's environment, but made no difference. :-( On Fri, Sep 13, 2019 at 3:55 PM Donny Davis wrote: > So outbound traffic works, but inbound traffic doesn't? > > Here is my icmp security group rule for ipv6. 
> > +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | Field | Value > > | > > +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | created_at | 2019-07-30T00:50:25Z > > | > | description | > > | > | direction | ingress > > | > | ether_type | IPv6 > > | > | id | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa > > | > | location | Munch({'cloud': '', 'region_name': 'regionOne', > 'zone': None, 'project': Munch({'id': 'e8fd161dc34c421a979a9e6421f823e9', > 'name': 'openstackzuul', 'domain_id': None, 'domain_name': 'Default'})}) | > | name | None > > | > | port_range_max | None > > | > | port_range_min | None > > | > | project_id | e8fd161dc34c421a979a9e6421f823e9 > > | > | protocol | icmp > > | > | remote_group_id | None > > | > | remote_ip_prefix | ::/0 > > | > | revision_number | 0 > > | > | security_group_id | bcedc0e0-e2e8-41fc-aeaa-afd2e10c7ab6 > > | > | tags | [] > > | > | updated_at | 2019-07-30T00:50:25Z > > | > > +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > > > On Fri, Sep 13, 2019 at 2:48 PM Lucio Seki wrote: > >> Hmm OK, I'll try to figure out what hacking >> create_neutron_initial_network does... >> >> BTW, I noticed that I can ping6 the router interface at private subnet >> from the DevStack host: >> >> $ ping6 fd12:67:1:1::1 >> PING fd12:67:1:1::1(fd12:67:1:1::1) 56 data bytes >> 64 bytes from fd12:67:1:1::1: icmp_seq=1 ttl=64 time=0.646 ms >> 64 bytes from fd12:67:1:1::1: icmp_seq=2 ttl=64 time=0.095 ms >> 64 bytes from fd12:67:1:1::1: icmp_seq=3 ttl=64 time=0.106 ms >> 64 bytes from fd12:67:1:1::1: icmp_seq=4 ttl=64 time=0.129 ms >> >> And also I can ping6 the public subnet interface from the VM: >> >> root at ubuntu:~# ping6 fd12:67:1::3c >> PING fd12:67:1::3c (fd12:67:1::3c): 56 data bytes >> ping: getnameinfo: Temporary failure in name resolution >> 64 bytes from unknown: icmp_seq=0 ttl=64 time=2.079 ms >> ping: getnameinfo: Temporary failure in name resolution >> 64 bytes from unknown: icmp_seq=1 ttl=64 time=1.385 ms >> ping: getnameinfo: Temporary failure in name resolution >> 64 bytes from unknown: icmp_seq=2 ttl=64 time=0.881 ms >> >> Not sure if it means that there's something missing within the router >> itself... 
>> >> On Fri, Sep 13, 2019 at 2:24 PM Donny Davis wrote: >> >>> Also I have no v6 address on my br-ex >>> >>> On Fri, Sep 13, 2019 at 1:22 PM Donny Davis >>> wrote: >>> >>>> Well here is the output from my rule list that is in prod right now >>>> with ipv6 >>>> >>>> +--------------------------------------+-------------+-----------+------------+-----------------------+ >>>> | ID | IP Protocol | IP Range | Port >>>> Range | Remote Security Group | >>>> >>>> +--------------------------------------+-------------+-----------+------------+-----------------------+ >>>> | 9ab00b6f-2bc2-4554-818d-eff6e0570943 | None | 0.0.0.0/0 | >>>> | None | >>>> | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa | icmp | ::/0 | >>>> | None | >>>> | e7fd4840-5fbd-4709-b918-f80eac5cb6da | None | ::/0 | >>>> | None | >>>> | e9968d53-7efe-4a9e-ad42-1092ffaf52e7 | None | None | >>>> | None | >>>> | ec1ea961-9025-4229-92cf-618026a1851b | None | None | >>>> | None | >>>> >>>> +--------------------------------------+-------------+-----------+------------+-----------------------+ >>>> >>>> >>>> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> | Field | Value >>>> >>>> | >>>> >>>> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> | created_at | 2019-07-30T00:50:25Z >>>> >>>> | >>>> | description | >>>> >>>> | >>>> | direction | ingress >>>> >>>> | >>>> | ether_type | IPv6 >>>> >>>> | >>>> | id | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa >>>> >>>> | >>>> | location | Munch({'cloud': '', 'region_name': 'regionOne', >>>> 'zone': None, 'project': Munch({'id': 'e8fd161dc34c421a979a9e6421f823e9', >>>> 'name': 'openstackzuul', 'domain_id': None, 'domain_name': 'Default'})}) | >>>> | name | None >>>> >>>> | >>>> | port_range_max | None >>>> >>>> | >>>> | port_range_min | None >>>> >>>> | >>>> | project_id | e8fd161dc34c421a979a9e6421f823e9 >>>> >>>> | >>>> | protocol | icmp >>>> >>>> | >>>> | remote_group_id | None >>>> >>>> | >>>> | remote_ip_prefix | ::/0 >>>> >>>> | >>>> | revision_number | 0 >>>> >>>> | >>>> | security_group_id | bcedc0e0-e2e8-41fc-aeaa-afd2e10c7ab6 >>>> >>>> | >>>> | tags | [] >>>> >>>> | >>>> | updated_at | 2019-07-30T00:50:25Z >>>> >>>> | >>>> >>>> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> >>>> >>>> >>>> >>>> >>>> On Fri, Sep 13, 2019 at 9:24 AM Lucio Seki wrote: >>>> >>>>> Hi Donny, following are the rules: >>>>> >>>>> $ openstack security group list --project admin >>>>> >>>>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>>>> | ID | Name | Description >>>>> | Project | Tags | >>>>> >>>>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>>>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | default | Default security >>>>> group | 68e3942285a24fb5bd1aed30e166aaee | [] | >>>>> >>>>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>>>> >>>>> $ openstack security group rule list >>>>> 
d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>>>> >>>>> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >>>>> | ID | IP Protocol | IP Range | Port >>>>> Range | Remote Security Group | >>>>> >>>>> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >>>>> | 38394345-3e44-4284-a519-cdd8af020f30 | tcp | ::/0 | >>>>> 22:22 | None | >>>>> | 40881f76-c87f-4685-b3af-c3497dd44837 | None | None | >>>>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | >>>>> | 56d4ae52-195e-48df-871e-dc70b899b7ba | None | None | >>>>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | >>>>> | 759edd06-b698-45ca-94cd-44e0cc2cc848 | ipv6-icmp | None | >>>>> | None | >>>>> | 762effae-b8e5-42ac-ba99-e85a7bc42455 | tcp | ::/0 | >>>>> 22:22 | None | >>>>> | 81f3588d-4159-4af2-ad50-ff6b76add9cf | ipv6-icmp | None | >>>>> | None | >>>>> >>>>> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >>>>> >>>>> $ openstack security group rule show >>>>> 759edd06-b698-45ca-94cd-44e0cc2cc848 >>>>> >>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> | Field | Value >>>>> >>>>> | >>>>> >>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> | created_at | 2019-09-03T16:51:41Z >>>>> >>>>> | >>>>> | description | >>>>> >>>>> | >>>>> | direction | egress >>>>> >>>>> | >>>>> | ether_type | IPv6 >>>>> >>>>> | >>>>> | id | 759edd06-b698-45ca-94cd-44e0cc2cc848 >>>>> >>>>> | >>>>> | location | Munch({'project': Munch({'domain_id': 'default', >>>>> 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': >>>>> None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | >>>>> | name | None >>>>> >>>>> | >>>>> | port_range_max | None >>>>> >>>>> | >>>>> | port_range_min | None >>>>> >>>>> | >>>>> | project_id | 68e3942285a24fb5bd1aed30e166aaee >>>>> >>>>> | >>>>> | protocol | ipv6-icmp >>>>> >>>>> | >>>>> | remote_group_id | None >>>>> >>>>> | >>>>> | remote_ip_prefix | None >>>>> >>>>> | >>>>> | revision_number | 0 >>>>> >>>>> | >>>>> | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>>>> >>>>> | >>>>> | tags | [] >>>>> >>>>> | >>>>> | updated_at | 2019-09-03T16:51:41Z >>>>> >>>>> | >>>>> >>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> >>>>> $ openstack security group rule show >>>>> 81f3588d-4159-4af2-ad50-ff6b76add9cf >>>>> >>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> | Field | Value >>>>> >>>>> | >>>>> >>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> | created_at | 2019-09-03T16:51:30Z >>>>> >>>>> | >>>>> | description | >>>>> >>>>> | >>>>> | direction | ingress >>>>> >>>>> | >>>>> 
| ether_type | IPv6 >>>>> >>>>> | >>>>> | id | 81f3588d-4159-4af2-ad50-ff6b76add9cf >>>>> >>>>> | >>>>> | location | Munch({'project': Munch({'domain_id': 'default', >>>>> 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': >>>>> None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | >>>>> | name | None >>>>> >>>>> | >>>>> | port_range_max | None >>>>> >>>>> | >>>>> | port_range_min | None >>>>> >>>>> | >>>>> | project_id | 68e3942285a24fb5bd1aed30e166aaee >>>>> >>>>> | >>>>> | protocol | ipv6-icmp >>>>> >>>>> | >>>>> | remote_group_id | None >>>>> >>>>> | >>>>> | remote_ip_prefix | None >>>>> >>>>> | >>>>> | revision_number | 0 >>>>> >>>>> | >>>>> | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>>>> >>>>> | >>>>> | tags | [] >>>>> >>>>> | >>>>> | updated_at | 2019-09-03T16:51:30Z >>>>> >>>>> | >>>>> >>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> >>>>> >>>>> On Fri, Sep 13, 2019 at 10:16 AM Donny Davis >>>>> wrote: >>>>> >>>>>> Security group rules? >>>>>> >>>>>> Donny Davis >>>>>> c: 805 814 6800 >>>>>> >>>>>> On Thu, Sep 12, 2019, 5:53 PM Lucio Seki wrote: >>>>>> >>>>>>> Hi folks, I'm having troubles to ping6 a VM running over DevStack >>>>>>> from its hypervisor. >>>>>>> Could you please help me troubleshooting it? >>>>>>> >>>>>>> I deployed DevStack with NEUTRON_CREATE_INITIAL_NETWORKS=False, >>>>>>> and manually created the networks, subnets and router. Following is >>>>>>> my router: >>>>>>> >>>>>>> $ openstack router show router1 -c external_gateway_info -c >>>>>>> interfaces_info >>>>>>> >>>>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>> | Field | Value >>>>>>> >>>>>>> >>>>>>> | >>>>>>> >>>>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>> | external_gateway_info | {"network_id": >>>>>>> "b87048ed-1be9-4f31-8d7e-fe74921aeec4", "enable_snat": true, >>>>>>> "external_fixed_ips": [{"subnet_id": >>>>>>> "28a00bc3-b30b-456f-b26a-44b50d37183f", "ip_address": "10.2.0.199"}, >>>>>>> {"subnet_id": "a9729beb-b297-4fec-8ec3-7703f7f6f4bc", "ip_address": >>>>>>> "fd12:67:1::3c"}]} | >>>>>>> | interfaces_info | [{"subnet_id": >>>>>>> "081e8508-4ceb-4aaf-bf91-36a1e22a768c", "ip_address": "fd12:67:1:1::1", >>>>>>> "port_id": "75391abd-8ac8-41f8-acf8-3dfaf2a6b08f"}] >>>>>>> >>>>>>> | >>>>>>> >>>>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>> >>>>>>> I'm trying to ping6 the following VM: >>>>>>> >>>>>>> $ openstack server list >>>>>>> >>>>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>>>> | ID | Name | Status | Networks >>>>>>> | 
Image | Flavor | >>>>>>> >>>>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>>>> | 938854d0-80e9-45b2-bc29-8fe7651ffa93 | manila1 | ACTIVE | >>>>>>> private1=fd12:67:1:1:f816:3eff:fe0e:17c3 | manila | manila | >>>>>>> >>>>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>>>> >>>>>>> I intend to reach it via br-ex interface of the hypervisor: >>>>>>> >>>>>>> $ ip a show dev br-ex >>>>>>> 9: br-ex: mtu 1500 qdisc noqueue >>>>>>> state UNKNOWN group default qlen 1000 >>>>>>> link/ether 0e:82:a1:ba:77:4c brd ff:ff:ff:ff:ff:ff >>>>>>> inet6 fd12:67:1::1/64 scope global >>>>>>> valid_lft forever preferred_lft forever >>>>>>> inet6 fe80::c82:a1ff:feba:774c/64 scope link >>>>>>> valid_lft forever preferred_lft forever >>>>>>> >>>>>>> The hypervisor has the following routes: >>>>>>> >>>>>>> $ ip -6 route >>>>>>> fd12:67:1:1::/64 via fd12:67:1::3c dev br-ex metric 1024 pref medium >>>>>>> fe80::/64 dev ens3 proto kernel metric 256 pref medium >>>>>>> fe80::/64 dev br-ex proto kernel metric 256 pref medium >>>>>>> fe80::/64 dev br-int proto kernel metric 256 pref medium >>>>>>> fe80::/64 dev tapa5cf4799-9f proto kernel metric 256 pref medium >>>>>>> >>>>>>> And within the VM has the following routes: >>>>>>> >>>>>>> root at ubuntu:~# ip -6 route >>>>>>> root at ubuntu:~# ip -6 route >>>>>>> fd12:67:1::/64 via fd12:67:1:1::1 dev ens3 metric 1024 pref medium >>>>>>> fd12:67:1:1::/64 dev ens3 proto kernel metric 256 expires 86360sec >>>>>>> pref medium >>>>>>> fe80::/64 dev ens3 proto kernel metric 256 pref medium >>>>>>> default via fe80::f816:3eff:feb3:bd56 dev ens3 proto ra metric 1024 >>>>>>> expires 260sec hoplimit 64 pref medium >>>>>>> >>>>>>> Though the ping6 from VM to hypervisor doesn't work: >>>>>>> root at ubuntu:~# ping6 fd12:67:1::1 -c4 >>>>>>> PING fd12:67:1::1 (fd12:67:1::1): 56 data bytes >>>>>>> --- fd12:67:1::1 ping statistics --- >>>>>>> 4 packets transmitted, 0 packets received, 100% packet loss >>>>>>> >>>>>>> I'm able to tcpdump inside the router1 netns and see that request >>>>>>> packet is passing there, but can't see any reply packets: >>>>>>> >>>>>>> $ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 >>>>>>> tcpdump -l -i any icmp6 >>>>>>> tcpdump: verbose output suppressed, use -v or -vv for full protocol >>>>>>> decode >>>>>>> listening on any, link-type LINUX_SLL (Linux cooked), capture size >>>>>>> 262144 bytes >>>>>>> 21:29:29.351358 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>>>> ICMP6, echo request, seq 0, length 64 >>>>>>> 21:29:30.033316 IP6 fe80::f816:3eff:feb3:bd56 > >>>>>>> fe80::f816:3eff:fe0e:17c3: ICMP6, neighbor solicitation, who has >>>>>>> fe80::f816:3eff:fe0e:17c3, length 32 >>>>>>> 21:29:30.035807 IP6 fe80::f816:3eff:fe0e:17c3 > >>>>>>> fe80::f816:3eff:feb3:bd56: ICMP6, neighbor advertisement, tgt is >>>>>>> fe80::f816:3eff:fe0e:17c3, length 24 >>>>>>> 21:29:30.353646 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>>>> ICMP6, echo request, seq 1, length 64 >>>>>>> 21:29:31.355410 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>>>> ICMP6, echo request, seq 2, length 64 >>>>>>> 21:29:32.357239 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>>>> ICMP6, echo request, seq 3, length 64 >>>>>>> >>>>>>> The same happens from hypervisor to VM. I only acan see the request >>>>>>> packets, but no reply packets. 
>>>>>>> >>>>>>> Thanks in advance, >>>>>>> Lucio Seki >>>>>>> >>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Sat Sep 14 02:50:36 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 13 Sep 2019 19:50:36 -0700 Subject: [all][PTL] Call for Cycle Highlights for Train In-Reply-To: References: Message-ID: Last call for cycle highlights! If you don't know how to start, take a look at some of the others that have gotten merged[1][2][3]. -Kendall (diablo_rojo) [1]https://review.opendev.org/#/c/681896/ [2]https://review.opendev.org/#/c/681943/ [3]https://review.opendev.org/#/c/680675/ On Wed, 11 Sep 2019, 5:52 pm Kendall Nelson, wrote: > Reminder that cycle highlights are due the end of this week! > > -Kendall (diablo_rojo) > > On Thu, 5 Sep 2019, 11:48 am Kendall Nelson, > wrote: > >> Hello Everyone! >> >> As you may or may not have read last week in the release update from >> Sean, its time to call out 'cycle-highlights' in your deliverables! >> >> As PTLs, you probably get many pings towards the end of every release >> cycle by various parties (marketing, management, journalists, etc) asking >> for highlights of what is new and what significant changes are coming in >> the new release. By putting them all in the same place it makes them easy >> to reference because they get compiled into a pretty website like this from >> Rocky[1] or this one for Stein[2]. >> >> We don't need a fully fledged marketing message, just a few highlights >> (3-4 ideally), from each project team. >> >> *The deadline for cycle highlights is the end of the R-5 week [3] on Sept >> 13th.* >> >> How To Reminder: >> ------------------------- >> >> Simply add them to the deliverables/train/$PROJECT.yaml in the >> openstack/releases repo similar to this: >> >> cycle-highlights: >> - Introduced new service to use unused host to mine bitcoin. >> >> The formatting options for this tag are the same as what you are probably >> used to with Reno release notes. >> >> Also, you can check on the formatting of the output by either running >> locally: >> >> tox -e docs >> >> And then checking the resulting doc/build/html/train/highlights.html >> file or the output of the build-openstack-sphinx-docs job under >> html/train/highlights.html. >> >> Thanks :) >> -Kendall Nelson (diablo_rojo) >> >> [1] https://releases.openstack.org/rocky/highlights.html >> [2] https://releases.openstack.org/stein/highlights.html >> [3] https://releases.openstack.org/train/schedule.html >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Sat Sep 14 02:57:55 2019 From: zbitter at redhat.com (Zane Bitter) Date: Fri, 13 Sep 2019 22:57:55 -0400 Subject: Execute a script on every object upload event (Swift+aodh) In-Reply-To: References: Message-ID: <8e112e0f-4bf9-565d-12ea-835bff5dac6f@redhat.com> On 13/09/19 6:29 PM, Anmar Salih wrote: > Hey all, > > Need help to configure swift and aodh. The idea is to trigger aodh alarm > on very object upload event on swift. Once the alarm triggered, a > small script should be executed. So the sequence of operations should be > like this: > 1- Object just uploaded to swift container > 2- Alarm triggered by aodh > 3- Once alarm triggered , execute python script. > > I am using Devstack Stein release installed on virtual box. > > Best regards. > Anmar Salih Have you looked at https://docs.openstack.org/storlets/latest/ ? 
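If you stay on the aodh route, one detail worth knowing for step 3: an alarm's action can be a plain HTTP(S) URL, and aodh POSTs a JSON payload describing the alarm transition to that URL when the alarm fires, so the "execute a python script" part can just be a small webhook listener. A rough, untested sketch follows; the port and the exact payload fields are assumptions on my part, so check the aodh notifier documentation for the precise body it sends:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class AlarmHandler(BaseHTTPRequestHandler):
    # aodh's http notifier POSTs a JSON document when the alarm changes
    # state; parse it and kick off whatever processing you need.
    def do_POST(self):
        length = int(self.headers.get('Content-Length', 0))
        payload = json.loads(self.rfile.read(length) or b'{}')
        # fields like alarm_name/current are what I'd expect; verify locally
        print('alarm fired:', payload.get('alarm_name'), payload.get('current'))
        # ... run your own script or function here ...
        self.send_response(200)
        self.end_headers()


if __name__ == '__main__':
    # the alarm would be created with --alarm-action pointing at this host:port
    HTTPServer(('0.0.0.0', 8080), AlarmHandler).serve_forever()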
From gmann at ghanshyammann.com Sat Sep 14 05:19:19 2019
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Sat, 14 Sep 2019 14:19:19 +0900
Subject: [all] stable/ocata gate failure
Message-ID: <16d2e36161f.c54e46a1161326.4333158062553456987@ghanshyammann.com>

Hello Everyone,

In case you have not noticed, the stable/ocata gate is blocked: the 'legacy-tempest-dsvm-neutron-full/-*' jobs are failing due to the latest Tempest changes.

Tempest started strict JSON schema validation for the Volume APIs, which caught the failure; in other words, Tempest master can no longer be used for Ocata testing. More details- https://bugs.launchpad.net/tempest/+bug/1843762

As per the Tempest stable branch testing policy[1], Tempest does not support stable/ocata (which is EM) in the current development cycle. The stable branches currently supported by Tempest are Queens, Rocky, Stein and the ongoing Train. We can keep using Tempest master on EM stable branches as long as it runs successfully, and if it starts failing (which is the current case for stable/ocata) then we use a Tempest tag to test that EM stable branch.

To unblock the stable/ocata gate, I am trying to install Tempest 20.0.0 (the compatible version for Ocata) in the ocata gate - https://review.opendev.org/#/c/681950/
The fix is not working as of now (it still installs Tempest master). I will debug that later (my current priority is the Train feature freeze).

[1] https://docs.openstack.org/tempest/latest/stable_branch_support_policy.html

-gmann

From antonio.ojea.garcia at gmail.com Sat Sep 14 09:01:56 2019
From: antonio.ojea.garcia at gmail.com (Antonio Ojea)
Date: Sat, 14 Sep 2019 11:01:56 +0200
Subject: [neutron] DevStack with IPv6
In-Reply-To: References: Message-ID:

Can you check if ipv6 forwarding is enabled in the router namespace? net.ipv6.conf.all.forwarding=1

On Sat, 14 Sep 2019 at 02:13, Lucio Seki wrote: > > I recreated my security group rules, to set remote_ip_prefix to ::/0 instead of None as in Donny's environment, but made no difference. :-( > > On Fri, Sep 13, 2019 at 3:55 PM Donny Davis wrote: >> >> So outbound traffic works, but inbound traffic doesn't? >> >> Here is my icmp security group rule for ipv6.
>> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> | Field | Value | >> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> | created_at | 2019-07-30T00:50:25Z | >> | description | | >> | direction | ingress | >> | ether_type | IPv6 | >> | id | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa | >> | location | Munch({'cloud': '', 'region_name': 'regionOne', 'zone': None, 'project': Munch({'id': 'e8fd161dc34c421a979a9e6421f823e9', 'name': 'openstackzuul', 'domain_id': None, 'domain_name': 'Default'})}) | >> | name | None | >> | port_range_max | None | >> | port_range_min | None | >> | project_id | e8fd161dc34c421a979a9e6421f823e9 | >> | protocol | icmp | >> | remote_group_id | None | >> | remote_ip_prefix | ::/0 | >> | revision_number | 0 | >> | security_group_id | bcedc0e0-e2e8-41fc-aeaa-afd2e10c7ab6 | >> | tags | [] | >> | updated_at | 2019-07-30T00:50:25Z | >> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> >> >> >> On Fri, Sep 13, 2019 at 2:48 PM Lucio Seki wrote: >>> >>> Hmm OK, I'll try to figure out what hacking create_neutron_initial_network does... >>> >>> BTW, I noticed that I can ping6 the router interface at private subnet from the DevStack host: >>> >>> $ ping6 fd12:67:1:1::1 >>> PING fd12:67:1:1::1(fd12:67:1:1::1) 56 data bytes >>> 64 bytes from fd12:67:1:1::1: icmp_seq=1 ttl=64 time=0.646 ms >>> 64 bytes from fd12:67:1:1::1: icmp_seq=2 ttl=64 time=0.095 ms >>> 64 bytes from fd12:67:1:1::1: icmp_seq=3 ttl=64 time=0.106 ms >>> 64 bytes from fd12:67:1:1::1: icmp_seq=4 ttl=64 time=0.129 ms >>> >>> And also I can ping6 the public subnet interface from the VM: >>> >>> root at ubuntu:~# ping6 fd12:67:1::3c >>> PING fd12:67:1::3c (fd12:67:1::3c): 56 data bytes >>> ping: getnameinfo: Temporary failure in name resolution >>> 64 bytes from unknown: icmp_seq=0 ttl=64 time=2.079 ms >>> ping: getnameinfo: Temporary failure in name resolution >>> 64 bytes from unknown: icmp_seq=1 ttl=64 time=1.385 ms >>> ping: getnameinfo: Temporary failure in name resolution >>> 64 bytes from unknown: icmp_seq=2 ttl=64 time=0.881 ms >>> >>> Not sure if it means that there's something missing within the router itself... 
>>> >>> On Fri, Sep 13, 2019 at 2:24 PM Donny Davis wrote: >>>> >>>> Also I have no v6 address on my br-ex >>>> >>>> On Fri, Sep 13, 2019 at 1:22 PM Donny Davis wrote: >>>>> >>>>> Well here is the output from my rule list that is in prod right now with ipv6 >>>>> +--------------------------------------+-------------+-----------+------------+-----------------------+ >>>>> | ID | IP Protocol | IP Range | Port Range | Remote Security Group | >>>>> +--------------------------------------+-------------+-----------+------------+-----------------------+ >>>>> | 9ab00b6f-2bc2-4554-818d-eff6e0570943 | None | 0.0.0.0/0 | | None | >>>>> | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa | icmp | ::/0 | | None | >>>>> | e7fd4840-5fbd-4709-b918-f80eac5cb6da | None | ::/0 | | None | >>>>> | e9968d53-7efe-4a9e-ad42-1092ffaf52e7 | None | None | | None | >>>>> | ec1ea961-9025-4229-92cf-618026a1851b | None | None | | None | >>>>> +--------------------------------------+-------------+-----------+------------+-----------------------+ >>>>> >>>>> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> | Field | Value | >>>>> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> | created_at | 2019-07-30T00:50:25Z | >>>>> | description | | >>>>> | direction | ingress | >>>>> | ether_type | IPv6 | >>>>> | id | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa | >>>>> | location | Munch({'cloud': '', 'region_name': 'regionOne', 'zone': None, 'project': Munch({'id': 'e8fd161dc34c421a979a9e6421f823e9', 'name': 'openstackzuul', 'domain_id': None, 'domain_name': 'Default'})}) | >>>>> | name | None | >>>>> | port_range_max | None | >>>>> | port_range_min | None | >>>>> | project_id | e8fd161dc34c421a979a9e6421f823e9 | >>>>> | protocol | icmp | >>>>> | remote_group_id | None | >>>>> | remote_ip_prefix | ::/0 | >>>>> | revision_number | 0 | >>>>> | security_group_id | bcedc0e0-e2e8-41fc-aeaa-afd2e10c7ab6 | >>>>> | tags | [] | >>>>> | updated_at | 2019-07-30T00:50:25Z | >>>>> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On Fri, Sep 13, 2019 at 9:24 AM Lucio Seki wrote: >>>>>> >>>>>> Hi Donny, following are the rules: >>>>>> >>>>>> $ openstack security group list --project admin >>>>>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>>>>> | ID | Name | Description | Project | Tags | >>>>>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>>>>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | default | Default security group | 68e3942285a24fb5bd1aed30e166aaee | [] | >>>>>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>>>>> >>>>>> $ openstack security group rule list d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>>>>> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >>>>>> | ID | IP Protocol | IP Range | Port Range | Remote Security Group | 
>>>>>> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >>>>>> | 38394345-3e44-4284-a519-cdd8af020f30 | tcp | ::/0 | 22:22 | None | >>>>>> | 40881f76-c87f-4685-b3af-c3497dd44837 | None | None | | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | >>>>>> | 56d4ae52-195e-48df-871e-dc70b899b7ba | None | None | | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | >>>>>> | 759edd06-b698-45ca-94cd-44e0cc2cc848 | ipv6-icmp | None | | None | >>>>>> | 762effae-b8e5-42ac-ba99-e85a7bc42455 | tcp | ::/0 | 22:22 | None | >>>>>> | 81f3588d-4159-4af2-ad50-ff6b76add9cf | ipv6-icmp | None | | None | >>>>>> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >>>>>> >>>>>> $ openstack security group rule show 759edd06-b698-45ca-94cd-44e0cc2cc848 >>>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>> | Field | Value | >>>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>> | created_at | 2019-09-03T16:51:41Z | >>>>>> | description | | >>>>>> | direction | egress | >>>>>> | ether_type | IPv6 | >>>>>> | id | 759edd06-b698-45ca-94cd-44e0cc2cc848 | >>>>>> | location | Munch({'project': Munch({'domain_id': 'default', 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | >>>>>> | name | None | >>>>>> | port_range_max | None | >>>>>> | port_range_min | None | >>>>>> | project_id | 68e3942285a24fb5bd1aed30e166aaee | >>>>>> | protocol | ipv6-icmp | >>>>>> | remote_group_id | None | >>>>>> | remote_ip_prefix | None | >>>>>> | revision_number | 0 | >>>>>> | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | >>>>>> | tags | [] | >>>>>> | updated_at | 2019-09-03T16:51:41Z | >>>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>> >>>>>> $ openstack security group rule show 81f3588d-4159-4af2-ad50-ff6b76add9cf >>>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>> | Field | Value | >>>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>> | created_at | 2019-09-03T16:51:30Z | >>>>>> | description | | >>>>>> | direction | ingress | >>>>>> | ether_type | IPv6 | >>>>>> | id | 81f3588d-4159-4af2-ad50-ff6b76add9cf | >>>>>> | location | Munch({'project': Munch({'domain_id': 'default', 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | >>>>>> | name | None | >>>>>> | port_range_max | None | >>>>>> | port_range_min | None | >>>>>> | project_id | 68e3942285a24fb5bd1aed30e166aaee | >>>>>> | protocol | ipv6-icmp | >>>>>> | remote_group_id | None | >>>>>> | remote_ip_prefix | None | >>>>>> | revision_number | 0 
| >>>>>> | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | >>>>>> | tags | [] | >>>>>> | updated_at | 2019-09-03T16:51:30Z | >>>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>> >>>>>> >>>>>> On Fri, Sep 13, 2019 at 10:16 AM Donny Davis wrote: >>>>>>> >>>>>>> Security group rules? >>>>>>> >>>>>>> Donny Davis >>>>>>> c: 805 814 6800 >>>>>>> >>>>>>> On Thu, Sep 12, 2019, 5:53 PM Lucio Seki wrote: >>>>>>>> >>>>>>>> Hi folks, I'm having troubles to ping6 a VM running over DevStack from its hypervisor. >>>>>>>> Could you please help me troubleshooting it? >>>>>>>> >>>>>>>> I deployed DevStack with NEUTRON_CREATE_INITIAL_NETWORKS=False, >>>>>>>> and manually created the networks, subnets and router. Following is my router: >>>>>>>> >>>>>>>> $ openstack router show router1 -c external_gateway_info -c interfaces_info >>>>>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>>> | Field | Value | >>>>>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>>> | external_gateway_info | {"network_id": "b87048ed-1be9-4f31-8d7e-fe74921aeec4", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "28a00bc3-b30b-456f-b26a-44b50d37183f", "ip_address": "10.2.0.199"}, {"subnet_id": "a9729beb-b297-4fec-8ec3-7703f7f6f4bc", "ip_address": "fd12:67:1::3c"}]} | >>>>>>>> | interfaces_info | [{"subnet_id": "081e8508-4ceb-4aaf-bf91-36a1e22a768c", "ip_address": "fd12:67:1:1::1", "port_id": "75391abd-8ac8-41f8-acf8-3dfaf2a6b08f"}] | >>>>>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>>> >>>>>>>> I'm trying to ping6 the following VM: >>>>>>>> >>>>>>>> $ openstack server list >>>>>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>>>>> | ID | Name | Status | Networks | Image | Flavor | >>>>>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>>>>> | 938854d0-80e9-45b2-bc29-8fe7651ffa93 | manila1 | ACTIVE | private1=fd12:67:1:1:f816:3eff:fe0e:17c3 | manila | manila | >>>>>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>>>>> >>>>>>>> I intend to reach it via br-ex interface of the hypervisor: >>>>>>>> >>>>>>>> $ ip a show dev br-ex >>>>>>>> 9: br-ex: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 >>>>>>>> link/ether 0e:82:a1:ba:77:4c brd ff:ff:ff:ff:ff:ff >>>>>>>> inet6 fd12:67:1::1/64 scope global >>>>>>>> valid_lft forever preferred_lft forever >>>>>>>> inet6 fe80::c82:a1ff:feba:774c/64 scope link >>>>>>>> valid_lft forever preferred_lft 
forever
>>>>>>>>
>>>>>>>> The hypervisor has the following routes:
>>>>>>>>
>>>>>>>> $ ip -6 route
>>>>>>>> fd12:67:1:1::/64 via fd12:67:1::3c dev br-ex metric 1024 pref medium
>>>>>>>> fe80::/64 dev ens3 proto kernel metric 256 pref medium
>>>>>>>> fe80::/64 dev br-ex proto kernel metric 256 pref medium
>>>>>>>> fe80::/64 dev br-int proto kernel metric 256 pref medium
>>>>>>>> fe80::/64 dev tapa5cf4799-9f proto kernel metric 256 pref medium
>>>>>>>>
>>>>>>>> And the VM has the following routes:
>>>>>>>>
>>>>>>>> root at ubuntu:~# ip -6 route
>>>>>>>> fd12:67:1::/64 via fd12:67:1:1::1 dev ens3 metric 1024 pref medium
>>>>>>>> fd12:67:1:1::/64 dev ens3 proto kernel metric 256 expires 86360sec pref medium
>>>>>>>> fe80::/64 dev ens3 proto kernel metric 256 pref medium
>>>>>>>> default via fe80::f816:3eff:feb3:bd56 dev ens3 proto ra metric 1024 expires 260sec hoplimit 64 pref medium
>>>>>>>>
>>>>>>>> Though the ping6 from the VM to the hypervisor doesn't work:
>>>>>>>> root at ubuntu:~# ping6 fd12:67:1::1 -c4
>>>>>>>> PING fd12:67:1::1 (fd12:67:1::1): 56 data bytes
>>>>>>>> --- fd12:67:1::1 ping statistics ---
>>>>>>>> 4 packets transmitted, 0 packets received, 100% packet loss
>>>>>>>>
>>>>>>>> I'm able to tcpdump inside the router1 netns and see that the request packets are passing there, but I can't see any reply packets:
>>>>>>>>
>>>>>>>> $ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 tcpdump -l -i any icmp6
>>>>>>>> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
>>>>>>>> listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
>>>>>>>> 21:29:29.351358 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, echo request, seq 0, length 64
>>>>>>>> 21:29:30.033316 IP6 fe80::f816:3eff:feb3:bd56 > fe80::f816:3eff:fe0e:17c3: ICMP6, neighbor solicitation, who has fe80::f816:3eff:fe0e:17c3, length 32
>>>>>>>> 21:29:30.035807 IP6 fe80::f816:3eff:fe0e:17c3 > fe80::f816:3eff:feb3:bd56: ICMP6, neighbor advertisement, tgt is fe80::f816:3eff:fe0e:17c3, length 24
>>>>>>>> 21:29:30.353646 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, echo request, seq 1, length 64
>>>>>>>> 21:29:31.355410 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, echo request, seq 2, length 64
>>>>>>>> 21:29:32.357239 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, echo request, seq 3, length 64
>>>>>>>>
>>>>>>>> The same happens from the hypervisor to the VM. I can only see the request packets, but no reply packets.
>>>>>>>>
>>>>>>>> Thanks in advance,
>>>>>>>> Lucio Seki

From marcin.juszkiewicz at linaro.org Sat Sep 14 16:45:54 2019
From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz)
Date: Sat, 14 Sep 2019 18:45:54 +0200
Subject: [kolla] State of ppc64le support
Message-ID: 

About 2.5 years ago I added AArch64 (64-bit ARM) architecture support to the Kolla project. A side effect of that work was adding ppc64le (64-bit Power, Little Endian) support.

Time passed; from time to time someone jumped into the IRC channel and said that they use it. No one on the core team spent much time on supporting it, as it was outside of our interest.

From time to time I have been reserving a Power machine in Red Hat to do builds and check how we are doing with ppc64le support. This week I did that again.

Of the 3 distributions we target, only the Debian/source combo was buildable.
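For reference, such a check boils down to one build per base/type combination we ship, roughly like:

  kolla-build --base debian --type source
  kolla-build --base centos --type binary
  kolla-build --base ubuntu --type source

(the exact invocations and any extra profile flags here are illustrative, not a copy of what was run). The results this time: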
CentOS builds lack 'rabbitmq' 3.7.10 (we use an external repo) and 'gnocchi' binary images are not buildable due to the lack of some packages (an issue already reported to CentOS by the TripleO team).

Ubuntu builds lack MariaDB 10.3 because the upstream repo is broken: a package index is provided for 'ppc64le' but no packages, so we get 404 errors.

Due to all those issues, and the fact that there are no users of ppc64le Kolla containers, I want to drop support for it in this cycle. Any objections?

From mriedemos at gmail.com Sat Sep 14 17:01:56 2019
From: mriedemos at gmail.com (Matt Riedemann)
Date: Sat, 14 Sep 2019 12:01:56 -0500
Subject: [all] stable/ocata gate failure
In-Reply-To: <16d2e36161f.c54e46a1161326.4333158062553456987 at ghanshyammann.com>
References: <16d2e36161f.c54e46a1161326.4333158062553456987 at ghanshyammann.com>
Message-ID: <437ff66c-aa63-7bcc-d181-13ed1668ac76 at gmail.com>

On 9/14/2019 12:19 AM, Ghanshyam Mann wrote:
> If you have noticed that stable/ocata gate is blocked where 'legacy-tempest-dsvm-neutron-full/-*' job
> is failing due to the latest Tempest changes.
>
> Tempest started the JSON schema strict validation for Volume APIs which caught the failure or you can say
> Tempest master cannot be used in Ocata testing. More details-https://bugs.launchpad.net/tempest/+bug/1843762
>
> As per the Tempest stable branch testing policy[1], Tempest does not support stable/ocata (which is EM) in the
> current development cycle. Current supported stable branches by Tempest are Queens, Rocky, Stein and Train-on-going.
> We can keep using Tempest master on EM stable branches as long as it runs successfully, and if it starts failing, which is the current
> case for stable/ocata, then use a Tempest tag to test that EM stable branch.
>
> To unblock the stable/ocata gate, I am trying to install the Tempest 20.0.0 (compatible version for Ocata) in the ocata gate
> -https://review.opendev.org/#/c/681950/
> Fix is not working as of now (it still installs Tempest master). I will debug that later (my current priority is for Train feature freeze).
>
> [1]https://docs.openstack.org/tempest/latest/stable_branch_support_policy.html

Thanks for the heads up. I agree that being able to continue to run tempest integration jobs on stable/ocata changes, even with a frozen tempest version, is better than not running integration testing on stable/ocata at all. When I was at IBM and we were supporting branches downstream that were end of life upstream, what I'd do was create an internal branch for tempest (stable/ocata in this case) so we'd run against that rather than master tempest, just in case we needed to make changes and couldn't use a tag (back then tags for tempest were also pretty new as I recall). I'm not advocating creating a stable/ocata branch for tempest upstream, I'm just giving an example of one downstream process for this sort of thing.

Alternatively Cinder could fix the API regression, but that would likely be a regression of its own at this point, right? Meaning if they added something to an API response without a microversion and then removed it without a microversion, that's not really helping the situation. As it stands, clients (in this case tempest) have to deal with the API change.

Another alternative would be putting some kind of compat code in tempest for this particular API breakage, but if Tempest isn't going to gate on stable/ocata then that's not really the responsibility of the QA team to carry that compat code.
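For what it's worth, whichever way that goes, pinning in a devstack-based job ultimately amounts to checking out the tag instead of master before tempest is installed, something like:

  cd /opt/stack/tempest
  git checkout 20.0.0

(just a sketch of the idea, not the actual fix under review).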
-- Thanks, Matt From colleen at gazlene.net Sat Sep 14 18:14:37 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Sat, 14 Sep 2019 11:14:37 -0700 Subject: [keystone] Keystone Team Update - Week of 9 September 2019 Message-ID: <000aefc5-8d72-411d-8b42-294569f40314@www.fastmail.com> # Keystone Team Update - Week of 9 September 2019 ## News ### Feature Freeze Status We have a number of policy changes and one spec implementation still crawling through the gate[1]. Since these have all been reviewed and approved we'll continue to babysit them until they have made it. Federated user attributes[2] and expiring group membership[3] are being deferred until next cycle due to insufficient reviews/incomplete implementation. [1[ https://etherpad.openstack.org/p/keystone-train-feature-freeze-todo [2] https://review.opendev.org/#/q/topic:bp/support-federated-attr [3] https://review.opendev.org/#/q/topic:bug/1809116 ## Office Hours When there are topics to cover, the keystone team holds office hours on Tuesdays at 17:00 UTC. The topic for next week's office hour will be: bug triage and prioritizing bugs for RC. The location for next week's office hour will be: https://meet.jit.si/keystone-office-hours Add topics you would like to see covered during office hours to the etherpad: https://etherpad.openstack.org/p/keystone-office-hours-topics ## Recently Merged Changes Search query: https://bit.ly/2pquOwT We merged 41 changes this week. ## Changes that need Attention Search query: https://bit.ly/2tymTje There are 51 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. ### Priority Reviews https://etherpad.openstack.org/p/keystone-train-feature-freeze-todo ## Bugs This week we opened 5 new bugs and closed 7. Bugs opened (5) Bug #1843609 (keystone:High) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1843609 Bug #1843464 (keystone:Medium) opened by Colleen Murphy https://bugs.launchpad.net/keystone/+bug/1843464 Bug #1843925 (keystone:Undecided) opened by Ben Nemec https://bugs.launchpad.net/keystone/+bug/1843925 Bug #1843903 (python-keystoneclient:Undecided) opened by Andrew Berezovskiy https://bugs.launchpad.net/python-keystoneclient/+bug/1843903 Bug #1843931 (oslo.policy:Medium) opened by Ben Nemec https://bugs.launchpad.net/oslo.policy/+bug/1843931 Bugs closed (1) Bug #1843903 (python-keystoneclient:Undecided) https://bugs.launchpad.net/python-keystoneclient/+bug/1843903 Bugs fixed (6) Bug #1750669 (keystone:High) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1750669 Bug #1805368 (keystone:Medium) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1805368 Bug #1805371 (keystone:Medium) fixed by Colleen Murphy https://bugs.launchpad.net/keystone/+bug/1805371 Bug #1818846 (keystone:Low) fixed by Colleen Murphy https://bugs.launchpad.net/keystone/+bug/1818846 Bug #1818850 (keystone:Low) fixed by Colleen Murphy https://bugs.launchpad.net/keystone/+bug/1818850 Bug #1805409 (keystone:Wishlist) fixed by Vishakha Agarwal https://bugs.launchpad.net/keystone/+bug/1805409 ## Milestone Outlook https://releases.openstack.org/train/schedule.html We're now past feature freeze and entering the RC period. Development focus should be on fixing bugs and helping to stabilize CI. RC1 will be cut in 2 weeks. We're also at requirements freeze, which means no new dependencies can be added and versions can't be changed. We're also at soft string freeze, so be mindful of checking that changes are not modifying strings. 
## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From abishop at redhat.com Sat Sep 14 21:05:22 2019 From: abishop at redhat.com (Alan Bishop) Date: Sat, 14 Sep 2019 14:05:22 -0700 Subject: [tripleo] Deprecating paunch CLI? In-Reply-To: References: Message-ID: On Fri, Sep 13, 2019 at 3:06 PM Emilien Macchi wrote: > With our long-term goal to simplify TripleO and focus on what people > actually deploy and how they operate their clouds, it appears that the > Paunch CLI hasn't been a critical piece in our project and I propose that > we deprecate it; create an Ansible module to call Paunch as a library only. > > I've been playing with it a little today: > https://review.opendev.org/#/c/682093/ > https://review.opendev.org/#/c/682094/ > > Here is how you would call paunch: > - name: Start containers for step {{ step }} > paunch: > config: > "/var/lib/tripleo-config/hashed-container-startup-config-step_{{ step > }}.json" > config_id: "tripleo_step{{ step }}" > action: apply > container_cli: "{{ container_cli }}" > managed_by: "tripleo-{{ tripleo_role_name }}" > > A few benefits: > - Deployment tasks in THT would call the new module instead of a shell > command > - More Pythonic and clean for Ansible, to interact with the actual task > during the run > - Removing some code in Paunch, make it easier to maintain for us > > For now, the Ansible module only covers "paunch apply", we will probably > cover "delete" and "cleanup" eventually. > The paunch cli's "print-cmd" action has been occasionally useful as a debug aid. Will this info be available through some other means? Alan Please let me know if you have any questions or concerns, > -- > Emilien Macchi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rony.khan at brilliant.com.bd Sun Sep 15 05:36:16 2019 From: rony.khan at brilliant.com.bd (Md. Farhad Hasan Khan) Date: Sun, 15 Sep 2019 11:36:16 +0600 Subject: Rabbitmq error report Message-ID: <163601d56b87$7e68ebe0$7b3ac3a0$@brilliant.com.bd> Hi, [root at controller1 ~]# rabbitmqctl list_queues |grep notification notifications.debug 0 notifications.critical 0 notifications.error 0 notifications.info 0 notifications.warn 0 notifications.audit 0 notifications.sample 0 versioned_notifications.error 6 [root at controller1 ~]# From: Md. Farhad Hasan Khan Sent: Thursday, September 12, 2019 3:29 PM To: 'OpenStack Discuss' Subject: Rabbitmq error report Hi, I'm getting this error continuously in rabbitmq log. Though all operation going normal, but slow. Sometimes taking long time to perform operation. Please help me to solve this. rabbitmq version: rabbitmq_server-3.6.16 =ERROR REPORT==== 12-Sep-2019::13:04:55 === Channel error on connection <0.8105.3> (192.168.21.56:60116 -> 192.168.21.11:5672, vhost: '/', user: 'openstack'), channel 1: operation queue.declare caused a channel exception not_found: failed to perform operation on queue 'versioned_notifications.info' in vhost '/' due to timeout =WARNING REPORT==== 12-Sep-2019::13:04:55 === closing AMQP connection <0.8105.3> (192.168.21.56:60116 -> 192.168.21.11:5672 - nova-compute:3493037:e6757c9b-1cdc-43cd-bfd3-dcb58aa4974a, vhost: '/', user: 'openstack'): client unexpectedly closed TCP connection Thanks & B'Rgds, Rony -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From miguel at mlavalle.com Sun Sep 15 18:17:22 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Sun, 15 Sep 2019 13:17:22 -0500 Subject: [neutron] Bug deputy report week of September 9th Message-ID: Hi, I was the bug deputy for the week of September 9th. Here's the list of what we got reported: Critical: - https://bugs.launchpad.net/neutron/+bug/1843413 neutron-tempest-iptables_hybrid-fedora job is failing with RETRY_LIMIT constantly Status: Assigned to slaweq. Job was made non-voting temporarily while bug is fixed High: - https://bugs.launchpad.net/neutron/+bug/1843478 Mitigate frequent fixtures._fixtures.timeout.TimeoutException in UTs and FTs Status: Assigned to ralonsoh: Proposed fix: https://review.opendev.org/#/c/681432/ Medium: - https://bugs.launchpad.net/neutron/+bug/1843418 Functional tests shouldn't fail if kill command will have "no such process" during cleanup Status: Assigned to ralonsoh. Proposed fix: https://review.opendev.org/#/c/681671/ - https://bugs.launchpad.net/neutron/+bug/1843425 br-int lose flows ephemerally due to unnecessary flow operation Status: In progress. Proposed fix: https://review.opendev.org/#/c/681462/ - https://bugs.launchpad.net/neutron/+bug/1843428 List port by mac address is case sensitive Status: Fix in progress. Proposed fix: https://review.opendev.org/#/c/681390 - https://bugs.launchpad.net/neutron/+bug/1843446 Implement "kill" operation using python method os.kill() Status: Assigned to ralonsoh: Proposed fix: https://review.opendev.org/#/c/681671/ - https://bugs.launchpad.net/neutron/+bug/1843870 ovsdb monitor ignores modified ports Status: proposed fix: https://review.opendev.org/#/c/681984 - https://bugs.launchpad.net/neutron/+bug/1843889 Windows: IPv6 tunnel endpoints Status: proposed fix: https://review.opendev.org/#/c/682031 Incomplete: - https://bugs.launchpad.net/neutron/+bug/1843359 The iptables rules are covered when add a port from the FW group Status: requested more data from submitter - https://bugs.launchpad.net/neutron/+bug/1843801 metadata-proxy process stops listening on port 80 Status: Incomplete. Submitter was directed to use haproxy >= 1.8.15 Invalid: - https://bugs.launchpad.net/neutron/+bug/1843379 Tagging is not work for tags of QoS Policy Status: submitter was using invalid resource for the request RFE: - https://bugs.launchpad.net/neutron/+bug/1843165 RFE: Adding support for direct ports with qos in ovs Status: Under discussion with submitter - https://bugs.launchpad.net/neutron/+bug/1843924 [RFE] Create optional bulk resource_extend Status: Under discussion with submitter -------------- next part -------------- An HTML attachment was scrubbed... URL: From renat.akhmerov at gmail.com Mon Sep 16 03:23:46 2019 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Mon, 16 Sep 2019 10:23:46 +0700 Subject: [mistral] cron triggers execution fails on identity:validate_token with non-admin users In-Reply-To: References: <241f5d5e-8b21-9081-c1d1-66e908047335@everyware.ch> Message-ID: <46c4523f-8d63-4c13-898c-a636f38054f5@Spark> Hi! Are you aware of other issues with cron triggers and trusts? I’d like to reconcile all of that somehow. The users who I personally work with don’t use cron triggers so I don’t have that much practical experience with them. Thanks Renat Akhmerov @Nokia On 13 Sep 2019, 20:34 +0700, Francois Scheurer , wrote: > Hi Sa Pham > > Yes this is the good one. > Bo Tran pointed it to me yesterday as well and it fixed the issue. 
> See also: https://bugs.launchpad.net/mistral/+bug/1843175 > Many Thanks to both of you ! > > Best Regards > Francois Scheurer > > > > On 9/13/19 3:23 PM, Sa Pham wrote: > > Hi Francois, > > > > You can try this patch: https://review.opendev.org/#/c/680858/ > > > > Sa Pham > > > > > On Thu, Sep 12, 2019 at 11:49 PM Francois Scheurer wrote: > > > > Hello > > > > > > > > > > > > Apparently other people have the same issue and cannot use cron triggers anymore: > > > > https://bugs.launchpad.net/mistral/+bug/1843175 > > > > > > > > We also tried with following patch installed but the same error persists: > > > > https://opendev.org/openstack/mistral/commit/6102c5251e29c1efe73c92935a051feff0f649c7?style=split > > > > > > > > > > > > Cheers > > > > Francois > > > > > > > > > > > > > > > > On 9/9/19 6:23 PM, Francois Scheurer wrote: > > > > > Dear All > > > > > > > > > > We are using Mistral 7.0.1.1 with  Openstack Rocky. (with federated users) > > > > > We can create and execute a workflow via horizon, but cron triggers always fail with this error: > > > > >     { > > > > >         "result": > > > > >             "The action raised an exception [ > > > > >                     action_ex_id=ef878c48-d0ad-4564-9b7e-a06f07a70ded, > > > > >                     action_cls='', > > > > >                     attributes='{u'client_method_name': u'servers.find'}', > > > > >                     params='{ > > > > >                         u'action_region': u'ch-zh1', > > > > >                         u'name': u'42724489-1912-44d1-9a59-6c7a4bebebfa' > > > > >                     }' > > > > >                 ] > > > > >                 \n NovaAction.servers.find failed: You are not authorized to perform the requested action: identity:validate_token. (HTTP 403) (Request-ID: req-ec1aea36-c198-4307-bf01-58aca74fad33) > > > > >             " > > > > >     } > > > > > Adding the role admin or service to the user logged in horizon is "fixing" the issue, I mean that the cron trigger then works as expected, > > > > > but it would be obviously a bad idea to do this for all normal users ;-) > > > > > So my question: is it a config problem on our side ? is it a known bug? or is it a feature in the sense that cron triggers are for normal users? > > > > > > > > > > After digging in the keystone debug logs (see at the end below), I found that RBAC check identity:validate_token an deny the authorization. > > > > > But according to the policy.json (in keystone and in horizon), rule:owner should be enough to grant it...: > > > > >             "identity:validate_token": "rule:service_admin_or_owner", > > > > >                 "service_admin_or_owner": "rule:service_or_admin or rule:owner", > > > > >                     "service_or_admin": "rule:admin_required or rule:service_role", > > > > >                         "service_role": "role:service", > > > > >                     "owner": "user_id:%(user_id)s or user_id:%(target.token.user_id)s", > > > > > Thank you in advance for your help. 
> > > > > > > > > > Best Regards > > > > > Francois Scheurer > > > > > > > > > > > > > > > > > > > > Keystone logs: > > > > >         2019-09-05 09:38:00.902 29 DEBUG keystone.policy.backends.rules [req-1a276b9d-8276-4ec3-b516-f51f86cd1df6 fsc fscproject - testdom testdom] > > > > >             enforce identity:validate_token: > > > > >             { > > > > >                'service_project_id':None, > > > > >                'service_user_id':None, > > > > >                'service_user_domain_id':None, > > > > >                'service_project_domain_id':None, > > > > >                'trustor_id':None, > > > > >                'user_domain_id':u'testdom', > > > > >                'domain_id':None, > > > > >                'trust_id':u'mytrustid', > > > > >                'project_domain_id':u'testdom', > > > > >                'service_roles':[], > > > > >                'group_ids':[], > > > > >                'user_id':u'fsc', > > > > >                'roles':[ > > > > >                   u'_member_', > > > > >                   u'creator', > > > > >                   u'reader', > > > > >                   u'heat_stack_owner', > > > > >                   u'member', > > > > >                   u'load-balancer_member'], > > > > >                'system_scope':None, > > > > >                'trustee_id':None, > > > > >                'domain_name':None, > > > > >                'is_admin_project':True, > > > > >                'token':, > > > > >                'project_id':u'fscproject' > > > > >             } enforce /var/lib/kolla/venv/local/lib/python2.7/site-packages/keystone/policy/backends/rules.py:33 > > > > >         2019-09-05 09:38:00.920 29 WARNING keystone.common.wsgi [req-1a276b9d-8276-4ec3-b516-f51f86cd1df6 fsc fscproject - testdom testdom] > > > > >             You are not authorized to perform the requested action: identity:validate_token.: ForbiddenAction: You are not authorized to perform the requested action: identity:validate_token. > > > > > > > > > > -- > > > > > > > > > > > > > > > EveryWare AG > > > > > François Scheurer > > > > > Senior Systems Engineer > > > > > Zurlindenstrasse 52a > > > > > CH-8003 Zürich > > > > > > > > > > tel: +41 44 466 60 00 > > > > > fax: +41 44 466 60 10 > > > > > mail: francois.scheurer at everyware.ch > > > > > web: http://www.everyware.ch > > > > -- > > > > > > > > > > > > EveryWare AG > > > > François Scheurer > > > > Senior Systems Engineer > > > > Zurlindenstrasse 52a > > > > CH-8003 Zürich > > > > > > > > tel: +41 44 466 60 00 > > > > fax: +41 44 466 60 10 > > > > mail: francois.scheurer at everyware.ch > > > > web: http://www.everyware.ch > > > > > > -- > > Sa Pham Dang > > Master Student - Soongsil University > > Kakaotalk: sapd95 > > Skype: great_bn > > > > > -- > > > EveryWare AG > François Scheurer > Senior Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: francois.scheurer at everyware.ch > web: http://www.everyware.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: From dh3 at sanger.ac.uk Mon Sep 16 08:21:00 2019 From: dh3 at sanger.ac.uk (Dave Holland) Date: Mon, 16 Sep 2019 09:21:00 +0100 Subject: [tripleo] Deprecating paunch CLI? [EXT] In-Reply-To: References: Message-ID: <20190916082059.GA10148@sanger.ac.uk> We've found "paunch debug" useful in tracking down container issues that we've reported to RH and then fixed, e.g. 
when diagnosing a too-low file handle limit: paunch debug --file /var/lib/tripleo-config/hashed-docker-container-startup-config-step_4.json --overrides '{ "ulimit": ["nofile=9999"] }' --container neutron_l3_agent --action run Will there be a way to achieve this run-with-overrides functionality without the CLI? Thanks, Dave -- ** Dave Holland ** Systems Support -- Informatics Systems Group ** ** 01223 496923 ** Wellcome Sanger Institute, Hinxton, UK ** On Fri, Sep 13, 2019 at 06:00:30PM -0400, Emilien Macchi wrote: > With our long-term goal to simplify TripleO and focus on what people > actually deploy and how they operate their clouds, it appears that > the Paunch CLI hasn't been a critical piece in our project and I > propose that we deprecate it; create an Ansible module to call Paunch > as a library only. > I've been playing with it a little today: > [1]https://review.opendev.org/#/c/682093/ [review.opendev.org] > [2]https://review.opendev.org/#/c/682094/ [review.opendev.org] > Here is how you would call paunch: > - name: Start containers for step {{ step }} > paunch: > config: > "/var/lib/tripleo-config/hashed-container-startup-config-step_{{ step > }}.json" > config_id: "tripleo_step{{ step }}" > action: apply > container_cli: "{{ container_cli }}" > managed_by: "tripleo-{{ tripleo_role_name }}" > A few benefits: > - Deployment tasks in THT would call the new module instead of a > shell command > - More Pythonic and clean for Ansible, to interact with the actual > task during the run > - Removing some code in Paunch, make it easier to maintain for us > For now, the Ansible module only covers "paunch apply", we will > probably cover "delete" and "cleanup" eventually. > Please let me know if you have any questions or concerns, > -- > Emilien Macchi > > References > > 1. https://urldefense.proofpoint.com/v2/url?u=https-3A__review.opendev.org_-23_c_682093_&d=DwMFaQ&c=D7ByGjS34AllFgecYw0iC6Zq7qlm8uclZFI0SqQnqBo&r=64bKjxgut4Pa0xs5b84yPg&m=UzaP5_-Gt5C5Oyp0rQnntvqGufCQyDrPINAQB-a9l6g&s=aoB_wM3phD5R4iJA6DqIp1v7NJIV8fxQA41a6OyfIYI&e= > 2. https://urldefense.proofpoint.com/v2/url?u=https-3A__review.opendev.org_-23_c_682094_&d=DwMFaQ&c=D7ByGjS34AllFgecYw0iC6Zq7qlm8uclZFI0SqQnqBo&r=64bKjxgut4Pa0xs5b84yPg&m=UzaP5_-Gt5C5Oyp0rQnntvqGufCQyDrPINAQB-a9l6g&s=gWOvDz_lchmRc5im_2FHvqqo0s7pLB0DtNl4NZ83vTg&e= -- The Wellcome Sanger Institute is operated by Genome Research Limited, a charity registered in England with number 1021457 and a company registered in England with number 2742969, whose registered office is 215 Euston Road, London, NW1 2BE. From lennyb at mellanox.com Mon Sep 16 08:50:17 2019 From: lennyb at mellanox.com (Lenny Verkhovsky) Date: Mon, 16 Sep 2019 08:50:17 +0000 Subject: [zuul3] zuul2 -> zuul3 migration Message-ID: Hi Team, We would like to migrate our Third Party CI[1] from zuul2 to zuul3. We have a lot of Jenkins jobs initially based on infra project-config-example But I guess we need to re write all the jobs now to support ansible. Any guide/example/tips are highly appreciated. [1] https://wiki.openstack.org/wiki/ThirdPartySystems/Mellanox_CI [2] https://github.com/openstack-infra/project-config-example/tree/master/jenkins/jobs Best Regards Lenny Verkhovsky (aka lennyb) Mellanox Technologies office: +972 74 712 92 44 fax: +972 74 712 91 11 mobile: +972 54 554 02 33 irc: lennyb -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 
From gmann at ghanshyammann.com Mon Sep 16 09:53:58 2019
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Mon, 16 Sep 2019 18:53:58 +0900
Subject: [all][interop][cinder][qa] API changes with/without microversion and Tempest verification of API interoperability
Message-ID: <16d397e41f7.12873dbb126838.8168349135797367489 at ghanshyammann.com>

Hello Everyone,

As per the discussion on the ML, Tempest started the JSON schema strict validation for Volume API responses [1]. Because it may affect the interop certification, it was explained to the Interop team as well as in the Board of Directors meeting[2].

In Train, Tempest started implementing the validation and found an API change where a new field was added to an API response without versioning[3] (Cinder has an API microversion mechanism). IMO, that was not the correct way to change the API: as per the API-WG guidelines[4], any field added/modified/removed in an API should be done with a microversion (meaning old versions/users are not affected by that change), and that is a must for API interoperability.

With JSON schema validation, Tempest verifies the API interoperability behaviour recommended by the API-WG. But as per an IRC conversation with the cinder team, we have different opinions on API interoperability and on how the API should be changed under the microversion mechanism. I would like to reach a conclusion on this, so that Tempest either keeps the strict validation for the Volume API or leaves it out.

[1] http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000358.html
[2]
- http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003652.html
- http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003655.html
[3] https://bugs.launchpad.net/tempest/+bug/1843762 https://review.opendev.org/#/c/439461/
[4] https://specs.openstack.org/openstack/api-wg/guidelines/api_interoperability.html

-gmann

From gmann at ghanshyammann.com Mon Sep 16 09:57:33 2019
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Mon, 16 Sep 2019 18:57:33 +0900
Subject: [all] stable/ocata gate failure
In-Reply-To: <437ff66c-aa63-7bcc-d181-13ed1668ac76 at gmail.com>
References: <16d2e36161f.c54e46a1161326.4333158062553456987 at ghanshyammann.com> <437ff66c-aa63-7bcc-d181-13ed1668ac76 at gmail.com>
Message-ID: <16d398187fb.f6c6bb5927020.4472876578749508969 at ghanshyammann.com>

---- On Sun, 15 Sep 2019 02:01:56 +0900 Matt Riedemann wrote ----
> On 9/14/2019 12:19 AM, Ghanshyam Mann wrote:
> > If you have noticed that stable/ocata gate is blocked where 'legacy-tempest-dsvm-neutron-full/-*' job
> > is failing due to the latest Tempest changes.
> >
> > Tempest started the JSON schema strict validation for Volume APIs which caught the failure or you can say
> > Tempest master cannot be used in Ocata testing. More details-https://bugs.launchpad.net/tempest/+bug/1843762
> >
> > As per the Tempest stable branch testing policy[1], Tempest does not support stable/ocata (which is EM) in the
> > current development cycle. Current supported stable branches by Tempest are Queens, Rocky, Stein and Train-on-going.
> > We can keep using Tempest master on EM stable branches as long as it runs successfully, and if it starts failing, which is the current
> > case for stable/ocata, then use a Tempest tag to test that EM stable branch.
> >
> > To unblock the stable/ocata gate, I am trying to install the Tempest 20.0.0 (compatible version for Ocata) in the ocata gate
> > -https://review.opendev.org/#/c/681950/
> > Fix is not working as of now (it still installs Tempest master).
I will debug that later (my current priority is for Train feature freeze). > > > > [1]https://docs.openstack.org/tempest/latest/stable_branch_support_policy.html > > Thanks for the heads up. I agree that being able to continue to run > tempest integration jobs on stable/ocata changes, even with a frozen > tempest version, is better than not running integration testing on > stable/ocata at all. When I was at IBM and we were supported branches > downstream that were end of life upstream what I'd do was create an > internal branch for tempest (stable/ocata in this case) so we'd run > against that rather than master tempest, just in case we needed to make > changes and couldn't use a tag (back then tags for tempest were also > pretty new as I recall). I'm not advocating creating a stable/ocata > branch for tempest upstream, I'm just giving an example of one > downstream process for this sort of thing. Thanks for that information. I think creating stable/ocata in Tempest will face the maintenance issue. Let's try with tag first if that work fine. > > Alternatively Cinder could fix the API regression, but that would likely > be a regression of its own at this point right? Meaning if they added > something to an API response without a microversion and then removed it > without a microversion, that's not really helping the situation. As it > stands clients (in this case tempest) have to deal with the API change. I am on same page with you on this but there are different opinion on how to change API with microversion. I have started a separate thread on this to discuss the correct way to change API - http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009365.html -gmann > > Another alternative would be putting some kind of compat code in tempest > for this particular API breakage but if Tempest isn't going to gate on > stable/ocata then that's not really the responsibility of the QA team to > carry that compat code. Yeah, as per Extended Maintainance stable branch testing policy, Tempest would not be able to maintain those code. It becomes difficult from maintenance as well as strict verification side also. -gmann > > -- > > Thanks, > > Matt > > From dtantsur at redhat.com Mon Sep 16 10:00:18 2019 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 16 Sep 2019 12:00:18 +0200 Subject: [requirements] [ironic] Bumping test-requirements on stable/train for the PDF goal? Message-ID: <6a9f5c02-46b1-b9d2-9f86-7d71b68bf9f0@redhat.com> Hi all, we (ironic) have a few deliverables that got the PDF goal fulfilled after stable/train was branched. Is it reasonable to still pursue the goal on stable/train even if it requires raising the openstackdoctheme requirement? An example is https://review.opendev.org/#/c/682274/ Thanks, Dmitry From gmann at ghanshyammann.com Mon Sep 16 10:02:43 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 16 Sep 2019 19:02:43 +0900 Subject: [all][interop][cinder][qa] API changes with/without microversion and Tempest verification of API interoperability In-Reply-To: <16d397e41f7.12873dbb126838.8168349135797367489@ghanshyammann.com> References: <16d397e41f7.12873dbb126838.8168349135797367489@ghanshyammann.com> Message-ID: <16d398641dd.ee09dee227347.1935004124034589982@ghanshyammann.com> ---- On Mon, 16 Sep 2019 18:53:58 +0900 Ghanshyam Mann wrote ---- > Hello Everyone, > > As per discussion over ML, Tempest started the JSON schema strict validation for Volume APIs response [1]. 
> Because it may affect the interop certification, it was explained to the Interop team as well as in the Board of Director meeting[2]. > > In Train, Tempest started implementing the validation and found an API change where the new field was added in API response without versioning[3] (Cinder has API microversion mechanism). IMO, that was not the correct way to change the API and as per API-WG guidelines[4] any field added/modified/removed in API should be with microverison(means old versions/user should not be affected by that change) and must for API interoperability. > > With JSON schema validation, Tempest verifies the API interoperability recommended behaviour by API-WG. But as per IRC conversion with cinder team, we have different opinion on API interoperability and how API should be changed under microversion mechanism. I would like to have a conclusion on this so that Tempest can verify or leave the Volume API for strict validation. I found the same flow chart what Sean created in Nova about "when to bump microverison" in Cinder also which clearly say any addition to response need new microversion. - https://docs.openstack.org/cinder/latest/contributor/api_microversion_dev.html -gmann > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000358.html > [2] > - http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003652.html > - http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003655.html > [3] https://bugs.launchpad.net/tempest/+bug/1843762 https://review.opendev.org/#/c/439461/ > [4] https://specs.openstack.org/openstack/api-wg/guidelines/api_interoperability.html > > -gmann > From james.slagle at gmail.com Mon Sep 16 11:02:45 2019 From: james.slagle at gmail.com (James Slagle) Date: Mon, 16 Sep 2019 07:02:45 -0400 Subject: [tripleo] Deprecating paunch CLI? In-Reply-To: References: Message-ID: On Sat, Sep 14, 2019 at 5:10 PM Alan Bishop wrote: > > > On Fri, Sep 13, 2019 at 3:06 PM Emilien Macchi wrote: >> >> With our long-term goal to simplify TripleO and focus on what people actually deploy and how they operate their clouds, it appears that the Paunch CLI hasn't been a critical piece in our project and I propose that we deprecate it; create an Ansible module to call Paunch as a library only. >> >> I've been playing with it a little today: >> https://review.opendev.org/#/c/682093/ >> https://review.opendev.org/#/c/682094/ >> >> Here is how you would call paunch: >> - name: Start containers for step {{ step }} >> paunch: >> config: "/var/lib/tripleo-config/hashed-container-startup-config-step_{{ step }}.json" >> config_id: "tripleo_step{{ step }}" >> action: apply >> container_cli: "{{ container_cli }}" >> managed_by: "tripleo-{{ tripleo_role_name }}" >> >> A few benefits: >> - Deployment tasks in THT would call the new module instead of a shell command >> - More Pythonic and clean for Ansible, to interact with the actual task during the run >> - Removing some code in Paunch, make it easier to maintain for us >> >> For now, the Ansible module only covers "paunch apply", we will probably cover "delete" and "cleanup" eventually. > > > The paunch cli's "print-cmd" action has been occasionally useful as a debug aid. Will this info be available through some other means? I also rely on print-cmd and the other debug features. I sometimes use apply to reproduce issues but I suppose I could do without. 
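For anyone who hasn't used it, the kind of invocation I mean is roughly:

  paunch debug --file /var/lib/tripleo-config/hashed-container-startup-config-step_4.json --container nova_api --action print-cmd

(file path and container name here are just examples) - it prints the container CLI command that paunch would run for that container, which is handy for comparing what the deployment actually did against what you expected.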
-- 
-- James Slagle
--

From ramishra at redhat.com Mon Sep 16 11:22:36 2019
From: ramishra at redhat.com (Rabi Mishra)
Date: Mon, 16 Sep 2019 16:52:36 +0530
Subject: [tripleo] Deprecating paunch CLI?
In-Reply-To: 
References: 
Message-ID: 

On Sat, Sep 14, 2019 at 3:34 AM Emilien Macchi wrote:

> With our long-term goal to simplify TripleO and focus on what people
> actually deploy and how they operate their clouds, it appears that the
> Paunch CLI hasn't been a critical piece in our project and I propose that
> we deprecate it; create an Ansible module to call Paunch as a library only.
>
> I've been playing with it a little today:
> https://review.opendev.org/#/c/682093/
> https://review.opendev.org/#/c/682094/
>

Why not use ansible podman/docker modules (though I don't know how good they are atm) directly from ansible tasks? Also, why deprecate the cli? As many others mentioned, a lot of us use it for debugging.

> Here is how you would call paunch:
> - name: Start containers for step {{ step }}
> paunch:
> config: "/var/lib/tripleo-config/hashed-container-startup-config-step_{{ step }}.json"
> config_id: "tripleo_step{{ step }}"
> action: apply
> container_cli: "{{ container_cli }}"
> managed_by: "tripleo-{{ tripleo_role_name }}"
>
> A few benefits:
> - Deployment tasks in THT would call the new module instead of a shell command
> - More Pythonic and clean for Ansible, to interact with the actual task during the run
> - Removing some code in Paunch, make it easier to maintain for us
>
> For now, the Ansible module only covers "paunch apply", we will probably
> cover "delete" and "cleanup" eventually.
>
> Please let me know if you have any questions or concerns,
> --
> Emilien Macchi
>

-- 
Regards,
Rabi Mishra

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kevinzs2048 at gmail.com Mon Sep 16 12:05:59 2019
From: kevinzs2048 at gmail.com (Shuai Zhao)
Date: Mon, 16 Sep 2019 20:05:59 +0800
Subject: [openstack-dev] [neutron]IPv6 Prefix Delegation could not activated in newest version Neutron
Message-ID: 

Hi All,

I'm working on validating IPv6 Prefix Delegation (PD) in the newest Neutron. What I want is to offer a global unicast address to the VM, and PD looks like a good solution for that.

I followed the guide https://docs.openstack.org/neutron/latest/admin/config-ipv6.html to set up PD, dibbler-server and devstack, but I find I cannot trigger the PD process. The Dibbler server prints nothing when the subnet is attached to a router that has an external gateway. The whole procedure is recorded in the bug: https://bugs.launchpad.net/neutron/+bug/1844123.

Thanks for your help in advance!

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From emilien at redhat.com Mon Sep 16 12:17:29 2019
From: emilien at redhat.com (Emilien Macchi)
Date: Mon, 16 Sep 2019 08:17:29 -0400
Subject: [tripleo] Deprecating paunch CLI?
In-Reply-To: 
References: 
Message-ID: 

On Mon, Sep 16, 2019 at 7:22 AM Rabi Mishra wrote:

> Why not use ansible podman/docker modules (though I don't know how good
> they are atm) directly from ansible tasks?
>

Paunch is a tool for defining and running TripleO containers. Paunch consumes JSON files that describe the containers, and it provides the abstraction between that stable configuration and the container API (compose or podman for now).
There is quite a bit of logic in Paunch that, IMHO, would take some time to convert to Ansible playbooks/modules, especially around resiliency.
Not saying it's impossible, but I would rather be interested in having TripleO generating the container config, and the container tool consuming it and directly managing the containers without something like Paunch. This doesn't exist with Podman as far as I know. We have investigated the usage of Kubelet running on localhost, where TripleO would generate yaml files working with k8s API, it worked ok'ish for our containers but the major issue we encountered is that this solution isn't supported by Red Hat. So... it seems like we still need something like Paunch for now, and we can maybe investigate making the podman-ansible module more robust to sustain our needs in TripleO > Also, why deprecate the cli? As many others mentioned, lot of us use it > for debugging. > Based on the answers so far, it's pretty clear we won't touch this command. As for the "paunch apply", we'll see, if the Ansible replacement works for everyone, then we might deprecate it in Paunch but not the debug command for sure. Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mthode at mthode.org Mon Sep 16 13:29:34 2019 From: mthode at mthode.org (Matthew Thode) Date: Mon, 16 Sep 2019 08:29:34 -0500 Subject: [requirements][docs][ironic] Bumping test-requirements on stable/train for the PDF goal? In-Reply-To: <6a9f5c02-46b1-b9d2-9f86-7d71b68bf9f0@redhat.com> References: <6a9f5c02-46b1-b9d2-9f86-7d71b68bf9f0@redhat.com> Message-ID: <20190916132934.vwardzzpmpaz6ozo@mthode.org> On 19-09-16 12:00:18, Dmitry Tantsur wrote: > Hi all, > > we (ironic) have a few deliverables that got the PDF goal fulfilled after > stable/train was branched. Is it reasonable to still pursue the goal on > stable/train even if it requires raising the openstackdoctheme requirement? > > An example is https://review.opendev.org/#/c/682274/ > It looks like you are just raising the minimum. If that is the case then you are fine to manage it within your project as lower-constraints/lower bounds are managed per-project and not openstack wide (since stein at least). I've pinged the docs team just in case though. -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From aschultz at redhat.com Mon Sep 16 13:35:35 2019 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 16 Sep 2019 07:35:35 -0600 Subject: [kolla] State of ppc64le support In-Reply-To: References: Message-ID: On Sat, Sep 14, 2019 at 10:51 AM Marcin Juszkiewicz < marcin.juszkiewicz at linaro.org> wrote: > About 2.5 years ago I added AArch64 (64-bit ARM) architecture support to > Kolla project. Side effect of that work was adding ppc64le (64-bit > Power, Little Endian) support. > > Time passed, from time to time someone jumped to the irc channel and > said that they use it. No one in core team spent much time on supporting > it as it was outside of our interest. > > From time to time I was reserving Power machine in Red Hat to do build > and check how we are with ppc64le support. This week I did that again. > > From 3 distributions we target only Debian/source combo was buildable. > > CentOS builds lack 'rabbitmq' 3.7.10 (we use external repo) and > 'gnocchi' binary images are not buildable due to lack of some packages > (issue already reported to CentOS by TripleO team). > > Ubuntu builds lack MariaDB 10.3 because upstream repo is broken. 
> Packages index is provided for 'ppc64le' but no packages so we get 404 > errors. > > > Due to all those issues and fact that there are no users of ppc64le > Kolla containers I want to drop support for it in this cycle. Any > objections? > > I think TripleO uses the ppc64le containers. I'm unsure if we're relying on anything special to build them however. I know there's been some effort to get a pp64le upstream build system going. ccing Wes as he might have the status on this. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Mon Sep 16 14:20:09 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 16 Sep 2019 09:20:09 -0500 Subject: [all][interop][cinder][qa] API changes with/without microversion and Tempest verification of API interoperability In-Reply-To: <16d397e41f7.12873dbb126838.8168349135797367489@ghanshyammann.com> References: <16d397e41f7.12873dbb126838.8168349135797367489@ghanshyammann.com> Message-ID: <51bef2ec-4e01-58ac-2b67-cfaca8f16e87@gmail.com> On 9/16/2019 4:53 AM, Ghanshyam Mann wrote: > I would like to have a conclusion on this so that Tempest can verify or leave the Volume API for strict validation. In Nova, when changing the request or response contents, it's pretty simple - we always microversion those. Why? Because if my application is written to work against different OpenStack clouds, maybe multiple public clouds or maybe a hybrid environment where it's running in both my local private cloud and in let's say VEXXHOST, and I need to know what I can do on each cloud, I use microversions, e.g. just because something works on the version of OpenStack I have running in private doesn't mean it's going to work in the public cloud provider I'm using and vice-versa. I think generally the people that are OK with not versioning additions to a response justify it by saying if the app code isn't looking for that specific field anyway they won't notice or care, and if they are looking for a specific field, they should treat all fields as optional and fallback gracefully. That may work in some cases, but I don't think as a general rule that's something we should be following since we don't know the situation for the app code or how the field is going to be used. For example, maybe the app code wouldn't make an initial request to perform some action on the resource if they know they can't properly monitor it with a later GET response. So rather than say a microversion for response changes is OK in some cases and not others, just keep it simple and always require it. The problem we've faced in nova several times when asking if we need a microversion is more about behavioral changes that signal when you can do something in the API, since that's a grey area. For example, we added a microversion when we added support for multiattach volumes even though the type of volume you're using or the compute driver your VM is on might not support multiattach. Feature discovery is still a problem in OpenStack but at least with the microversion you can determine via API version discovery which cloud supports the feature at a basic level and which doesn't. Any issues you hit after that are likely due to the cloud provider's configuration, which as a user yes that sucks, but we as a community haven't solved the "capability discovery" problem and likely never will at this rate of development. 
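To make that concrete, the portable pattern for app code is to explicitly opt in to the version that added what it needs, e.g. something along the lines of:

  openstack --os-compute-api-version 2.60 server add volume <server> <volume>

for the multiattach case (treat the exact version number and command here as illustrative), and to fall back or bail out cleanly if the cloud can't negotiate that version, rather than assuming the behavior is there just because it happened to work on one cloud.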
Anyway, that's a tangent, but my point is it's much easier to enforce a consistent development policy for request/response changes than it is for behavioral changes. -- Thanks, Matt From luka.peschke at objectif-libre.com Mon Sep 16 14:39:29 2019 From: luka.peschke at objectif-libre.com (Luka Peschke) Date: Mon, 16 Sep 2019 16:39:29 +0200 Subject: [cloudkitty] 13/09 meeting summary Message-ID: <527b2ab845684dc3c557677962ea9d19@objectif-libre.com> Hi everyone, This is the recap of the cloudkitty team IRC meeting that happened on 13/09. The logs can be found at [1]. All features patches we wanted to integrate have been merged for the feature freeze dealine. An experimental v2 storage driver for Elasticsearch has also been merged. Concerning the community goals: python3-first has been implemented early, however we're late for PDF docs generation, as we've got several issues when trying to generate PDF documentation, especially with tabs. A patch for cloudkitty has been proposed today [2]. Patches for the client and specs repos are to be done. They're our current top priority. From now on, the IRC meeting will be held twice every month, on the first and third monday at 14h UTC. This means that the next meeting will happen on october 7th. We plan to send a recap to this mailing list for each meeting. For the U cycle, we'd like to use storyboard a lot more. For now, it's been used for features and major bugs only, but we'll use it for every non-trivial patch from now on. Our temporary roadmap for U is the following: * Design and implement a new rating module. * Support time-based grouping in the /v2/summary endpoint. This would allow to easily create charts with cloudkitty's API. * [To be discussed] Stop differenciating "groupby" and "metadata" attributes, as these seem to be confusing for admins. * Port as many v1 API endpoints as possible to v2. Best regards, Luka Peschke (peschk_l) [1] http://eavesdrop.openstack.org/meetings/cloudkitty/2019/cloudkitty.2019-09-13-15.02.log.html [2] https://review.opendev.org/#/c/682364/ From elmiko at redhat.com Mon Sep 16 15:30:44 2019 From: elmiko at redhat.com (Michael McCune) Date: Mon, 16 Sep 2019 11:30:44 -0400 Subject: [all][interop][cinder][qa] API changes with/without microversion and Tempest verification of API interoperability In-Reply-To: <16d398641dd.ee09dee227347.1935004124034589982@ghanshyammann.com> References: <16d397e41f7.12873dbb126838.8168349135797367489@ghanshyammann.com> <16d398641dd.ee09dee227347.1935004124034589982@ghanshyammann.com> Message-ID: On Mon, Sep 16, 2019 at 6:06 AM Ghanshyam Mann wrote: > ---- On Mon, 16 Sep 2019 18:53:58 +0900 Ghanshyam Mann < > gmann at ghanshyammann.com> wrote ---- > > Hello Everyone, > > > > As per discussion over ML, Tempest started the JSON schema strict > validation for Volume APIs response [1]. > > Because it may affect the interop certification, it was explained to > the Interop team as well as in the Board of Director meeting[2]. > > > > In Train, Tempest started implementing the validation and found an API > change where the new field was added in API response without versioning[3] > (Cinder has API microversion mechanism). IMO, that was not the correct way > to change the API and as per API-WG guidelines[4] any field > added/modified/removed in API should be with microverison(means old > versions/user should not be affected by that change) and must for API > interoperability. > > > > With JSON schema validation, Tempest verifies the API interoperability > recommended behaviour by API-WG. 
But as per IRC conversion with cinder > team, we have different opinion on API interoperability and how API should > be changed under microversion mechanism. I would like to have a conclusion > on this so that Tempest can verify or leave the Volume API for strict > validation. > > I found the same flow chart what Sean created in Nova about "when to bump > microverison" in Cinder also which clearly say any addition to response > need new microversion. > - > https://docs.openstack.org/cinder/latest/contributor/api_microversion_dev.html > > i would also expect any change in the request or response to result in a microversion bump as well. peace o/ -gmann > > > > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000358.html > > [2] > > - > http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003652.html > > - > http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003655.html > > [3] https://bugs.launchpad.net/tempest/+bug/1843762 > https://review.opendev.org/#/c/439461/ > > [4] > https://specs.openstack.org/openstack/api-wg/guidelines/api_interoperability.html > > > > -gmann > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.page at canonical.com Mon Sep 16 15:34:30 2019 From: james.page at canonical.com (James Page) Date: Mon, 16 Sep 2019 17:34:30 +0200 Subject: [charms] no irc meeting today Message-ID: Hi Team Due to travel + events for most of the team there will be no Charms IRC meeting today. Cheers James -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Mon Sep 16 15:47:20 2019 From: ramishra at redhat.com (Rabi Mishra) Date: Mon, 16 Sep 2019 21:17:20 +0530 Subject: [tripleo] Deprecating paunch CLI? In-Reply-To: References: Message-ID: On Mon, Sep 16, 2019 at 5:47 PM Emilien Macchi wrote: > On Mon, Sep 16, 2019 at 7:22 AM Rabi Mishra wrote: > >> Why not use ansible podman/docker modules (though I don't know how good >> they are atm) directly form ansible tasks? >> > > Paunch is a tool for defining and running TripleO containers. Paunch > consumes JSON files to configure the containers and the Paunch runs makes > the abstraction between the stable configuration into the container api > (compose or podman for now). > There is quite a bunch of logic in Paunch that, imho would make take some > time to convert to Ansible playbooks/modules, specially around resiliency. > Not saying it's impossible, but I would rather be interested in having > TripleO generating the container config, and the container tool consuming > it and directly managing the containers without something like Paunch. > I'm not sure if podman as container tool would move in that direction, as it's meant to be a command line tool. If we really want to reduce the overhead of so many layers in TripleO and podman is the container tool for us (I'll ignore the k8s related discussions for the time being), I would think the logic of translating the JSON configs to podman calls should be be in ansible (we can even write a TripleO specific podman module). My 2 cents.. This doesn't exist with Podman as far as I know. > We have investigated the usage of Kubelet running on localhost, where > TripleO would generate yaml files working with k8s API, it worked ok'ish > for our containers but the major issue we encountered is that this solution > isn't supported by Red Hat. > So... 
it seems like we still need something like Paunch for now, and we > can maybe investigate making the podman-ansible module more robust to > sustain our needs in TripleO > > >> Also, why deprecate the cli? As many others mentioned, lot of us use it >> for debugging. >> > > Based on the answers so far, it's pretty clear we won't touch this > command. > As for the "paunch apply", we'll see, if the Ansible replacement works for > everyone, then we might deprecate it in Paunch but not the debug command > for sure. > > Thanks, > -- > Emilien Macchi > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Mon Sep 16 15:54:21 2019 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 16 Sep 2019 09:54:21 -0600 Subject: [kolla] State of ppc64le support In-Reply-To: References: Message-ID: On Mon, Sep 16, 2019 at 7:36 AM Alex Schultz wrote: > > > On Sat, Sep 14, 2019 at 10:51 AM Marcin Juszkiewicz < > marcin.juszkiewicz at linaro.org> wrote: > >> About 2.5 years ago I added AArch64 (64-bit ARM) architecture support to >> Kolla project. Side effect of that work was adding ppc64le (64-bit >> Power, Little Endian) support. >> >> Time passed, from time to time someone jumped to the irc channel and >> said that they use it. No one in core team spent much time on supporting >> it as it was outside of our interest. >> >> From time to time I was reserving Power machine in Red Hat to do build >> and check how we are with ppc64le support. This week I did that again. >> >> From 3 distributions we target only Debian/source combo was buildable. >> >> CentOS builds lack 'rabbitmq' 3.7.10 (we use external repo) and >> 'gnocchi' binary images are not buildable due to lack of some packages >> (issue already reported to CentOS by TripleO team). >> >> Ubuntu builds lack MariaDB 10.3 because upstream repo is broken. >> Packages index is provided for 'ppc64le' but no packages so we get 404 >> errors. >> >> >> Due to all those issues and fact that there are no users of ppc64le >> Kolla containers I want to drop support for it in this cycle. Any >> objections? >> >> > I think TripleO uses the ppc64le containers. I'm unsure if we're relying > on anything special to build them however. I know there's been some effort > to get a pp64le upstream build system going. ccing Wes as he might have > the status on this. > > The TripleO team is working on ppc64le right now and should be uploading ppc64le containers to docker.io in a few weeks. I'm pretty sure the ppc team is using kolla to build the containers, but I'm not 100% sure as the ppc64le container build jobs are in a 3rd party ci system. I'm checking with some folks to get the details that would be helpful here. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Rajini.Karthik at Dell.com Mon Sep 16 15:57:12 2019 From: Rajini.Karthik at Dell.com (Rajini.Karthik at Dell.com) Date: Mon, 16 Sep 2019 15:57:12 +0000 Subject: [zuul3] zuul2 -> zuul3 migration In-Reply-To: References: Message-ID: <00048ac15af741ba854553c3d7e33678@AUSX13MPS308.AMER.DELL.COM> We are looking for the same. Thanks Rajini From: Lenny Verkhovsky Sent: Monday, September 16, 2019 3:50 AM To: openstack-discuss Subject: [zuul3] zuul2 -> zuul3 migration [EXTERNAL EMAIL] Hi Team, We would like to migrate our Third Party CI[1] from zuul2 to zuul3. We have a lot of Jenkins jobs initially based on infra project-config-example But I guess we need to re write all the jobs now to support ansible. 
Any guide/example/tips are highly appreciated. [1] https://wiki.openstack.org/wiki/ThirdPartySystems/Mellanox_CI [2] https://github.com/openstack-infra/project-config-example/tree/master/jenkins/jobs Best Regards Lenny Verkhovsky (aka lennyb) Mellanox Technologies office: +972 74 712 92 44 fax: +972 74 712 91 11 mobile: +972 54 554 02 33 irc: lennyb -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Mon Sep 16 16:07:31 2019 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 16 Sep 2019 12:07:31 -0400 Subject: [tripleo] Deprecating paunch CLI? In-Reply-To: References: Message-ID: On Mon, Sep 16, 2019 at 11:47 AM Rabi Mishra wrote: > I'm not sure if podman as container tool would move in that direction, as > it's meant to be a command line tool. If we really want to reduce the > overhead of so many layers in TripleO and podman is the container tool for > us (I'll ignore the k8s related discussions for the time being), I would > think the logic of translating the JSON configs to podman calls should be > be in ansible (we can even write a TripleO specific podman module). > I think we're both in strong agreement and say "let's convert paunch into ansible module". And make the module robust enough for our needs. Then we could replace paunch by calling the podman module directly. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mjturek at linux.vnet.ibm.com Mon Sep 16 16:45:02 2019 From: mjturek at linux.vnet.ibm.com (Michael Turek) Date: Mon, 16 Sep 2019 12:45:02 -0400 Subject: [kolla] State of ppc64le support In-Reply-To: References: Message-ID: <7f3bf5dc-3f1e-6369-4c55-eb9780f05eda@linux.vnet.ibm.com> Hey all, We do use kolla. Let me see if I can shed some light here. On 9/16/19 11:54 AM, Wesley Hayutin wrote: > > > On Mon, Sep 16, 2019 at 7:36 AM Alex Schultz > wrote: > > > > On Sat, Sep 14, 2019 at 10:51 AM Marcin Juszkiewicz > > wrote: > > About 2.5 years ago I added AArch64 (64-bit ARM) architecture > support to > Kolla project. Side effect of that work was adding ppc64le (64-bit > Power, Little Endian) support. > > Time passed, from time to time someone jumped to the irc > channel and > said that they use it. No one in core team spent much time on > supporting > it as it was outside of our interest. > > From time to time I was reserving Power machine in Red Hat to > do build > and check how we are with ppc64le support. This week I did > that again. > > From 3 distributions we target only Debian/source combo was > buildable. > > CentOS builds lack 'rabbitmq' 3.7.10 (we use external repo) and > 'gnocchi' binary images are not buildable due to lack of some > packages > (issue already reported to CentOS by TripleO team). > We should be getting the gnocchi issue fixed. See this thread https://lists.centos.org/pipermail/centos-devel/2019-September/017721.html The rabbitmq issue is confusing me. The version provided for x86_64 seems to be the same one provided for ppc64le, but maybe I'm missing something. If there's a package we need to get published, I can investigate. > > Ubuntu builds lack MariaDB 10.3 because upstream repo is broken. > Packages index is provided for 'ppc64le' but no packages so we > get 404 > errors. > Unfortunately I'm not well versed on the gaps in Ubuntu. > > Due to all those issues and fact that there are no users of > ppc64le > Kolla containers I want to drop support for it in this cycle. Any > objections? > > > I think TripleO uses the ppc64le containers. 
I'm unsure if we're > relying on anything special to build them however.  I know there's > been some effort to get a pp64le upstream build system going.  > ccing Wes as he might have the status on this. > > > The TripleO team is working on ppc64le right now and should be > uploading ppc64le containers to docker.io in a few > weeks.  I'm pretty sure the ppc team is using kolla to build the > containers, but I'm not 100% sure as the ppc64le container build jobs > are in a 3rd party ci system. Just to clarify, the builds happen here https://ci.centos.org/job/tripleo-upstream-containers-build-master-ppc64le/ We are using kolla with buildah. The Dockerfiles are generated by kolla and then consumed by buildah. Logs can be found here as well - https://centos.logs.rdoproject.org/tripleo-upstream-containers-build-master-ppc64le/ > > I'm checking with some folks to get the details that would be helpful > here. > > Thanks. The question I have is, what do you need to maintain support? I can join this week's IRC meeting if that would be helpful. Also, last week mnasiadka reached out to me asking if we might be able to turn on kolla jobs in pkvmci (our third party CI - https://wiki.openstack.org/wiki/ThirdPartySystems/IBMPowerKVMCI ). I plan to talk to our CI folks this week to see if we have capacity for this. Thanks, Mike Turek -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Mon Sep 16 16:58:50 2019 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 16 Sep 2019 11:58:50 -0500 Subject: Open Infrastructure Summit Shanghai: Forum Submissions Open In-Reply-To: <5D705C83.8020203@openstack.org> References: <5D705C83.8020203@openstack.org> Message-ID: <5D7FBF4A.5010101@openstack.org> Well hello! A gentle reminder that Forum submission deadline is September 20. Please proceed to https://cfp.openstack.org and complete your submission today! Cheers, Jimmy > Jimmy McArthur > September 4, 2019 at 7:53 PM > Hello Everyone! > > We are now accepting Forum [1] submissions for the 2019 Open > Infrastructure Summit in Shanghai [2]. Please submit your ideas > through the Summit CFP tool [3] through September20th. Don't forget > to put your brainstorming etherpad up on the Shanghai Forum page [4]. > > This is not a classic conference track with speakers and > presentations. OSF community members (participants in development > teams, operators, working groups, SIGs, and other interested > individuals) discuss the topics they want to cover and get alignment > on and we welcome your participation. The Forum is your opportunity > to help shape the development of future project releases. More > information about the Forum [1]. Keep in mind, Forum submissions are > for discussions, not presentations. > > The timeline for submissions is as follows: > > Sep 4th | Formal topic submission tool opens: https://cfp.openstack.org. > Sep 20th | Deadline for proposing Forum topics. Scheduling committee > meeting to make draft agenda. > Sep 30th | Draft Forum schedule published. Crowd sourced session > conflict detection. Forum promotion begins. > Oct 7th | Scheduling committee final meeting > Oct 14th | Forum schedule final > Nov 4-6| Forum Time! > > If you have questions or concerns, please reach out to > speakersupport at openstack.org . 
> > Cheers, > Jimmy > > [1] https://wiki.openstack.org/wiki/Forum > [2] https://www.openstack.org/summit/shanghai-2019/ > [3] https://cfp.openstack.org > [4] https://wiki.openstack.org/wiki/Forum/Shanghai2019 -------------- next part -------------- An HTML attachment was scrubbed... URL: From g.santomaggio at gmail.com Mon Sep 16 17:36:55 2019 From: g.santomaggio at gmail.com (Gabriele Santomaggio) Date: Mon, 16 Sep 2019 19:36:55 +0200 Subject: Rabbitmq error report In-Reply-To: <2d2076f9-0eb1-98e8-f9e0-1067b4472f23@nemebean.com> References: <1e4601d5694d$5c674e10$1535ea30$@brilliant.com.bd> <2d2076f9-0eb1-98e8-f9e0-1067b4472f23@nemebean.com> Message-ID: An internal queue is locked for some reason. Try to delete it with: rabbitmqctl eval 'rabbit_amqqueue:internal_delete({resource,<<"/">>,queue,<<" versioned_notifications.info">>}).' - Gabriele Santomaggio Il giorno gio 12 set 2019 alle ore 15:37 Ben Nemec ha scritto: > Have you checked that your notification queues aren't filling up? It can > cause performance problems in Rabbit if nothing is clearing out those > queues. > > On 9/12/19 4:35 AM, Md. Farhad Hasan Khan wrote: > > Hi, > > > > I’m getting this error continuously in rabbitmq log. Though all > > operation going normal, but slow. Sometimes taking long time to perform > > operation. Please help me to solve this. > > > > rabbitmq version: rabbitmq_server-3.6.16 > > > > =ERROR REPORT==== 12-Sep-2019::13:04:55 === > > > > Channel error on connection <0.8105.3> (192.168.21.56:60116 -> > > 192.168.21.11:5672, vhost: '/', user: 'openstack'), channel 1: > > > > operation queue.declare caused a channel exception not_found: failed to > > perform operation on queue 'versioned_notifications.info' in vhost '/' > > due to timeout > > > > =WARNING REPORT==== 12-Sep-2019::13:04:55 === > > > > closing AMQP connection <0.8105.3> (192.168.21.56:60116 -> > > 192.168.21.11:5672 - > > nova-compute:3493037:e6757c9b-1cdc-43cd-bfd3-dcb58aa4974a, vhost: '/', > > user: 'openstack'): > > > > client unexpectedly closed TCP connection > > > > Thanks & B’Rgds, > > > > Rony > > > > -- Gabriele Santomaggio -------------- next part -------------- An HTML attachment was scrubbed... URL: From eharney at redhat.com Mon Sep 16 17:40:36 2019 From: eharney at redhat.com (Eric Harney) Date: Mon, 16 Sep 2019 13:40:36 -0400 Subject: [all][interop][cinder][qa] API changes with/without microversion and Tempest verification of API interoperability In-Reply-To: <16d398641dd.ee09dee227347.1935004124034589982@ghanshyammann.com> References: <16d397e41f7.12873dbb126838.8168349135797367489@ghanshyammann.com> <16d398641dd.ee09dee227347.1935004124034589982@ghanshyammann.com> Message-ID: On 9/16/19 6:02 AM, Ghanshyam Mann wrote: > ---- On Mon, 16 Sep 2019 18:53:58 +0900 Ghanshyam Mann wrote ---- > > Hello Everyone, > > > > As per discussion over ML, Tempest started the JSON schema strict validation for Volume APIs response [1]. > > Because it may affect the interop certification, it was explained to the Interop team as well as in the Board of Director meeting[2]. > > > > In Train, Tempest started implementing the validation and found an API change where the new field was added in API response without versioning[3] (Cinder has API microversion mechanism). IMO, that was not the correct way to change the API and as per API-WG guidelines[4] any field added/modified/removed in API should be with microverison(means old versions/user should not be affected by that change) and must for API interoperability. 
> > > > With JSON schema validation, Tempest verifies the API interoperability recommended behaviour by API-WG. But as per IRC conversion with cinder team, we have different opinion on API interoperability and how API should be changed under microversion mechanism. I would like to have a conclusion on this so that Tempest can verify or leave the Volume API for strict validation. > > I found the same flow chart what Sean created in Nova about "when to bump microverison" in Cinder also which clearly say any addition to response need new microversion. > - https://docs.openstack.org/cinder/latest/contributor/api_microversion_dev.html > > -gmann > I don't believe that it is clear that a microversion bump was required for the "groups" response showing up in a GET quota-sets response, and here's why: This API has, since at least Havana, returned dynamic fields based on quotas that are assigned to volume types. i.e.: $ cinder --debug quota-show b73b1b7e82a247038cd01a441ec5a806 DEBUG:keystoneauth:RESP BODY: {"quota_set": {"per_volume_gigabytes": -1, "volumes_ceph": -1, "groups": 10, "gigabytes": 1000, "backup_gigabytes": 1000, "snapshots": 10, "volumes_enc": -1, "snapshots_enc": -1, "snapshots_ceph": -1, "gigabytes_ceph": -1, "volumes": 10, "gigabytes_enc": -1, "backups": 10, "id": "b73b1b7e82a247038cd01a441ec5a806"}} "gigabytes_ceph" is in that response because there's a "ceph" volume type defined, same for "gigabytes_enc", etc. This puts this API alongside something more like listing volume types -- you get a list of what's defined on the deployment, not a pre-baked list of defined fields. Complaints about the fact that "groups" being added without a microversion imply that these other dynamic fields shouldn't be in this response either -- but this is how this API works. There's a lot of talk here about interoperability problems... what are those problems, exactly? If we ignore Ocata and just look at Train -- why is this API not problematic for interoperability there, when requests on different clouds would return different data, depending on how types are configured? It's not clear to me that rectifying the microversion concerns around the "groups" field is helpful without also understanding this piece, because if the concern is that different clouds return different fields for this API -- that will still happen. We need more detail to understand how to address this, and what the problem is that we are trying to solve exactly. (Other than the problem that Tempest currently fails on Ocata. My inclination is still that the Tempest tests could just be wrong.) 
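To make the shape of the question concrete, here is a rough sketch of what a response schema would have to look like if it pinned the static quota fields but tolerated the dynamic per-volume-type keys. This is illustrative only -- it is not the schema Tempest actually ships, and the field names are just the ones the example above happens to show -- and whether a schema should work this way, or whether "groups" should instead be enumerated behind a microversion, is exactly what is being debated:

    # Illustrative sketch only -- not the actual Tempest schema.
    import jsonschema

    quota_set_schema = {
        "type": "object",
        "properties": {
            "quota_set": {
                "type": "object",
                "properties": {
                    "id": {"type": "string"},
                    "volumes": {"type": "integer"},
                    "snapshots": {"type": "integer"},
                    "gigabytes": {"type": "integer"},
                    "backups": {"type": "integer"},
                    "backup_gigabytes": {"type": "integer"},
                    "per_volume_gigabytes": {"type": "integer"},
                    "groups": {"type": "integer"},
                },
                # Tolerate the per-volume-type keys without enumerating them.
                "patternProperties": {
                    "^(volumes|gigabytes|snapshots)_.+$": {"type": "integer"}
                },
                "additionalProperties": False,
                "required": ["id", "volumes", "snapshots", "gigabytes"],
            }
        },
        "required": ["quota_set"],
    }

    # Trimmed version of the response shown above; the *_ceph keys only
    # exist because a "ceph" volume type is defined on this deployment.
    body = {"quota_set": {"id": "b73b1b7e82a247038cd01a441ec5a806",
                          "volumes": 10, "snapshots": 10, "gigabytes": 1000,
                          "backups": 10, "backup_gigabytes": 1000,
                          "per_volume_gigabytes": -1, "groups": 10,
                          "volumes_ceph": -1, "gigabytes_ceph": -1,
                          "snapshots_ceph": -1}}

    jsonschema.validate(body, quota_set_schema)  # raises ValidationError on mismatch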
> > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000358.html > > [2] > > - http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003652.html > > - http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003655.html > > [3] https://bugs.launchpad.net/tempest/+bug/1843762 https://review.opendev.org/#/c/439461/ > > [4] https://specs.openstack.org/openstack/api-wg/guidelines/api_interoperability.html > > > > -gmann > > > > From sean.mcginnis at gmx.com Mon Sep 16 17:50:06 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 16 Sep 2019 12:50:06 -0500 Subject: [all][interop][cinder][qa] API changes with/without microversion and Tempest verification of API interoperability In-Reply-To: References: <16d397e41f7.12873dbb126838.8168349135797367489@ghanshyammann.com> <16d398641dd.ee09dee227347.1935004124034589982@ghanshyammann.com> Message-ID: <20190916175006.GA18409@sm-workstation> > > > > I don't believe that it is clear that a microversion bump was required for > the "groups" response showing up in a GET quota-sets response, and here's > why: > > This API has, since at least Havana, returned dynamic fields based on quotas > that are assigned to volume types. i.e.: > > $ cinder --debug quota-show b73b1b7e82a247038cd01a441ec5a806 > DEBUG:keystoneauth:RESP BODY: {"quota_set": {"per_volume_gigabytes": -1, > "volumes_ceph": -1, "groups": 10, "gigabytes": 1000, "backup_gigabytes": > 1000, "snapshots": 10, "volumes_enc": -1, "snapshots_enc": -1, > "snapshots_ceph": -1, "gigabytes_ceph": -1, "volumes": 10, "gigabytes_enc": > -1, "backups": 10, "id": "b73b1b7e82a247038cd01a441ec5a806"}} > > "gigabytes_ceph" is in that response because there's a "ceph" volume type > defined, same for "gigabytes_enc", etc. > > This puts this API alongside something more like listing volume types -- you > get a list of what's defined on the deployment, not a pre-baked list of > defined fields. > I think this is the root of the confusion, and why I still think that enforcements, at least as it is now, should be reverted from tempest. This is not an API change where Cinder changed the columns in the response, it's the rows. This is a dynamic list. Like Eric points out, this really is no different than listing volumes or volumes types. This definitely should *not* be a microversion bump and the enforcement by tempest of the content (not the structure) is wrong. Sean From sshnaidm at redhat.com Mon Sep 16 18:24:01 2019 From: sshnaidm at redhat.com (Sagi Shnaidman) Date: Mon, 16 Sep 2019 21:24:01 +0300 Subject: [tripleo] Deprecating paunch CLI? In-Reply-To: References: Message-ID: wrt podman_container module, I have submitted podman_container module to ansible [1] that was reverted later, the main reason of revert was lack of idempotency. I planned to add it later, but ansible cores refused to wait. Maybe we can submit it to tripleo-ansible, finish idempotency and polish it, and then to submit to ansible upstream again. That way we'll have well tested with tripleo podman_container module. Meanwhile we can use paunch module, as it's easier to move from one ansible module to another. [1] https://github.com/ansible/ansible/commit/f01468a9d97f5af4ff61b3b7cac6e3f09015f791 On Mon, Sep 16, 2019 at 7:11 PM Emilien Macchi wrote: > On Mon, Sep 16, 2019 at 11:47 AM Rabi Mishra wrote: > >> I'm not sure if podman as container tool would move in that direction, as >> it's meant to be a command line tool. 
If we really want to reduce the >> overhead of so many layers in TripleO and podman is the container tool for >> us (I'll ignore the k8s related discussions for the time being), I would >> think the logic of translating the JSON configs to podman calls should be >> be in ansible (we can even write a TripleO specific podman module). >> > > I think we're both in strong agreement and say "let's convert paunch into > ansible module". > And make the module robust enough for our needs. Then we could replace > paunch by calling the podman module directly. > -- > Emilien Macchi > -- Best regards Sagi Shnaidman -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Mon Sep 16 18:36:08 2019 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 16 Sep 2019 14:36:08 -0400 Subject: [tripleo] Deprecating paunch CLI? In-Reply-To: References: Message-ID: On Mon, Sep 16, 2019 at 2:24 PM Sagi Shnaidman wrote: > wrt podman_container module, > I have submitted podman_container module to ansible [1] that was reverted > later, the main reason of revert was lack of idempotency. I planned to add > it later, but ansible cores refused to wait. > Maybe we can submit it to tripleo-ansible, finish idempotency and polish > it, and then to submit to ansible upstream again. > That way we'll have well tested with tripleo podman_container module. > Meanwhile we can use paunch module, as it's easier to move from one > ansible module to another. > Yes, it sounds like a plan. If you can patch: https://github.com/openstack/tripleo-ansible/blob/master/tripleo_ansible/ansible_plugins/modules/podman_container.py It would avoid duplication. Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From elmiko at redhat.com Mon Sep 16 20:00:25 2019 From: elmiko at redhat.com (Michael McCune) Date: Mon, 16 Sep 2019 16:00:25 -0400 Subject: [all][interop][cinder][qa] API changes with/without microversion and Tempest verification of API interoperability In-Reply-To: <20190916175006.GA18409@sm-workstation> References: <16d397e41f7.12873dbb126838.8168349135797367489@ghanshyammann.com> <16d398641dd.ee09dee227347.1935004124034589982@ghanshyammann.com> <20190916175006.GA18409@sm-workstation> Message-ID: On Mon, Sep 16, 2019 at 1:55 PM Sean McGinnis wrote: > > > > > > > I don't believe that it is clear that a microversion bump was required > for > > the "groups" response showing up in a GET quota-sets response, and here's > > why: > > > > This API has, since at least Havana, returned dynamic fields based on > quotas > > that are assigned to volume types. i.e.: > > > > $ cinder --debug quota-show b73b1b7e82a247038cd01a441ec5a806 > > DEBUG:keystoneauth:RESP BODY: {"quota_set": {"per_volume_gigabytes": -1, > > "volumes_ceph": -1, "groups": 10, "gigabytes": 1000, "backup_gigabytes": > > 1000, "snapshots": 10, "volumes_enc": -1, "snapshots_enc": -1, > > "snapshots_ceph": -1, "gigabytes_ceph": -1, "volumes": 10, > "gigabytes_enc": > > -1, "backups": 10, "id": "b73b1b7e82a247038cd01a441ec5a806"}} > > > > "gigabytes_ceph" is in that response because there's a "ceph" volume type > > defined, same for "gigabytes_enc", etc. > > > > This puts this API alongside something more like listing volume types -- > you > > get a list of what's defined on the deployment, not a pre-baked list of > > defined fields. > > > > I think this is the root of the confusion, and why I still think that > enforcements, at least as it is now, should be reverted from tempest. 
> > This is not an API change where Cinder changed the columns in the response, > it's the rows. This is a dynamic list. Like Eric points out, this really > is no > different than listing volumes or volumes types. > > This definitely should *not* be a microversion bump and the enforcement by > tempest of the content (not the structure) is wrong. > > these details definitely make a difference to me. perhaps i should clarify my previous statement, i would expect any changes to the request or response /schemas/ to be associated with a version bump. if these tolerances are allowed within the current schema, then it makes sense to me that no version change would occur. thanks for the clarification Eric and Sean peace o/ Sean > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Mon Sep 16 21:15:54 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 16 Sep 2019 16:15:54 -0500 Subject: [all][interop][cinder][qa] API changes with/without microversion and Tempest verification of API interoperability In-Reply-To: References: <16d397e41f7.12873dbb126838.8168349135797367489@ghanshyammann.com> <16d398641dd.ee09dee227347.1935004124034589982@ghanshyammann.com> Message-ID: <4c891e4a-84f6-f88c-08ca-c2563ed34bc7@gmail.com> On 9/16/2019 12:40 PM, Eric Harney wrote: > > There's a lot of talk here about interoperability problems... what are > those problems, exactly?  If we ignore Ocata and just look at Train -- > why is this API not problematic for interoperability there, when > requests on different clouds would return different data, depending on > how types are configured? > > It's not clear to me that rectifying the microversion concerns around > the "groups" field is helpful without also understanding this piece, > because if the concern is that different clouds return different fields > for this API -- that will still happen.  We need more detail to > understand how to address this, and what the problem is that we are > trying to solve exactly. Backend/type specific information leaking out of the API dynamically like that is definitely an interoperability problem and as you said it sounds like it's been that way for a long time. The compute servers diagnostics API had a similar problem for a long time and the associated Tempest test for that API was disabled for a long time because the response body was hypervisor specific, so we eventually standardized it in a microversion so it was driver agnostic. -- Thanks, Matt From miguel at mlavalle.com Mon Sep 16 21:47:42 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 16 Sep 2019 16:47:42 -0500 Subject: [neutron] [neutron-lib] Change of in neutron-lib lead Message-ID: Dear Neutrinos, Many of you might have heard the Russel Boden's employer is changing his focus. As a consequence, Boden won't be able to continue leading our neutron-lib efforts. We want to thank you for the great job he has done over many cycles in advancing this sub-project. We are also looking for a volunteer to lead neutron-lib. Boden put together an etherpad with what needs to be done in the near future: https://etherpad.openstack.org/p/neutron-lib-volunteers-and-punch-list Regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marcin.juszkiewicz at linaro.org Mon Sep 16 22:07:57 2019 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Tue, 17 Sep 2019 00:07:57 +0200 Subject: [kolla] State of ppc64le support In-Reply-To: <7f3bf5dc-3f1e-6369-4c55-eb9780f05eda@linux.vnet.ibm.com> References: <7f3bf5dc-3f1e-6369-4c55-eb9780f05eda@linux.vnet.ibm.com> Message-ID: <29ed66c8-3f49-8e24-ccca-ccb73bc33374@linaro.org> W dniu 16.09.2019 o 18:45, Michael Turek pisze: > Hey all, > > We do use kolla. Let me see if I can shed some light here. I have to admit that I wanted to check is there anyone using Kolla on ppc64le. Thanks for replies. >>         CentOS builds lack 'rabbitmq' 3.7.10 (we use external repo) and >>         'gnocchi' binary images are not buildable due to lack of some >>         packages >>         (issue already reported to CentOS by TripleO team). >> > > We should be getting the gnocchi issue fixed. See this thread > https://lists.centos.org/pipermail/centos-devel/2019-September/017721.html I am in that thread ;) > The rabbitmq issue is confusing me. The version provided for x86_64 > seems to be the same one provided for ppc64le, but maybe I'm missing > something. If there's a package we need to get published, I can > investigate. External repo is used and we install "rabbitmq-3.7.10". One 'if ppc64le' check and will work by using in-centos-repo version. >>         Ubuntu builds lack MariaDB 10.3 because upstream repo is broken. >>         Packages index is provided for 'ppc64le' but no packages so we >>         get 404 >>         errors. >> > Unfortunately I'm not well versed on the gaps in Ubuntu. I am fine with it. No one noticed == no one uses. > The question I have is, what do you need to maintain support? I can join > this week's IRC meeting if that would be helpful. For me a knowledge that someone is using is enough to keep it available. Would not call it 'maintaining support' as I do builds on ppc64le once per cycle (if at all per cycle). > Also, last week mnasiadka reached out to me asking if we might be able > to turn on kolla jobs in pkvmci (our third party CI - > https://wiki.openstack.org/wiki/ThirdPartySystems/IBMPowerKVMCI ). I > plan to talk to our CI folks this week to see if we have capacity for this. Some kind of CI job would be great. Even simple 'centos/source' combo. I have two patches adding AArch64 CI but we (Linaro) have to fix our OpenStack cluster first. All Ceph nodes use hard drives only and probably not configured optimally. As a result we are unable to fit in three hours required by Zuul. From sean.mcginnis at gmx.com Mon Sep 16 22:11:13 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 16 Sep 2019 17:11:13 -0500 Subject: [all][interop][cinder][qa] API changes with/without microversion and Tempest verification of API interoperability In-Reply-To: <4c891e4a-84f6-f88c-08ca-c2563ed34bc7@gmail.com> References: <16d397e41f7.12873dbb126838.8168349135797367489@ghanshyammann.com> <16d398641dd.ee09dee227347.1935004124034589982@ghanshyammann.com> <4c891e4a-84f6-f88c-08ca-c2563ed34bc7@gmail.com> Message-ID: <20190916221113.GA31638@sm-workstation> > > Backend/type specific information leaking out of the API dynamically like > that is definitely an interoperability problem and as you said it sounds > like it's been that way for a long time. 
The compute servers diagnostics API > had a similar problem for a long time and the associated Tempest test for > that API was disabled for a long time because the response body was > hypervisor specific, so we eventually standardized it in a microversion so > it was driver agnostic. > Except this isn't backend specific information that is leaking. It's just reflecting the configuration of the system. From fungi at yuggoth.org Mon Sep 16 22:18:23 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 16 Sep 2019 22:18:23 +0000 Subject: [i18n][mogan][neutron][swift][tc][valet] Cleaning up IRC logging for defunct channels Message-ID: <20190916221822.o5diqcqzgyvqevi4@yuggoth.org> Freenode imposes a hard limit of 120 simultaneously joined channels for any single account. We've once again reached that limit with our channel-logging meetbot. As a quick measure, I've proposed a bit of cleanup: https://review.opendev.org/682500 Analysis of IRC channel logs indicates the following have seen 5 or fewer non-bot comments posted in the past 12 months and are likely of no value to continue logging: 5 #openstack-vpnaas 2 #swift3 2 #openstack-ko 1 #openstack-deployment 1 #midonet 0 #openstack-valet 0 #openstack-swg 0 #openstack-mogan Please let me know either here on the ML or with a comment on the review linked above if you have a reason to continue logging any of these channels. I'd like to merge it later this week if possible. Thanks! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From lucioseki at gmail.com Mon Sep 16 12:59:26 2019 From: lucioseki at gmail.com (Lucio Seki) Date: Mon, 16 Sep 2019 09:59:26 -0300 Subject: [neutron] DevStack with IPv6 In-Reply-To: References: Message-ID: Hi Antonio. Yes, it is $ sysctl net.ipv6.conf.all.forwarding net.ipv6.conf.all.forwarding = 1 On Sat, Sep 14, 2019 at 6:02 AM Antonio Ojea wrote: > Can you check if ipv6 forwarding is enabled in the router namespace? > > net.ipv6.conf.all.forwarding=1 > > On Sat, 14 Sep 2019 at 02:13, Lucio Seki wrote: > > > > I recreated my security group rules, to set remote_ip_prefix to ::/0 > instead of None as in Donny's environment, but made no difference. :-( > > > > On Fri, Sep 13, 2019 at 3:55 PM Donny Davis > wrote: > >> > >> So outbound traffic works, but inbound traffic doesn't? > >> > >> Here is my icmp security group rule for ipv6. 
> >> > +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > >> | Field | Value > > | > >> > +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > >> | created_at | 2019-07-30T00:50:25Z > > | > >> | description | > > | > >> | direction | ingress > > | > >> | ether_type | IPv6 > > | > >> | id | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa > > | > >> | location | Munch({'cloud': '', 'region_name': 'regionOne', > 'zone': None, 'project': Munch({'id': 'e8fd161dc34c421a979a9e6421f823e9', > 'name': 'openstackzuul', 'domain_id': None, 'domain_name': 'Default'})}) | > >> | name | None > > | > >> | port_range_max | None > > | > >> | port_range_min | None > > | > >> | project_id | e8fd161dc34c421a979a9e6421f823e9 > > | > >> | protocol | icmp > > | > >> | remote_group_id | None > > | > >> | remote_ip_prefix | ::/0 > > | > >> | revision_number | 0 > > | > >> | security_group_id | bcedc0e0-e2e8-41fc-aeaa-afd2e10c7ab6 > > | > >> | tags | [] > > | > >> | updated_at | 2019-07-30T00:50:25Z > > | > >> > +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > >> > >> > >> > >> On Fri, Sep 13, 2019 at 2:48 PM Lucio Seki wrote: > >>> > >>> Hmm OK, I'll try to figure out what hacking > create_neutron_initial_network does... > >>> > >>> BTW, I noticed that I can ping6 the router interface at private subnet > from the DevStack host: > >>> > >>> $ ping6 fd12:67:1:1::1 > >>> PING fd12:67:1:1::1(fd12:67:1:1::1) 56 data bytes > >>> 64 bytes from fd12:67:1:1::1: icmp_seq=1 ttl=64 time=0.646 ms > >>> 64 bytes from fd12:67:1:1::1: icmp_seq=2 ttl=64 time=0.095 ms > >>> 64 bytes from fd12:67:1:1::1: icmp_seq=3 ttl=64 time=0.106 ms > >>> 64 bytes from fd12:67:1:1::1: icmp_seq=4 ttl=64 time=0.129 ms > >>> > >>> And also I can ping6 the public subnet interface from the VM: > >>> > >>> root at ubuntu:~# ping6 fd12:67:1::3c > >>> PING fd12:67:1::3c (fd12:67:1::3c): 56 data bytes > >>> ping: getnameinfo: Temporary failure in name resolution > >>> 64 bytes from unknown: icmp_seq=0 ttl=64 time=2.079 ms > >>> ping: getnameinfo: Temporary failure in name resolution > >>> 64 bytes from unknown: icmp_seq=1 ttl=64 time=1.385 ms > >>> ping: getnameinfo: Temporary failure in name resolution > >>> 64 bytes from unknown: icmp_seq=2 ttl=64 time=0.881 ms > >>> > >>> Not sure if it means that there's something missing within the router > itself... 
> >>> > >>> On Fri, Sep 13, 2019 at 2:24 PM Donny Davis > wrote: > >>>> > >>>> Also I have no v6 address on my br-ex > >>>> > >>>> On Fri, Sep 13, 2019 at 1:22 PM Donny Davis > wrote: > >>>>> > >>>>> Well here is the output from my rule list that is in prod right now > with ipv6 > >>>>> > +--------------------------------------+-------------+-----------+------------+-----------------------+ > >>>>> | ID | IP Protocol | IP Range | > Port Range | Remote Security Group | > >>>>> > +--------------------------------------+-------------+-----------+------------+-----------------------+ > >>>>> | 9ab00b6f-2bc2-4554-818d-eff6e0570943 | None | 0.0.0.0/0 | > | None | > >>>>> | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa | icmp | ::/0 | > | None | > >>>>> | e7fd4840-5fbd-4709-b918-f80eac5cb6da | None | ::/0 | > | None | > >>>>> | e9968d53-7efe-4a9e-ad42-1092ffaf52e7 | None | None | > | None | > >>>>> | ec1ea961-9025-4229-92cf-618026a1851b | None | None | > | None | > >>>>> > +--------------------------------------+-------------+-----------+------------+-----------------------+ > >>>>> > >>>>> > +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > >>>>> | Field | Value > > | > >>>>> > +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > >>>>> | created_at | 2019-07-30T00:50:25Z > > | > >>>>> | description | > > | > >>>>> | direction | ingress > > | > >>>>> | ether_type | IPv6 > > | > >>>>> | id | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa > > | > >>>>> | location | Munch({'cloud': '', 'region_name': > 'regionOne', 'zone': None, 'project': Munch({'id': > 'e8fd161dc34c421a979a9e6421f823e9', 'name': 'openstackzuul', 'domain_id': > None, 'domain_name': 'Default'})}) | > >>>>> | name | None > > | > >>>>> | port_range_max | None > > | > >>>>> | port_range_min | None > > | > >>>>> | project_id | e8fd161dc34c421a979a9e6421f823e9 > > | > >>>>> | protocol | icmp > > | > >>>>> | remote_group_id | None > > | > >>>>> | remote_ip_prefix | ::/0 > > | > >>>>> | revision_number | 0 > > | > >>>>> | security_group_id | bcedc0e0-e2e8-41fc-aeaa-afd2e10c7ab6 > > | > >>>>> | tags | [] > > | > >>>>> | updated_at | 2019-07-30T00:50:25Z > > | > >>>>> > +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> On Fri, Sep 13, 2019 at 9:24 AM Lucio Seki > wrote: > >>>>>> > >>>>>> Hi Donny, following are the rules: > >>>>>> > >>>>>> $ openstack security group list --project admin > >>>>>> > +--------------------------------------+---------+------------------------+----------------------------------+------+ > >>>>>> | ID | Name | Description > | Project | Tags | > >>>>>> > +--------------------------------------+---------+------------------------+----------------------------------+------+ > >>>>>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | default | Default security > group | 68e3942285a24fb5bd1aed30e166aaee | [] | > >>>>>> > +--------------------------------------+---------+------------------------+----------------------------------+------+ > >>>>>> > >>>>>> $ openstack security group rule list > 
d0136b0e-ee51-461c-afa0-c5adb88dd0dd > >>>>>> > +--------------------------------------+-------------+----------+------------+--------------------------------------+ > >>>>>> | ID | IP Protocol | IP Range | > Port Range | Remote Security Group | > >>>>>> > +--------------------------------------+-------------+----------+------------+--------------------------------------+ > >>>>>> | 38394345-3e44-4284-a519-cdd8af020f30 | tcp | ::/0 | > 22:22 | None | > >>>>>> | 40881f76-c87f-4685-b3af-c3497dd44837 | None | None | > | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | > >>>>>> | 56d4ae52-195e-48df-871e-dc70b899b7ba | None | None | > | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | > >>>>>> | 759edd06-b698-45ca-94cd-44e0cc2cc848 | ipv6-icmp | None | > | None | > >>>>>> | 762effae-b8e5-42ac-ba99-e85a7bc42455 | tcp | ::/0 | > 22:22 | None | > >>>>>> | 81f3588d-4159-4af2-ad50-ff6b76add9cf | ipv6-icmp | None | > | None | > >>>>>> > +--------------------------------------+-------------+----------+------------+--------------------------------------+ > >>>>>> > >>>>>> $ openstack security group rule show > 759edd06-b698-45ca-94cd-44e0cc2cc848 > >>>>>> > +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > >>>>>> | Field | Value > > | > >>>>>> > +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > >>>>>> | created_at | 2019-09-03T16:51:41Z > > | > >>>>>> | description | > > | > >>>>>> | direction | egress > > | > >>>>>> | ether_type | IPv6 > > | > >>>>>> | id | 759edd06-b698-45ca-94cd-44e0cc2cc848 > > | > >>>>>> | location | Munch({'project': Munch({'domain_id': > 'default', 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', > 'domain_name': None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': > None}) | > >>>>>> | name | None > > | > >>>>>> | port_range_max | None > > | > >>>>>> | port_range_min | None > > | > >>>>>> | project_id | 68e3942285a24fb5bd1aed30e166aaee > > | > >>>>>> | protocol | ipv6-icmp > > | > >>>>>> | remote_group_id | None > > | > >>>>>> | remote_ip_prefix | None > > | > >>>>>> | revision_number | 0 > > | > >>>>>> | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd > > | > >>>>>> | tags | [] > > | > >>>>>> | updated_at | 2019-09-03T16:51:41Z > > | > >>>>>> > +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > >>>>>> > >>>>>> $ openstack security group rule show > 81f3588d-4159-4af2-ad50-ff6b76add9cf > >>>>>> > +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > >>>>>> | Field | Value > > | > >>>>>> > +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > >>>>>> | created_at | 2019-09-03T16:51:30Z > > | > >>>>>> | description | > > | > >>>>>> | direction | ingress > > | > >>>>>> | ether_type | IPv6 > > | > >>>>>> | id | 81f3588d-4159-4af2-ad50-ff6b76add9cf > > | > >>>>>> | location | 
Munch({'project': Munch({'domain_id': > 'default', 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', > 'domain_name': None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': > None}) | > >>>>>> | name | None > > | > >>>>>> | port_range_max | None > > | > >>>>>> | port_range_min | None > > | > >>>>>> | project_id | 68e3942285a24fb5bd1aed30e166aaee > > | > >>>>>> | protocol | ipv6-icmp > > | > >>>>>> | remote_group_id | None > > | > >>>>>> | remote_ip_prefix | None > > | > >>>>>> | revision_number | 0 > > | > >>>>>> | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd > > | > >>>>>> | tags | [] > > | > >>>>>> | updated_at | 2019-09-03T16:51:30Z > > | > >>>>>> > +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > >>>>>> > >>>>>> > >>>>>> On Fri, Sep 13, 2019 at 10:16 AM Donny Davis > wrote: > >>>>>>> > >>>>>>> Security group rules? > >>>>>>> > >>>>>>> Donny Davis > >>>>>>> c: 805 814 6800 > >>>>>>> > >>>>>>> On Thu, Sep 12, 2019, 5:53 PM Lucio Seki > wrote: > >>>>>>>> > >>>>>>>> Hi folks, I'm having troubles to ping6 a VM running over DevStack > from its hypervisor. > >>>>>>>> Could you please help me troubleshooting it? > >>>>>>>> > >>>>>>>> I deployed DevStack with NEUTRON_CREATE_INITIAL_NETWORKS=False, > >>>>>>>> and manually created the networks, subnets and router. Following > is my router: > >>>>>>>> > >>>>>>>> $ openstack router show router1 -c external_gateway_info -c > interfaces_info > >>>>>>>> > +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > >>>>>>>> | Field | Value > > > > | > >>>>>>>> > +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > >>>>>>>> | external_gateway_info | {"network_id": > "b87048ed-1be9-4f31-8d7e-fe74921aeec4", "enable_snat": true, > "external_fixed_ips": [{"subnet_id": > "28a00bc3-b30b-456f-b26a-44b50d37183f", "ip_address": "10.2.0.199"}, > {"subnet_id": "a9729beb-b297-4fec-8ec3-7703f7f6f4bc", "ip_address": > "fd12:67:1::3c"}]} | > >>>>>>>> | interfaces_info | [{"subnet_id": > "081e8508-4ceb-4aaf-bf91-36a1e22a768c", "ip_address": "fd12:67:1:1::1", > "port_id": "75391abd-8ac8-41f8-acf8-3dfaf2a6b08f"}] > > | > >>>>>>>> > +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > >>>>>>>> > >>>>>>>> I'm trying to ping6 the following VM: > >>>>>>>> > >>>>>>>> $ openstack server list > >>>>>>>> > +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ > >>>>>>>> | ID | Name | Status | > Networks | Image | Flavor | > >>>>>>>> > +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ > >>>>>>>> | 938854d0-80e9-45b2-bc29-8fe7651ffa93 | 
manila1 | ACTIVE | > private1=fd12:67:1:1:f816:3eff:fe0e:17c3 | manila | manila | > >>>>>>>> > +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ > >>>>>>>> > >>>>>>>> I intend to reach it via br-ex interface of the hypervisor: > >>>>>>>> > >>>>>>>> $ ip a show dev br-ex > >>>>>>>> 9: br-ex: mtu 1500 qdisc > noqueue state UNKNOWN group default qlen 1000 > >>>>>>>> link/ether 0e:82:a1:ba:77:4c brd ff:ff:ff:ff:ff:ff > >>>>>>>> inet6 fd12:67:1::1/64 scope global > >>>>>>>> valid_lft forever preferred_lft forever > >>>>>>>> inet6 fe80::c82:a1ff:feba:774c/64 scope link > >>>>>>>> valid_lft forever preferred_lft forever > >>>>>>>> > >>>>>>>> The hypervisor has the following routes: > >>>>>>>> > >>>>>>>> $ ip -6 route > >>>>>>>> fd12:67:1:1::/64 via fd12:67:1::3c dev br-ex metric 1024 pref > medium > >>>>>>>> fe80::/64 dev ens3 proto kernel metric 256 pref medium > >>>>>>>> fe80::/64 dev br-ex proto kernel metric 256 pref medium > >>>>>>>> fe80::/64 dev br-int proto kernel metric 256 pref medium > >>>>>>>> fe80::/64 dev tapa5cf4799-9f proto kernel metric 256 pref medium > >>>>>>>> > >>>>>>>> And within the VM has the following routes: > >>>>>>>> > >>>>>>>> root at ubuntu:~# ip -6 route > >>>>>>>> root at ubuntu:~# ip -6 route > >>>>>>>> fd12:67:1::/64 via fd12:67:1:1::1 dev ens3 metric 1024 pref medium > >>>>>>>> fd12:67:1:1::/64 dev ens3 proto kernel metric 256 expires > 86360sec pref medium > >>>>>>>> fe80::/64 dev ens3 proto kernel metric 256 pref medium > >>>>>>>> default via fe80::f816:3eff:feb3:bd56 dev ens3 proto ra metric > 1024 expires 260sec hoplimit 64 pref medium > >>>>>>>> > >>>>>>>> Though the ping6 from VM to hypervisor doesn't work: > >>>>>>>> root at ubuntu:~# ping6 fd12:67:1::1 -c4 > >>>>>>>> PING fd12:67:1::1 (fd12:67:1::1): 56 data bytes > >>>>>>>> --- fd12:67:1::1 ping statistics --- > >>>>>>>> 4 packets transmitted, 0 packets received, 100% packet loss > >>>>>>>> > >>>>>>>> I'm able to tcpdump inside the router1 netns and see that request > packet is passing there, but can't see any reply packets: > >>>>>>>> > >>>>>>>> $ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 > tcpdump -l -i any icmp6 > >>>>>>>> tcpdump: verbose output suppressed, use -v or -vv for full > protocol decode > >>>>>>>> listening on any, link-type LINUX_SLL (Linux cooked), capture > size 262144 bytes > >>>>>>>> 21:29:29.351358 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > > fd12:67:1::1: ICMP6, echo request, seq 0, length 64 > >>>>>>>> 21:29:30.033316 IP6 fe80::f816:3eff:feb3:bd56 > > fe80::f816:3eff:fe0e:17c3: ICMP6, neighbor solicitation, who has > fe80::f816:3eff:fe0e:17c3, length 32 > >>>>>>>> 21:29:30.035807 IP6 fe80::f816:3eff:fe0e:17c3 > > fe80::f816:3eff:feb3:bd56: ICMP6, neighbor advertisement, tgt is > fe80::f816:3eff:fe0e:17c3, length 24 > >>>>>>>> 21:29:30.353646 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > > fd12:67:1::1: ICMP6, echo request, seq 1, length 64 > >>>>>>>> 21:29:31.355410 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > > fd12:67:1::1: ICMP6, echo request, seq 2, length 64 > >>>>>>>> 21:29:32.357239 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > > fd12:67:1::1: ICMP6, echo request, seq 3, length 64 > >>>>>>>> > >>>>>>>> The same happens from hypervisor to VM. I only acan see the > request packets, but no reply packets. > >>>>>>>> > >>>>>>>> Thanks in advance, > >>>>>>>> Lucio Seki > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smooney at redhat.com Mon Sep 16 22:59:19 2019 From: smooney at redhat.com (Sean Mooney) Date: Mon, 16 Sep 2019 23:59:19 +0100 Subject: [all][interop][cinder][qa] API changes with/without microversion and Tempest verification of API interoperability In-Reply-To: <20190916221113.GA31638@sm-workstation> References: <16d397e41f7.12873dbb126838.8168349135797367489@ghanshyammann.com> <16d398641dd.ee09dee227347.1935004124034589982@ghanshyammann.com> <4c891e4a-84f6-f88c-08ca-c2563ed34bc7@gmail.com> <20190916221113.GA31638@sm-workstation> Message-ID: <792c4d6f9c6849831d29719e527de699b01026fd.camel@redhat.com> On Mon, 2019-09-16 at 17:11 -0500, Sean McGinnis wrote: > > > > Backend/type specific information leaking out of the API dynamically like > > that is definitely an interoperability problem and as you said it sounds > > like it's been that way for a long time. The compute servers diagnostics API > > had a similar problem for a long time and the associated Tempest test for > > that API was disabled for a long time because the response body was > > hypervisor specific, so we eventually standardized it in a microversion so > > it was driver agnostic. > > > > Except this isn't backend specific information that is leaking. It's just > reflecting the configuration of the system. yes and config driven api behavior is also an iterop problem. ideally you should not be able to tell if cinder is abcked by ceph or emc form the api responce at all. sure you might have a volume type call ceph and another called emc but both should be report capasty in the same field with teh same unit. ideally you would have a snapshots or gigabytes quota and option ly associate that with a volume types but shanshot_ceph is not interoperable aross could if that exstis with that name solely becaue ceph was used on the backend. as a client i would have to look at snapshost* to figure out my quotat and in princiapal that is an ubounded set. > From gmann at ghanshyammann.com Tue Sep 17 00:01:22 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 17 Sep 2019 09:01:22 +0900 Subject: [all][interop][cinder][qa] API changes with/without microversion and Tempest verification of API interoperability In-Reply-To: References: <16d397e41f7.12873dbb126838.8168349135797367489@ghanshyammann.com> <16d398641dd.ee09dee227347.1935004124034589982@ghanshyammann.com> Message-ID: <16d3c861117.d3b1337055686.8802713726745370694@ghanshyammann.com> ---- On Tue, 17 Sep 2019 02:40:36 +0900 Eric Harney wrote ---- > On 9/16/19 6:02 AM, Ghanshyam Mann wrote: > > ---- On Mon, 16 Sep 2019 18:53:58 +0900 Ghanshyam Mann wrote ---- > > > Hello Everyone, > > > > > > As per discussion over ML, Tempest started the JSON schema strict validation for Volume APIs response [1]. > > > Because it may affect the interop certification, it was explained to the Interop team as well as in the Board of Director meeting[2]. > > > > > > In Train, Tempest started implementing the validation and found an API change where the new field was added in API response without versioning[3] (Cinder has API microversion mechanism). IMO, that was not the correct way to change the API and as per API-WG guidelines[4] any field added/modified/removed in API should be with microverison(means old versions/user should not be affected by that change) and must for API interoperability. > > > > > > With JSON schema validation, Tempest verifies the API interoperability recommended behaviour by API-WG. 
But as per IRC conversion with cinder team, we have different opinion on API interoperability and how API should be changed under microversion mechanism. I would like to have a conclusion on this so that Tempest can verify or leave the Volume API for strict validation. > > > > I found the same flow chart what Sean created in Nova about "when to bump microverison" in Cinder also which clearly say any addition to response need new microversion. > > - https://docs.openstack.org/cinder/latest/contributor/api_microversion_dev.html > > > > -gmann > > > > I don't believe that it is clear that a microversion bump was required > for the "groups" response showing up in a GET quota-sets response, and > here's why: > > This API has, since at least Havana, returned dynamic fields based on > quotas that are assigned to volume types. i.e.: > > $ cinder --debug quota-show b73b1b7e82a247038cd01a441ec5a806 > DEBUG:keystoneauth:RESP BODY: {"quota_set": {"per_volume_gigabytes": -1, > "volumes_ceph": -1, "groups": 10, "gigabytes": 1000, "backup_gigabytes": > 1000, "snapshots": 10, "volumes_enc": -1, "snapshots_enc": -1, > "snapshots_ceph": -1, "gigabytes_ceph": -1, "volumes": 10, > "gigabytes_enc": -1, "backups": 10, "id": > "b73b1b7e82a247038cd01a441ec5a806"}} > > "gigabytes_ceph" is in that response because there's a "ceph" volume > type defined, same for "gigabytes_enc", etc. > > This puts this API alongside something more like listing volume types -- > you get a list of what's defined on the deployment, not a pre-baked list > of defined fields. > > Complaints about the fact that "groups" being added without a > microversion imply that these other dynamic fields shouldn't be in this > response either -- but this is how this API works. > > There's a lot of talk here about interoperability problems... what are > those problems, exactly? If we ignore Ocata and just look at Train -- > why is this API not problematic for interoperability there, when > requests on different clouds would return different data, depending on > how types are configured? > > It's not clear to me that rectifying the microversion concerns around > the "groups" field is helpful without also understanding this piece, > because if the concern is that different clouds return different fields > for this API -- that will still happen. We need more detail to > understand how to address this, and what the problem is that we are > trying to solve exactly. There are two things here. 1. API behaviour depends on backend. This has been discussed two years back also and Tempest team along with cinder team decided not to test the backend-specific behaviour in Tempest[1]. 2. API is changed without versioning. The second one is the issue here. If any API is changed without versioning cause the interoperability issue here. New field is being added for older microversion also for same backend. *Why this is interoperability: CloudA with same configuration and same backend is upgraded and have API return new field. I deploy my app on that cloud and use that field. Now CloudB with same configuration and same backend is not upgraded yet so does not have API return the new field added. Now I want to move my app from CloudA to CloudB and it will fail because CloudB API does not have that new field. And I cannot check what version it got added or there is no mechanism for app to discover that field as expected in which Cloud. So this is a very clear case of interoperability. 
There is no way for end-user to discover the API change which is a real pain point for them. Note: same backend and same configuration cloud have different behaviour of API. We should consider the addition of new field same as delete or modify (name or type) any field in API. > > (Other than the problem that Tempest currently fails on Ocata. My > inclination is still that the Tempest tests could just be wrong.) Ocata gate is going to be solved by https://review.opendev.org/#/c/681950/ -gmann [1] http://lists.openstack.org/pipermail/openstack-dev/2017-May/116172.html > > > > > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000358.html > > > [2] > > > - http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003652.html > > > - http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003655.html > > > [3] https://bugs.launchpad.net/tempest/+bug/1843762 https://review.opendev.org/#/c/439461/ > > > [4] https://specs.openstack.org/openstack/api-wg/guidelines/api_interoperability.html > > > > > > -gmann > > > > > > > > > > From gmann at ghanshyammann.com Tue Sep 17 00:05:56 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 17 Sep 2019 09:05:56 +0900 Subject: [all][interop][cinder][qa] API changes with/without microversion and Tempest verification of API interoperability In-Reply-To: <792c4d6f9c6849831d29719e527de699b01026fd.camel@redhat.com> References: <16d397e41f7.12873dbb126838.8168349135797367489@ghanshyammann.com> <16d398641dd.ee09dee227347.1935004124034589982@ghanshyammann.com> <4c891e4a-84f6-f88c-08ca-c2563ed34bc7@gmail.com> <20190916221113.GA31638@sm-workstation> <792c4d6f9c6849831d29719e527de699b01026fd.camel@redhat.com> Message-ID: <16d3c8a4243.10a5c411e55700.6669950579531609398@ghanshyammann.com> ---- On Tue, 17 Sep 2019 07:59:19 +0900 Sean Mooney wrote ---- > On Mon, 2019-09-16 at 17:11 -0500, Sean McGinnis wrote: > > > > > > Backend/type specific information leaking out of the API dynamically like > > > that is definitely an interoperability problem and as you said it sounds > > > like it's been that way for a long time. The compute servers diagnostics API > > > had a similar problem for a long time and the associated Tempest test for > > > that API was disabled for a long time because the response body was > > > hypervisor specific, so we eventually standardized it in a microversion so > > > it was driver agnostic. > > > > > > > Except this isn't backend specific information that is leaking. It's just > > reflecting the configuration of the system. > yes and config driven api behavior is also an iterop problem. > ideally you should not be able to tell if cinder is abcked by ceph or emc form the > api responce at all. > > sure you might have a volume type call ceph and another called emc but both should be > report capasty in the same field with teh same unit. > > ideally you would have a snapshots or gigabytes quota and option ly associate that with a volume types > but shanshot_ceph is not interoperable aross could if that exstis with that name solely becaue ceph was used on the > backend. as a client i would have to look at snapshost* to figure out my quotat and in princiapal that is an ubounded > set. Yeah and this is real pain point for end-user or app using API directly. Dynamic API behaviour base don system configuration is interoperability issue. In bug#1687538 case, new field is going to be reflected for the same backend and same configuration Cloud. 
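For contrast, when such an addition is gated behind a microversion, an application can at least discover whether a given cloud supports it before relying on the new field. A rough, illustrative sketch -- it assumes the usual unauthenticated version-discovery document on the volume endpoint, the endpoint URL is made up, and the microversion used here is a placeholder rather than the number of any real change:

    import requests

    VOLUME_ENDPOINT = "http://cloud.example.com:8776"  # hypothetical endpoint
    NEEDED = (3, 42)                                   # placeholder microversion

    def max_volume_microversion(endpoint):
        """Return the highest microversion advertised by the deployment."""
        doc = requests.get(endpoint + "/").json()
        v3 = next(v for v in doc["versions"] if v["id"].startswith("v3"))
        return tuple(int(x) for x in v3["version"].split("."))

    if max_volume_microversion(VOLUME_ENDPOINT) >= NEEDED:
        # Ask for the newer behaviour explicitly; the new field will be there.
        headers = {"OpenStack-API-Version": "volume %d.%d" % NEEDED}
    else:
        # Older cloud: the application knows up front the field will be absent.
        headers = {}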
Without a microversion there is no such check: a cloud provider upgrades their cloud from Ocata to any later release, and users start getting the new field with no mechanism to discover whether that field is expected to be present or not. -gmann > > > > >
From johnsomor at gmail.com Tue Sep 17 00:24:19 2019 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 16 Sep 2019 17:24:19 -0700 Subject: [neutron] [neutron-lib] Change of in neutron-lib lead In-Reply-To: References: Message-ID: Russel, Thank you for your work on neutron-lib and helping the advanced services stay in sync. It was greatly appreciated by the Octavia team. Good luck on your next adventure. Michael
On Mon, Sep 16, 2019 at 2:50 PM Miguel Lavalle wrote: > > Dear Neutrinos, > > Many of you might have heard the Russel Boden's employer is changing his focus. As a consequence, Boden won't be able to continue leading our neutron-lib efforts. We want to thank you for the great job he has done over many cycles in advancing this sub-project. > > We are also looking for a volunteer to lead neutron-lib. Boden put together an etherpad with what needs to be done in the near future: https://etherpad.openstack.org/p/neutron-lib-volunteers-and-punch-list > > Regards > > Miguel
From kevinzs2048 at gmail.com Tue Sep 17 01:24:41 2019 From: kevinzs2048 at gmail.com (Shuai Zhao) Date: Tue, 17 Sep 2019 09:24:41 +0800 Subject: [neutron]IPv6 Prefix Delegation could not be activated in newest version Neutron Message-ID: Hi All, I'm working on validating IPv6 Prefix Delegation (PD) in the newest Neutron. What I want is to offer a global unicast address to the VM, and I find PD is a good solution for me. I followed the guide *https://docs.openstack.org/neutron/latest/admin/config-ipv6.html* to set up PD, dibbler-server and devstack, but I find I cannot trigger the PD process. *The Dibbler server prints nothing when I attach the subnet to a router that has an external gateway.* The whole procedure is recorded in the bug: https://bugs.launchpad.net/neutron/+bug/1844123. Thanks in advance for your help! -------------- next part -------------- An HTML attachment was scrubbed... URL:
From gmann at ghanshyammann.com Tue Sep 17 02:51:42 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 17 Sep 2019 11:51:42 +0900 Subject: [goals][IPv6-Only Deployments and Testing] Week R-4 Update Message-ID: <16d3d2203c8.b47bfe5156036.4862537349817585954@ghanshyammann.com> Hello Everyone, Below is the progress on the IPv6 goal during the R-4 week. I started the legacy jobs for IPv6 deployment by duplicating the run.yaml, which is the only practical way to do it.
Summary: The projects that still need to prepare the IPv6 job: * Ec2-Api * Freezer * Heat * Ironic * Karbor * Kolla * Kuryr * Magnum * Manila * Masakari * Mistral * Murano * Octavia * Swift
The projects waiting for the IPv6 job patch to merge: if the patch is failing, help me to debug it; otherwise review and merge. * Barbican * Blazar * Cyborg * Tricircle * Vitrage * Zaqar * Cinder * Glance * Monasca * Neutron * Qinling * Quality Assurance * Sahara * Searchlight * Senlin * Tacker
The projects that have merged the IPv6 jobs: * Designate * Murano * Trove * Cloudkitty * Congress * Horizon * Keystone * Nova * Placement * Solum * Telemetry * Watcher * Zun
The projects that do not need the IPv6 job (CLI, lib, deployment projects, etc.): if I missed anything and an IPv6 job is needed, please reply; otherwise I will mark their task in StoryBoard as invalid.
* Adjutant * Documentation * I18n * Infrastructure * Loci * Openstack Charms * Openstack-Chef * Openstack-Helm * Openstackansible * Openstackclient * Openstacksdk * Oslo * Packaging-Rpm * Powervmstackers * Puppet Openstack * Rally * Release Management * Requirements * Storlets * Tripleo * Winstackers Storyboard: ========= - https://storyboard.openstack.org/#!/story/2005477 IPv6 missing support found: ===================== 1. https://review.opendev.org/#/c/673397/ 2. https://review.opendev.org/#/c/673449/ 3. https://review.opendev.org/#/c/677524/ How you can help: ============== - Each project needs to look for and review the ipv6 job patch. - Verify it works fine on ipv6 and no ipv4 used in conf etc - Any other specific scenario needs to be added as part of project IPv6 verification. - Help on debugging and fix the bug in IPv6 job is failing. Everything related to this goal can be found under this topic: Topic: https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged) How to define and run new IPv6 Job on project side: ======================================= - I prepared a wiki page to describe this section - https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing Review suggestion: ============== - Main goal of these jobs will be whether your service is able to listen on IPv6 and can communicate to any other services either OpenStack or DB or rabbitmq etc on IPv6 or not. So check your proposed job with that point of view. If anything missing, comment on patch. - One example was - I missed to configure novnc address to IPv6- https://review.opendev.org/#/c/672493/ - base script as part of 'devstack-tempest-ipv6' will do basic checks for endpoints on IPv6 and some devstack var setting. But if your project needs more specific verification then it can be added in project side job as post-run playbooks as described in wiki page[1]. [1] https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing -gmann From naohiro.sameshima at global.ntt Tue Sep 17 03:22:06 2019 From: naohiro.sameshima at global.ntt (=?utf-8?B?TmFvaGlybyBTYW1lc2hpbWHvvIjprqvls7Yg55u05rSL77yJKEdyb3VwKQ==?=) Date: Tue, 17 Sep 2019 03:22:06 +0000 Subject: [dev] [glance] proposal for S3 store driver re-support as galnce_store backend In-Reply-To: <4f48c659-d216-31a7-f34f-c09e9d51f31d@gmail.com> References: , <4f48c659-d216-31a7-f34f-c09e9d51f31d@gmail.com> Message-ID: Hello, Brian Thanks for the reply. > From what I've heard, there's a revival of interest in the S3 driver, so > it's great that you've decided to work on it. You've missed the Train > for this cycle, however, (sorry, I couldn't resist) as the final release > for nonclient libraries was last week. > The easiest way to discuss getting S3 support into Usurri would be at > the weekly Glance meeting on Thursdays at 1400 UTC.  I realize that it was not in time for the Train cycle. Towards S3 support in the Usurri cycle I would like to discuss about this and propose spec or spec-lite. I also want to maintain the S3 driver. Fortunately, the weekly Glance meeting is held at a time that can participate in my time zone. So, I'm interested in participating, but what should I prepare in advance? (spec or spec lite?, code?, ...) Could you give me some advice about this? Thanks, Naohiro This email and all contents are subject to the following disclaimer: https://hello.global.ntt/en-us/email-disclaimer -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From akekane at redhat.com Tue Sep 17 04:45:22 2019 From: akekane at redhat.com (Abhishek Kekane) Date: Tue, 17 Sep 2019 10:15:22 +0530 Subject: [dev] [glance] proposal for S3 store driver re-support as galnce_store backend In-Reply-To: References: <4f48c659-d216-31a7-f34f-c09e9d51f31d@gmail.com> Message-ID: Hi Naohiro, At this moment a glance-specs will be enough for discussion. Thanks & Best Regards, Abhishek Kekane On Tue, Sep 17, 2019 at 8:56 AM Naohiro Sameshima(鮫島 直洋)(Group) wrote: > Hello, Brian > > Thanks for the reply. > > > From what I've heard, there's a revival of interest in the S3 driver, so > > it's great that you've decided to work on it. You've missed the Train > > for this cycle, however, (sorry, I couldn't resist) as the final release > > for nonclient libraries was last week. > > > The easiest way to discuss getting S3 support into Usurri would be at > > the weekly Glance meeting on Thursdays at 1400 UTC. > > I realize that it was not in time for the Train cycle. > Towards S3 support in the Usurri cycle I would like to discuss about this > and propose spec or spec-lite. > I also want to maintain the S3 driver. > > Fortunately, the weekly Glance meeting is held at a time that can > participate in my time zone. > So, I'm interested in participating, but what should I prepare in advance? > (spec or spec lite?, code?, ...) > > Could you give me some advice about this? > > Thanks, > > Naohiro > This email and all contents are subject to the following disclaimer: > https://hello.global.ntt/en-us/email-disclaimer > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Tue Sep 17 05:11:25 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 17 Sep 2019 14:11:25 +0900 Subject: [i18n][mogan][neutron][swift][tc][valet] Cleaning up IRC logging for defunct channels In-Reply-To: <20190916221822.o5diqcqzgyvqevi4@yuggoth.org> References: <20190916221822.o5diqcqzgyvqevi4@yuggoth.org> Message-ID: On Tue, Sep 17, 2019 at 7:20 AM Jeremy Stanley wrote: > > Freenode imposes a hard limit of 120 simultaneously joined channels > for any single account. We've once again reached that limit with our > channel-logging meetbot. As a quick measure, I've proposed a bit of > cleanup: https://review.opendev.org/682500 > > Analysis of IRC channel logs indicates the following have seen 5 or > fewer non-bot comments posted in the past 12 months and are likely > of no value to continue logging: > > 5 #openstack-vpnaas I would like to add the following channels to this list in addition to #openstack-vpnaas. This is what I think recently but I haven't discussed it yet with the team. - openstack-fwaas - networking-sfc I see only 5~20 members in these channels constantly. Developments in FWaaS and SFC are not so active, so I don't see a good reason to have a separate channel. They can be merged into the main neutron channel #openstack-neutron. Is there any guideline on how to guide users to migrate a channel to another channel? Thanks, Akihiro > 2 #swift3 > 2 #openstack-ko > 1 #openstack-deployment > 1 #midonet > 0 #openstack-valet > 0 #openstack-swg > 0 #openstack-mogan > > Please let me know either here on the ML or with a comment on the > review linked above if you have a reason to continue logging any of > these channels. I'd like to merge it later this week if possible. > Thanks! 
> -- > Jeremy Stanley From missile0407 at gmail.com Tue Sep 17 05:36:01 2019 From: missile0407 at gmail.com (Eddie Yen) Date: Tue, 17 Sep 2019 13:36:01 +0800 Subject: [kolla] Support Octavia for ubuntu binary on stable/rocky? Message-ID: Hi, I'm trying to install Octavia in Rocky release with Ubuntu binary distro. And found that there're no docker images for Ubuntu binary. Then I checked the Kolla dockerfile and found that it will not build the image since it's not support yet. But I found that the Ubuntu Cloud Archive Team has already putted Octavia packages into cloud repository [1]. Since some images built using from this PPA, I think it can support ubuntu binary in Rocky release. I tried put package code into Docker files and build, but it gave me an error message like below: ERROR:kolla.common.utils:octavia-api Failed with status: matched ERROR:kolla.common.utils:octavia-health-manager Failed with status: matched ERROR:kolla.common.utils:octavia-housekeeping Failed with status: matched ERROR:kolla.common.utils:octavia-worker Failed with status: matched So I think there's limit somewhere. How can I release it? Thanks, Eddie. [1] https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/rocky-staging -------------- next part -------------- An HTML attachment was scrubbed... URL: From glongwave at gmail.com Tue Sep 17 05:47:38 2019 From: glongwave at gmail.com (ChangBo Guo) Date: Tue, 17 Sep 2019 13:47:38 +0800 Subject: [oslo] Stepping down from core reviewer Message-ID: Hi folks, I no longer have the time to contribute to Oslo in a meaningful way in past few months, due to the company internal stuff, and would like to step down from core reviewer. It was an honor to be one of the great team since 4 years ago. I still work on OpenStack, just have no enough time to focus on Oslo. I hope have more time to contribute again in the future :-) All the best! -- ChangBo Guo(gcb) -------------- next part -------------- An HTML attachment was scrubbed... URL: From rony.khan at brilliant.com.bd Tue Sep 17 06:11:52 2019 From: rony.khan at brilliant.com.bd (Md. Farhad Hasan Khan) Date: Tue, 17 Sep 2019 12:11:52 +0600 Subject: Rabbitmq error report In-Reply-To: References: <1e4601d5694d$5c674e10$1535ea30$@brilliant.com.bd> <2d2076f9-0eb1-98e8-f9e0-1067b4472f23@nemebean.com> Message-ID: <130001d56d1e$cc7b5960$65720c20$@brilliant.com.bd> Hi Gabriele, This is the output. “versioned_notifications.error ”. so please help me to solve this. [root at controller1 ~]# rabbitmqctl list_queues name consumers messages | grep notifications notifications.debug 3 0 notifications.critical 3 0 notifications.error 3 0 notifications.info 3 0 notifications.warn 3 0 notifications.audit 3 0 notifications.sample 3 0 versioned_notifications.error 0 14 [root at controller1 ~]# Thanks & B’Rgds, Rony From: Gabriele Santomaggio [mailto:g.santomaggio at gmail.com] Sent: Monday, September 16, 2019 11:37 PM To: Ben Nemec Cc: rony.khan at brilliant.com.bd; OpenStack Discuss Subject: Re: Rabbitmq error report An internal queue is locked for some reason. Try to delete it with: rabbitmqctl eval 'rabbit_amqqueue:internal_delete({resource,<<"/">>,queue,<<"versioned_notifications.info ">>}).' - Gabriele Santomaggio Il giorno gio 12 set 2019 alle ore 15:37 Ben Nemec > ha scritto: Have you checked that your notification queues aren't filling up? It can cause performance problems in Rabbit if nothing is clearing out those queues. On 9/12/19 4:35 AM, Md. Farhad Hasan Khan wrote: > Hi, > > I’m getting this error continuously in rabbitmq log. 
Though all > operation going normal, but slow. Sometimes taking long time to perform > operation. Please help me to solve this. > > rabbitmq version: rabbitmq_server-3.6.16 > > =ERROR REPORT==== 12-Sep-2019::13:04:55 === > > Channel error on connection <0.8105.3> (192.168.21.56:60116 -> > 192.168.21.11:5672 , vhost: '/', user: 'openstack'), channel 1: > > operation queue.declare caused a channel exception not_found: failed to > perform operation on queue 'versioned_notifications.info ' in vhost '/' > due to timeout > > =WARNING REPORT==== 12-Sep-2019::13:04:55 === > > closing AMQP connection <0.8105.3> (192.168.21.56:60116 -> > 192.168.21.11:5672 - > nova-compute:3493037:e6757c9b-1cdc-43cd-bfd3-dcb58aa4974a, vhost: '/', > user: 'openstack'): > > client unexpectedly closed TCP connection > > Thanks & B’Rgds, > > Rony > -- Gabriele Santomaggio -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Tue Sep 17 06:23:25 2019 From: aj at suse.com (Andreas Jaeger) Date: Tue, 17 Sep 2019 08:23:25 +0200 Subject: [i18n][mogan][neutron][swift][tc][valet] Cleaning up IRC logging for defunct channels In-Reply-To: References: <20190916221822.o5diqcqzgyvqevi4@yuggoth.org> Message-ID: On 17/09/2019 07.11, Akihiro Motoki wrote: > On Tue, Sep 17, 2019 at 7:20 AM Jeremy Stanley wrote: >> >> Freenode imposes a hard limit of 120 simultaneously joined channels >> for any single account. We've once again reached that limit with our >> channel-logging meetbot. As a quick measure, I've proposed a bit of >> cleanup: https://review.opendev.org/682500 >> >> Analysis of IRC channel logs indicates the following have seen 5 or >> fewer non-bot comments posted in the past 12 months and are likely >> of no value to continue logging: >> >> 5 #openstack-vpnaas > > I would like to add the following channels to this list in addition to > #openstack-vpnaas. > This is what I think recently but I haven't discussed it yet with the team. > > - openstack-fwaas > - networking-sfc > > I see only 5~20 members in these channels constantly. > Developments in FWaaS and SFC are not so active, so I don't see a good > reason to have a separate channel. > They can be merged into the main neutron channel #openstack-neutron. If you retire the channel completely, also remove the bot notifications from project-config. > Is there any guideline on how to guide users to migrate a channel to > another channel? I think the following would work: https://docs.openstack.org/infra/system-config/irc.html#renaming-an-irc-channel Andreas -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg GF: Felix Imendörffer; HRB 247165 (AG München) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From skaplons at redhat.com Tue Sep 17 06:24:59 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 17 Sep 2019 08:24:59 +0200 Subject: [i18n][mogan][neutron][swift][tc][valet] Cleaning up IRC logging for defunct channels In-Reply-To: References: <20190916221822.o5diqcqzgyvqevi4@yuggoth.org> Message-ID: <3003512C-E1CD-4BDE-B173-D4FF2DA0FC6B@redhat.com> Hi, > On 17 Sep 2019, at 07:11, Akihiro Motoki wrote: > > On Tue, Sep 17, 2019 at 7:20 AM Jeremy Stanley wrote: >> >> Freenode imposes a hard limit of 120 simultaneously joined channels >> for any single account. We've once again reached that limit with our >> channel-logging meetbot. 
As a quick measure, I've proposed a bit of >> cleanup: https://review.opendev.org/682500 >> >> Analysis of IRC channel logs indicates the following have seen 5 or >> fewer non-bot comments posted in the past 12 months and are likely >> of no value to continue logging: >> >> 5 #openstack-vpnaas > > I would like to add the following channels to this list in addition to > #openstack-vpnaas. > This is what I think recently but I haven't discussed it yet with the team. > > - openstack-fwaas > - networking-sfc Ha, I didn’t even know that such channels exists. And from what I can say, if there are any topics related to such stadium projects, we are discussing them on #openstack-neutron channel usually. IMHO we can remove them too. > > I see only 5~20 members in these channels constantly. > Developments in FWaaS and SFC are not so active, so I don't see a good > reason to have a separate channel. > They can be merged into the main neutron channel #openstack-neutron. > > Is there any guideline on how to guide users to migrate a channel to > another channel? > > Thanks, > Akihiro > > >> 2 #swift3 >> 2 #openstack-ko >> 1 #openstack-deployment >> 1 #midonet >> 0 #openstack-valet >> 0 #openstack-swg >> 0 #openstack-mogan >> >> Please let me know either here on the ML or with a comment on the >> review linked above if you have a reason to continue logging any of >> these channels. I'd like to merge it later this week if possible. >> Thanks! >> -- >> Jeremy Stanley > — Slawek Kaplonski Senior software engineer Red Hat From skaplons at redhat.com Tue Sep 17 06:28:53 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 17 Sep 2019 08:28:53 +0200 Subject: [goals][IPv6-Only Deployments and Testing] Week R-4 Update In-Reply-To: <16d3d2203c8.b47bfe5156036.4862537349817585954@ghanshyammann.com> References: <16d3d2203c8.b47bfe5156036.4862537349817585954@ghanshyammann.com> Message-ID: <5FC0E68C-6020-416B-89CF-9D077C8726B9@redhat.com> Hi Ghanshyam, > On 17 Sep 2019, at 04:51, Ghanshyam Mann wrote: > > Hello Everyone, > > Below is the progress on Ipv6 goal during R6 week. I started the legacy job for IPv6 deployment with duplicating the run.yaml which is > the only best way to do. > > Summary: > > The projects still need to prepare the IPv6 job: > * Ec2-Api > * Freezer > * Heat > * Ironic > * Karbor > * Kolla > * Kuryr > * Magnum > * Manila > * Masakari > * Mistral > * Murano > * Octavia > * Swift > > The projects waiting for IPv6 job patch to merge: > If patch is failing, help me to debug that otherwise review and merge. > * Barbican > * Blazar > * Cyborg > * Tricircle > * Vitrage > * Zaqar > * Cinder > * Glance > * Monasca > * Neutron I thought that Neutron is already done. Do You mean patches for some stadium projects which are still not merged? Can You give me links to such patches with failing job to make sure that I didn’t miss anything? > * Qinling > * Quality Assurance > * Sahara > * Searchlight > * Senlin > * Tacker > > The projects have merged the IPv6 jobs: > * Designate > * Murano > * Trove > * Cloudkitty > * Congress > * Horizon > * Keystone > * Nova > * Placement > * Solum > * Telemetry > * Watcher > * Zun > > The projects do not need the IPv6 job (CLI, lib, deployment projects etc ): > If anything I missed and IPv6 job need, please reply otherwise I will mark their task in storyboard as invalid. 
> > * Adjutant > * Documentation > * I18n > * Infrastructure > * Loci > * Openstack Charms > * Openstack-Chef > * Openstack-Helm > * Openstackansible > * Openstackclient > * Openstacksdk > * Oslo > * Packaging-Rpm > * Powervmstackers > * Puppet Openstack > * Rally > * Release Management > * Requirements > * Storlets > * Tripleo > * Winstackers > > > Storyboard: > ========= > - https://storyboard.openstack.org/#!/story/2005477 > > IPv6 missing support found: > ===================== > 1. https://review.opendev.org/#/c/673397/ > 2. https://review.opendev.org/#/c/673449/ > 3. https://review.opendev.org/#/c/677524/ > > How you can help: > ============== > - Each project needs to look for and review the ipv6 job patch. > - Verify it works fine on ipv6 and no ipv4 used in conf etc > - Any other specific scenario needs to be added as part of project IPv6 verification. > - Help on debugging and fix the bug in IPv6 job is failing. > > Everything related to this goal can be found under this topic: > Topic: https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged) > > How to define and run new IPv6 Job on project side: > ======================================= > - I prepared a wiki page to describe this section - https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing > > Review suggestion: > ============== > - Main goal of these jobs will be whether your service is able to listen on IPv6 and can communicate to any > other services either OpenStack or DB or rabbitmq etc on IPv6 or not. So check your proposed job with > that point of view. If anything missing, comment on patch. > - One example was - I missed to configure novnc address to IPv6- https://review.opendev.org/#/c/672493/ > - base script as part of 'devstack-tempest-ipv6' will do basic checks for endpoints on IPv6 and some devstack var > setting. But if your project needs more specific verification then it can be added in project side job as post-run > playbooks as described in wiki page[1]. > > [1] https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing > > -gmann > > > — Slawek Kaplonski Senior software engineer Red Hat From ignaziocassano at gmail.com Tue Sep 17 06:50:52 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 17 Sep 2019 08:50:52 +0200 Subject: [Neutron][vxlan][nsx-v] Message-ID: Hello, I have a vmware vcenter installatiom with nsx-v and an openstack installation with kvm and openvswitch. I am looking for a method for realizing a vxlan shared between nsx-v and openvswitch. I know vmware released nsx-t but I prefer to use an opensource solution. Anyone could suggest a solution? Thanks Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Tue Sep 17 07:00:48 2019 From: zigo at debian.org (Thomas Goirand) Date: Tue, 17 Sep 2019 09:00:48 +0200 Subject: [all] Please do not bump openstackdocstheme minimum version if possible Message-ID: <93ffb0e0-d800-be75-cd53-474db8b750b7@debian.org> Hi, The openstackdocstheme package in Debian is highly modified, because otherwise, building docs just fail. Namely, we have reverted commit d87aaca30f64502b3dd13cc1ddf46beec90fc015 because otherwise, the docs wouldn't build. Rebasing it is *very* annoying. So, it'd be nice if packages didn't depend on the very latest version of openstackdocstheme so we could keep version 1.20.0. I don't think it's that important to depend on the very latest version (please let me know if I'm wrong, and let me know why). 
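For illustration (this is just an assumed example of how such a pin is usually expressed, not a quote from any project's repository), keeping the floor at 1.20.0 simply means leaving the docs requirement at something like:

  # doc/requirements.txt
  openstackdocstheme>=1.20.0  # Apache-2.0

instead of raising the minimum every time a new theme release appears.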
Cheers, Thomas Goirand (zigo) From g.santomaggio at gmail.com Tue Sep 17 07:30:01 2019 From: g.santomaggio at gmail.com (Gabriele Santomaggio) Date: Tue, 17 Sep 2019 09:30:01 +0200 Subject: Rabbitmq error report In-Reply-To: <130001d56d1e$cc7b5960$65720c20$@brilliant.com.bd> References: <1e4601d5694d$5c674e10$1535ea30$@brilliant.com.bd> <2d2076f9-0eb1-98e8-f9e0-1067b4472f23@nemebean.com> <130001d56d1e$cc7b5960$65720c20$@brilliant.com.bd> Message-ID: Rony, Sorry but I am not understanding. The error you have is: > > operation queue.declare caused a channel exception not_found: failed to > > perform operation on queue 'versioned_notifications.info' in vhost '/' > > due to timeout so you need to remove the queue called: versioned_notifications.info using: ``` rabbitmqctl eval 'rabbit_amqqueue:internal_delete({resource,<<"/">>,queue,<<" versioned_notifications.info">>}).' ``` >>This is the output. “versioned_notifications.error ”. so please help me to solve this. "versioned_notifications.error" is another queue. Cheers - Gabriele Santomaggio Il giorno mar 17 set 2019 alle ore 08:11 Md. Farhad Hasan Khan < rony.khan at brilliant.com.bd> ha scritto: > Hi Gabriele, > > This is the output. “versioned_notifications.error ”. so please help me > to solve this. > > > > [root at controller1 ~]# rabbitmqctl list_queues name consumers messages | > grep notifications > > notifications.debug 3 0 > > notifications.critical 3 0 > > notifications.error 3 0 > > notifications.info 3 0 > > notifications.warn 3 0 > > notifications.audit 3 0 > > notifications.sample 3 0 > > versioned_notifications.error 0 14 > > [root at controller1 ~]# > > > > > > Thanks & B’Rgds, > > Rony > > > > > > > > > > *From:* Gabriele Santomaggio [mailto:g.santomaggio at gmail.com] > *Sent:* Monday, September 16, 2019 11:37 PM > *To:* Ben Nemec > *Cc:* rony.khan at brilliant.com.bd; OpenStack Discuss > *Subject:* Re: Rabbitmq error report > > > > An internal queue is locked for some reason. > > > > Try to delete it with: > rabbitmqctl eval > 'rabbit_amqqueue:internal_delete({resource,<<"/">>,queue,<<" > versioned_notifications.info">>}).' > > > > - > > Gabriele Santomaggio > > > > Il giorno gio 12 set 2019 alle ore 15:37 Ben Nemec > ha scritto: > > Have you checked that your notification queues aren't filling up? It can > cause performance problems in Rabbit if nothing is clearing out those > queues. > > On 9/12/19 4:35 AM, Md. Farhad Hasan Khan wrote: > > Hi, > > > > I’m getting this error continuously in rabbitmq log. Though all > > operation going normal, but slow. Sometimes taking long time to perform > > operation. Please help me to solve this. > > > > rabbitmq version: rabbitmq_server-3.6.16 > > > > =ERROR REPORT==== 12-Sep-2019::13:04:55 === > > > > Channel error on connection <0.8105.3> (192.168.21.56:60116 -> > > 192.168.21.11:5672, vhost: '/', user: 'openstack'), channel 1: > > > > operation queue.declare caused a channel exception not_found: failed to > > perform operation on queue 'versioned_notifications.info' in vhost '/' > > due to timeout > > > > =WARNING REPORT==== 12-Sep-2019::13:04:55 === > > > > closing AMQP connection <0.8105.3> (192.168.21.56:60116 -> > > 192.168.21.11:5672 - > > nova-compute:3493037:e6757c9b-1cdc-43cd-bfd3-dcb58aa4974a, vhost: '/', > > user: 'openstack'): > > > > client unexpectedly closed TCP connection > > > > Thanks & B’Rgds, > > > > Rony > > > > > > -- > > Gabriele Santomaggio > -- Gabriele Santomaggio -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aj at suse.com Tue Sep 17 07:36:21 2019 From: aj at suse.com (Andreas Jaeger) Date: Tue, 17 Sep 2019 09:36:21 +0200 Subject: [all] Please do not bump openstackdocstheme minimum version if possible In-Reply-To: <93ffb0e0-d800-be75-cd53-474db8b750b7@debian.org> References: <93ffb0e0-d800-be75-cd53-474db8b750b7@debian.org> Message-ID: On 17/09/2019 09.00, Thomas Goirand wrote: > Hi, > > The openstackdocstheme package in Debian is highly modified, because > otherwise, building docs just fail. Namely, we have reverted commit > d87aaca30f64502b3dd13cc1ddf46beec90fc015 because otherwise, the docs > wouldn't build. Rebasing it is *very* annoying. > > So, it'd be nice if packages didn't depend on the very latest version of > openstackdocstheme so we could keep version 1.20.0. I don't think it's > that important to depend on the very latest version (please let me know > if I'm wrong, and let me know why). Newer releases contain fixes for newer Sphinx versions likeI79b40bb5700807ac8ad523a6e0a83cd21965346e Andreas -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg GF: Felix Imendörffer; HRB 247165 (AG München) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From lpetrut at cloudbasesolutions.com Tue Sep 17 08:17:16 2019 From: lpetrut at cloudbasesolutions.com (Lucian Petrut) Date: Tue, 17 Sep 2019 08:17:16 +0000 Subject: [os-win][requirements] FFE requested for os-win Message-ID: <64050966FCE0B948BCE2B28DB6E0B7D557ABC45A@CBSEX1.cloudbase.local> Hi, I’d like to request a FFE for os-win. One important bug fix has missed the train (4.3.1) release, for which reason we’d need to have a subsequent one. The bug in question prevents Nova from starting after host reboots when using the Hyper-V driver on recent Windows Server 2019 builds. Thanks, Lucian Petrut -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhu.fanglei at zte.com.cn Tue Sep 17 08:41:23 2019 From: zhu.fanglei at zte.com.cn (zhu.fanglei at zte.com.cn) Date: Tue, 17 Sep 2019 16:41:23 +0800 (CST) Subject: =?UTF-8?B?UmU6W2FsbF1baW50ZXJvcF1bY2luZGVyXVtxYV0gQVBJIGNoYW5nZXMgd2l0aC93aXRob3V0bWljcm92ZXJzaW9uIGFuZCBUZW1wZXN0IHZlcmlmaWNhdGlvbiBvZiBBUEkgaW50ZXJvcGVyYWJpbGl0eQ==?= In-Reply-To: <16d3c8a4243.10a5c411e55700.6669950579531609398@ghanshyammann.com> References: 16d397e41f7.12873dbb126838.8168349135797367489@ghanshyammann.com, 16d3c8a4243.10a5c411e55700.6669950579531609398@ghanshyammann.com Message-ID: <201909171641237330226@zte.com.cn> Seems we can hardly reach an agreement about whether to use microverion for fields added in response, but, I think for tempest, things are simpler, we can add schema check according to the api-ref, and if some issues are found (like groups field) in older version, we can simply remove that field from required fields. That won't happen very often. Original Mail Sender: GhanshyamMann To: Sean Mooney ; CC: Sean McGinnis ;Matt Riedemann ;openstack-discuss ; Date: 2019/09/17 08:08 Subject: Re: [all][interop][cinder][qa] API changes with/withoutmicroversion and Tempest verification of API interoperability ---- On Tue, 17 Sep 2019 07:59:19 +0900 Sean Mooney wrote ---- > On Mon, 2019-09-16 at 17:11 -0500, Sean McGinnis wrote: > > > > > > Backend/type specific information leaking out of the API dynamically like > > > that is definitely an interoperability problem and as you said it sounds > > > like it's been that way for a long time. 
The compute servers diagnostics API > > > had a similar problem for a long time and the associated Tempest test for > > > that API was disabled for a long time because the response body was > > > hypervisor specific, so we eventually standardized it in a microversion so > > > it was driver agnostic. > > > > > > > Except this isn't backend specific information that is leaking. It's just > > reflecting the configuration of the system. > yes and config driven api behavior is also an iterop problem. > ideally you should not be able to tell if cinder is abcked by ceph or emc form the > api responce at all. > > sure you might have a volume type call ceph and another called emc but both should be > report capasty in the same field with teh same unit. > > ideally you would have a snapshots or gigabytes quota and option ly associate that with a volume types > but shanshot_ceph is not interoperable aross could if that exstis with that name solely becaue ceph was used on the > backend. as a client i would have to look at snapshost* to figure out my quotat and in princiapal that is an ubounded > set. Yeah and this is real pain point for end-user or app using API directly. Dynamic API behaviour base don system configuration is interoperability issue. In bug#1687538 case, new field is going to be reflected for the same backend and same configuration Cloud. Cloud provider upgrade their cloud from ocata->anything and user will start getting the new field without any mechanism to discover whether that field is expected to be present or not. -gmann > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Tue Sep 17 08:55:39 2019 From: zigo at debian.org (Thomas Goirand) Date: Tue, 17 Sep 2019 10:55:39 +0200 Subject: [all] Please do not bump openstackdocstheme minimum version if possible In-Reply-To: References: <93ffb0e0-d800-be75-cd53-474db8b750b7@debian.org> Message-ID: On 9/17/19 9:36 AM, Andreas Jaeger wrote: > On 17/09/2019 09.00, Thomas Goirand wrote: >> Hi, >> >> The openstackdocstheme package in Debian is highly modified, because >> otherwise, building docs just fail. Namely, we have reverted commit >> d87aaca30f64502b3dd13cc1ddf46beec90fc015 because otherwise, the docs >> wouldn't build. Rebasing it is *very* annoying. >> >> So, it'd be nice if packages didn't depend on the very latest version of >> openstackdocstheme so we could keep version 1.20.0. I don't think it's >> that important to depend on the very latest version (please let me know >> if I'm wrong, and let me know why). > > Newer releases contain fixes for newer Sphinx versions > likeI79b40bb5700807ac8ad523a6e0a83cd21965346e > > Andreas Hi, The issue had nothing to do with Sphinx 2.x. Thomas From bdobreli at redhat.com Tue Sep 17 08:57:09 2019 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Tue, 17 Sep 2019 10:57:09 +0200 Subject: [tripleo] Deprecating paunch CLI? In-Reply-To: References: Message-ID: <4bcf45b6-d915-e6d0-694f-d4a5b883dc45@redhat.com> On 16.09.2019 18:07, Emilien Macchi wrote: > On Mon, Sep 16, 2019 at 11:47 AM Rabi Mishra > wrote: > > I'm not sure if podman as container tool would move in that > direction, as it's meant to be a command line tool. 
If we really > want to reduce the overhead of so many layers in TripleO and podman > is the container tool for us (I'll ignore the k8s related > discussions for the time being), I would think the  logic of > translating the JSON configs to podman calls should be be in ansible > (we can even write a TripleO specific podman module). > > > I think we're both in strong agreement and say "let's convert paunch > into ansible module". I support the idea of calling paunch code as is from an ansible module. Although I'm strongly opposed against re-implementing the paunch code itself as ansible modules. That only brings maintenance burden (harder will be much to backport fixes into Queens and Train) and more place for potential regressions, without any functional improvements. > And make the module robust enough for our needs. Then we could replace > paunch by calling the podman module directly. > -- > Emilien Macchi -- Best regards, Bogdan Dobrelya, Irc #bogdando From mjozefcz at redhat.com Tue Sep 17 09:07:51 2019 From: mjozefcz at redhat.com (Maciej Jozefczyk) Date: Tue, 17 Sep 2019 11:07:51 +0200 Subject: [requirements] Issues while trying to bump-up ovsdbapp requirement for stable/queens Message-ID: Hello, I'm trying to bump-up ovsdbapp requirement in networking-ovn [0] from 0.8.0 [1] to 0.10.4 [2]. Those two are in the same stable/queens release and we need that change to merge some serious performance improvements to stable/queens. Unfortunately the requirements-check jobs fails on this change [3] with: Requirement for package ovsdbapp : Requirement(package=u'ovsdbapp', location='', specifiers='>=0.10.4', markers=u'', comment=u'# Apache-2.0', extras=frozenset([])) does not match openstack/requirements value : set([Requirement(package='ovsdbapp', location='', specifiers='>=0.8.0', markers='', comment='# Apache-2.0', extras=frozenset([]))]) The only place where >=0.8.0 is set is global-requirements [4]. Do we need to bump up it also there, even the upper-requirements bot proposal [5] has been merged? It is string match? I proposed a change to bump it in global-requirements [6]. Thanks, Maciej [0] https://review.opendev.org/#/c/681562/ [1] https://github.com/openstack/ovsdbapp/releases/tag/0.8.0 [2] https://github.com/openstack/ovsdbapp/releases/tag/0.10.4 [3] https://bb8048f0749367929365-38c02a6f4c2535c3f3f9bfdb5440d261.ssl.cf1.rackcdn.com/681562/3/check/requirements-check/84e1e97/job-output.txt [4] https://github.com/openstack/requirements/blob/stable/queens/global-requirements.txt#L402 [5] https://review.opendev.org/#/c/682323 [6] https://review.opendev.org/#/c/682588 -- Best regards, Maciej Józefczyk -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Tue Sep 17 09:14:18 2019 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 17 Sep 2019 10:14:18 +0100 Subject: [kolla] Support Octavia for ubuntu binary on stable/rocky? In-Reply-To: References: Message-ID: On Tue, 17 Sep 2019 at 06:36, Eddie Yen wrote: > > Hi, > > I'm trying to install Octavia in Rocky release with Ubuntu binary distro. And found that there're no docker images for Ubuntu binary. > Then I checked the Kolla dockerfile and found that it will not build the image since it's not support yet. > But I found that the Ubuntu Cloud Archive Team has already putted Octavia packages into cloud repository [1]. Since some images built using from this PPA, I think it can support ubuntu binary in Rocky release. 
> > I tried put package code into Docker files and build, but it gave me an error message like below: > ERROR:kolla.common.utils:octavia-api Failed with status: matched > ERROR:kolla.common.utils:octavia-health-manager Failed with status: matched > ERROR:kolla.common.utils:octavia-housekeeping Failed with status: matched > ERROR:kolla.common.utils:octavia-worker Failed with status: matched > > So I think there's limit somewhere. How can I release it? > > Thanks, > Eddie. > > [1] https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/rocky-staging Hi Eddie, We explicitly fail when building octavia images on ubuntu/binary, see docker/octavia/octavia-base/Dockerfile.j2: RUN echo '{{ install_type }} not yet available for {{ base_distro }}' \ && /bin/false If you think we can support octavia now, please propose a patch to master branch. I'm afraid we can't accept new features to stable branches though, so you'll have to carry this change locally. Mark From dtantsur at redhat.com Tue Sep 17 09:54:53 2019 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 17 Sep 2019 11:54:53 +0200 Subject: [goals][IPv6-Only Deployments and Testing] Week R-4 Update In-Reply-To: <16d3d2203c8.b47bfe5156036.4862537349817585954@ghanshyammann.com> References: <16d3d2203c8.b47bfe5156036.4862537349817585954@ghanshyammann.com> Message-ID: <309fd569-b9fa-ec1e-aa89-ecf78a53c608@redhat.com> On 9/17/19 4:51 AM, Ghanshyam Mann wrote: > Hello Everyone, > > Below is the progress on Ipv6 goal during R6 week. I started the legacy job for IPv6 deployment with duplicating the run.yaml which is > the only best way to do. > > Summary: > > The projects still need to prepare the IPv6 job: > * Ec2-Api > * Freezer > * Heat > * Ironic We're hopelessly stuck with it. Finishing such a job in the Ussuri cycle would be an achievement already IMO. Dmitry > * Karbor > * Kolla > * Kuryr > * Magnum > * Manila > * Masakari > * Mistral > * Murano > * Octavia > * Swift > > The projects waiting for IPv6 job patch to merge: > If patch is failing, help me to debug that otherwise review and merge. > * Barbican > * Blazar > * Cyborg > * Tricircle > * Vitrage > * Zaqar > * Cinder > * Glance > * Monasca > * Neutron > * Qinling > * Quality Assurance > * Sahara > * Searchlight > * Senlin > * Tacker > > The projects have merged the IPv6 jobs: > * Designate > * Murano > * Trove > * Cloudkitty > * Congress > * Horizon > * Keystone > * Nova > * Placement > * Solum > * Telemetry > * Watcher > * Zun > > The projects do not need the IPv6 job (CLI, lib, deployment projects etc ): > If anything I missed and IPv6 job need, please reply otherwise I will mark their task in storyboard as invalid. > > * Adjutant > * Documentation > * I18n > * Infrastructure > * Loci > * Openstack Charms > * Openstack-Chef > * Openstack-Helm > * Openstackansible > * Openstackclient > * Openstacksdk > * Oslo > * Packaging-Rpm > * Powervmstackers > * Puppet Openstack > * Rally > * Release Management > * Requirements > * Storlets > * Tripleo > * Winstackers > > > Storyboard: > ========= > - https://storyboard.openstack.org/#!/story/2005477 > > IPv6 missing support found: > ===================== > 1. https://review.opendev.org/#/c/673397/ > 2. https://review.opendev.org/#/c/673449/ > 3. https://review.opendev.org/#/c/677524/ > > How you can help: > ============== > - Each project needs to look for and review the ipv6 job patch. > - Verify it works fine on ipv6 and no ipv4 used in conf etc > - Any other specific scenario needs to be added as part of project IPv6 verification. 
> - Help on debugging and fix the bug in IPv6 job is failing. > > Everything related to this goal can be found under this topic: > Topic: https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged) > > How to define and run new IPv6 Job on project side: > ======================================= > - I prepared a wiki page to describe this section - https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing > > Review suggestion: > ============== > - Main goal of these jobs will be whether your service is able to listen on IPv6 and can communicate to any > other services either OpenStack or DB or rabbitmq etc on IPv6 or not. So check your proposed job with > that point of view. If anything missing, comment on patch. > - One example was - I missed to configure novnc address to IPv6- https://review.opendev.org/#/c/672493/ > - base script as part of 'devstack-tempest-ipv6' will do basic checks for endpoints on IPv6 and some devstack var > setting. But if your project needs more specific verification then it can be added in project side job as post-run > playbooks as described in wiki page[1]. > > [1] https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing > > -gmann > > > From radoslaw.piliszek at gmail.com Tue Sep 17 10:12:22 2019 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 17 Sep 2019 12:12:22 +0200 Subject: [goals][IPv6-Only Deployments and Testing] Week R-4 Update In-Reply-To: <16d3d2203c8.b47bfe5156036.4862537349817585954@ghanshyammann.com> References: <16d3d2203c8.b47bfe5156036.4862537349817585954@ghanshyammann.com> Message-ID: Hiya, Kolla is not going to get an IPv6-only job because it builds docker images and is not tested regarding networking (it does not do devstack/tempest either). Kolla-Ansible, which does the deployment, is going to get some IPv6-only test jobs - https://review.opendev.org/681573 We are testing CentOS and multinode and hence need overlay VXLAN to reach sensible levels of stability there - https://review.opendev.org/670690 The VXLAN patch is probably ready, awaiting review of independent cores. It will be refactored out later to put it in zuul plays as it might be useful to other projects as well. The IPv6 patch needs rebasing on VXLAN and some small tweaks still. Kind regards, Radek wt., 17 wrz 2019 o 04:58 Ghanshyam Mann napisał(a): > Hello Everyone, > > Below is the progress on Ipv6 goal during R6 week. I started the legacy > job for IPv6 deployment with duplicating the run.yaml which is > the only best way to do. > > Summary: > > The projects still need to prepare the IPv6 job: > * Ec2-Api > * Freezer > * Heat > * Ironic > * Karbor > * Kolla > * Kuryr > * Magnum > * Manila > * Masakari > * Mistral > * Murano > * Octavia > * Swift > > The projects waiting for IPv6 job patch to merge: > If patch is failing, help me to debug that otherwise review and merge. > * Barbican > * Blazar > * Cyborg > * Tricircle > * Vitrage > * Zaqar > * Cinder > * Glance > * Monasca > * Neutron > * Qinling > * Quality Assurance > * Sahara > * Searchlight > * Senlin > * Tacker > > The projects have merged the IPv6 jobs: > * Designate > * Murano > * Trove > * Cloudkitty > * Congress > * Horizon > * Keystone > * Nova > * Placement > * Solum > * Telemetry > * Watcher > * Zun > > The projects do not need the IPv6 job (CLI, lib, deployment projects etc > ): > If anything I missed and IPv6 job need, please reply otherwise I will mark > their task in storyboard as invalid. 
> > * Adjutant > * Documentation > * I18n > * Infrastructure > * Loci > * Openstack Charms > * Openstack-Chef > * Openstack-Helm > * Openstackansible > * Openstackclient > * Openstacksdk > * Oslo > * Packaging-Rpm > * Powervmstackers > * Puppet Openstack > * Rally > * Release Management > * Requirements > * Storlets > * Tripleo > * Winstackers > > > Storyboard: > ========= > - https://storyboard.openstack.org/#!/story/2005477 > > IPv6 missing support found: > ===================== > 1. https://review.opendev.org/#/c/673397/ > 2. https://review.opendev.org/#/c/673449/ > 3. https://review.opendev.org/#/c/677524/ > > How you can help: > ============== > - Each project needs to look for and review the ipv6 job patch. > - Verify it works fine on ipv6 and no ipv4 used in conf etc > - Any other specific scenario needs to be added as part of project IPv6 > verification. > - Help on debugging and fix the bug in IPv6 job is failing. > > Everything related to this goal can be found under this topic: > Topic: > https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged) > > How to define and run new IPv6 Job on project side: > ======================================= > - I prepared a wiki page to describe this section - > https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing > > Review suggestion: > ============== > - Main goal of these jobs will be whether your service is able to listen > on IPv6 and can communicate to any > other services either OpenStack or DB or rabbitmq etc on IPv6 or not. So > check your proposed job with > that point of view. If anything missing, comment on patch. > - One example was - I missed to configure novnc address to IPv6- > https://review.opendev.org/#/c/672493/ > - base script as part of 'devstack-tempest-ipv6' will do basic checks for > endpoints on IPv6 and some devstack var > setting. But if your project needs more specific verification then it can > be added in project side job as post-run > playbooks as described in wiki page[1]. > > [1] https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing > > -gmann > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tetsuro.nakamura.bc at hco.ntt.co.jp Tue Sep 17 01:09:21 2019 From: tetsuro.nakamura.bc at hco.ntt.co.jp (Tetsuro Nakamura) Date: Tue, 17 Sep 2019 10:09:21 +0900 Subject: [placement] update 19-35 In-Reply-To: References: Message-ID: <11424611-c1a4-8a82-e431-8911a704aa6e@hco.ntt.co.jp_1> On 2019/09/13 20:19, Chris Dent wrote: > > HTML: https://anticdent.org/placement-update-19-36.html > > Here's placement update 19-36. There won't be one next week, I will > be away. Because of my forthcoming "less time available for > OpenStack" I will also be stopping these updates at some point in > the next month or so so I can focus the limited time I will have on > reviewing and coding. There will be at least one more. > > # Most Important > > The big news this week is that after returning from a trip (that > meant he was away during the nomination period) Tetsuro has stepped > up to be the PTL for placement in Ussuri. Thanks very much to him > for taking this up, I'm sure he will be excellent Yup, I'm looking forward to having Ussuri cycle as the PTL. Since my time zone (UTC+9:00) is almost opposite to other members, this placement update e-mail post is a very good place to start discussions, so though I'm not sure I can provide as great summarize as chris's, but I'd like to take this over and continue. thanks! 
- tetsuro > > We need to work on useful documentation for the features developed > this cycle. > > I've also made a [now > worklist](https://storyboard.openstack.org/#!/worklist/754) in > StoryBoard to draw attention to placement project stories that are > relevant to the next few weeks, making it easier to ignore those > that are not relevant now, but may be later. > > # Stories/Bugs > > (Numbers in () are the change since the last pupdate.) > > There are 23 (-1) stories in [the placement > group](https://storyboard.openstack.org/#!/project_group/placement). > 0 (0) are [untagged](https://storyboard.openstack.org/#!/worklist/580). > 5 (0) are [bugs](https://storyboard.openstack.org/#!/worklist/574). 4 (0) > are [cleanups](https://storyboard.openstack.org/#!/worklist/575). 10 > (-1) are [rfes](https://storyboard.openstack.org/#!/worklist/594). > 5 (1) are [docs](https://storyboard.openstack.org/#!/worklist/637). > > If you're interested in helping out with placement, those stories > are good places to look. > > * Placement related nova [bugs not yet in progress](https://goo.gl/TgiPXb) >   on launchpad: 17 (0). > > * Placement related nova [in progress bugs](https://goo.gl/vzGGDQ) on >   launchpad: 6 (0). > > # osc-placement > > * >   Add support for multiple member_of. There's been some useful >   discussion about how to achieve this, and a consensus has emerged >   on how to get the best results. > > # Main Themes > > ## Consumer Types > > Adding a type to consumers will allow them to be grouped for various > purposes, including quota accounting. > > * >   This has some good comments on it from melwitt. I'm going to be >   away next week, so if someone else would like to address them that >   would be great. If it is deemed fit to merge, we should, despite >   feature freeze passing, since we haven't had much churn lately. If >   it doesn't make it in Train, that's fine too. The goal is to have >   it ready for Nova in Ussuri as early as possible. > > ## Cleanup > > Cleanup is an overarching theme related to improving documentation, > performance and the maintainability of the code. The changes we are > making this cycle are fairly complex to use and are fairly complex > to write, so it is good that we're going to have plenty of time to > clean and clarify all these things. > > Performance related explorations continue: > > * >   Refactor initialization of research context. This puts the code >   that might cause an exit earlier in the process so we can avoid >   useless work. > > One outcome of the performance work needs to be something like a > _Deployment Considerations_ document to help people choose how to > tweak their placement deployment to match their needs. The simple > answer is use more web servers and more database servers, but that's > often very wasteful. > > * > > >   These are the patches for meeting the build pdf docs goal for the >   various placement projects. > > # Other Placement > > Miscellaneous changes can be found in [the usual > place](https://review.opendev.org/#/q/project:openstack/placement+status:open). > > > There are three [os-traits > changes](https://review.opendev.org/#/q/project:openstack/os-traits+status:open) > > being discussed. And two [os-resource-classes > changes](https://review.opendev.org/#/q/project:openstack/os-resource-classes+status:open). > > The latter are docs-related. > > # Other Service Users > > New reviews are added to the end of the list. Reviews that haven't > had attention in a long time (boo!) or have merged or approved > (yay!) 
are removed. > > * >   helm: add placement chart > > * >   Nova: WIP: Add a placement audit command > * >   tempest: Add placement API methods for testing routed provider nets > > * >   Nova: cross cell resize > > * >   Nova: Scheduler translate properties to traits > > * >   Nova: single pass instance info fetch in host manager > > * >   Nova: using provider config file for custom resource providers > > * >   Nova: clean up some lingering placement stuff > > * >   OSA: Add nova placement to placement migration > > * >   Charms: Disable nova placement API in Train > > * >   Nova: stop using @safe_connect in report client > > # End > > 🐈 > -- Tetsuro Nakamura NTT Network Service Systems Laboratories TEL:0422 59 6914(National)/+81 422 59 6914(International) 3-9-11, Midori-Cho Musashino-Shi, Tokyo 180-8585 Japan From missile0407 at gmail.com Tue Sep 17 11:17:41 2019 From: missile0407 at gmail.com (Eddie Yen) Date: Tue, 17 Sep 2019 19:17:41 +0800 Subject: [kolla] Support Octavia for ubuntu binary on stable/rocky? In-Reply-To: References: Message-ID: Hi Mark, Roger that. And I'm afraid I can't patch to master branch since I didn't find any Octavia package release for stein and further. Also it seems like the newer Octavia package will release on newer Ubuntu distro, may not release on Bionic for now. So I'll test and use locally if success. And I think it would be better that adding check mechanism into pre-check. Throw an error message if user want to deploy Octavia on ubuntu/binary. I may take times to do this patch. Many thanks, Eddie. Mark Goddard 於 2019年9月17日 週二 下午5:14寫道: > On Tue, 17 Sep 2019 at 06:36, Eddie Yen wrote: > > > > Hi, > > > > I'm trying to install Octavia in Rocky release with Ubuntu binary > distro. And found that there're no docker images for Ubuntu binary. > > Then I checked the Kolla dockerfile and found that it will not build the > image since it's not support yet. > > But I found that the Ubuntu Cloud Archive Team has already putted > Octavia packages into cloud repository [1]. Since some images built using > from this PPA, I think it can support ubuntu binary in Rocky release. > > > > I tried put package code into Docker files and build, but it gave me an > error message like below: > > ERROR:kolla.common.utils:octavia-api Failed with status: matched > > ERROR:kolla.common.utils:octavia-health-manager Failed with status: > matched > > ERROR:kolla.common.utils:octavia-housekeeping Failed with status: matched > > ERROR:kolla.common.utils:octavia-worker Failed with status: matched > > > > So I think there's limit somewhere. How can I release it? > > > > Thanks, > > Eddie. > > > > [1] > https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/rocky-staging > > Hi Eddie, > > We explicitly fail when building octavia images on ubuntu/binary, see > docker/octavia/octavia-base/Dockerfile.j2: > > RUN echo '{{ install_type }} not yet available for {{ base_distro }}' \ > && /bin/false > > If you think we can support octavia now, please propose a patch to > master branch. I'm afraid we can't accept new features to stable > branches though, so you'll have to carry this change locally. > > Mark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Tue Sep 17 11:21:02 2019 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 17 Sep 2019 12:21:02 +0100 Subject: [kolla] Support Octavia for ubuntu binary on stable/rocky? In-Reply-To: References: Message-ID: On Tue, 17 Sep 2019 at 12:17, Eddie Yen wrote: > > Hi Mark, > > Roger that. 
> And I'm afraid I can't patch to master branch since I didn't find any Octavia package release for stein and further. > Also it seems like the newer Octavia package will release on newer Ubuntu distro, may not release on Bionic for now. > > So I'll test and use locally if success. > > And I think it would be better that adding check mechanism into pre-check. Throw an error message if user want to deploy Octavia on ubuntu/binary. > I may take times to do this patch. Interesting suggestion. We haven't done that kind of thing before, but I suppose it could be helpful in some circumstances. OTOH, we are working on a support matrix [1] in our documentation which would make this information easier to find, so maybe it's not necessary? [1] https://review.opendev.org/677500 > > Many thanks, > Eddie. > > Mark Goddard 於 2019年9月17日 週二 下午5:14寫道: >> >> On Tue, 17 Sep 2019 at 06:36, Eddie Yen wrote: >> > >> > Hi, >> > >> > I'm trying to install Octavia in Rocky release with Ubuntu binary distro. And found that there're no docker images for Ubuntu binary. >> > Then I checked the Kolla dockerfile and found that it will not build the image since it's not support yet. >> > But I found that the Ubuntu Cloud Archive Team has already putted Octavia packages into cloud repository [1]. Since some images built using from this PPA, I think it can support ubuntu binary in Rocky release. >> > >> > I tried put package code into Docker files and build, but it gave me an error message like below: >> > ERROR:kolla.common.utils:octavia-api Failed with status: matched >> > ERROR:kolla.common.utils:octavia-health-manager Failed with status: matched >> > ERROR:kolla.common.utils:octavia-housekeeping Failed with status: matched >> > ERROR:kolla.common.utils:octavia-worker Failed with status: matched >> > >> > So I think there's limit somewhere. How can I release it? >> > >> > Thanks, >> > Eddie. >> > >> > [1] https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/rocky-staging >> >> Hi Eddie, >> >> We explicitly fail when building octavia images on ubuntu/binary, see >> docker/octavia/octavia-base/Dockerfile.j2: >> >> RUN echo '{{ install_type }} not yet available for {{ base_distro }}' \ >> && /bin/false >> >> If you think we can support octavia now, please propose a patch to >> master branch. I'm afraid we can't accept new features to stable >> branches though, so you'll have to carry this change locally. >> >> Mark From missile0407 at gmail.com Tue Sep 17 11:33:52 2019 From: missile0407 at gmail.com (Eddie Yen) Date: Tue, 17 Sep 2019 19:33:52 +0800 Subject: [kolla] Support Octavia for ubuntu binary on stable/rocky? In-Reply-To: References: Message-ID: Hmm, I didn't notice that review before. And yeah, having the support form for each component inside the document is better solution. Thanks for letting me know about this information. Mark Goddard 於 2019年9月17日 週二 下午7:21寫道: > On Tue, 17 Sep 2019 at 12:17, Eddie Yen wrote: > > > > Hi Mark, > > > > Roger that. > > And I'm afraid I can't patch to master branch since I didn't find any > Octavia package release for stein and further. > > Also it seems like the newer Octavia package will release on newer > Ubuntu distro, may not release on Bionic for now. > > > > So I'll test and use locally if success. > > > > And I think it would be better that adding check mechanism into > pre-check. Throw an error message if user want to deploy Octavia on > ubuntu/binary. > > I may take times to do this patch. > > Interesting suggestion. 
We haven't done that kind of thing before, but > I suppose it could be helpful in some circumstances. OTOH, we are > working on a support matrix [1] in our documentation which would make > this information easier to find, so maybe it's not necessary? > > [1] https://review.opendev.org/677500 > > > > > Many thanks, > > Eddie. > > > > Mark Goddard 於 2019年9月17日 週二 下午5:14寫道: > >> > >> On Tue, 17 Sep 2019 at 06:36, Eddie Yen wrote: > >> > > >> > Hi, > >> > > >> > I'm trying to install Octavia in Rocky release with Ubuntu binary > distro. And found that there're no docker images for Ubuntu binary. > >> > Then I checked the Kolla dockerfile and found that it will not build > the image since it's not support yet. > >> > But I found that the Ubuntu Cloud Archive Team has already putted > Octavia packages into cloud repository [1]. Since some images built using > from this PPA, I think it can support ubuntu binary in Rocky release. > >> > > >> > I tried put package code into Docker files and build, but it gave me > an error message like below: > >> > ERROR:kolla.common.utils:octavia-api Failed with status: matched > >> > ERROR:kolla.common.utils:octavia-health-manager Failed with status: > matched > >> > ERROR:kolla.common.utils:octavia-housekeeping Failed with status: > matched > >> > ERROR:kolla.common.utils:octavia-worker Failed with status: matched > >> > > >> > So I think there's limit somewhere. How can I release it? > >> > > >> > Thanks, > >> > Eddie. > >> > > >> > [1] > https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/rocky-staging > >> > >> Hi Eddie, > >> > >> We explicitly fail when building octavia images on ubuntu/binary, see > >> docker/octavia/octavia-base/Dockerfile.j2: > >> > >> RUN echo '{{ install_type }} not yet available for {{ base_distro }}' \ > >> && /bin/false > >> > >> If you think we can support octavia now, please propose a patch to > >> master branch. I'm afraid we can't accept new features to stable > >> branches though, so you'll have to carry this change locally. > >> > >> Mark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sfinucan at redhat.com Tue Sep 17 12:10:39 2019 From: sfinucan at redhat.com (Stephen Finucane) Date: Tue, 17 Sep 2019 13:10:39 +0100 Subject: [all] Please do not bump openstackdocstheme minimum version if possible In-Reply-To: <93ffb0e0-d800-be75-cd53-474db8b750b7@debian.org> References: <93ffb0e0-d800-be75-cd53-474db8b750b7@debian.org> Message-ID: On Tue, 2019-09-17 at 09:00 +0200, Thomas Goirand wrote: > Hi, > > The openstackdocstheme package in Debian is highly modified, because > otherwise, building docs just fail. Namely, we have reverted commit > d87aaca30f64502b3dd13cc1ddf46beec90fc015 because otherwise, the docs > wouldn't build. Rebasing it is *very* annoying. > > So, it'd be nice if packages didn't depend on the very latest version of > openstackdocstheme so we could keep version 1.20.0. I don't think it's > that important to depend on the very latest version (please let me know > if I'm wrong, and let me know why). Rather than doing this, can you open bugs explaining what isn't working for you in Debian? Not being able to iterate on that theme doesn't seem like a long-term strategy. 
Stephen > Cheers, > > Thomas Goirand (zigo) From mjozefcz at redhat.com Tue Sep 17 12:42:00 2019 From: mjozefcz at redhat.com (Maciej Jozefczyk) Date: Tue, 17 Sep 2019 14:42:00 +0200 Subject: [requirements] Issues while trying to bump-up ovsdbapp requirement for stable/queens In-Reply-To: References: Message-ID: Hey, with patching global-requirements [0] our change now pass [1]. [0] https://review.opendev.org/#/c/682588/1 [1] https://review.opendev.org/#/c/681562/4 Maciej On Tue, Sep 17, 2019 at 11:07 AM Maciej Jozefczyk wrote: > Hello, > > I'm trying to bump-up ovsdbapp requirement in networking-ovn [0] from > 0.8.0 [1] to 0.10.4 [2]. Those two are in the same stable/queens release > and we need that change to merge some serious performance improvements to > stable/queens. > > Unfortunately the requirements-check jobs fails on this change [3] with: > > Requirement for package ovsdbapp : Requirement(package=u'ovsdbapp', location='', specifiers='>=0.10.4', markers=u'', comment=u'# Apache-2.0', extras=frozenset([])) does not match openstack/requirements value : set([Requirement(package='ovsdbapp', location='', specifiers='>=0.8.0', markers='', comment='# Apache-2.0', extras=frozenset([]))]) > > The only place where >=0.8.0 is set is global-requirements [4]. Do we need > to bump up it also there, even the upper-requirements bot proposal [5] has > been merged? It is string match? > > I proposed a change to bump it in global-requirements [6]. > > Thanks, > Maciej > > [0] https://review.opendev.org/#/c/681562/ > [1] https://github.com/openstack/ovsdbapp/releases/tag/0.8.0 > [2] https://github.com/openstack/ovsdbapp/releases/tag/0.10.4 > [3] > https://bb8048f0749367929365-38c02a6f4c2535c3f3f9bfdb5440d261.ssl.cf1.rackcdn.com/681562/3/check/requirements-check/84e1e97/job-output.txt > [4] > https://github.com/openstack/requirements/blob/stable/queens/global-requirements.txt#L402 > [5] https://review.opendev.org/#/c/682323 > [6] https://review.opendev.org/#/c/682588 > > -- > Best regards, > Maciej Józefczyk > -- Best regards, Maciej Józefczyk -------------- next part -------------- An HTML attachment was scrubbed... URL: From bcafarel at redhat.com Tue Sep 17 12:41:42 2019 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Tue, 17 Sep 2019 14:41:42 +0200 Subject: [i18n][mogan][neutron][swift][tc][valet] Cleaning up IRC logging for defunct channels In-Reply-To: <3003512C-E1CD-4BDE-B173-D4FF2DA0FC6B@redhat.com> References: <20190916221822.o5diqcqzgyvqevi4@yuggoth.org> <3003512C-E1CD-4BDE-B173-D4FF2DA0FC6B@redhat.com> Message-ID: On Tue, 17 Sep 2019 at 08:28, Slawek Kaplonski wrote: > Hi, > > > On 17 Sep 2019, at 07:11, Akihiro Motoki wrote: > > > > On Tue, Sep 17, 2019 at 7:20 AM Jeremy Stanley > wrote: > >> > >> Freenode imposes a hard limit of 120 simultaneously joined channels > >> for any single account. We've once again reached that limit with our > >> channel-logging meetbot. As a quick measure, I've proposed a bit of > >> cleanup: https://review.opendev.org/682500 > >> > >> Analysis of IRC channel logs indicates the following have seen 5 or > >> fewer non-bot comments posted in the past 12 months and are likely > >> of no value to continue logging: > >> > >> 5 #openstack-vpnaas > > > > I would like to add the following channels to this list in addition to > > #openstack-vpnaas. > > This is what I think recently but I haven't discussed it yet with the > team. > > > > - openstack-fwaas > > - networking-sfc > > Ha, I didn’t even know that such channels exists. 
And from what I can say, > if there are any topics related to such stadium projects, we are discussing > them on #openstack-neutron channel usually. > IMHO we can remove them too. > Yes #networking-sfc was created almost 3 years ago when activity was higher, it was also used at some point for IRC meetings. These meetings have stopped and the channel is really quiet now. So it sounds like a good time to formalize the folding back in neutron chan > > > > > I see only 5~20 members in these channels constantly. > > Developments in FWaaS and SFC are not so active, so I don't see a good > > reason to have a separate channel. > > They can be merged into the main neutron channel #openstack-neutron. > > > > Is there any guideline on how to guide users to migrate a channel to > > another channel? > > > > Thanks, > > Akihiro > > > > > >> 2 #swift3 > >> 2 #openstack-ko > >> 1 #openstack-deployment > >> 1 #midonet > >> 0 #openstack-valet > >> 0 #openstack-swg > >> 0 #openstack-mogan > >> > >> Please let me know either here on the ML or with a comment on the > >> review linked above if you have a reason to continue logging any of > >> these channels. I'd like to merge it later this week if possible. > >> Thanks! > >> -- > >> Jeremy Stanley > > > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > -- Bernard Cafarelli -------------- next part -------------- An HTML attachment was scrubbed... URL: From eharney at redhat.com Tue Sep 17 13:22:09 2019 From: eharney at redhat.com (Eric Harney) Date: Tue, 17 Sep 2019 09:22:09 -0400 Subject: [all][interop][cinder][qa] API changes with/without microversion and Tempest verification of API interoperability In-Reply-To: <16d3c861117.d3b1337055686.8802713726745370694@ghanshyammann.com> References: <16d397e41f7.12873dbb126838.8168349135797367489@ghanshyammann.com> <16d398641dd.ee09dee227347.1935004124034589982@ghanshyammann.com> <16d3c861117.d3b1337055686.8802713726745370694@ghanshyammann.com> Message-ID: On 9/16/19 8:01 PM, Ghanshyam Mann wrote: > ---- On Tue, 17 Sep 2019 02:40:36 +0900 Eric Harney wrote ---- > > On 9/16/19 6:02 AM, Ghanshyam Mann wrote: > > > ---- On Mon, 16 Sep 2019 18:53:58 +0900 Ghanshyam Mann wrote ---- > > > > Hello Everyone, > > > > > > > > As per discussion over ML, Tempest started the JSON schema strict validation for Volume APIs response [1]. > > > > Because it may affect the interop certification, it was explained to the Interop team as well as in the Board of Director meeting[2]. > > > > > > > > In Train, Tempest started implementing the validation and found an API change where the new field was added in API response without versioning[3] (Cinder has API microversion mechanism). IMO, that was not the correct way to change the API and as per API-WG guidelines[4] any field added/modified/removed in API should be with microverison(means old versions/user should not be affected by that change) and must for API interoperability. > > > > > > > > With JSON schema validation, Tempest verifies the API interoperability recommended behaviour by API-WG. But as per IRC conversion with cinder team, we have different opinion on API interoperability and how API should be changed under microversion mechanism. I would like to have a conclusion on this so that Tempest can verify or leave the Volume API for strict validation. > > > > > > I found the same flow chart what Sean created in Nova about "when to bump microverison" in Cinder also which clearly say any addition to response need new microversion. 
> > > - https://docs.openstack.org/cinder/latest/contributor/api_microversion_dev.html > > > > > > -gmann > > > > > > > I don't believe that it is clear that a microversion bump was required > > for the "groups" response showing up in a GET quota-sets response, and > > here's why: > > > > This API has, since at least Havana, returned dynamic fields based on > > quotas that are assigned to volume types. i.e.: > > > > $ cinder --debug quota-show b73b1b7e82a247038cd01a441ec5a806 > > DEBUG:keystoneauth:RESP BODY: {"quota_set": {"per_volume_gigabytes": -1, > > "volumes_ceph": -1, "groups": 10, "gigabytes": 1000, "backup_gigabytes": > > 1000, "snapshots": 10, "volumes_enc": -1, "snapshots_enc": -1, > > "snapshots_ceph": -1, "gigabytes_ceph": -1, "volumes": 10, > > "gigabytes_enc": -1, "backups": 10, "id": > > "b73b1b7e82a247038cd01a441ec5a806"}} > > > > "gigabytes_ceph" is in that response because there's a "ceph" volume > > type defined, same for "gigabytes_enc", etc. > > > > This puts this API alongside something more like listing volume types -- > > you get a list of what's defined on the deployment, not a pre-baked list > > of defined fields. > > > > Complaints about the fact that "groups" being added without a > > microversion imply that these other dynamic fields shouldn't be in this > > response either -- but this is how this API works. > > > > There's a lot of talk here about interoperability problems... what are > > those problems, exactly? If we ignore Ocata and just look at Train -- > > why is this API not problematic for interoperability there, when > > requests on different clouds would return different data, depending on > > how types are configured? > > > > It's not clear to me that rectifying the microversion concerns around > > the "groups" field is helpful without also understanding this piece, > > because if the concern is that different clouds return different fields > > for this API -- that will still happen. We need more detail to > > understand how to address this, and what the problem is that we are > > trying to solve exactly. > > There are two things here. > 1. API behaviour depends on backend. This has been discussed two years back also and Tempest team along with cinder team decided not to test the backend-specific behaviour in Tempest[1]. This is wrong. Nothing about what is happening in this API is backend-specific. > 2. API is changed without versioning. > > The second one is the issue here. If any API is changed without versioning cause the interoperability issue here. New field is being added for older microversion also for same backend. > If the concern is that different fields can be returned as part of quota info, it's worth understanding that fixing the Ocata tempest failures won't fix your concern, because this API still returns dynamic fields when the deployment is using per-type quotas, even on master. Are those considered "changes"? Need concrete details here. > *Why this is interoperability: > CloudA with same configuration and same backend is upgraded and have API return new field. I deploy my app on that cloud and use that field. Now CloudB with same configuration and same backend is not upgraded yet so does not have API return the new field added. Now I want to move my app from CloudA to CloudB and it will fail because CloudB API does not have that new field. And I cannot check what version it got added or there is no mechanism for app to discover that field as expected in which Cloud. > So this is a very clear case of interoperability. 
> > There is no way for end-user to discover the API change which is a real pain point for them. Note: same backend and same configuration cloud have different behaviour of API. > > We should consider the addition of new field same as delete or modify (name or type) any field in API. > This seems to imply that the whole Cinder per-type quota feature is invalid, or implemented in an invalid way. Is the concern about how things are expressed in the API, or the broader features? > > > > (Other than the problem that Tempest currently fails on Ocata. My > > inclination is still that the Tempest tests could just be wrong.) > > Ocata gate is going to be solved by https://review.opendev.org/#/c/681950/ > Fixing Ocata is great, but I'd like to settle the bigger questions about this API that you are raising. I'd prefer to not end up worrying about these same problems the next time someone writes tests for this API, or makes a change to it. What would be a valid way to design it that meet the concerns around interop? > -gmann > > [1] http://lists.openstack.org/pipermail/openstack-dev/2017-May/116172.html > > > > > > > > > > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000358.html > > > > [2] > > > > - http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003652.html > > > > - http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003655.html > > > > [3] https://bugs.launchpad.net/tempest/+bug/1843762 https://review.opendev.org/#/c/439461/ > > > > [4] https://specs.openstack.org/openstack/api-wg/guidelines/api_interoperability.html > > > > > > > > -gmann > > > > > > > > > > > > > > > > > From eharney at redhat.com Tue Sep 17 13:23:40 2019 From: eharney at redhat.com (Eric Harney) Date: Tue, 17 Sep 2019 09:23:40 -0400 Subject: [all][interop][cinder][qa] API changes with/without microversion and Tempest verification of API interoperability In-Reply-To: <792c4d6f9c6849831d29719e527de699b01026fd.camel@redhat.com> References: <16d397e41f7.12873dbb126838.8168349135797367489@ghanshyammann.com> <16d398641dd.ee09dee227347.1935004124034589982@ghanshyammann.com> <4c891e4a-84f6-f88c-08ca-c2563ed34bc7@gmail.com> <20190916221113.GA31638@sm-workstation> <792c4d6f9c6849831d29719e527de699b01026fd.camel@redhat.com> Message-ID: <30dd4235-270b-1d39-fd35-5f032044d222@redhat.com> On 9/16/19 6:59 PM, Sean Mooney wrote: > On Mon, 2019-09-16 at 17:11 -0500, Sean McGinnis wrote: >>> >>> Backend/type specific information leaking out of the API dynamically like >>> that is definitely an interoperability problem and as you said it sounds >>> like it's been that way for a long time. The compute servers diagnostics API >>> had a similar problem for a long time and the associated Tempest test for >>> that API was disabled for a long time because the response body was >>> hypervisor specific, so we eventually standardized it in a microversion so >>> it was driver agnostic. >>> >> >> Except this isn't backend specific information that is leaking. It's just >> reflecting the configuration of the system. > yes and config driven api behavior is also an iterop problem. > ideally you should not be able to tell if cinder is abcked by ceph or emc form the > api responce at all. > > sure you might have a volume type call ceph and another called emc but both should be > report capasty in the same field with teh same unit. 
> > ideally you would have a snapshots or gigabytes quota and option ly associate that with a volume types > but shanshot_ceph is not interoperable aross could if that exstis with that name solely becaue ceph was used on the > backend. as a client i would have to look at snapshost* to figure out my quotat and in princiapal that is an ubounded > set. I think you are confusing types vs backends here. In my example, it was called "snapshots_ceph" because there was a type called "ceph". That's an admin choice, not a behavior of the API. From smooney at redhat.com Tue Sep 17 13:46:31 2019 From: smooney at redhat.com (Sean Mooney) Date: Tue, 17 Sep 2019 14:46:31 +0100 Subject: [all][interop][cinder][qa] API changes with/without microversion and Tempest verification of API interoperability In-Reply-To: <30dd4235-270b-1d39-fd35-5f032044d222@redhat.com> References: <16d397e41f7.12873dbb126838.8168349135797367489@ghanshyammann.com> <16d398641dd.ee09dee227347.1935004124034589982@ghanshyammann.com> <4c891e4a-84f6-f88c-08ca-c2563ed34bc7@gmail.com> <20190916221113.GA31638@sm-workstation> <792c4d6f9c6849831d29719e527de699b01026fd.camel@redhat.com> <30dd4235-270b-1d39-fd35-5f032044d222@redhat.com> Message-ID: <77394851ad15eb8765b38facfdd7ffc665b01753.camel@redhat.com> On Tue, 2019-09-17 at 09:23 -0400, Eric Harney wrote: > On 9/16/19 6:59 PM, Sean Mooney wrote: > > On Mon, 2019-09-16 at 17:11 -0500, Sean McGinnis wrote: > > > > > > > > Backend/type specific information leaking out of the API dynamically like > > > > that is definitely an interoperability problem and as you said it sounds > > > > like it's been that way for a long time. The compute servers diagnostics API > > > > had a similar problem for a long time and the associated Tempest test for > > > > that API was disabled for a long time because the response body was > > > > hypervisor specific, so we eventually standardized it in a microversion so > > > > it was driver agnostic. > > > > > > > > > > Except this isn't backend specific information that is leaking. It's just > > > reflecting the configuration of the system. > > > > yes and config driven api behavior is also an iterop problem. > > ideally you should not be able to tell if cinder is abcked by ceph or emc form the > > api responce at all. > > > > sure you might have a volume type call ceph and another called emc but both should be > > report capasty in the same field with teh same unit. > > > > ideally you would have a snapshots or gigabytes quota and option ly associate that with a volume types > > but shanshot_ceph is not interoperable aross could if that exstis with that name solely becaue ceph was used on the > > backend. as a client i would have to look at snapshost* to figure out my quotat and in princiapal that is an > > ubounded > > set. > > I think you are confusing types vs backends here. In my example, it was > called "snapshots_ceph" because there was a type called "ceph". That's > an admin choice, not a behavior of the API. or it could have been express in the api with a dedicated type filed and so you would always have a snapshots filed regardless of the volume type but have a since type filed per quota set that identifed what type it applied too. 
From gmann at ghanshyammann.com Tue Sep 17 13:55:28 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 17 Sep 2019 06:55:28 -0700 Subject: [all] stable/ocata gate failure In-Reply-To: <16d398187fb.f6c6bb5927020.4472876578749508969@ghanshyammann.com> References: <16d2e36161f.c54e46a1161326.4333158062553456987@ghanshyammann.com> <437ff66c-aa63-7bcc-d181-13ed1668ac76@gmail.com> <16d398187fb.f6c6bb5927020.4472876578749508969@ghanshyammann.com> Message-ID: <16d3f81b7ef.ce34ef6480668.7170574999184592336@ghanshyammann.com> ---- On Mon, 16 Sep 2019 02:57:33 -0700 Ghanshyam Mann wrote ---- > ---- On Sun, 15 Sep 2019 02:01:56 +0900 Matt Riedemann wrote ---- > > On 9/14/2019 12:19 AM, Ghanshyam Mann wrote: > > > If you have noticed that stable/ocata gate is blocked where 'legacy-tempest-dsvm-neutron-full/-*' job > > > is failing due to the latest Tempest changes. > > > > > > Tempest started the JSON schema strict validation for Volume APIs which caught the failure or you can say > > > Tempest master cannot be used in Ocata testing. More details-https://bugs.launchpad.net/tempest/+bug/1843762 > > > > > > As per the Tempest stable branch testing policy[1], Tempst does not support the stable/ocata (which is EM )in the > > > current development cycle. Current supported stable branches by Tempest are Queens, Rocky, Stein and Train-on-going. > > > We can keep using Tempest master on EM stable/branches as long as it run successfully and if it start failing which is current > > > case of stable/ocata then use Tempest tag to test that EM stable branch. > > > > > > To unblock the stable/ocata gate, I am trying to install the Tempest 20.0.0(compatible version for Ocata) in ocata gate > > > -https://review.opendev.org/#/c/681950/ > > > Fix is not working as of now (it still install Tempest master). I will debug that later (my current priority is for Train feature freeze). > > > > > > [1]https://docs.openstack.org/tempest/latest/stable_branch_support_policy.html > > > > Thanks for the heads up. I agree that being able to continue to run > > tempest integration jobs on stable/ocata changes, even with a frozen > > tempest version, is better than not running integration testing on > > stable/ocata at all. When I was at IBM and we were supported branches > > downstream that were end of life upstream what I'd do was create an > > internal branch for tempest (stable/ocata in this case) so we'd run > > against that rather than master tempest, just in case we needed to make > > changes and couldn't use a tag (back then tags for tempest were also > > pretty new as I recall). I'm not advocating creating a stable/ocata > > branch for tempest upstream, I'm just giving an example of one > > downstream process for this sort of thing. > > Thanks for that information. I think creating stable/ocata in Tempest will face the maintenance issue. > Let's try with tag first if that work fine. I fixed it with refs instead of the tag. tag support is not there in git_clone() function for case RECYCLE=FALSE which we can add but that should be coming from master and then backport. Fix is working and merged now. - https://review.opendev.org/#/c/681950/ -gmann > > > > > > Alternatively Cinder could fix the API regression, but that would likely > > be a regression of its own at this point right? Meaning if they added > > something to an API response without a microversion and then removed it > > without a microversion, that's not really helping the situation. 
As it > > stands clients (in this case tempest) have to deal with the API change. > > I am on same page with you on this but there are different opinion on how to change API with microversion. > I have started a separate thread on this to discuss the correct way to change API > - http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009365.html > > -gmann > > > > > Another alternative would be putting some kind of compat code in tempest > > for this particular API breakage but if Tempest isn't going to gate on > > stable/ocata then that's not really the responsibility of the QA team to > > carry that compat code. > > Yeah, as per Extended Maintainance stable branch testing policy, Tempest would not be able > to maintain those code. It becomes difficult from maintenance as well as strict verification side also. > > -gmann > > > > > > > -- > > > > Thanks, > > > > Matt > > > > > From openstack at nemebean.com Tue Sep 17 13:56:46 2019 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 17 Sep 2019 08:56:46 -0500 Subject: [oslo] Stepping down from core reviewer In-Reply-To: References: Message-ID: <3ed5c805-4413-7df8-d251-8fd5a6db38ed@nemebean.com> Sorry to hear that, but thanks for all of your contributions to Oslo over the years! Hope to see you back at some point. -Ben On 9/17/19 12:47 AM, ChangBo Guo wrote: > Hi folks, > > I no longer have the time to contribute to Oslo  in a meaningful way in > past few months, due to the company internal stuff,  and would like to > step down from core reviewer.  It was an honor to be one of the great > team since 4 years ago.  I still work on OpenStack, just have no enough > time to focus on Oslo. I hope have more time to contribute again in the > future :-) > > All the best! > > -- > ChangBo Guo(gcb) From gmann at ghanshyammann.com Tue Sep 17 14:05:57 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 17 Sep 2019 07:05:57 -0700 Subject: [all][interop][cinder][qa] API changes with/withoutmicroversion and Tempest verification of API interoperability In-Reply-To: <201909171641237330226@zte.com.cn> References: 16d397e41f7.12873dbb126838.8168349135797367489@ghanshyammann.com, 16d3c8a4243.10a5c411e55700.6669950579531609398@ghanshyammann.com <201909171641237330226@zte.com.cn> Message-ID: <16d3f8b5065.ca731df681219.5952609739631660184@ghanshyammann.com> ---- On Tue, 17 Sep 2019 01:41:23 -0700 wrote ---- > Seems we can hardly reach an agreement about whether to use microverion for fields added in response, but, I think for tempest, things are simpler, we can add schema check according to the api-ref, and if some issues are found (like groups field) in older version, we can simply remove that field from required fields. That won't happen very often. I do not think we should do that. When I proposed the idea of doing the strict JSON schema validation for volume API, main goal was to block any backward compatible or non-interoperable changes in API strictly. Compute JSON schema strict validation is good example to show the benefits of doing that. Idea behind strict validation is we have a predefined schema for API response with its optional/mandatory fields, name, type, range etc and compare with the API actual response and see if any API field is added, removed, type changed, range changed etc. If we do not do 'required' field or AdditionalPropoerties=False then, there is no meaning of strict validation. I will leave those API from strict validation. I have commented the same in your all open patches also. 
JSON schema strict validation has to be completely strict validation or no validation. As you mentioned about the api-ref, I do not think we have well-defined api-ref for volume case. You can see that during Tempest schema implementation, you have fixed 18 bugs in api-ref[1]. I always go with what code return as code is what end user get response from. [1] https://review.opendev.org/#/q/project:openstack/cinder+branch:master+topic:bp/volume-response-schema-validation -gmann > > > Original MailSender: GhanshyamMann To: Sean Mooney ;CC: Sean McGinnis ;Matt Riedemann ;openstack-discuss ;Date: 2019/09/17 08:08Subject: Re: [all][interop][cinder][qa] API changes with/withoutmicroversion and Tempest verification of API interoperability ---- On Tue, 17 Sep 2019 07:59:19 +0900 Sean Mooney wrote ---- > > On Mon, 2019-09-16 at 17:11 -0500, Sean McGinnis wrote: > > > > > > > > Backend/type specific information leaking out of the API dynamically like > > > > that is definitely an interoperability problem and as you said it sounds > > > > like it's been that way for a long time. The compute servers diagnostics API > > > > had a similar problem for a long time and the associated Tempest test for > > > > that API was disabled for a long time because the response body was > > > > hypervisor specific, so we eventually standardized it in a microversion so > > > > it was driver agnostic. > > > > > > > > > > Except this isn't backend specific information that is leaking. It's just > > > reflecting the configuration of the system. > > yes and config driven api behavior is also an iterop problem. > > ideally you should not be able to tell if cinder is abcked by ceph or emc form the > > api responce at all. > > > > sure you might have a volume type call ceph and another called emc but both should be > > report capasty in the same field with teh same unit. > > > > ideally you would have a snapshots or gigabytes quota and option ly associate that with a volume types > > but shanshot_ceph is not interoperable aross could if that exstis with that name solely becaue ceph was used on the > > backend. as a client i would have to look at snapshost* to figure out my quotat and in princiapal that is an ubounded > > set. > > Yeah and this is real pain point for end-user or app using API directly. Dynamic API behaviour base don system configuration is interoperability issue. > In bug#1687538 case, new field is going to be reflected for the same backend and same configuration Cloud. Cloud provider upgrade their cloud from ocata->anything and user will start getting the new field without any mechanism to discover whether that field is expected to be present or not. > > -gmann > > > > > > > > > > > > > > From mordred at inaugust.com Tue Sep 17 14:15:11 2019 From: mordred at inaugust.com (Monty Taylor) Date: Tue, 17 Sep 2019 16:15:11 +0200 Subject: [sdk][release][requirements] FFE requested for openstacksdk Message-ID: <2AC90410-7617-44BE-925B-60C35EB453BF@inaugust.com> Heya, We’d like to cut an 0.35.1 bugfix release as part of train to grab https://review.opendev.org/#/c/680649/ and https://review.opendev.org/#/c/682454/. The first is landed and is in support of the Ironic Nova driver. The second is making its way through the gate right now and is in support of OSC v4. Thanks! 
Monty From mthode at mthode.org Tue Sep 17 14:16:02 2019 From: mthode at mthode.org (Matthew Thode) Date: Tue, 17 Sep 2019 09:16:02 -0500 Subject: [os-win][requirements] FFE requested for os-win In-Reply-To: <64050966FCE0B948BCE2B28DB6E0B7D557ABC45A@CBSEX1.cloudbase.local> References: <64050966FCE0B948BCE2B28DB6E0B7D557ABC45A@CBSEX1.cloudbase.local> Message-ID: <20190917141602.2rfawxfppjllgavf@mthode.org> On 19-09-17 08:17:16, Lucian Petrut wrote: > Hi, > > I’d like to request a FFE for os-win. One important bug fix has missed the train (4.3.1) release, for which reason we’d need to have a subsequent one. > > The bug in question prevents Nova from starting after host reboots when using the Hyper-V driver on recent Windows Server 2019 builds. > > Thanks, > Lucian Petrut > Yep, you are good (approved). Thanks for checking. -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From gmann at ghanshyammann.com Tue Sep 17 14:19:44 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 17 Sep 2019 07:19:44 -0700 Subject: [all][interop][cinder][qa] API changes with/without microversion and Tempest verification of API interoperability In-Reply-To: References: <16d397e41f7.12873dbb126838.8168349135797367489@ghanshyammann.com> <16d398641dd.ee09dee227347.1935004124034589982@ghanshyammann.com> <16d3c861117.d3b1337055686.8802713726745370694@ghanshyammann.com> Message-ID: <16d3f97eeaf.daa841bd81863.8972709403321757519@ghanshyammann.com> ---- On Tue, 17 Sep 2019 06:22:09 -0700 Eric Harney wrote ---- > On 9/16/19 8:01 PM, Ghanshyam Mann wrote: > > ---- On Tue, 17 Sep 2019 02:40:36 +0900 Eric Harney wrote ---- > > > On 9/16/19 6:02 AM, Ghanshyam Mann wrote: > > > > ---- On Mon, 16 Sep 2019 18:53:58 +0900 Ghanshyam Mann wrote ---- > > > > > Hello Everyone, > > > > > > > > > > As per discussion over ML, Tempest started the JSON schema strict validation for Volume APIs response [1]. > > > > > Because it may affect the interop certification, it was explained to the Interop team as well as in the Board of Director meeting[2]. > > > > > > > > > > In Train, Tempest started implementing the validation and found an API change where the new field was added in API response without versioning[3] (Cinder has API microversion mechanism). IMO, that was not the correct way to change the API and as per API-WG guidelines[4] any field added/modified/removed in API should be with microverison(means old versions/user should not be affected by that change) and must for API interoperability. > > > > > > > > > > With JSON schema validation, Tempest verifies the API interoperability recommended behaviour by API-WG. But as per IRC conversion with cinder team, we have different opinion on API interoperability and how API should be changed under microversion mechanism. I would like to have a conclusion on this so that Tempest can verify or leave the Volume API for strict validation. > > > > > > > > I found the same flow chart what Sean created in Nova about "when to bump microverison" in Cinder also which clearly say any addition to response need new microversion. 
> > > > - https://docs.openstack.org/cinder/latest/contributor/api_microversion_dev.html > > > > > > > > -gmann > > > > > > > > > > I don't believe that it is clear that a microversion bump was required > > > for the "groups" response showing up in a GET quota-sets response, and > > > here's why: > > > > > > This API has, since at least Havana, returned dynamic fields based on > > > quotas that are assigned to volume types. i.e.: > > > > > > $ cinder --debug quota-show b73b1b7e82a247038cd01a441ec5a806 > > > DEBUG:keystoneauth:RESP BODY: {"quota_set": {"per_volume_gigabytes": -1, > > > "volumes_ceph": -1, "groups": 10, "gigabytes": 1000, "backup_gigabytes": > > > 1000, "snapshots": 10, "volumes_enc": -1, "snapshots_enc": -1, > > > "snapshots_ceph": -1, "gigabytes_ceph": -1, "volumes": 10, > > > "gigabytes_enc": -1, "backups": 10, "id": > > > "b73b1b7e82a247038cd01a441ec5a806"}} > > > > > > "gigabytes_ceph" is in that response because there's a "ceph" volume > > > type defined, same for "gigabytes_enc", etc. > > > > > > This puts this API alongside something more like listing volume types -- > > > you get a list of what's defined on the deployment, not a pre-baked list > > > of defined fields. > > > > > > Complaints about the fact that "groups" being added without a > > > microversion imply that these other dynamic fields shouldn't be in this > > > response either -- but this is how this API works. > > > > > > There's a lot of talk here about interoperability problems... what are > > > those problems, exactly? If we ignore Ocata and just look at Train -- > > > why is this API not problematic for interoperability there, when > > > requests on different clouds would return different data, depending on > > > how types are configured? > > > > > > It's not clear to me that rectifying the microversion concerns around > > > the "groups" field is helpful without also understanding this piece, > > > because if the concern is that different clouds return different fields > > > for this API -- that will still happen. We need more detail to > > > understand how to address this, and what the problem is that we are > > > trying to solve exactly. > > > > There are two things here. > > 1. API behaviour depends on backend. This has been discussed two years back also and Tempest team along with cinder team decided not to test the backend-specific behaviour in Tempest[1]. > > This is wrong. Nothing about what is happening in this API is > backend-specific. I agree and I am just giving the background information about backend specific features/API behaviour testing. > > > 2. API is changed without versioning. > > > > The second one is the issue here. If any API is changed without versioning cause the interoperability issue here. New field is being added for older microversion also for same backend. > > > > If the concern is that different fields can be returned as part of quota > info, it's worth understanding that fixing the Ocata tempest failures > won't fix your concern, because this API still returns dynamic fields > when the deployment is using per-type quotas, even on master. > > Are those considered "changes"? Need concrete details here. +1. ocata fix is separate from this discussion. That has to be done at some time whenever Tempest starts failing on Ocata for any reason. > > > *Why this is interoperability: > > CloudA with same configuration and same backend is upgraded and have API return new field. I deploy my app on that cloud and use that field. 
Now CloudB with same configuration and same backend is not upgraded yet so does not have API return the new field added. Now I want to move my app from CloudA to CloudB and it will fail because CloudB API does not have that new field. And I cannot check what version it got added or there is no mechanism for app to discover that field as expected in which Cloud. > > So this is a very clear case of interoperability. > > > > There is no way for end-user to discover the API change which is a real pain point for them. Note: same backend and same configuration cloud have different behaviour of API. > > > > We should consider the addition of new field same as delete or modify (name or type) any field in API. > > > > This seems to imply that the whole Cinder per-type quota feature is > invalid, or implemented in an invalid way. Is the concern about how > things are expressed in the API, or the broader features? I am also concern about the general rule for Cinder API which is important to consider for Tempest BP of strict volume validation[1] let me ask clearly: 1. does Cinder consider adding any new field to any API does not need microversion bump as opposit to this doc[2] ? 2. Or it is only quota API where additional to the new field is considered as no microversion bump? if so how many such dynamic APIs Cinder has ? [1] https://blueprints.launchpad.net/tempest/+spec/volume-response-schema-validation [2] https://docs.openstack.org/cinder/latest/contributor/api_microversion_dev.html https://specs.openstack.org/openstack/api-wg/guidelines/api_interoperability.html -gmann > > > > > > > (Other than the problem that Tempest currently fails on Ocata. My > > > inclination is still that the Tempest tests could just be wrong.) > > > > Ocata gate is going to be solved by https://review.opendev.org/#/c/681950/ > > > > Fixing Ocata is great, but I'd like to settle the bigger questions about > this API that you are raising. > > I'd prefer to not end up worrying about these same problems the next > time someone writes tests for this API, or makes a change to it. > > What would be a valid way to design it that meet the concerns around > interop? > > > -gmann > > > > [1] http://lists.openstack.org/pipermail/openstack-dev/2017-May/116172.html > > > > > > > > > > > > > > > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000358.html > > > > > [2] > > > > > - http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003652.html > > > > > - http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003655.html > > > > > [3] https://bugs.launchpad.net/tempest/+bug/1843762 https://review.opendev.org/#/c/439461/ > > > > > [4] https://specs.openstack.org/openstack/api-wg/guidelines/api_interoperability.html > > > > > > > > > > -gmann > > > > > > > > > > > > > > > > > > > > > > > > > > From mthode at mthode.org Tue Sep 17 14:20:22 2019 From: mthode at mthode.org (Matthew Thode) Date: Tue, 17 Sep 2019 09:20:22 -0500 Subject: [requirements] Issues while trying to bump-up ovsdbapp requirement for stable/queens In-Reply-To: References: Message-ID: <20190917142022.tphrgr3qeaageuj6@mthode.org> On 19-09-17 14:42:00, Maciej Jozefczyk wrote: > Hey, > > with patching global-requirements [0] our change now pass [1]. 
> > [0] https://review.opendev.org/#/c/682588/1 > [1] https://review.opendev.org/#/c/681562/4 > > Maciej > > On Tue, Sep 17, 2019 at 11:07 AM Maciej Jozefczyk > wrote: > > > Hello, > > > > I'm trying to bump-up ovsdbapp requirement in networking-ovn [0] from > > 0.8.0 [1] to 0.10.4 [2]. Those two are in the same stable/queens release > > and we need that change to merge some serious performance improvements to > > stable/queens. > > > > Unfortunately the requirements-check jobs fails on this change [3] with: > > > > Requirement for package ovsdbapp : Requirement(package=u'ovsdbapp', location='', specifiers='>=0.10.4', markers=u'', comment=u'# Apache-2.0', extras=frozenset([])) does not match openstack/requirements value : set([Requirement(package='ovsdbapp', location='', specifiers='>=0.8.0', markers='', comment='# Apache-2.0', extras=frozenset([]))]) > > > > The only place where >=0.8.0 is set is global-requirements [4]. Do we need > > to bump up it also there, even the upper-requirements bot proposal [5] has > > been merged? It is string match? > > > > I proposed a change to bump it in global-requirements [6]. > > > > Thanks, > > Maciej > > > > [0] https://review.opendev.org/#/c/681562/ > > [1] https://github.com/openstack/ovsdbapp/releases/tag/0.8.0 > > [2] https://github.com/openstack/ovsdbapp/releases/tag/0.10.4 > > [3] > > https://bb8048f0749367929365-38c02a6f4c2535c3f3f9bfdb5440d261.ssl.cf1.rackcdn.com/681562/3/check/requirements-check/84e1e97/job-output.txt > > [4] > > https://github.com/openstack/requirements/blob/stable/queens/global-requirements.txt#L402 > > [5] https://review.opendev.org/#/c/682323 > > [6] https://review.opendev.org/#/c/682588 > > Changing minimums is not allowed for stable releases (especially an older release like queens and especially for performance (even if severe). I do not see anything preventing you from using a newer version of ovsdbapp. You may be able to mask bad versions within the project as long as the upper-constraints version is not masked (using != version specifiers). -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From gmann at ghanshyammann.com Tue Sep 17 14:26:36 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 17 Sep 2019 07:26:36 -0700 Subject: [all][interop][cinder][qa] API changes with/without microversion and Tempest verification of API interoperability In-Reply-To: <77394851ad15eb8765b38facfdd7ffc665b01753.camel@redhat.com> References: <16d397e41f7.12873dbb126838.8168349135797367489@ghanshyammann.com> <16d398641dd.ee09dee227347.1935004124034589982@ghanshyammann.com> <4c891e4a-84f6-f88c-08ca-c2563ed34bc7@gmail.com> <20190916221113.GA31638@sm-workstation> <792c4d6f9c6849831d29719e527de699b01026fd.camel@redhat.com> <30dd4235-270b-1d39-fd35-5f032044d222@redhat.com> <77394851ad15eb8765b38facfdd7ffc665b01753.camel@redhat.com> Message-ID: <16d3f9e367f.e7766fab82199.66027046233260001@ghanshyammann.com> ---- On Tue, 17 Sep 2019 06:46:31 -0700 Sean Mooney wrote ---- > On Tue, 2019-09-17 at 09:23 -0400, Eric Harney wrote: > > On 9/16/19 6:59 PM, Sean Mooney wrote: > > > On Mon, 2019-09-16 at 17:11 -0500, Sean McGinnis wrote: > > > > > > > > > > Backend/type specific information leaking out of the API dynamically like > > > > > that is definitely an interoperability problem and as you said it sounds > > > > > like it's been that way for a long time. 
The compute servers diagnostics API > > > > > had a similar problem for a long time and the associated Tempest test for > > > > > that API was disabled for a long time because the response body was > > > > > hypervisor specific, so we eventually standardized it in a microversion so > > > > > it was driver agnostic. > > > > > > > > > > > > > Except this isn't backend specific information that is leaking. It's just > > > > reflecting the configuration of the system. > > > > > > yes and config driven api behavior is also an iterop problem. > > > ideally you should not be able to tell if cinder is abcked by ceph or emc form the > > > api responce at all. > > > > > > sure you might have a volume type call ceph and another called emc but both should be > > > report capasty in the same field with teh same unit. > > > > > > ideally you would have a snapshots or gigabytes quota and option ly associate that with a volume types > > > but shanshot_ceph is not interoperable aross could if that exstis with that name solely becaue ceph was used on the > > > backend. as a client i would have to look at snapshost* to figure out my quotat and in princiapal that is an > > > ubounded > > > set. > > > > I think you are confusing types vs backends here. In my example, it was > > called "snapshots_ceph" because there was a type called "ceph". That's > > an admin choice, not a behavior of the API. > or it could have been express in the api with a dedicated type filed and > > so you would always have a snapshots filed regardless of the volume type but have a since > type filed per quota set that identifed what type it applied too. IMO, the best way is to make it in an array structure and volume_type specific quotas can be optional items in mandatory 'snapshots' array field. For example: { "quota_set": { . . "snapshots": { "total/project": 10, "ceph": -1, "lvm-thin": -1, "lvmdriver-1": -1, } } -gmann > > > From openstack at fried.cc Tue Sep 17 14:30:08 2019 From: openstack at fried.cc (Eric Fried) Date: Tue, 17 Sep 2019 09:30:08 -0500 Subject: [sdk][release][requirements] FFE requested for openstacksdk In-Reply-To: <2AC90410-7617-44BE-925B-60C35EB453BF@inaugust.com> References: <2AC90410-7617-44BE-925B-60C35EB453BF@inaugust.com> Message-ID: > We’d like to cut an 0.35.1 bugfix release as part of train to grab https://review.opendev.org/#/c/680649/ and . The first is landed and is in support of the Ironic Nova driver. Not to derail your train, but we worked around this issue in nova via [1], and are unlikely to attempt to land a new change picking up the sdk fix at this stage in the release. That said, I have no problem with your FFE, FWIW :) Thanks, efried [1] https://review.opendev.org/#/c/680542/ From mthode at mthode.org Tue Sep 17 14:31:18 2019 From: mthode at mthode.org (Matthew Thode) Date: Tue, 17 Sep 2019 09:31:18 -0500 Subject: [sdk][release][requirements] FFE requested for openstacksdk In-Reply-To: <2AC90410-7617-44BE-925B-60C35EB453BF@inaugust.com> References: <2AC90410-7617-44BE-925B-60C35EB453BF@inaugust.com> Message-ID: <20190917143118.2oezd2kcbfn6exok@mthode.org> On 19-09-17 16:15:11, Monty Taylor wrote: > Heya, > > We’d like to cut an 0.35.1 bugfix release as part of train to grab https://review.opendev.org/#/c/680649/ and https://review.opendev.org/#/c/682454/. The first is landed and is in support of the Ironic Nova driver. The second is making its way through the gate right now and is in support of OSC v4. 
> Seems like a larger change than just those two patches if releasing from master (as there is no train branch). What possible projects could this require a re-release for? Below is a table showing what's running with a dependency on openstacksdk now. +--------------------------------------------------+----------------------------------------------------------+------+----------------------------------------------------------------------------+ | Repository | Filename | Line | Text | +--------------------------------------------------+----------------------------------------------------------+------+----------------------------------------------------------------------------+ | openstack/heat | requirements.txt | 15 | openstacksdk>=0.11.2 # Apache-2.0 | | openstack/ironic | requirements.txt | 50 | openstacksdk>=0.31.2 # Apache-2.0 | | openstack/ironic-inspector | requirements.txt | 20 | openstacksdk>=0.30.0 # Apache-2.0 | | openstack/kuryr-kubernetes | requirements.txt | 12 | openstacksdk>=0.13.0 # Apache-2.0 | | openstack/masakari-dashboard | requirements.txt | 14 | openstacksdk>=0.26.0 | | openstack/masakari-monitors | requirements.txt | 7 | openstacksdk>=0.13.0 # Apache-2.0 | | openstack/metalsmith | requirements.txt | 5 | openstacksdk>=0.29.0 # Apache-2.0 | | openstack/nova | requirements.txt | 74 | openstacksdk>=0.35.0 # Apache-2.0 | | openstack/octavia-dashboard | requirements.txt | 7 | openstacksdk>=0.24.0 # Apache-2.0 | | openstack/openstack-ansible | requirements.txt | 26 | openstacksdk>=0.14.0 # Apache-2.0 | | openstack/openstack-ansible-tests | test-requirements.txt | 44 | openstacksdk>=0.14.0 # Apache-2.0 | | openstack/os-client-config | requirements.txt | 4 | openstacksdk>=0.13.0 # Apache-2.0 | | openstack/osc-lib | requirements.txt | 10 | openstacksdk>=0.15.0 # Apache-2.0 | | openstack/python-masakariclient | requirements.txt | 5 | openstacksdk>=0.13.0 # Apache-2.0 | | openstack/python-novaclient | test-requirements.txt | 16 | openstacksdk>=0.11.2 # Apache-2.0 | | openstack/python-openstackclient | requirements.txt | 10 | openstacksdk>=0.17.0 # Apache-2.0 | | openstack/python-senlinclient | requirements.txt | 9 | openstacksdk>=0.24.0 # Apache-2.0 | | openstack/python-tempestconf | requirements.txt | 9 | openstacksdk>=0.11.3 # Apache-2.0 | | openstack/qinling | runtimes/python2/requirements.txt | 10 | openstacksdk>=0.9.19 | | openstack/qinling | runtimes/python3/requirements.txt | 10 | openstacksdk>=0.9.19 | | openstack/requirements | global-requirements.txt | 434 | openstacksdk # Apache-2.0 | | openstack/requirements | openstack_requirements/tests/files/upper-constraints.txt | 365 | openstacksdk===0.9.13 | | openstack/senlin | requirements.txt | 14 | openstacksdk>=0.27.0 # Apache-2.0 | | openstack/senlin-tempest-plugin | requirements.txt | 7 | openstacksdk>=0.24.0 # Apache-2.0 | | openstack/shade | requirements.txt | 6 | # shade depends on os-client-config in addition to openstacksdk so that it | | openstack/shade | requirements.txt | 9 | openstacksdk>=0.15.0 # Apache-2.0 | | openstack/sushy-tools | test-requirements.txt | 15 | openstacksdk>=0.11.2 # Apache-2.0 | | openstack/tenks | ansible/roles/ironic-enrolment/files/requirements.txt | 4 | openstacksdk>=0.17.2 # Apache | | openstack/tenks | ansible/roles/nova-flavors/files/requirements.txt | 4 | openstacksdk>=0.17.2 # Apache | | openstack/tripleo-common-tempest-plugin | requirements.txt | 7 | openstacksdk>=0.11.2 # Apache-2.0 | | openstack/upstream-institute-virtual-environment | 
elements/upstream-training/static/tmp/requirements.txt | 161 | openstacksdk==0.24.0 | | x/bilean | requirements.txt | 10 | openstacksdk>=0.7.4 # Apache-2.0 | | x/flame | requirements.txt | 6 | openstacksdk==0.17.2 | | x/monitorstack | requirements.txt | 4 | openstacksdk>=0.9.14 | | x/osops-tools-contrib | ansible_requirements.txt | 29 | openstacksdk==0.9.8 | | zuul/nodepool | requirements.txt | 9 | # openstacksdk before 0.27.0 is TaskManager based | | zuul/nodepool | requirements.txt | 12 | openstacksdk>=0.27.0,!=0.28.0,!=0.29.0,!=0.30.0,!=0.31.0,!=0.31.1,!=0.31.2 | | zuul/zuul-jobs | test-requirements.txt | 22 | openstacksdk>=0.17.1 | +--------------------------------------------------+----------------------------------------------------------+------+----------------------------------------------------------------------------+ -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From gmann at ghanshyammann.com Tue Sep 17 14:32:08 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 17 Sep 2019 07:32:08 -0700 Subject: [goals][IPv6-Only Deployments and Testing] Week R-4 Update In-Reply-To: References: <16d3d2203c8.b47bfe5156036.4862537349817585954@ghanshyammann.com> Message-ID: <16d3fa3471d.ac5d84ad82476.7338406402960164895@ghanshyammann.com> ---- On Tue, 17 Sep 2019 03:12:22 -0700 Radosław Piliszek wrote ---- > Hiya, > Kolla is not going to get an IPv6-only job because it builds docker images and is not tested regarding networking (it does not do devstack/tempest either). > Kolla-Ansible, which does the deployment, is going to get some IPv6-only test jobs - https://review.opendev.org/681573We are testing CentOS and multinode and hence need overlay VXLAN to reach sensible levels of stability there - >https://review.opendev.org/670690The VXLAN patch is probably ready, awaiting review of independent cores. It will be refactored out later to put it in zuul plays as it might be useful to other projects as well.The IPv6 patch needs rebasing on VXLAN and >some small tweaks still. This is good news Radosław. Actually deployment projects deploying on IPv6 is out of the scope of this goal(I forgot to add it under do not need section) but this is something we wanted to do as next step. Starting it in kolla is really appreciated and great news. -gmann > Kind regards,Radek > wt., 17 wrz 2019 o 04:58 Ghanshyam Mann napisał(a): > Hello Everyone, > > Below is the progress on Ipv6 goal during R6 week. I started the legacy job for IPv6 deployment with duplicating the run.yaml which is > the only best way to do. > > Summary: > > The projects still need to prepare the IPv6 job: > * Ec2-Api > * Freezer > * Heat > * Ironic > * Karbor > * Kolla > * Kuryr > * Magnum > * Manila > * Masakari > * Mistral > * Murano > * Octavia > * Swift > > The projects waiting for IPv6 job patch to merge: > If patch is failing, help me to debug that otherwise review and merge. 
> * Barbican > * Blazar > * Cyborg > * Tricircle > * Vitrage > * Zaqar > * Cinder > * Glance > * Monasca > * Neutron > * Qinling > * Quality Assurance > * Sahara > * Searchlight > * Senlin > * Tacker > > The projects have merged the IPv6 jobs: > * Designate > * Murano > * Trove > * Cloudkitty > * Congress > * Horizon > * Keystone > * Nova > * Placement > * Solum > * Telemetry > * Watcher > * Zun > > The projects do not need the IPv6 job (CLI, lib, deployment projects etc ): > If anything I missed and IPv6 job need, please reply otherwise I will mark their task in storyboard as invalid. > > * Adjutant > * Documentation > * I18n > * Infrastructure > * Loci > * Openstack Charms > * Openstack-Chef > * Openstack-Helm > * Openstackansible > * Openstackclient > * Openstacksdk > * Oslo > * Packaging-Rpm > * Powervmstackers > * Puppet Openstack > * Rally > * Release Management > * Requirements > * Storlets > * Tripleo > * Winstackers > > > Storyboard: > ========= > - https://storyboard.openstack.org/#!/story/2005477 > > IPv6 missing support found: > ===================== > 1. https://review.opendev.org/#/c/673397/ > 2. https://review.opendev.org/#/c/673449/ > 3. https://review.opendev.org/#/c/677524/ > > How you can help: > ============== > - Each project needs to look for and review the ipv6 job patch. > - Verify it works fine on ipv6 and no ipv4 used in conf etc > - Any other specific scenario needs to be added as part of project IPv6 verification. > - Help on debugging and fix the bug in IPv6 job is failing. > > Everything related to this goal can be found under this topic: > Topic: https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged) > > How to define and run new IPv6 Job on project side: > ======================================= > - I prepared a wiki page to describe this section - https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing > > Review suggestion: > ============== > - Main goal of these jobs will be whether your service is able to listen on IPv6 and can communicate to any > other services either OpenStack or DB or rabbitmq etc on IPv6 or not. So check your proposed job with > that point of view. If anything missing, comment on patch. > - One example was - I missed to configure novnc address to IPv6- https://review.opendev.org/#/c/672493/ > - base script as part of 'devstack-tempest-ipv6' will do basic checks for endpoints on IPv6 and some devstack var > setting. But if your project needs more specific verification then it can be added in project side job as post-run > playbooks as described in wiki page[1]. 
> > [1] https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing > > -gmann > > > > From smooney at redhat.com Tue Sep 17 14:39:32 2019 From: smooney at redhat.com (Sean Mooney) Date: Tue, 17 Sep 2019 15:39:32 +0100 Subject: [all][interop][cinder][qa] API changes with/without microversion and Tempest verification of API interoperability In-Reply-To: <16d3f9e367f.e7766fab82199.66027046233260001@ghanshyammann.com> References: <16d397e41f7.12873dbb126838.8168349135797367489@ghanshyammann.com> <16d398641dd.ee09dee227347.1935004124034589982@ghanshyammann.com> <4c891e4a-84f6-f88c-08ca-c2563ed34bc7@gmail.com> <20190916221113.GA31638@sm-workstation> <792c4d6f9c6849831d29719e527de699b01026fd.camel@redhat.com> <30dd4235-270b-1d39-fd35-5f032044d222@redhat.com> <77394851ad15eb8765b38facfdd7ffc665b01753.camel@redhat.com> <16d3f9e367f.e7766fab82199.66027046233260001@ghanshyammann.com> Message-ID: On Tue, 2019-09-17 at 07:26 -0700, Ghanshyam Mann wrote: > ---- On Tue, 17 Sep 2019 06:46:31 -0700 Sean Mooney wrote ---- > > On Tue, 2019-09-17 at 09:23 -0400, Eric Harney wrote: > > > On 9/16/19 6:59 PM, Sean Mooney wrote: > > > > On Mon, 2019-09-16 at 17:11 -0500, Sean McGinnis wrote: > > > > > > > > > > > > Backend/type specific information leaking out of the API dynamically like > > > > > > that is definitely an interoperability problem and as you said it sounds > > > > > > like it's been that way for a long time. The compute servers diagnostics API > > > > > > had a similar problem for a long time and the associated Tempest test for > > > > > > that API was disabled for a long time because the response body was > > > > > > hypervisor specific, so we eventually standardized it in a microversion so > > > > > > it was driver agnostic. > > > > > > > > > > > > > > > > Except this isn't backend specific information that is leaking. It's just > > > > > reflecting the configuration of the system. > > > > > > > > yes and config driven api behavior is also an iterop problem. > > > > ideally you should not be able to tell if cinder is abcked by ceph or emc form the > > > > api responce at all. > > > > > > > > sure you might have a volume type call ceph and another called emc but both should be > > > > report capasty in the same field with teh same unit. > > > > > > > > ideally you would have a snapshots or gigabytes quota and option ly associate that with a volume types > > > > but shanshot_ceph is not interoperable aross could if that exstis with that name solely becaue ceph was used on > the > > > > backend. as a client i would have to look at snapshost* to figure out my quotat and in princiapal that is an > > > > ubounded > > > > set. > > > > > > I think you are confusing types vs backends here. In my example, it was > > > called "snapshots_ceph" because there was a type called "ceph". That's > > > an admin choice, not a behavior of the API. > > or it could have been express in the api with a dedicated type filed and > > > > so you would always have a snapshots filed regardless of the volume type but have a since > > type filed per quota set that identifed what type it applied too. > > IMO, the best way is to make it in an array structure and volume_type specific quotas can be optional items in > mandatory 'snapshots' array field. > For example: > > { > "quota_set": { > . > . 
> "snapshots": { > "total/project": 10, > "ceph": -1, > "lvm-thin": -1, > "lvmdriver-1": -1, > } > } > well you can do it that way or invert it { "quota_set": { ceph:{snapshot:-1,gigabytpes:100 ...} lvm-1:{snapshot:-1,gigabytpes:100 ...} lvm-2:{snapshot:-1,gigabytpes:100 ...} project:{snapshot:-1,gigabytpes:100 ...} ... } } in either case the filed names remain the same with and the type is treated as an opaque sting that is decoupled the field names. > -gmann > > > > > > > > From smooney at redhat.com Tue Sep 17 14:43:53 2019 From: smooney at redhat.com (Sean Mooney) Date: Tue, 17 Sep 2019 15:43:53 +0100 Subject: [all][interop][cinder][qa] API changes with/without microversion and Tempest verification of API interoperability In-Reply-To: References: <16d397e41f7.12873dbb126838.8168349135797367489@ghanshyammann.com> <16d398641dd.ee09dee227347.1935004124034589982@ghanshyammann.com> <4c891e4a-84f6-f88c-08ca-c2563ed34bc7@gmail.com> <20190916221113.GA31638@sm-workstation> <792c4d6f9c6849831d29719e527de699b01026fd.camel@redhat.com> <30dd4235-270b-1d39-fd35-5f032044d222@redhat.com> <77394851ad15eb8765b38facfdd7ffc665b01753.camel@redhat.com> <16d3f9e367f.e7766fab82199.66027046233260001@ghanshyammann.com> Message-ID: <4ca7adf03d6608b24f9890ebf0f4a5d81a78e0a0.camel@redhat.com> On Tue, 2019-09-17 at 15:39 +0100, Sean Mooney wrote: > On Tue, 2019-09-17 at 07:26 -0700, Ghanshyam Mann wrote: > > ---- On Tue, 17 Sep 2019 06:46:31 -0700 Sean Mooney wrote ---- > > > On Tue, 2019-09-17 at 09:23 -0400, Eric Harney wrote: > > > > On 9/16/19 6:59 PM, Sean Mooney wrote: > > > > > On Mon, 2019-09-16 at 17:11 -0500, Sean McGinnis wrote: > > > > > > > > > > > > > > Backend/type specific information leaking out of the API dynamically like > > > > > > > that is definitely an interoperability problem and as you said it sounds > > > > > > > like it's been that way for a long time. The compute servers diagnostics API > > > > > > > had a similar problem for a long time and the associated Tempest test for > > > > > > > that API was disabled for a long time because the response body was > > > > > > > hypervisor specific, so we eventually standardized it in a microversion so > > > > > > > it was driver agnostic. > > > > > > > > > > > > > > > > > > > Except this isn't backend specific information that is leaking. It's just > > > > > > reflecting the configuration of the system. > > > > > > > > > > yes and config driven api behavior is also an iterop problem. > > > > > ideally you should not be able to tell if cinder is abcked by ceph or emc form the > > > > > api responce at all. > > > > > > > > > > sure you might have a volume type call ceph and another called emc but both should be > > > > > report capasty in the same field with teh same unit. > > > > > > > > > > ideally you would have a snapshots or gigabytes quota and option ly associate that with a volume types > > > > > but shanshot_ceph is not interoperable aross could if that exstis with that name solely becaue ceph was used > > on > > the > > > > > backend. as a client i would have to look at snapshost* to figure out my quotat and in princiapal that is an > > > > > ubounded > > > > > set. > > > > > > > > I think you are confusing types vs backends here. In my example, it was > > > > called "snapshots_ceph" because there was a type called "ceph". That's > > > > an admin choice, not a behavior of the API. 
> > > or it could have been express in the api with a dedicated type filed and
> > >
> > > so you would always have a snapshots filed regardless of the volume type but have a since
> > > type filed per quota set that identifed what type it applied too.
> >
> > IMO, the best way is to make it in an array structure and volume_type specific quotas can be optional items in
> > mandatory 'snapshots' array field.
> > For example:
> >
> > {
> > "quota_set": {
> > .
> > .
> > "snapshots": {
> > "total/project": 10,
> > "ceph": -1,
> > "lvm-thin": -1,
> > "lvmdriver-1": -1,
> > }
> > }
> >
>
> well you can do it that way or invert it
>
> {
> "quota_set": {
> ceph:{snapshot:-1,gigabytpes:100 ...}
> lvm-1:{snapshot:-1,gigabytpes:100 ...}
> lvm-2:{snapshot:-1,gigabytpes:100 ...}
> project:{snapshot:-1,gigabytpes:100 ...}
> ...
> }
> }
>
>
I meant to say I was originally thinking of it slightly differently: a type column in each entry, and the quota_set being a list

{
"quota_set": [
{snapshot: -1, gigabytes: 100, type: "ceph", ...}
{snapshot: -1, gigabytes: 100, type: "lvm-1", ...}
{snapshot: -1, gigabytes: 100, type: "lvm-2", ...}
{snapshot: -1, gigabytes: 100, type: "project", ...}
...
]
}

this is my preferred form of the 3, since you can validate the keys and values easily with json schema and it maps nicely
to a db schema.
> in either case the filed names remain the same with and the type is treated as an opaque sting
> that is decoupled the field names.
>
> > -gmann
> >
>
>

From wang.ya at 99cloud.net  Tue Sep 17 12:44:38 2019
From: wang.ya at 99cloud.net (wang.ya)
Date: Tue, 17 Sep 2019 20:44:38 +0800
Subject: [nova] The test of NUMA aware live migration
Message-ID: <6A5C6F83-F6A9-4DE1-A859-B787E3490AC6@99cloud.net>

Hi:

The main code of NUMA aware live migration was merged. I'm testing it recently.

If I only set the NUMA properties ('hw:numa_nodes', 'hw:numa_cpus', 'hw:numa_mem'), it works well. But if I add the property "hw:cpu_policy='dedicated'", the pinning is no longer correct after several live migrations.

That is, the live migration succeeds, but the vCPU pins are not correct (two instances end up with some of the same vCPU pins on the same host).

Below are my test steps.

env:

       code: master branch (built on 16 September 2019, includes the patches of NUMA aware live migration)

three compute nodes:

- s1:                  24C, 48G (2 NUMA nodes)

- stein-2:         12C, 24G (2 NUMA nodes)

- stein-3:         8C, 16G (2 NUMA nodes)

flavor1 (2c2g): hw:cpu_policy='dedicated', hw:numa_cpus.0='0', hw:numa_cpus.1='1', hw:numa_mem.0='1024', hw:numa_mem.1='1024', hw:numa_nodes='2'

flavor2 (4c4g): hw:cpu_policy='dedicated', hw:numa_cpus.0='0,1,2', hw:numa_cpus.1='3', hw:numa_mem.0='1024', hw:numa_mem.1='3072', hw:numa_nodes='2'

image has no properties.

I create four instances (2 * flavor1, 2 * flavor2), then live migrate them one by one (when one instance's live migration is done, the next instance's live migration begins) and check whether the vCPU pinning is correct.

After several live migrations, the vCPU pinning is no longer correct. (You can find the full migration list in the attached file.)
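For anyone who wants to reproduce the setup: the flavors are just normal flavors plus extra specs, e.g. flavor1 can be recreated with something like the following (the disk size and flavor name here are only illustrative, not exactly what I used):

openstack flavor create --vcpus 2 --ram 2048 --disk 10 flavor1
openstack flavor set flavor1 --property hw:cpu_policy=dedicated --property hw:numa_nodes=2 \
  --property hw:numa_cpus.0=0 --property hw:numa_cpus.1=1 \
  --property hw:numa_mem.0=1024 --property hw:numa_mem.1=1024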
The last live migrate is: +-----+--------------------------------------+-------------+-----------+----------------+--------------+----------------+-----------+--------------------------------------+------------+------------+----------------------------+----------------------------+----------------+ | Id  | UUID                                 | Source Node | Dest Node | Source Compute | Dest Compute | Dest Host      | Status    | Instance UUID                        | Old Flavor | New Flavor | Created At                 | Updated At                 | Type           | +-----+--------------------------------------+-------------+-----------+----------------+--------------+----------------+-----------+--------------------------------------+------------+------------+----------------------------+----------------------------+----------------+ | 470 | 2a9ba183-4f91-4fbf-93cf-6f0e55cc085a | s1          | stein-3   | s1             | stein-3      | 172.16.130.153 | completed | bf0466f6-4815-4824-8586-899817207564 | 1          | 1          | 2019-09-17T10:28:46.000000 | 2019-09-17T10:29:09.000000 | live-migration | | 469 | c05ea0e8-f040-463e-8957-a59f70ed8bf6 | s1          | stein-3   | s1             | stein-3      | 172.16.130.153 | completed | a3ec7a29-80de-4541-989d-4b9f4377f0bd | 1          | 1          | 2019-09-17T10:28:21.000000 | 2019-09-17T10:28:45.000000 | live-migration | | 468 | cef4c609-157e-4b39-b6cc-f5528d49c75a | s1          | stein-2   | s1             | stein-2      | 172.16.130.152 | completed | 83dab721-3343-436d-bee7-f5ffc0d0d38d | 4          | 4          | 2019-09-17T10:27:57.000000 | 2019-09-17T10:28:21.000000 | live-migration | | 467 | 5471e441-2a50-465a-bb63-3fe1bb2e81b9 | s1          | stein-2   | s1             | stein-2      | 172.16.130.152 | completed | e3c19fbe-7b94-4a65-a803-51daa9934378 | 4          | 4          | 2019-09-17T10:27:32.000000 | 2019-09-17T10:27:57.000000 | live-migration | There are two instances land on stein-3, and the two instances have same vCPU pin: (nova-libvirt)[root at stein-3 /]# virsh list --all Id    Name                           State ---------------------------------------------------- 32    instance-00000025              running 33    instance-00000024              running (nova-libvirt)[root at stein-3 /]# virsh vcpupin 32 VCPU: CPU Affinity ----------------------------------    0: 2    1: 7 (nova-libvirt)[root at stein-3 /]# virsh vcpupin 33 VCPU: CPU Affinity ----------------------------------    0: 2    1: 7 I checked the nova compute’s log on stein-3(you can find the log in attached log), then I found ‘host_topology’ isn’t updated when ‘hardware.numa_fit_instance_to_host’ be called in claims. ‘host_topology’ is the property of ‘objects.ComputeNode’ and it’s cached in ‘ResourceTracker’, it will use cached ‘cn’ to build  ‘claim’ when ‘check_can_live_migrate_destination’ called. Therefore, I guess the cache was not updated or updated too late or some other reason. I also checked the database, the NUMA topology of  the two instances have same vCPU pin: “[0,2], [1,7]”, and the compute node: stein-3 only has vCPU pin: “[2], [7]”. Please correct me if there is something wrong :) Best Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: migration-list.log Type: application/octet-stream Size: 15344 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: nova-compute.stein-3.log Type: application/octet-stream Size: 139018 bytes Desc: not available URL: From notartom at gmail.com Tue Sep 17 15:11:55 2019 From: notartom at gmail.com (Artom Lifshitz) Date: Tue, 17 Sep 2019 11:11:55 -0400 Subject: [nova] The test of NUMA aware live migration In-Reply-To: <6A5C6F83-F6A9-4DE1-A859-B787E3490AC6@99cloud.net> References: <6A5C6F83-F6A9-4DE1-A859-B787E3490AC6@99cloud.net> Message-ID: If you can post the full logs (in debug mode) somewhere I can have a look. Based on what you're saying, it looks like there might be a race between updating the host topology and another instance claiming resources - although claims are supposed to be race-free because they use the COMPUTE_RESOURCES_SEMAPHORE [1]. [1] https://github.com/openstack/nova/blob/082c91a9286ae55fd5eb6adeed52500dc75be5ce/nova/compute/resource_tracker.py#L257 On Tue, Sep 17, 2019 at 8:44 AM wang.ya wrote: > Hi: > > > > The main code of NUMA aware live migration was merged. I’m testing it > recently. > > > > If only set NUMA property(‘hw:numa_nodes’, ‘hw:numa_cpus’, ‘hw:numa_mem’), > it works well. But if add the property “hw:cpu_policy='dedicated'”, it will > not correct after serval live migrations. > > Which means the live migrate can be success, but the vCPU pin are not > correct(two instance have serval same vCPU pin on same host). > > > > Below is my test steps. > > env: > > code: master branch (build on 16 September 2019, include the > patches of NUMA aware live migration) > > three compute node: > > - s1: 24C, 48G (2 NUMA nodes) > > - stein-2: 12C, 24G (2 NUMA nodes) > > - stein-3: 8C, 16G (2 NUMA nodes) > > > > flavor1 (2c2g): hw:cpu_policy='dedicated', hw:numa_cpus.0='0', > hw:numa_cpus.1='1', hw:numa_mem.0='1024', hw:numa_mem.1='1024', > hw:numa_nodes='2' > > flavor2 (4c4g): hw:cpu_policy='dedicated', hw:numa_cpus.0='0,1,2', > hw:numa_cpus.1='3', hw:numa_mem.0='1024', hw:numa_mem.1='3072', > hw:numa_nodes='2' > > image has no property. > > > > > > I create four instances(2*flavor1, 2* flavor2), then begin live migration > on by one(one instance live migrate done, next instance begin live migrate) > and check the vCPU pin whether is correct. > > After serval live migrations, the vCPU pin will not correct. (You can find > full migration list in attached file). 
The last live migrate is: > > > > > *+-----+--------------------------------------+-------------+-----------+----------------+--------------+----------------+-----------+--------------------------------------+------------+------------+----------------------------+----------------------------+----------------+* > > *| Id | UUID | Source Node | Dest Node | > Source Compute | Dest Compute | Dest Host | Status | Instance > UUID | Old Flavor | New Flavor | Created > At | Updated At | Type |* > > > *+-----+--------------------------------------+-------------+-----------+----------------+--------------+----------------+-----------+--------------------------------------+------------+------------+----------------------------+----------------------------+----------------+* > > *| 470 | 2a9ba183-4f91-4fbf-93cf-6f0e55cc085a | s1 | stein-3 | > s1 | stein-3 | 172.16.130.153 | completed | > bf0466f6-4815-4824-8586-899817207564 | 1 | 1 | > 2019-09-17T10:28:46.000000 | 2019-09-17T10:29:09.000000 | live-migration |* > > *| 469 | c05ea0e8-f040-463e-8957-a59f70ed8bf6 | s1 | stein-3 | > s1 | stein-3 | 172.16.130.153 | completed | > a3ec7a29-80de-4541-989d-4b9f4377f0bd | 1 | 1 | > 2019-09-17T10:28:21.000000 | 2019-09-17T10:28:45.000000 | live-migration |* > > *| 468 | cef4c609-157e-4b39-b6cc-f5528d49c75a | s1 | stein-2 | > s1 | stein-2 | 172.16.130.152 | completed | > 83dab721-3343-436d-bee7-f5ffc0d0d38d | 4 | 4 | > 2019-09-17T10:27:57.000000 | 2019-09-17T10:28:21.000000 | live-migration |* > > *| 467 | 5471e441-2a50-465a-bb63-3fe1bb2e81b9 | s1 | stein-2 | > s1 | stein-2 | 172.16.130.152 | completed | > e3c19fbe-7b94-4a65-a803-51daa9934378 | 4 | 4 | > 2019-09-17T10:27:32.000000 | 2019-09-17T10:27:57.000000 | live-migration |* > > > > > > There are two instances land on stein-3, and the two instances have same > vCPU pin: > > > > *(nova-libvirt)[root at stein-3 /]# virsh list --all* > > * Id Name State* > > *----------------------------------------------------* > > * 32 instance-00000025 running* > > * 33 instance-00000024 running* > > > > *(nova-libvirt)[root at stein-3 /]# virsh vcpupin 32* > > *VCPU: CPU Affinity* > > *----------------------------------* > > * 0: 2* > > * 1: 7* > > > > *(nova-libvirt)[root at stein-3 /]# virsh vcpupin 33* > > *VCPU: CPU Affinity* > > *----------------------------------* > > * 0: 2* > > * 1: 7* > > > > > > I checked the nova compute’s log on stein-3(you can find the log in > attached log), then I found ‘host_topology’ isn’t updated when > ‘hardware.numa_fit_instance_to_host’ be called in claims. ‘host_topology’ > is the property of ‘objects.ComputeNode’ and it’s cached in > ‘ResourceTracker’, it will use cached ‘cn’ to build ‘claim’ when > ‘check_can_live_migrate_destination’ called. Therefore, I guess the cache > was not updated or updated too late or some other reason. > > I also checked the database, the NUMA topology of *the two instances* > have same vCPU pin: “[0,2], [1,7]”, and the *compute node: stein-*3 only > has vCPU pin: “[2], [7]”. > > > > Please correct me if there is something wrong :) > > > > Best Regards > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From radoslaw.piliszek at gmail.com Tue Sep 17 15:28:32 2019 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 17 Sep 2019 17:28:32 +0200 Subject: [goals][IPv6-Only Deployments and Testing] Week R-4 Update In-Reply-To: <16d3fa3471d.ac5d84ad82476.7338406402960164895@ghanshyammann.com> References: <16d3d2203c8.b47bfe5156036.4862537349817585954@ghanshyammann.com> <16d3fa3471d.ac5d84ad82476.7338406402960164895@ghanshyammann.com> Message-ID: Ah, great! We are trendsetters here then. 8-) Still, kolla can be crossed out from the current iteration because testing IPv6-only is not applicable to it. Kind regards, Radek wt., 17 wrz 2019 o 16:32 Ghanshyam Mann napisał(a): > ---- On Tue, 17 Sep 2019 03:12:22 -0700 Radosław Piliszek < > radoslaw.piliszek at gmail.com> wrote ---- > > Hiya, > > Kolla is not going to get an IPv6-only job because it builds docker > images and is not tested regarding networking (it does not do > devstack/tempest either). > > Kolla-Ansible, which does the deployment, is going to get some > IPv6-only test jobs - https://review.opendev.org/681573We are testing > CentOS and multinode and hence need overlay VXLAN to reach sensible levels > of stability there - >https://review.opendev.org/670690The VXLAN patch is > probably ready, awaiting review of independent cores. It will be refactored > out later to put it in zuul plays as it might be useful to other projects > as well.The IPv6 patch needs rebasing on VXLAN and >some small tweaks still. > > > This is good news Radosław. Actually deployment projects deploying on IPv6 > is out of the scope of this goal(I forgot to add it under do not need > section) but this is something we wanted to do as next step. Starting it in > kolla is really appreciated and great news. > > -gmann > > > Kind regards,Radek > > wt., 17 wrz 2019 o 04:58 Ghanshyam Mann > napisał(a): > > Hello Everyone, > > > > Below is the progress on Ipv6 goal during R6 week. I started the legacy > job for IPv6 deployment with duplicating the run.yaml which is > > the only best way to do. > > > > Summary: > > > > The projects still need to prepare the IPv6 job: > > * Ec2-Api > > * Freezer > > * Heat > > * Ironic > > * Karbor > > * Kolla > > * Kuryr > > * Magnum > > * Manila > > * Masakari > > * Mistral > > * Murano > > * Octavia > > * Swift > > > > The projects waiting for IPv6 job patch to merge: > > If patch is failing, help me to debug that otherwise review and merge. > > * Barbican > > * Blazar > > * Cyborg > > * Tricircle > > * Vitrage > > * Zaqar > > * Cinder > > * Glance > > * Monasca > > * Neutron > > * Qinling > > * Quality Assurance > > * Sahara > > * Searchlight > > * Senlin > > * Tacker > > > > The projects have merged the IPv6 jobs: > > * Designate > > * Murano > > * Trove > > * Cloudkitty > > * Congress > > * Horizon > > * Keystone > > * Nova > > * Placement > > * Solum > > * Telemetry > > * Watcher > > * Zun > > > > The projects do not need the IPv6 job (CLI, lib, deployment projects > etc ): > > If anything I missed and IPv6 job need, please reply otherwise I will > mark their task in storyboard as invalid. 
> > > > * Adjutant > > * Documentation > > * I18n > > * Infrastructure > > * Loci > > * Openstack Charms > > * Openstack-Chef > > * Openstack-Helm > > * Openstackansible > > * Openstackclient > > * Openstacksdk > > * Oslo > > * Packaging-Rpm > > * Powervmstackers > > * Puppet Openstack > > * Rally > > * Release Management > > * Requirements > > * Storlets > > * Tripleo > > * Winstackers > > > > > > Storyboard: > > ========= > > - https://storyboard.openstack.org/#!/story/2005477 > > > > IPv6 missing support found: > > ===================== > > 1. https://review.opendev.org/#/c/673397/ > > 2. https://review.opendev.org/#/c/673449/ > > 3. https://review.opendev.org/#/c/677524/ > > > > How you can help: > > ============== > > - Each project needs to look for and review the ipv6 job patch. > > - Verify it works fine on ipv6 and no ipv4 used in conf etc > > - Any other specific scenario needs to be added as part of project IPv6 > verification. > > - Help on debugging and fix the bug in IPv6 job is failing. > > > > Everything related to this goal can be found under this topic: > > Topic: > https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged) > > > > How to define and run new IPv6 Job on project side: > > ======================================= > > - I prepared a wiki page to describe this section - > https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing > > > > Review suggestion: > > ============== > > - Main goal of these jobs will be whether your service is able to > listen on IPv6 and can communicate to any > > other services either OpenStack or DB or rabbitmq etc on IPv6 or not. > So check your proposed job with > > that point of view. If anything missing, comment on patch. > > - One example was - I missed to configure novnc address to IPv6- > https://review.opendev.org/#/c/672493/ > > - base script as part of 'devstack-tempest-ipv6' will do basic checks > for endpoints on IPv6 and some devstack var > > setting. But if your project needs more specific verification then it > can be added in project side job as post-run > > playbooks as described in wiki page[1]. > > > > [1] > https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing > > > > -gmann > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Rajini.Karthik at Dell.com Tue Sep 17 15:37:34 2019 From: Rajini.Karthik at Dell.com (Rajini.Karthik at Dell.com) Date: Tue, 17 Sep 2019 15:37:34 +0000 Subject: [zuul3] zuul2 -> zuul3 migration In-Reply-To: <00048ac15af741ba854553c3d7e33678@AUSX13MPS308.AMER.DELL.COM> References: <00048ac15af741ba854553c3d7e33678@AUSX13MPS308.AMER.DELL.COM> Message-ID: https://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3-3rd-party-ci.html From: Karthik, Rajini Sent: Monday, September 16, 2019 10:57 AM To: 'Lenny Verkhovsky'; openstack-discuss Subject: RE: [zuul3] zuul2 -> zuul3 migration We are looking for the same. Thanks Rajini From: Lenny Verkhovsky > Sent: Monday, September 16, 2019 3:50 AM To: openstack-discuss Subject: [zuul3] zuul2 -> zuul3 migration [EXTERNAL EMAIL] Hi Team, We would like to migrate our Third Party CI[1] from zuul2 to zuul3. We have a lot of Jenkins jobs initially based on infra project-config-example But I guess we need to re write all the jobs now to support ansible. Any guide/example/tips are highly appreciated. 
[1] https://wiki.openstack.org/wiki/ThirdPartySystems/Mellanox_CI [2] https://github.com/openstack-infra/project-config-example/tree/master/jenkins/jobs Best Regards Lenny Verkhovsky (aka lennyb) Mellanox Technologies office: +972 74 712 92 44 fax: +972 74 712 91 11 mobile: +972 54 554 02 33 irc: lennyb -------------- next part -------------- An HTML attachment was scrubbed... URL: From mordred at inaugust.com Tue Sep 17 15:38:42 2019 From: mordred at inaugust.com (Monty Taylor) Date: Tue, 17 Sep 2019 17:38:42 +0200 Subject: [sdk][release][requirements] FFE requested for openstacksdk In-Reply-To: <20190917143118.2oezd2kcbfn6exok@mthode.org> References: <2AC90410-7617-44BE-925B-60C35EB453BF@inaugust.com> <20190917143118.2oezd2kcbfn6exok@mthode.org> Message-ID: > On Sep 17, 2019, at 4:31 PM, Matthew Thode wrote: > > On 19-09-17 16:15:11, Monty Taylor wrote: >> Heya, >> >> We’d like to cut an 0.35.1 bugfix release as part of train to grab https://review.opendev.org/#/c/680649/ and https://review.opendev.org/#/c/682454/. The first is landed and is in support of the Ironic Nova driver. The second is making its way through the gate right now and is in support of OSC v4. >> > > Seems like a larger change than just those two patches if releasing from > master (as there is no train branch). What possible projects could this > require a re-release for? Below is a table showing what's running with > a dependency on openstacksdk now Yes - I think it’ll need to be a 0.36.0. We looked at it and nothing should be breaking … and we haven’t cut a stable/train yet. So I think it should just be a constraints bump for folks. > +--------------------------------------------------+----------------------------------------------------------+------+----------------------------------------------------------------------------+ > | Repository | Filename | Line | Text | > +--------------------------------------------------+----------------------------------------------------------+------+----------------------------------------------------------------------------+ > | openstack/heat | requirements.txt | 15 | openstacksdk>=0.11.2 # Apache-2.0 | > | openstack/ironic | requirements.txt | 50 | openstacksdk>=0.31.2 # Apache-2.0 | > | openstack/ironic-inspector | requirements.txt | 20 | openstacksdk>=0.30.0 # Apache-2.0 | > | openstack/kuryr-kubernetes | requirements.txt | 12 | openstacksdk>=0.13.0 # Apache-2.0 | > | openstack/masakari-dashboard | requirements.txt | 14 | openstacksdk>=0.26.0 | > | openstack/masakari-monitors | requirements.txt | 7 | openstacksdk>=0.13.0 # Apache-2.0 | > | openstack/metalsmith | requirements.txt | 5 | openstacksdk>=0.29.0 # Apache-2.0 | > | openstack/nova | requirements.txt | 74 | openstacksdk>=0.35.0 # Apache-2.0 | > | openstack/octavia-dashboard | requirements.txt | 7 | openstacksdk>=0.24.0 # Apache-2.0 | > | openstack/openstack-ansible | requirements.txt | 26 | openstacksdk>=0.14.0 # Apache-2.0 | > | openstack/openstack-ansible-tests | test-requirements.txt | 44 | openstacksdk>=0.14.0 # Apache-2.0 | > | openstack/os-client-config | requirements.txt | 4 | openstacksdk>=0.13.0 # Apache-2.0 | > | openstack/osc-lib | requirements.txt | 10 | openstacksdk>=0.15.0 # Apache-2.0 | > | openstack/python-masakariclient | requirements.txt | 5 | openstacksdk>=0.13.0 # Apache-2.0 | > | openstack/python-novaclient | test-requirements.txt | 16 | openstacksdk>=0.11.2 # Apache-2.0 | > | openstack/python-openstackclient | requirements.txt | 10 | openstacksdk>=0.17.0 # Apache-2.0 | > | 
openstack/python-senlinclient | requirements.txt | 9 | openstacksdk>=0.24.0 # Apache-2.0 | > | openstack/python-tempestconf | requirements.txt | 9 | openstacksdk>=0.11.3 # Apache-2.0 | > | openstack/qinling | runtimes/python2/requirements.txt | 10 | openstacksdk>=0.9.19 | > | openstack/qinling | runtimes/python3/requirements.txt | 10 | openstacksdk>=0.9.19 | > | openstack/requirements | global-requirements.txt | 434 | openstacksdk # Apache-2.0 | > | openstack/requirements | openstack_requirements/tests/files/upper-constraints.txt | 365 | openstacksdk===0.9.13 | > | openstack/senlin | requirements.txt | 14 | openstacksdk>=0.27.0 # Apache-2.0 | > | openstack/senlin-tempest-plugin | requirements.txt | 7 | openstacksdk>=0.24.0 # Apache-2.0 | > | openstack/shade | requirements.txt | 6 | # shade depends on os-client-config in addition to openstacksdk so that it | > | openstack/shade | requirements.txt | 9 | openstacksdk>=0.15.0 # Apache-2.0 | > | openstack/sushy-tools | test-requirements.txt | 15 | openstacksdk>=0.11.2 # Apache-2.0 | > | openstack/tenks | ansible/roles/ironic-enrolment/files/requirements.txt | 4 | openstacksdk>=0.17.2 # Apache | > | openstack/tenks | ansible/roles/nova-flavors/files/requirements.txt | 4 | openstacksdk>=0.17.2 # Apache | > | openstack/tripleo-common-tempest-plugin | requirements.txt | 7 | openstacksdk>=0.11.2 # Apache-2.0 | > | openstack/upstream-institute-virtual-environment | elements/upstream-training/static/tmp/requirements.txt | 161 | openstacksdk==0.24.0 | > | x/bilean | requirements.txt | 10 | openstacksdk>=0.7.4 # Apache-2.0 | > | x/flame | requirements.txt | 6 | openstacksdk==0.17.2 | > | x/monitorstack | requirements.txt | 4 | openstacksdk>=0.9.14 | > | x/osops-tools-contrib | ansible_requirements.txt | 29 | openstacksdk==0.9.8 | > | zuul/nodepool | requirements.txt | 9 | # openstacksdk before 0.27.0 is TaskManager based | > | zuul/nodepool | requirements.txt | 12 | openstacksdk>=0.27.0,!=0.28.0,!=0.29.0,!=0.30.0,!=0.31.0,!=0.31.1,!=0.31.2 | > | zuul/zuul-jobs | test-requirements.txt | 22 | openstacksdk>=0.17.1 | > +--------------------------------------------------+----------------------------------------------------------+------+----------------------------------------------------------------------------+ > > -- > Matthew Thode From mthode at mthode.org Tue Sep 17 15:41:04 2019 From: mthode at mthode.org (Matthew Thode) Date: Tue, 17 Sep 2019 10:41:04 -0500 Subject: [sdk][release][requirements] FFE requested for openstacksdk In-Reply-To: References: <2AC90410-7617-44BE-925B-60C35EB453BF@inaugust.com> <20190917143118.2oezd2kcbfn6exok@mthode.org> Message-ID: <20190917154104.iibywv46l2hx5fss@mthode.org> On 19-09-17 17:38:42, Monty Taylor wrote: > > > > On Sep 17, 2019, at 4:31 PM, Matthew Thode wrote: > > > > On 19-09-17 16:15:11, Monty Taylor wrote: > >> Heya, > >> > >> We’d like to cut an 0.35.1 bugfix release as part of train to grab https://review.opendev.org/#/c/680649/ and https://review.opendev.org/#/c/682454/. The first is landed and is in support of the Ironic Nova driver. The second is making its way through the gate right now and is in support of OSC v4. > >> > > > > Seems like a larger change than just those two patches if releasing from > > master (as there is no train branch). What possible projects could this > > require a re-release for? Below is a table showing what's running with > > a dependency on openstacksdk now > > Yes - I think it’ll need to be a 0.36.0. 
We looked at it and nothing should be breaking … and we haven’t cut a stable/train yet. So I think it should just be a constraints bump for folks. > Sounds good them (approving the release as it's just a constraint bump). -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From gmann at ghanshyammann.com Tue Sep 17 15:42:48 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 17 Sep 2019 08:42:48 -0700 Subject: [goals][IPv6-Only Deployments and Testing] Week R-4 Update In-Reply-To: References: <16d3d2203c8.b47bfe5156036.4862537349817585954@ghanshyammann.com> <16d3fa3471d.ac5d84ad82476.7338406402960164895@ghanshyammann.com> Message-ID: <16d3fe3fb77.d7b4096985394.567255039558713233@ghanshyammann.com> ---- On Tue, 17 Sep 2019 08:28:32 -0700 Radosław Piliszek wrote ---- > Ah, great!We are trendsetters here then. 8-) > Still, kolla can be crossed out from the current iteration because testing IPv6-only is not applicable to it. Sure, I will update the list in my next update. -gmann > Kind regards,Radek > wt., 17 wrz 2019 o 16:32 Ghanshyam Mann napisał(a): > ---- On Tue, 17 Sep 2019 03:12:22 -0700 Radosław Piliszek wrote ---- > > Hiya, > > Kolla is not going to get an IPv6-only job because it builds docker images and is not tested regarding networking (it does not do devstack/tempest either). > > Kolla-Ansible, which does the deployment, is going to get some IPv6-only test jobs - https://review.opendev.org/681573We are testing CentOS and multinode and hence need overlay VXLAN to reach sensible levels of stability there - >https://review.opendev.org/670690The VXLAN patch is probably ready, awaiting review of independent cores. It will be refactored out later to put it in zuul plays as it might be useful to other projects as well.The IPv6 patch needs rebasing on VXLAN and >some small tweaks still. > > > This is good news Radosław. Actually deployment projects deploying on IPv6 is out of the scope of this goal(I forgot to add it under do not need section) but this is something we wanted to do as next step. Starting it in kolla is really appreciated and great news. > > -gmann > > > Kind regards,Radek > > wt., 17 wrz 2019 o 04:58 Ghanshyam Mann napisał(a): > > Hello Everyone, > > > > Below is the progress on Ipv6 goal during R6 week. I started the legacy job for IPv6 deployment with duplicating the run.yaml which is > > the only best way to do. > > > > Summary: > > > > The projects still need to prepare the IPv6 job: > > * Ec2-Api > > * Freezer > > * Heat > > * Ironic > > * Karbor > > * Kolla > > * Kuryr > > * Magnum > > * Manila > > * Masakari > > * Mistral > > * Murano > > * Octavia > > * Swift > > > > The projects waiting for IPv6 job patch to merge: > > If patch is failing, help me to debug that otherwise review and merge. 
> > * Barbican > > * Blazar > > * Cyborg > > * Tricircle > > * Vitrage > > * Zaqar > > * Cinder > > * Glance > > * Monasca > > * Neutron > > * Qinling > > * Quality Assurance > > * Sahara > > * Searchlight > > * Senlin > > * Tacker > > > > The projects have merged the IPv6 jobs: > > * Designate > > * Murano > > * Trove > > * Cloudkitty > > * Congress > > * Horizon > > * Keystone > > * Nova > > * Placement > > * Solum > > * Telemetry > > * Watcher > > * Zun > > > > The projects do not need the IPv6 job (CLI, lib, deployment projects etc ): > > If anything I missed and IPv6 job need, please reply otherwise I will mark their task in storyboard as invalid. > > > > * Adjutant > > * Documentation > > * I18n > > * Infrastructure > > * Loci > > * Openstack Charms > > * Openstack-Chef > > * Openstack-Helm > > * Openstackansible > > * Openstackclient > > * Openstacksdk > > * Oslo > > * Packaging-Rpm > > * Powervmstackers > > * Puppet Openstack > > * Rally > > * Release Management > > * Requirements > > * Storlets > > * Tripleo > > * Winstackers > > > > > > Storyboard: > > ========= > > - https://storyboard.openstack.org/#!/story/2005477 > > > > IPv6 missing support found: > > ===================== > > 1. https://review.opendev.org/#/c/673397/ > > 2. https://review.opendev.org/#/c/673449/ > > 3. https://review.opendev.org/#/c/677524/ > > > > How you can help: > > ============== > > - Each project needs to look for and review the ipv6 job patch. > > - Verify it works fine on ipv6 and no ipv4 used in conf etc > > - Any other specific scenario needs to be added as part of project IPv6 verification. > > - Help on debugging and fix the bug in IPv6 job is failing. > > > > Everything related to this goal can be found under this topic: > > Topic: https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged) > > > > How to define and run new IPv6 Job on project side: > > ======================================= > > - I prepared a wiki page to describe this section - https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing > > > > Review suggestion: > > ============== > > - Main goal of these jobs will be whether your service is able to listen on IPv6 and can communicate to any > > other services either OpenStack or DB or rabbitmq etc on IPv6 or not. So check your proposed job with > > that point of view. If anything missing, comment on patch. > > - One example was - I missed to configure novnc address to IPv6- https://review.opendev.org/#/c/672493/ > > - base script as part of 'devstack-tempest-ipv6' will do basic checks for endpoints on IPv6 and some devstack var > > setting. But if your project needs more specific verification then it can be added in project side job as post-run > > playbooks as described in wiki page[1]. > > > > [1] https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing > > > > -gmann > > > > > > > > > > From marcin.juszkiewicz at linaro.org Tue Sep 17 16:12:53 2019 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Tue, 17 Sep 2019 18:12:53 +0200 Subject: [kolla] State of ppc64le support In-Reply-To: References: Message-ID: <8a238eb7-3e59-f808-a50d-bd190c60edc3@linaro.org> W dniu 14.09.2019 o 18:45, Marcin Juszkiewicz pisze: > From 3 distributions we target only Debian/source combo was buildable. 
> > CentOS builds lack 'rabbitmq' 3.7.10 (we use external repo) Sorted out in https://review.opendev.org/682618 From Albert.Braden at synopsys.com Tue Sep 17 16:36:36 2019 From: Albert.Braden at synopsys.com (Albert Braden) Date: Tue, 17 Sep 2019 16:36:36 +0000 Subject: [oslo][nova] Nova causes MySQL timeouts In-Reply-To: References: <02fa1644-34a1-0fdf-9048-a668ae86de76@nemebean.com> Message-ID: I thought I had figured out that the solution was to increase the MySQL wait_timeout so that it is longer than the nova (and glance, neutron, etc.) connection_recycle_time (3600). I increased my MySQL wait_timeout to 6000: root at us01odc-qa-ctrl1:~# mysqladmin variables|grep wait_timeout|grep -v _wait | wait_timeout | 6000 But I still see the MySQL errors. There's no LB; we are pointing to a single MySQL host. Sep 11 14:59:56 us01odc-qa-ctrl1 mysqld[1052956]: 2019-09-11 14:59:56 8016 [Warning] Aborted connection 8016 to db: 'nova' user: 'nova' host: 'us01odc-qa-ctrl2.internal.synopsys.com' (Got timeout reading communication packets) Sep 11 14:59:57 us01odc-qa-ctrl1 mysqld[1052956]: 2019-09-11 14:59:57 8019 [Warning] Aborted connection 8019 to db: 'glance' user: 'glance' host: 'us01odc-qa-ctrl1.internal.synopsys.com' (Got timeout reading communication packets) Sep 11 14:59:57 us01odc-qa-ctrl1 mysqld[1052956]: 2019-09-11 14:59:57 8018 [Warning] Aborted connection 8018 to db: 'nova_api' user: 'nova' host: 'us01odc-qa-ctrl2.internal.synopsys.com' (Got timeout reading communication packets) Sep 11 15:00:50 us01odc-qa-ctrl1 mysqld[1052956]: 2019-09-11 15:00:50 8022 [Warning] Aborted connection 8022 to db: 'nova_api' user: 'nova' host: 'us01odc-qa-ctrl1.internal.synopsys.com' (Got timeout reading communication packets) The errors come from nova, neutron, glance and keystone; it appears that all default to 3600. So it appears that, even with wait_timeout > connection_recycle_time we still see mysql timeout errors. Just for fun I tried setting the MySQL wait_timeout to 86400 and restarting MySQL. I expected that this would pause the "Aborted connection" errors for 24 hours, but they started again after an hour. So it looks like my original assumption was incorrect. I thought nova was keeping connections open until the MySQL server timed them out, but now it appears that something else is happening. Has anyone successfully stopped these MySQL error messages? -----Original Message----- From: Ben Nemec Sent: Monday, September 9, 2019 9:50 AM To: Chris Hoge ; openstack-discuss at lists.openstack.org Subject: Re: [oslo][nova] Nova causes MySQL timeouts On 9/9/19 11:38 AM, Chris Hoge wrote: > In my personal experience, running Nova on a four core machine without > limiting the number of database connections will easily exhaust the > available connections to MySQL/MariaDB. Keep in mind that the limit > applies to every instance of a service, so if Nova starts 'm' services > replicated for 'n' cores with 'd' possible connections you'll be up to > ‘m x n x d' connections. It gets big fast. > > The default setting of '0' (that is, unlimited) does not make for a good > first-run experience, IMO. We don't default to 0. 
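To be explicit about which knob lives where (just a sketch of the two settings involved, not a recommendation):

# MySQL/MariaDB server side (my.cnf): how long the server keeps an idle client connection open
[mysqld]
wait_timeout = ...

# OpenStack service side (nova.conf, glance-api.conf, neutron.conf, keystone.conf): how old a pooled
# connection can get before SQLAlchemy replaces it; it is supposed to stay below wait_timeout
[database]
connection_recycle_time = 3600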
We default to 5: https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_oslo.db_stein_reference_opts.html-23database.max-5Fpool-5Fsize&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=p7bBYcuhnDR_J08MWFBj8XLiRUUV8JfruAIcl0zF234&e= > > This issue comes up every few years or so, and the consensus previously > is that 200-2000 connections is recommended based on your needs. Your > database has to be configured to handle the load and looking at the > configuration value across all your services and setting them > consistently and appropriately is important. > > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_pipermail_openstack-2Ddev_2015-2DApril_061808.html&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=FGLfZK5eHj7z_xL-5DJsPgHkOt_T131ugvicMvcMDbc&e= Thanks, I did not recall that discussion. If I'm reading it correctly, Jay is suggesting that for MySQL we should just disable connection pooling. As I noted earlier, I don't think we expose the ability to do that in oslo.db (patches welcome!), but setting max_pool_size to 1 would get you pretty close. Maybe we should add that to the help text for the option in oslo.db? > >> On Sep 6, 2019, at 7:34 AM, Ben Nemec wrote: >> >> Tagging with oslo as this sounds related to oslo.db. >> >> On 9/5/19 7:37 PM, Albert Braden wrote: >>> After more googling it appears that max_pool_size is a maximum limit on the number of connections that can stay open, and max_overflow is a maximum limit on the number of connections that can be temporarily opened when the pool has been consumed. It looks like the defaults are 5 and 10 which would keep 5 connections open all the time and allow 10 temp. >>> Do I need to set max_pool_size to 0 and max_overflow to the number of connections that I want to allow? Is that a reasonable and correct configuration? Intuitively that doesn't seem right, to have a pool size of 0, but if the "pool" is a group of connections that will remain open until they time out, then maybe 0 is correct? >> >> I don't think so. According to [0] and [1], a pool_size of 0 means unlimited. You could probably set it to 1 to minimize the number of connections kept open, but then I expect you'll have overhead from having to re-open connections frequently. >> >> It sounds like you could use a NullPool to eliminate connection pooling entirely, but I don't think we support that in oslo.db. Based on the error message you're seeing, I would take a look at connection_recycle_time[2]. I seem to recall seeing a comment that the recycle time needs to be shorter than any of the timeouts in the path between the service and the db (so anything like haproxy or mysql itself). Shortening that, or lengthening intervening timeouts, might get rid of these disconnection messages. 
>> >> 0: https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_oslo.db_stein_reference_opts.html-23database.max-5Fpool-5Fsize&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=p7bBYcuhnDR_J08MWFBj8XLiRUUV8JfruAIcl0zF234&e= >> 1: https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.sqlalchemy.org_en_13_core_pooling.html-23sqlalchemy.pool.QueuePool.-5F-5Finit-5F-5F&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=_EIhQyyj1gSM0PrX7de3yJr8hNi7tD8-tnfPo2VV_LU&e= >> 2: https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_oslo.db_stein_reference_opts.html-23database.connection-5Frecycle-5Ftime&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=xDnj80EQrxXwenOLgmKEaJbF3VRIylapDgqyMs81pSY&e= >> >>> *From:* Albert Braden >>> *Sent:* Wednesday, September 4, 2019 10:19 AM >>> *To:* openstack-discuss at lists.openstack.org >>> *Cc:* Gaëtan Trellu >>> *Subject:* RE: Nova causes MySQL timeouts >>> We’re not setting max_pool_size nor max_overflow option presently. I googled around and found this document: >>> https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_keystone_stein_configuration_config-2Doptions.html&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=NXcUpNTYGd6ZP-1oOUaQXsF7rHQ0mAt4e9uL8zzd0KA&e= >>> Document says: >>> [api_database] >>> connection_recycle_time = 3600 (Integer) Timeout before idle SQL connections are reaped. >>> max_overflow = None (Integer) If set, use this value for max_overflow with SQLAlchemy. >>> max_pool_size = None (Integer) Maximum number of SQL connections to keep open in a pool. >>> [database] >>> connection_recycle_time = 3600 (Integer) Timeout before idle SQL connections are reaped. >>> min_pool_size = 1 (Integer) Minimum number of SQL connections to keep open in a pool. >>> max_overflow = 50 (Integer) If set, use this value for max_overflow with SQLAlchemy. >>> max_pool_size = None (Integer) Maximum number of SQL connections to keep open in a pool. >>> If min_pool_size is >0, would that cause at least 1 connection to remain open until it times out? What are the recommended values for these, to allow unused connections to close before they time out? Is “min_pool_size = 0” an acceptable setting? >>> My settings are default: >>> [api_database]: >>> #connection_recycle_time = 3600 >>> #max_overflow = >>> #max_pool_size = >>> [database]: >>> #connection_recycle_time = 3600 >>> #min_pool_size = 1 >>> #max_overflow = 50 >>> #max_pool_size = 5 >>> It’s not obvious what max_overflow does. Where can I find a document that explains more about these settings? >>> *From:* Gaëtan Trellu > >>> *Sent:* Tuesday, September 3, 2019 1:37 PM >>> *To:* Albert Braden > >>> *Cc:* openstack-discuss at lists.openstack.org >>> *Subject:* Re: Nova causes MySQL timeouts >>> Hi Albert, >>> It is a configuration issue, have a look to max_pool_size and max_overflow options under [database] section. >>> Keep in mind than more workers you will have more connections will be opened on the database. >>> Gaetan (goldyfruit) >>> On Sep 3, 2019 4:31 PM, Albert Braden > wrote: >>> It looks like nova is keeping mysql connections open until they time >>> out. How are others responding to this issue? 
Do you just ignore the >>> mysql errors, or is it possible to change configuration so that nova >>> closes and reopens connections before they time out? Or is there a >>> way to stop mysql from logging these aborted connections without >>> hiding real issues? >>> Aborted connection 10726 to db: 'nova' user: 'nova' host: 'asdf' >>> (Got timeout reading communication packets) >> > > From gmann at ghanshyammann.com Tue Sep 17 16:45:27 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 17 Sep 2019 09:45:27 -0700 Subject: [goals][IPv6-Only Deployments and Testing] Week R-4 Update In-Reply-To: <309fd569-b9fa-ec1e-aa89-ecf78a53c608@redhat.com> References: <16d3d2203c8.b47bfe5156036.4862537349817585954@ghanshyammann.com> <309fd569-b9fa-ec1e-aa89-ecf78a53c608@redhat.com> Message-ID: <16d401d553b.ff19f73387353.8920034850675474828@ghanshyammann.com> ---- On Tue, 17 Sep 2019 02:54:53 -0700 Dmitry Tantsur wrote ---- > On 9/17/19 4:51 AM, Ghanshyam Mann wrote: > > Hello Everyone, > > > > Below is the progress on Ipv6 goal during R6 week. I started the legacy job for IPv6 deployment with duplicating the run.yaml which is > > the only best way to do. > > > > Summary: > > > > The projects still need to prepare the IPv6 job: > > * Ec2-Api > > * Freezer > > * Heat > > * Ironic > > We're hopelessly stuck with it. Finishing such a job in the Ussuri cycle would > be an achievement already IMO. I understand your concern, goal is to finish in Train. I prepared the IPv6 jobs for Ironic project: - Ironic - https://review.opendev.org/#/c/682692/ - Ironic-inspector - https://review.opendev.org/#/c/682691/ - networking-generic-switch - https://review.opendev.org/#/c/682690/ rest all repo does not need IPv6 job as such. let's see how they run. -gmann > > Dmitry > > > * Karbor > > * Kolla > > * Kuryr > > * Magnum > > * Manila > > * Masakari > > * Mistral > > * Murano > > * Octavia > > * Swift > > > > The projects waiting for IPv6 job patch to merge: > > If patch is failing, help me to debug that otherwise review and merge. > > * Barbican > > * Blazar > > * Cyborg > > * Tricircle > > * Vitrage > > * Zaqar > > * Cinder > > * Glance > > * Monasca > > * Neutron > > * Qinling > > * Quality Assurance > > * Sahara > > * Searchlight > > * Senlin > > * Tacker > > > > The projects have merged the IPv6 jobs: > > * Designate > > * Murano > > * Trove > > * Cloudkitty > > * Congress > > * Horizon > > * Keystone > > * Nova > > * Placement > > * Solum > > * Telemetry > > * Watcher > > * Zun > > > > The projects do not need the IPv6 job (CLI, lib, deployment projects etc ): > > If anything I missed and IPv6 job need, please reply otherwise I will mark their task in storyboard as invalid. > > > > * Adjutant > > * Documentation > > * I18n > > * Infrastructure > > * Loci > > * Openstack Charms > > * Openstack-Chef > > * Openstack-Helm > > * Openstackansible > > * Openstackclient > > * Openstacksdk > > * Oslo > > * Packaging-Rpm > > * Powervmstackers > > * Puppet Openstack > > * Rally > > * Release Management > > * Requirements > > * Storlets > > * Tripleo > > * Winstackers > > > > > > Storyboard: > > ========= > > - https://storyboard.openstack.org/#!/story/2005477 > > > > IPv6 missing support found: > > ===================== > > 1. https://review.opendev.org/#/c/673397/ > > 2. https://review.opendev.org/#/c/673449/ > > 3. https://review.opendev.org/#/c/677524/ > > > > How you can help: > > ============== > > - Each project needs to look for and review the ipv6 job patch. 
> > - Verify it works fine on ipv6 and no ipv4 used in conf etc > > - Any other specific scenario needs to be added as part of project IPv6 verification. > > - Help on debugging and fix the bug in IPv6 job is failing. > > > > Everything related to this goal can be found under this topic: > > Topic: https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged) > > > > How to define and run new IPv6 Job on project side: > > ======================================= > > - I prepared a wiki page to describe this section - https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing > > > > Review suggestion: > > ============== > > - Main goal of these jobs will be whether your service is able to listen on IPv6 and can communicate to any > > other services either OpenStack or DB or rabbitmq etc on IPv6 or not. So check your proposed job with > > that point of view. If anything missing, comment on patch. > > - One example was - I missed to configure novnc address to IPv6- https://review.opendev.org/#/c/672493/ > > - base script as part of 'devstack-tempest-ipv6' will do basic checks for endpoints on IPv6 and some devstack var > > setting. But if your project needs more specific verification then it can be added in project side job as post-run > > playbooks as described in wiki page[1]. > > > > [1] https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing > > > > -gmann > > > > > > > > > From gmann at ghanshyammann.com Tue Sep 17 16:49:06 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 17 Sep 2019 09:49:06 -0700 Subject: [goals][IPv6-Only Deployments and Testing] Week R-4 Update In-Reply-To: <5FC0E68C-6020-416B-89CF-9D077C8726B9@redhat.com> References: <16d3d2203c8.b47bfe5156036.4862537349817585954@ghanshyammann.com> <5FC0E68C-6020-416B-89CF-9D077C8726B9@redhat.com> Message-ID: <16d4020aea1.11749982a87432.4028522842882545383@ghanshyammann.com> ---- On Mon, 16 Sep 2019 23:28:53 -0700 Slawek Kaplonski wrote ---- > Hi Ghanshyam, > > > On 17 Sep 2019, at 04:51, Ghanshyam Mann wrote: > > > > Hello Everyone, > > > > Below is the progress on Ipv6 goal during R6 week. I started the legacy job for IPv6 deployment with duplicating the run.yaml which is > > the only best way to do. > > > > Summary: > > > > The projects still need to prepare the IPv6 job: > > * Ec2-Api > > * Freezer > > * Heat > > * Ironic > > * Karbor > > * Kolla > > * Kuryr > > * Magnum > > * Manila > > * Masakari > > * Mistral > > * Murano > > * Octavia > > * Swift > > > > The projects waiting for IPv6 job patch to merge: > > If patch is failing, help me to debug that otherwise review and merge. > > * Barbican > > * Blazar > > * Cyborg > > * Tricircle > > * Vitrage > > * Zaqar > > * Cinder > > * Glance > > * Monasca > > * Neutron > > I thought that Neutron is already done. Do You mean patches for some stadium projects which are still not merged? Can You give me links to such patches with failing job to make sure that I didn’t miss anything? Yeah, it is for neutron stadium projects, I am tracking them as neutron only. Few are up I think and other I need to prepare the jobs. I am doing that and let you know the link. 
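For the stadium repos the job definition itself should be small, roughly along these lines (the repo and plugin names below are only placeholders, real jobs will go in each repo's zuul config):

- job:
    name: networking-foo-tempest-ipv6-only
    parent: devstack-tempest-ipv6
    required-projects:
      - openstack/neutron
      - openstack/networking-foo
    vars:
      devstack_plugins:
        networking-foo: https://opendev.org/openstack/networking-foo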
-gmann > > > * Qinling > > * Quality Assurance > > * Sahara > > * Searchlight > > * Senlin > > * Tacker > > > > The projects have merged the IPv6 jobs: > > * Designate > > * Murano > > * Trove > > * Cloudkitty > > * Congress > > * Horizon > > * Keystone > > * Nova > > * Placement > > * Solum > > * Telemetry > > * Watcher > > * Zun > > > > The projects do not need the IPv6 job (CLI, lib, deployment projects etc ): > > If anything I missed and IPv6 job need, please reply otherwise I will mark their task in storyboard as invalid. > > > > * Adjutant > > * Documentation > > * I18n > > * Infrastructure > > * Loci > > * Openstack Charms > > * Openstack-Chef > > * Openstack-Helm > > * Openstackansible > > * Openstackclient > > * Openstacksdk > > * Oslo > > * Packaging-Rpm > > * Powervmstackers > > * Puppet Openstack > > * Rally > > * Release Management > > * Requirements > > * Storlets > > * Tripleo > > * Winstackers > > > > > > Storyboard: > > ========= > > - https://storyboard.openstack.org/#!/story/2005477 > > > > IPv6 missing support found: > > ===================== > > 1. https://review.opendev.org/#/c/673397/ > > 2. https://review.opendev.org/#/c/673449/ > > 3. https://review.opendev.org/#/c/677524/ > > > > How you can help: > > ============== > > - Each project needs to look for and review the ipv6 job patch. > > - Verify it works fine on ipv6 and no ipv4 used in conf etc > > - Any other specific scenario needs to be added as part of project IPv6 verification. > > - Help on debugging and fix the bug in IPv6 job is failing. > > > > Everything related to this goal can be found under this topic: > > Topic: https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged) > > > > How to define and run new IPv6 Job on project side: > > ======================================= > > - I prepared a wiki page to describe this section - https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing > > > > Review suggestion: > > ============== > > - Main goal of these jobs will be whether your service is able to listen on IPv6 and can communicate to any > > other services either OpenStack or DB or rabbitmq etc on IPv6 or not. So check your proposed job with > > that point of view. If anything missing, comment on patch. > > - One example was - I missed to configure novnc address to IPv6- https://review.opendev.org/#/c/672493/ > > - base script as part of 'devstack-tempest-ipv6' will do basic checks for endpoints on IPv6 and some devstack var > > setting. But if your project needs more specific verification then it can be added in project side job as post-run > > playbooks as described in wiki page[1]. > > > > [1] https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing > > > > -gmann > > > > > > > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > From smooney at redhat.com Tue Sep 17 16:50:26 2019 From: smooney at redhat.com (Sean Mooney) Date: Tue, 17 Sep 2019 17:50:26 +0100 Subject: [oslo][nova] Nova causes MySQL timeouts In-Reply-To: References: <02fa1644-34a1-0fdf-9048-a668ae86de76@nemebean.com> Message-ID: <15ed8e56b8c8eaa3d44e1364d67b7f8f72f46728.camel@redhat.com> On Tue, 2019-09-17 at 16:36 +0000, Albert Braden wrote: > I thought I had figured out that the solution was to increase the MySQL wait_timeout so that it is longer than the > nova (and glance, neutron, etc.) connection_recycle_time (3600). 
I increased my MySQL wait_timeout to 6000: > > root at us01odc-qa-ctrl1:~# mysqladmin variables|grep wait_timeout|grep -v _wait > > wait_timeout | 6000 > > But I still see the MySQL errors. There's no LB; we are pointing to a single MySQL host. > > Sep 11 14:59:56 us01odc-qa-ctrl1 mysqld[1052956]: 2019-09-11 14:59:56 8016 [Warning] Aborted connection 8016 to db: > 'nova' user: 'nova' host: 'us01odc-qa-ctrl2.internal.synopsys.com' (Got timeout reading communication packets) > Sep 11 14:59:57 us01odc-qa-ctrl1 mysqld[1052956]: 2019-09-11 14:59:57 8019 [Warning] Aborted connection 8019 to db: > 'glance' user: 'glance' host: 'us01odc-qa-ctrl1.internal.synopsys.com' (Got timeout reading communication packets) > Sep 11 14:59:57 us01odc-qa-ctrl1 mysqld[1052956]: 2019-09-11 14:59:57 8018 [Warning] Aborted connection 8018 to db: > 'nova_api' user: 'nova' host: 'us01odc-qa-ctrl2.internal.synopsys.com' (Got timeout reading communication packets) > Sep 11 15:00:50 us01odc-qa-ctrl1 mysqld[1052956]: 2019-09-11 15:00:50 8022 [Warning] Aborted connection 8022 to db: > 'nova_api' user: 'nova' host: 'us01odc-qa-ctrl1.internal.synopsys.com' (Got timeout reading communication packets) > > The errors come from nova, neutron, glance and keystone; it appears that all default to 3600. So it appears that, even > with wait_timeout > connection_recycle_time we still see mysql timeout errors. > > Just for fun I tried setting the MySQL wait_timeout to 86400 and restarting MySQL. I expected that this would pause > the "Aborted connection" errors for 24 hours, but they started again after an hour. So it looks like my original > assumption was incorrect. I thought nova was keeping connections open until the MySQL server timed them out, but now > it appears that something else is happening. > > Has anyone successfully stopped these MySQL error messages? could this be related to the eventlet heartbeat issue we see for rabbitmq when running the api under mod_wsgi/uwsgi? e.g. hav eyou confirmed that you wsgi serer is configure to use 1 thread and multiple processes for concurancy multiple thread in one process might have issues. > -----Original Message----- > From: Ben Nemec > Sent: Monday, September 9, 2019 9:50 AM > To: Chris Hoge ; openstack-discuss at lists.openstack.org > Subject: Re: [oslo][nova] Nova causes MySQL timeouts > > > > On 9/9/19 11:38 AM, Chris Hoge wrote: > > In my personal experience, running Nova on a four core machine without > > limiting the number of database connections will easily exhaust the > > available connections to MySQL/MariaDB. Keep in mind that the limit > > applies to every instance of a service, so if Nova starts 'm' services > > replicated for 'n' cores with 'd' possible connections you'll be up to > > ‘m x n x d' connections. It gets big fast. > > > > The default setting of '0' (that is, unlimited) does not make for a good > > first-run experience, IMO. > > We don't default to 0. We default to 5: > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_oslo.db_stein_reference_opts.html-23database.max-5Fpool-5Fsize&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=p7bBYcuhnDR_J08MWFBj8XLiRUUV8JfruAIcl0zF234&e= > > > > > > This issue comes up every few years or so, and the consensus previously > > is that 200-2000 connections is recommended based on your needs. 
Your > > database has to be configured to handle the load and looking at the > > configuration value across all your services and setting them > > consistently and appropriately is important. > > > > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_pipermail_openstack-2Ddev_2015-2DApril_061808.html&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=FGLfZK5eHj7z_xL-5DJsPgHkOt_T131ugvicMvcMDbc&e= > > > > Thanks, I did not recall that discussion. > > If I'm reading it correctly, Jay is suggesting that for MySQL we should > just disable connection pooling. As I noted earlier, I don't think we > expose the ability to do that in oslo.db (patches welcome!), but setting > max_pool_size to 1 would get you pretty close. Maybe we should add that > to the help text for the option in oslo.db? > > > > > > On Sep 6, 2019, at 7:34 AM, Ben Nemec wrote: > > > > > > Tagging with oslo as this sounds related to oslo.db. > > > > > > On 9/5/19 7:37 PM, Albert Braden wrote: > > > > After more googling it appears that max_pool_size is a maximum limit on the number of connections that can stay > > > > open, and max_overflow is a maximum limit on the number of connections that can be temporarily opened when the > > > > pool has been consumed. It looks like the defaults are 5 and 10 which would keep 5 connections open all the time > > > > and allow 10 temp. > > > > Do I need to set max_pool_size to 0 and max_overflow to the number of connections that I want to allow? Is that > > > > a reasonable and correct configuration? Intuitively that doesn't seem right, to have a pool size of 0, but if > > > > the "pool" is a group of connections that will remain open until they time out, then maybe 0 is correct? > > > > > > I don't think so. According to [0] and [1], a pool_size of 0 means unlimited. You could probably set it to 1 to > > > minimize the number of connections kept open, but then I expect you'll have overhead from having to re-open > > > connections frequently. > > > > > > It sounds like you could use a NullPool to eliminate connection pooling entirely, but I don't think we support > > > that in oslo.db. Based on the error message you're seeing, I would take a look at connection_recycle_time[2]. I > > > seem to recall seeing a comment that the recycle time needs to be shorter than any of the timeouts in the path > > > between the service and the db (so anything like haproxy or mysql itself). Shortening that, or lengthening > > > intervening timeouts, might get rid of these disconnection messages. 
> > > > > > 0: > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_oslo.db_stein_reference_opts.html-23database.max-5Fpool-5Fsize&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=p7bBYcuhnDR_J08MWFBj8XLiRUUV8JfruAIcl0zF234&e= > > > > > > 1: > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.sqlalchemy.org_en_13_core_pooling.html-23sqlalchemy.pool.QueuePool.-5F-5Finit-5F-5F&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=_EIhQyyj1gSM0PrX7de3yJr8hNi7tD8-tnfPo2VV_LU&e= > > > > > > 2: > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_oslo.db_stein_reference_opts.html-23database.connection-5Frecycle-5Ftime&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=xDnj80EQrxXwenOLgmKEaJbF3VRIylapDgqyMs81pSY&e= > > > > > > > > > > *From:* Albert Braden > > > > *Sent:* Wednesday, September 4, 2019 10:19 AM > > > > *To:* openstack-discuss at lists.openstack.org > > > > *Cc:* Gaëtan Trellu > > > > *Subject:* RE: Nova causes MySQL timeouts > > > > We’re not setting max_pool_size nor max_overflow option presently. I googled around and found this document: > > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_keystone_stein_configuration_config-2Doptions.html&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=NXcUpNTYGd6ZP-1oOUaQXsF7rHQ0mAt4e9uL8zzd0KA&e= > > > > = > > > 2Doptions.html&d=DwMGaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=3eF4Bv1HRQW6gl7 > > > > II12rTTSKj_A9_LDISS6hU0nP-R0&s=0EGWx9qW60G1cxoPFCIv_G1-iXX20jKcC5-AwlCWk8g&e=> > > > > Document says: > > > > [api_database] > > > > connection_recycle_time = 3600 (Integer) Timeout before idle SQL connections are reaped. > > > > max_overflow = None (Integer) If set, use this value for max_overflow with > > > > SQLAlchemy. > > > > max_pool_size = None (Integer) Maximum number of SQL connections to keep open > > > > in a pool. > > > > [database] > > > > connection_recycle_time = 3600 (Integer) Timeout before idle SQL connections are reaped. > > > > min_pool_size = 1 (Integer) Minimum number of SQL connections to keep > > > > open in a pool. > > > > max_overflow = 50 (Integer) If set, use this value for max_overflow > > > > with SQLAlchemy. > > > > max_pool_size = None (Integer) Maximum number of SQL connections to keep open > > > > in a pool. > > > > If min_pool_size is >0, would that cause at least 1 connection to remain open until it times out? What are the > > > > recommended values for these, to allow unused connections to close before they time out? Is “min_pool_size = 0” > > > > an acceptable setting? > > > > My settings are default: > > > > [api_database]: > > > > #connection_recycle_time = 3600 > > > > #max_overflow = > > > > #max_pool_size = > > > > [database]: > > > > #connection_recycle_time = 3600 > > > > #min_pool_size = 1 > > > > #max_overflow = 50 > > > > #max_pool_size = 5 > > > > It’s not obvious what max_overflow does. Where can I find a document that explains more about these settings? 
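To illustrate how those options fit together, here is a minimal sketch of a [database] section; the numbers are placeholders for illustration only, not a recommendation. The idea is the one described above: keep connection_recycle_time comfortably below the server-side wait_timeout so idle connections get refreshed before MySQL drops them, and keep the per-process pool small, because every API worker process opens its own pool.

    [database]
    # recycle idle connections well before the MySQL wait_timeout expires
    connection_recycle_time = 600
    # connections each worker process keeps open
    max_pool_size = 5
    # extra connections a worker may open temporarily under load
    max_overflow = 10

The same options also exist under [api_database] for nova, so both sections would need the same treatment.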
> > > > *From:* Gaëtan Trellu > > > > > *Sent:* Tuesday, September 3, 2019 1:37 PM > > > > *To:* Albert Braden > > > > > *Cc:* openstack-discuss at lists.openstack.org > > > > *Subject:* Re: Nova causes MySQL timeouts > > > > Hi Albert, > > > > It is a configuration issue, have a look to max_pool_size and max_overflow options under [database] section. > > > > Keep in mind than more workers you will have more connections will be opened on the database. > > > > Gaetan (goldyfruit) > > > > On Sep 3, 2019 4:31 PM, Albert Braden > wrote: > > > > It looks like nova is keeping mysql connections open until they time > > > > out. How are others responding to this issue? Do you just ignore the > > > > mysql errors, or is it possible to change configuration so that nova > > > > closes and reopens connections before they time out? Or is there a > > > > way to stop mysql from logging these aborted connections without > > > > hiding real issues? > > > > Aborted connection 10726 to db: 'nova' user: 'nova' host: 'asdf' > > > > (Got timeout reading communication packets) > > > > > > From skaplons at redhat.com Tue Sep 17 16:58:05 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 17 Sep 2019 18:58:05 +0200 Subject: [goals][IPv6-Only Deployments and Testing] Week R-4 Update In-Reply-To: <16d4020aea1.11749982a87432.4028522842882545383@ghanshyammann.com> References: <16d3d2203c8.b47bfe5156036.4862537349817585954@ghanshyammann.com> <5FC0E68C-6020-416B-89CF-9D077C8726B9@redhat.com> <16d4020aea1.11749982a87432.4028522842882545383@ghanshyammann.com> Message-ID: <8907C14B-99D2-4F38-B6DC-5D49F8413576@redhat.com> Hi, > On 17 Sep 2019, at 18:49, Ghanshyam Mann wrote: > > ---- On Mon, 16 Sep 2019 23:28:53 -0700 Slawek Kaplonski wrote ---- >> Hi Ghanshyam, >> >>> On 17 Sep 2019, at 04:51, Ghanshyam Mann wrote: >>> >>> Hello Everyone, >>> >>> Below is the progress on Ipv6 goal during R6 week. I started the legacy job for IPv6 deployment with duplicating the run.yaml which is >>> the only best way to do. >>> >>> Summary: >>> >>> The projects still need to prepare the IPv6 job: >>> * Ec2-Api >>> * Freezer >>> * Heat >>> * Ironic >>> * Karbor >>> * Kolla >>> * Kuryr >>> * Magnum >>> * Manila >>> * Masakari >>> * Mistral >>> * Murano >>> * Octavia >>> * Swift >>> >>> The projects waiting for IPv6 job patch to merge: >>> If patch is failing, help me to debug that otherwise review and merge. >>> * Barbican >>> * Blazar >>> * Cyborg >>> * Tricircle >>> * Vitrage >>> * Zaqar >>> * Cinder >>> * Glance >>> * Monasca >>> * Neutron >> >> I thought that Neutron is already done. Do You mean patches for some stadium projects which are still not merged? Can You give me links to such patches with failing job to make sure that I didn’t miss anything? > > Yeah, it is for neutron stadium projects, I am tracking them as neutron only. Few are up I think and other I need to prepare the jobs. I am doing that and let you know the link. Thx for confirmation :) > > -gmann > >> >>> * Qinling >>> * Quality Assurance >>> * Sahara >>> * Searchlight >>> * Senlin >>> * Tacker >>> >>> The projects have merged the IPv6 jobs: >>> * Designate >>> * Murano >>> * Trove >>> * Cloudkitty >>> * Congress >>> * Horizon >>> * Keystone >>> * Nova >>> * Placement >>> * Solum >>> * Telemetry >>> * Watcher >>> * Zun >>> >>> The projects do not need the IPv6 job (CLI, lib, deployment projects etc ): >>> If anything I missed and IPv6 job need, please reply otherwise I will mark their task in storyboard as invalid. 
>>> >>> * Adjutant >>> * Documentation >>> * I18n >>> * Infrastructure >>> * Loci >>> * Openstack Charms >>> * Openstack-Chef >>> * Openstack-Helm >>> * Openstackansible >>> * Openstackclient >>> * Openstacksdk >>> * Oslo >>> * Packaging-Rpm >>> * Powervmstackers >>> * Puppet Openstack >>> * Rally >>> * Release Management >>> * Requirements >>> * Storlets >>> * Tripleo >>> * Winstackers >>> >>> >>> Storyboard: >>> ========= >>> - https://storyboard.openstack.org/#!/story/2005477 >>> >>> IPv6 missing support found: >>> ===================== >>> 1. https://review.opendev.org/#/c/673397/ >>> 2. https://review.opendev.org/#/c/673449/ >>> 3. https://review.opendev.org/#/c/677524/ >>> >>> How you can help: >>> ============== >>> - Each project needs to look for and review the ipv6 job patch. >>> - Verify it works fine on ipv6 and no ipv4 used in conf etc >>> - Any other specific scenario needs to be added as part of project IPv6 verification. >>> - Help on debugging and fix the bug in IPv6 job is failing. >>> >>> Everything related to this goal can be found under this topic: >>> Topic: https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged) >>> >>> How to define and run new IPv6 Job on project side: >>> ======================================= >>> - I prepared a wiki page to describe this section - https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing >>> >>> Review suggestion: >>> ============== >>> - Main goal of these jobs will be whether your service is able to listen on IPv6 and can communicate to any >>> other services either OpenStack or DB or rabbitmq etc on IPv6 or not. So check your proposed job with >>> that point of view. If anything missing, comment on patch. >>> - One example was - I missed to configure novnc address to IPv6- https://review.opendev.org/#/c/672493/ >>> - base script as part of 'devstack-tempest-ipv6' will do basic checks for endpoints on IPv6 and some devstack var >>> setting. But if your project needs more specific verification then it can be added in project side job as post-run >>> playbooks as described in wiki page[1]. >>> >>> [1] https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing >>> >>> -gmann >>> >>> >>> >> >> — >> Slawek Kaplonski >> Senior software engineer >> Red Hat — Slawek Kaplonski Senior software engineer Red Hat From cboylan at sapwetik.org Tue Sep 17 17:17:00 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 17 Sep 2019 10:17:00 -0700 Subject: [goals][IPv6-Only Deployments and Testing] Week R-4 Update In-Reply-To: References: <16d3d2203c8.b47bfe5156036.4862537349817585954@ghanshyammann.com> Message-ID: <64b36128-911f-4599-9ada-4773b1957077@www.fastmail.com> On Tue, Sep 17, 2019, at 3:12 AM, Radosław Piliszek wrote: > Hiya, > > Kolla is not going to get an IPv6-only job because it builds docker > images and is not tested regarding networking (it does not do > devstack/tempest either). > > Kolla-Ansible, which does the deployment, is going to get some > IPv6-only test jobs - https://review.opendev.org/681573 > We are testing CentOS and multinode and hence need overlay VXLAN to > reach sensible levels of stability there - > https://review.opendev.org/670690 > The VXLAN patch is probably ready, awaiting review of independent > cores. It will be refactored out later to put it in zuul plays as it > might be useful to other projects as well. > The IPv6 patch needs rebasing on VXLAN and some small tweaks still. 
It is worth noting that you could test with the existing overlay network tooling that the infra team provides. This has been proven to work over years of multinode testing. Then we could incrementally improve it to address some of the deficiencies you have pointed out with it. This was sort of what I was trying to get across on IRC. Rather than go and reinvent the wheel to the detriment of meeting this goal on time: instead use what is there and works. Then improve what is there over time. > > Kind regards, > Radek > From openstack at fried.cc Tue Sep 17 17:20:30 2019 From: openstack at fried.cc (Eric Fried) Date: Tue, 17 Sep 2019 12:20:30 -0500 Subject: [oslo][nova] Nova causes MySQL timeouts In-Reply-To: <15ed8e56b8c8eaa3d44e1364d67b7f8f72f46728.camel@redhat.com> References: <02fa1644-34a1-0fdf-9048-a668ae86de76@nemebean.com> <15ed8e56b8c8eaa3d44e1364d67b7f8f72f46728.camel@redhat.com> Message-ID: Coincidentally, I'm trying [1] via [2] based on advice from zzzeek. efried [1] https://dba.stackexchange.com/questions/19135/mysql-error-reading-communication-packets/19139#19139 [2] https://review.opendev.org/#/c/682661/ From satish.txt at gmail.com Tue Sep 17 17:27:40 2019 From: satish.txt at gmail.com (Satish Patel) Date: Tue, 17 Sep 2019 13:27:40 -0400 Subject: [oslo][nova] Nova causes MySQL timeouts In-Reply-To: <15ed8e56b8c8eaa3d44e1364d67b7f8f72f46728.camel@redhat.com> References: <02fa1644-34a1-0fdf-9048-a668ae86de76@nemebean.com> <15ed8e56b8c8eaa3d44e1364d67b7f8f72f46728.camel@redhat.com> Message-ID: I don't want to boil the ocean but i had similar problem my nove was loosing mysql db connection and we should culprit was Load-balancer (BigIP F5) it has different tcp-timeout compare to whatever openstack provide. after adjusting timeout on F5 my issue got resolved. On Tue, Sep 17, 2019 at 12:56 PM Sean Mooney wrote: > > On Tue, 2019-09-17 at 16:36 +0000, Albert Braden wrote: > > I thought I had figured out that the solution was to increase the MySQL wait_timeout so that it is longer than the > > nova (and glance, neutron, etc.) connection_recycle_time (3600). I increased my MySQL wait_timeout to 6000: > > > > root at us01odc-qa-ctrl1:~# mysqladmin variables|grep wait_timeout|grep -v _wait > > > wait_timeout | 6000 > > > > But I still see the MySQL errors. There's no LB; we are pointing to a single MySQL host. > > > > Sep 11 14:59:56 us01odc-qa-ctrl1 mysqld[1052956]: 2019-09-11 14:59:56 8016 [Warning] Aborted connection 8016 to db: > > 'nova' user: 'nova' host: 'us01odc-qa-ctrl2.internal.synopsys.com' (Got timeout reading communication packets) > > Sep 11 14:59:57 us01odc-qa-ctrl1 mysqld[1052956]: 2019-09-11 14:59:57 8019 [Warning] Aborted connection 8019 to db: > > 'glance' user: 'glance' host: 'us01odc-qa-ctrl1.internal.synopsys.com' (Got timeout reading communication packets) > > Sep 11 14:59:57 us01odc-qa-ctrl1 mysqld[1052956]: 2019-09-11 14:59:57 8018 [Warning] Aborted connection 8018 to db: > > 'nova_api' user: 'nova' host: 'us01odc-qa-ctrl2.internal.synopsys.com' (Got timeout reading communication packets) > > Sep 11 15:00:50 us01odc-qa-ctrl1 mysqld[1052956]: 2019-09-11 15:00:50 8022 [Warning] Aborted connection 8022 to db: > > 'nova_api' user: 'nova' host: 'us01odc-qa-ctrl1.internal.synopsys.com' (Got timeout reading communication packets) > > > > The errors come from nova, neutron, glance and keystone; it appears that all default to 3600. So it appears that, even > > with wait_timeout > connection_recycle_time we still see mysql timeout errors. 
> > > > Just for fun I tried setting the MySQL wait_timeout to 86400 and restarting MySQL. I expected that this would pause > > the "Aborted connection" errors for 24 hours, but they started again after an hour. So it looks like my original > > assumption was incorrect. I thought nova was keeping connections open until the MySQL server timed them out, but now > > it appears that something else is happening. > > > > Has anyone successfully stopped these MySQL error messages? > > could this be related to the eventlet heartbeat issue we see for rabbitmq when running the api under mod_wsgi/uwsgi? > > e.g. hav eyou confirmed that you wsgi serer is configure to use 1 thread and multiple processes for concurancy > multiple thread in one process might have issues. > > -----Original Message----- > > From: Ben Nemec > > Sent: Monday, September 9, 2019 9:50 AM > > To: Chris Hoge ; openstack-discuss at lists.openstack.org > > Subject: Re: [oslo][nova] Nova causes MySQL timeouts > > > > > > > > On 9/9/19 11:38 AM, Chris Hoge wrote: > > > In my personal experience, running Nova on a four core machine without > > > limiting the number of database connections will easily exhaust the > > > available connections to MySQL/MariaDB. Keep in mind that the limit > > > applies to every instance of a service, so if Nova starts 'm' services > > > replicated for 'n' cores with 'd' possible connections you'll be up to > > > ‘m x n x d' connections. It gets big fast. > > > > > > The default setting of '0' (that is, unlimited) does not make for a good > > > first-run experience, IMO. > > > > We don't default to 0. We default to 5: > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_oslo.db_stein_reference_opts.html-23database.max-5Fpool-5Fsize&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=p7bBYcuhnDR_J08MWFBj8XLiRUUV8JfruAIcl0zF234&e= > > > > > > > > > > This issue comes up every few years or so, and the consensus previously > > > is that 200-2000 connections is recommended based on your needs. Your > > > database has to be configured to handle the load and looking at the > > > configuration value across all your services and setting them > > > consistently and appropriately is important. > > > > > > > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_pipermail_openstack-2Ddev_2015-2DApril_061808.html&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=FGLfZK5eHj7z_xL-5DJsPgHkOt_T131ugvicMvcMDbc&e= > > > > > > > Thanks, I did not recall that discussion. > > > > If I'm reading it correctly, Jay is suggesting that for MySQL we should > > just disable connection pooling. As I noted earlier, I don't think we > > expose the ability to do that in oslo.db (patches welcome!), but setting > > max_pool_size to 1 would get you pretty close. Maybe we should add that > > to the help text for the option in oslo.db? > > > > > > > > > On Sep 6, 2019, at 7:34 AM, Ben Nemec wrote: > > > > > > > > Tagging with oslo as this sounds related to oslo.db. > > > > > > > > On 9/5/19 7:37 PM, Albert Braden wrote: > > > > > After more googling it appears that max_pool_size is a maximum limit on the number of connections that can stay > > > > > open, and max_overflow is a maximum limit on the number of connections that can be temporarily opened when the > > > > > pool has been consumed. 
It looks like the defaults are 5 and 10 which would keep 5 connections open all the time > > > > > and allow 10 temp. > > > > > Do I need to set max_pool_size to 0 and max_overflow to the number of connections that I want to allow? Is that > > > > > a reasonable and correct configuration? Intuitively that doesn't seem right, to have a pool size of 0, but if > > > > > the "pool" is a group of connections that will remain open until they time out, then maybe 0 is correct? > > > > > > > > I don't think so. According to [0] and [1], a pool_size of 0 means unlimited. You could probably set it to 1 to > > > > minimize the number of connections kept open, but then I expect you'll have overhead from having to re-open > > > > connections frequently. > > > > > > > > It sounds like you could use a NullPool to eliminate connection pooling entirely, but I don't think we support > > > > that in oslo.db. Based on the error message you're seeing, I would take a look at connection_recycle_time[2]. I > > > > seem to recall seeing a comment that the recycle time needs to be shorter than any of the timeouts in the path > > > > between the service and the db (so anything like haproxy or mysql itself). Shortening that, or lengthening > > > > intervening timeouts, might get rid of these disconnection messages. > > > > > > > > 0: > > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_oslo.db_stein_reference_opts.html-23database.max-5Fpool-5Fsize&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=p7bBYcuhnDR_J08MWFBj8XLiRUUV8JfruAIcl0zF234&e= > > > > > > > > 1: > > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.sqlalchemy.org_en_13_core_pooling.html-23sqlalchemy.pool.QueuePool.-5F-5Finit-5F-5F&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=_EIhQyyj1gSM0PrX7de3yJr8hNi7tD8-tnfPo2VV_LU&e= > > > > > > > > 2: > > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_oslo.db_stein_reference_opts.html-23database.connection-5Frecycle-5Ftime&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=xDnj80EQrxXwenOLgmKEaJbF3VRIylapDgqyMs81pSY&e= > > > > > > > > > > > > > *From:* Albert Braden > > > > > *Sent:* Wednesday, September 4, 2019 10:19 AM > > > > > *To:* openstack-discuss at lists.openstack.org > > > > > *Cc:* Gaëtan Trellu > > > > > *Subject:* RE: Nova causes MySQL timeouts > > > > > We’re not setting max_pool_size nor max_overflow option presently. I googled around and found this document: > > > > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_keystone_stein_configuration_config-2Doptions.html&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=NXcUpNTYGd6ZP-1oOUaQXsF7rHQ0mAt4e9uL8zzd0KA&e= > > > > > = > > > > 2Doptions.html&d=DwMGaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=3eF4Bv1HRQW6gl7 > > > > > II12rTTSKj_A9_LDISS6hU0nP-R0&s=0EGWx9qW60G1cxoPFCIv_G1-iXX20jKcC5-AwlCWk8g&e=> > > > > > Document says: > > > > > [api_database] > > > > > connection_recycle_time = 3600 (Integer) Timeout before idle SQL connections are reaped. > > > > > max_overflow = None (Integer) If set, use this value for max_overflow with > > > > > SQLAlchemy. 
> > > > > max_pool_size = None (Integer) Maximum number of SQL connections to keep open > > > > > in a pool. > > > > > [database] > > > > > connection_recycle_time = 3600 (Integer) Timeout before idle SQL connections are reaped. > > > > > min_pool_size = 1 (Integer) Minimum number of SQL connections to keep > > > > > open in a pool. > > > > > max_overflow = 50 (Integer) If set, use this value for max_overflow > > > > > with SQLAlchemy. > > > > > max_pool_size = None (Integer) Maximum number of SQL connections to keep open > > > > > in a pool. > > > > > If min_pool_size is >0, would that cause at least 1 connection to remain open until it times out? What are the > > > > > recommended values for these, to allow unused connections to close before they time out? Is “min_pool_size = 0” > > > > > an acceptable setting? > > > > > My settings are default: > > > > > [api_database]: > > > > > #connection_recycle_time = 3600 > > > > > #max_overflow = > > > > > #max_pool_size = > > > > > [database]: > > > > > #connection_recycle_time = 3600 > > > > > #min_pool_size = 1 > > > > > #max_overflow = 50 > > > > > #max_pool_size = 5 > > > > > It’s not obvious what max_overflow does. Where can I find a document that explains more about these settings? > > > > > *From:* Gaëtan Trellu > > > > > > *Sent:* Tuesday, September 3, 2019 1:37 PM > > > > > *To:* Albert Braden > > > > > > *Cc:* openstack-discuss at lists.openstack.org > > > > > *Subject:* Re: Nova causes MySQL timeouts > > > > > Hi Albert, > > > > > It is a configuration issue, have a look to max_pool_size and max_overflow options under [database] section. > > > > > Keep in mind than more workers you will have more connections will be opened on the database. > > > > > Gaetan (goldyfruit) > > > > > On Sep 3, 2019 4:31 PM, Albert Braden > wrote: > > > > > It looks like nova is keeping mysql connections open until they time > > > > > out. How are others responding to this issue? Do you just ignore the > > > > > mysql errors, or is it possible to change configuration so that nova > > > > > closes and reopens connections before they time out? Or is there a > > > > > way to stop mysql from logging these aborted connections without > > > > > hiding real issues? > > > > > Aborted connection 10726 to db: 'nova' user: 'nova' host: 'asdf' > > > > > (Got timeout reading communication packets) > > > > > > > > > > > > From Albert.Braden at synopsys.com Tue Sep 17 17:51:36 2019 From: Albert.Braden at synopsys.com (Albert Braden) Date: Tue, 17 Sep 2019 17:51:36 +0000 Subject: [oslo][nova] Nova causes MySQL timeouts In-Reply-To: <15ed8e56b8c8eaa3d44e1364d67b7f8f72f46728.camel@redhat.com> References: <02fa1644-34a1-0fdf-9048-a668ae86de76@nemebean.com> <15ed8e56b8c8eaa3d44e1364d67b7f8f72f46728.camel@redhat.com> Message-ID: I had not heard about the eventlet heartbeat issue. Where can I read more about it? The [wsgi] section of my nova.conf is default; nothing is uncommented. -----Original Message----- From: Sean Mooney Sent: Tuesday, September 17, 2019 9:50 AM To: Albert Braden ; openstack-discuss at lists.openstack.org Cc: Ben Nemec ; Chris Hoge Subject: Re: [oslo][nova] Nova causes MySQL timeouts On Tue, 2019-09-17 at 16:36 +0000, Albert Braden wrote: > I thought I had figured out that the solution was to increase the MySQL wait_timeout so that it is longer than the > nova (and glance, neutron, etc.) connection_recycle_time (3600). 
I increased my MySQL wait_timeout to 6000: > > root at us01odc-qa-ctrl1:~# mysqladmin variables|grep wait_timeout|grep -v _wait > > wait_timeout | 6000 > > But I still see the MySQL errors. There's no LB; we are pointing to a single MySQL host. > > Sep 11 14:59:56 us01odc-qa-ctrl1 mysqld[1052956]: 2019-09-11 14:59:56 8016 [Warning] Aborted connection 8016 to db: > 'nova' user: 'nova' host: 'us01odc-qa-ctrl2.internal.synopsys.com' (Got timeout reading communication packets) > Sep 11 14:59:57 us01odc-qa-ctrl1 mysqld[1052956]: 2019-09-11 14:59:57 8019 [Warning] Aborted connection 8019 to db: > 'glance' user: 'glance' host: 'us01odc-qa-ctrl1.internal.synopsys.com' (Got timeout reading communication packets) > Sep 11 14:59:57 us01odc-qa-ctrl1 mysqld[1052956]: 2019-09-11 14:59:57 8018 [Warning] Aborted connection 8018 to db: > 'nova_api' user: 'nova' host: 'us01odc-qa-ctrl2.internal.synopsys.com' (Got timeout reading communication packets) > Sep 11 15:00:50 us01odc-qa-ctrl1 mysqld[1052956]: 2019-09-11 15:00:50 8022 [Warning] Aborted connection 8022 to db: > 'nova_api' user: 'nova' host: 'us01odc-qa-ctrl1.internal.synopsys.com' (Got timeout reading communication packets) > > The errors come from nova, neutron, glance and keystone; it appears that all default to 3600. So it appears that, even > with wait_timeout > connection_recycle_time we still see mysql timeout errors. > > Just for fun I tried setting the MySQL wait_timeout to 86400 and restarting MySQL. I expected that this would pause > the "Aborted connection" errors for 24 hours, but they started again after an hour. So it looks like my original > assumption was incorrect. I thought nova was keeping connections open until the MySQL server timed them out, but now > it appears that something else is happening. > > Has anyone successfully stopped these MySQL error messages? could this be related to the eventlet heartbeat issue we see for rabbitmq when running the api under mod_wsgi/uwsgi? e.g. hav eyou confirmed that you wsgi serer is configure to use 1 thread and multiple processes for concurancy multiple thread in one process might have issues. > -----Original Message----- > From: Ben Nemec > Sent: Monday, September 9, 2019 9:50 AM > To: Chris Hoge ; openstack-discuss at lists.openstack.org > Subject: Re: [oslo][nova] Nova causes MySQL timeouts > > > > On 9/9/19 11:38 AM, Chris Hoge wrote: > > In my personal experience, running Nova on a four core machine without > > limiting the number of database connections will easily exhaust the > > available connections to MySQL/MariaDB. Keep in mind that the limit > > applies to every instance of a service, so if Nova starts 'm' services > > replicated for 'n' cores with 'd' possible connections you'll be up to > > ‘m x n x d' connections. It gets big fast. > > > > The default setting of '0' (that is, unlimited) does not make for a good > > first-run experience, IMO. > > We don't default to 0. We default to 5: > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_oslo.db_stein_reference_opts.html-23database.max-5Fpool-5Fsize&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=p7bBYcuhnDR_J08MWFBj8XLiRUUV8JfruAIcl0zF234&e= > > > > > > This issue comes up every few years or so, and the consensus previously > > is that 200-2000 connections is recommended based on your needs. 
Your > > database has to be configured to handle the load and looking at the > > configuration value across all your services and setting them > > consistently and appropriately is important. > > > > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_pipermail_openstack-2Ddev_2015-2DApril_061808.html&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=FGLfZK5eHj7z_xL-5DJsPgHkOt_T131ugvicMvcMDbc&e= > > > > Thanks, I did not recall that discussion. > > If I'm reading it correctly, Jay is suggesting that for MySQL we should > just disable connection pooling. As I noted earlier, I don't think we > expose the ability to do that in oslo.db (patches welcome!), but setting > max_pool_size to 1 would get you pretty close. Maybe we should add that > to the help text for the option in oslo.db? > > > > > > On Sep 6, 2019, at 7:34 AM, Ben Nemec wrote: > > > > > > Tagging with oslo as this sounds related to oslo.db. > > > > > > On 9/5/19 7:37 PM, Albert Braden wrote: > > > > After more googling it appears that max_pool_size is a maximum limit on the number of connections that can stay > > > > open, and max_overflow is a maximum limit on the number of connections that can be temporarily opened when the > > > > pool has been consumed. It looks like the defaults are 5 and 10 which would keep 5 connections open all the time > > > > and allow 10 temp. > > > > Do I need to set max_pool_size to 0 and max_overflow to the number of connections that I want to allow? Is that > > > > a reasonable and correct configuration? Intuitively that doesn't seem right, to have a pool size of 0, but if > > > > the "pool" is a group of connections that will remain open until they time out, then maybe 0 is correct? > > > > > > I don't think so. According to [0] and [1], a pool_size of 0 means unlimited. You could probably set it to 1 to > > > minimize the number of connections kept open, but then I expect you'll have overhead from having to re-open > > > connections frequently. > > > > > > It sounds like you could use a NullPool to eliminate connection pooling entirely, but I don't think we support > > > that in oslo.db. Based on the error message you're seeing, I would take a look at connection_recycle_time[2]. I > > > seem to recall seeing a comment that the recycle time needs to be shorter than any of the timeouts in the path > > > between the service and the db (so anything like haproxy or mysql itself). Shortening that, or lengthening > > > intervening timeouts, might get rid of these disconnection messages. 
> > > > > > 0: > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_oslo.db_stein_reference_opts.html-23database.max-5Fpool-5Fsize&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=p7bBYcuhnDR_J08MWFBj8XLiRUUV8JfruAIcl0zF234&e= > > > > > > 1: > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.sqlalchemy.org_en_13_core_pooling.html-23sqlalchemy.pool.QueuePool.-5F-5Finit-5F-5F&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=_EIhQyyj1gSM0PrX7de3yJr8hNi7tD8-tnfPo2VV_LU&e= > > > > > > 2: > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_oslo.db_stein_reference_opts.html-23database.connection-5Frecycle-5Ftime&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=xDnj80EQrxXwenOLgmKEaJbF3VRIylapDgqyMs81pSY&e= > > > > > > > > > > *From:* Albert Braden > > > > *Sent:* Wednesday, September 4, 2019 10:19 AM > > > > *To:* openstack-discuss at lists.openstack.org > > > > *Cc:* Gaëtan Trellu > > > > *Subject:* RE: Nova causes MySQL timeouts > > > > We’re not setting max_pool_size nor max_overflow option presently. I googled around and found this document: > > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_keystone_stein_configuration_config-2Doptions.html&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=NXcUpNTYGd6ZP-1oOUaQXsF7rHQ0mAt4e9uL8zzd0KA&e= > > > > = > > > 2Doptions.html&d=DwMGaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=3eF4Bv1HRQW6gl7 > > > > II12rTTSKj_A9_LDISS6hU0nP-R0&s=0EGWx9qW60G1cxoPFCIv_G1-iXX20jKcC5-AwlCWk8g&e=> > > > > Document says: > > > > [api_database] > > > > connection_recycle_time = 3600 (Integer) Timeout before idle SQL connections are reaped. > > > > max_overflow = None (Integer) If set, use this value for max_overflow with > > > > SQLAlchemy. > > > > max_pool_size = None (Integer) Maximum number of SQL connections to keep open > > > > in a pool. > > > > [database] > > > > connection_recycle_time = 3600 (Integer) Timeout before idle SQL connections are reaped. > > > > min_pool_size = 1 (Integer) Minimum number of SQL connections to keep > > > > open in a pool. > > > > max_overflow = 50 (Integer) If set, use this value for max_overflow > > > > with SQLAlchemy. > > > > max_pool_size = None (Integer) Maximum number of SQL connections to keep open > > > > in a pool. > > > > If min_pool_size is >0, would that cause at least 1 connection to remain open until it times out? What are the > > > > recommended values for these, to allow unused connections to close before they time out? Is “min_pool_size = 0” > > > > an acceptable setting? > > > > My settings are default: > > > > [api_database]: > > > > #connection_recycle_time = 3600 > > > > #max_overflow = > > > > #max_pool_size = > > > > [database]: > > > > #connection_recycle_time = 3600 > > > > #min_pool_size = 1 > > > > #max_overflow = 50 > > > > #max_pool_size = 5 > > > > It’s not obvious what max_overflow does. Where can I find a document that explains more about these settings? 
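Since the warnings keep coming back even after raising wait_timeout, one way to narrow this down (only a diagnostic sketch, using standard MySQL/MariaDB commands rather than anything OpenStack specific) is to compare all of the server-side timeouts and watch how old the idle OpenStack connections actually get before they are aborted:

    # list every server-side timeout, not just wait_timeout
    mysql -e "SHOW GLOBAL VARIABLES LIKE '%timeout%';"

    # see how long each idle (Sleep) connection has been sitting open
    mysql -e "SELECT id, user, host, time FROM information_schema.processlist WHERE command = 'Sleep' ORDER BY time DESC;"

If idle connections disappear long before they reach wait_timeout, some other limit is probably being hit first, for example interactive_timeout (which replaces wait_timeout for clients that negotiate an interactive session) or net_read_timeout on slow reads.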
> > > > *From:* Gaëtan Trellu > > > > > *Sent:* Tuesday, September 3, 2019 1:37 PM > > > > *To:* Albert Braden > > > > > *Cc:* openstack-discuss at lists.openstack.org > > > > *Subject:* Re: Nova causes MySQL timeouts > > > > Hi Albert, > > > > It is a configuration issue, have a look to max_pool_size and max_overflow options under [database] section. > > > > Keep in mind than more workers you will have more connections will be opened on the database. > > > > Gaetan (goldyfruit) > > > > On Sep 3, 2019 4:31 PM, Albert Braden > wrote: > > > > It looks like nova is keeping mysql connections open until they time > > > > out. How are others responding to this issue? Do you just ignore the > > > > mysql errors, or is it possible to change configuration so that nova > > > > closes and reopens connections before they time out? Or is there a > > > > way to stop mysql from logging these aborted connections without > > > > hiding real issues? > > > > Aborted connection 10726 to db: 'nova' user: 'nova' host: 'asdf' > > > > (Got timeout reading communication packets) > > > > > > From radoslaw.piliszek at gmail.com Tue Sep 17 18:03:52 2019 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 17 Sep 2019 20:03:52 +0200 Subject: [goals][IPv6-Only Deployments and Testing] Week R-4 Update In-Reply-To: <64b36128-911f-4599-9ada-4773b1957077@www.fastmail.com> References: <16d3d2203c8.b47bfe5156036.4862537349817585954@ghanshyammann.com> <64b36128-911f-4599-9ada-4773b1957077@www.fastmail.com> Message-ID: Hi Clark, no problem - Mark laid the ground already before we learnt about the Zuul roles - having learnt they had issues with ipv6 we decided to foster our approach. Remember we don't use OVS for that, so it's not entirely reinventing the wheel. So far we used multinode without the overlay network - it worked fine with IPv4 as long as there was private addressing. ;-) IPv6 has this issue that CentOS does not pick up IPv6 addresses properly - and they are public and not guaranteed anyway. wt., 17 wrz 2019 o 19:23 Clark Boylan napisał(a): > On Tue, Sep 17, 2019, at 3:12 AM, Radosław Piliszek wrote: > > Hiya, > > > > Kolla is not going to get an IPv6-only job because it builds docker > > images and is not tested regarding networking (it does not do > > devstack/tempest either). > > > > Kolla-Ansible, which does the deployment, is going to get some > > IPv6-only test jobs - https://review.opendev.org/681573 > > We are testing CentOS and multinode and hence need overlay VXLAN to > > reach sensible levels of stability there - > > https://review.opendev.org/670690 > > The VXLAN patch is probably ready, awaiting review of independent > > cores. It will be refactored out later to put it in zuul plays as it > > might be useful to other projects as well. > > The IPv6 patch needs rebasing on VXLAN and some small tweaks still. > > It is worth noting that you could test with the existing overlay network > tooling that the infra team provides. This has been proven to work over > years of multinode testing. Then we could incrementally improve it to > address some of the deficiencies you have pointed out with it. > > This was sort of what I was trying to get across on IRC. Rather than go > and reinvent the wheel to the detriment of meeting this goal on time: > instead use what is there and works. Then improve what is there over time. > > > > > Kind regards, > > Radek > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mnaser at vexxhost.com Tue Sep 17 18:23:08 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 17 Sep 2019 14:23:08 -0400 Subject: [tc] weekly update Message-ID: Hi everyone, Here’s the update for what happened in the OpenStack TC this week. You can get more information by checking for changes in openstack/governance repository. # New projects - charm-cinder-purestorage (under charm-cinder) # Retired projects - networking-generic-switch-tempest-plugin (under ironic) # General changes - Date changes, role resets and member additions to the OpenStack Governance member list: https://review.opendev.org/#/c/680356/ - Changes in the OpenStack Governance house-rules. TC members will be able to approve fast-tracked changes: https://review.opendev.org/#/c/678212/ - Update elections results: https://review.opendev.org/#/c/680507/ Thanks! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From gmann at ghanshyammann.com Tue Sep 17 18:32:44 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 17 Sep 2019 11:32:44 -0700 Subject: [goals][IPv6-Only Deployments and Testing] Week R-4 Update In-Reply-To: <8907C14B-99D2-4F38-B6DC-5D49F8413576@redhat.com> References: <16d3d2203c8.b47bfe5156036.4862537349817585954@ghanshyammann.com> <5FC0E68C-6020-416B-89CF-9D077C8726B9@redhat.com> <16d4020aea1.11749982a87432.4028522842882545383@ghanshyammann.com> <8907C14B-99D2-4F38-B6DC-5D49F8413576@redhat.com> Message-ID: <16d407f8f8e.cc6cb29889596.3955287633502787392@ghanshyammann.com> ---- On Tue, 17 Sep 2019 09:58:05 -0700 Slawek Kaplonski wrote ---- > Hi, > > > On 17 Sep 2019, at 18:49, Ghanshyam Mann wrote: > > > > ---- On Mon, 16 Sep 2019 23:28:53 -0700 Slawek Kaplonski wrote ---- > >> Hi Ghanshyam, > >> > >>> On 17 Sep 2019, at 04:51, Ghanshyam Mann wrote: > >>> > >>> Hello Everyone, > >>> > >>> Below is the progress on Ipv6 goal during R6 week. I started the legacy job for IPv6 deployment with duplicating the run.yaml which is > >>> the only best way to do. > >>> > >>> Summary: > >>> > >>> The projects still need to prepare the IPv6 job: > >>> * Ec2-Api > >>> * Freezer > >>> * Heat > >>> * Ironic > >>> * Karbor > >>> * Kolla > >>> * Kuryr > >>> * Magnum > >>> * Manila > >>> * Masakari > >>> * Mistral > >>> * Murano > >>> * Octavia > >>> * Swift > >>> > >>> The projects waiting for IPv6 job patch to merge: > >>> If patch is failing, help me to debug that otherwise review and merge. > >>> * Barbican > >>> * Blazar > >>> * Cyborg > >>> * Tricircle > >>> * Vitrage > >>> * Zaqar > >>> * Cinder > >>> * Glance > >>> * Monasca > >>> * Neutron > >> > >> I thought that Neutron is already done. Do You mean patches for some stadium projects which are still not merged? Can You give me links to such patches with failing job to make sure that I didn’t miss anything? > > > > Yeah, it is for neutron stadium projects, I am tracking them as neutron only. Few are up I think and other I need to prepare the jobs. I am doing that and let you know the link. > > Thx for confirmation :) I have prepared the jobs for all projects you listed here[1]. 
- networking-midonet: https://review.opendev.org/#/c/682707/ - networking-bgpvpn: https://review.opendev.org/#/c/682710/ - networking-bagpipe: https://review.opendev.org/#/c/682709/ - neutron-dynamic-routing: https://review.opendev.org/#/c/682700/ - networking-odl: https://review.opendev.org/#/c/673501/ - networking-ovn: https://review.opendev.org/#/c/673488/2 [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-July/008084.html -gmann > > > > > -gmann > > > >> > >>> * Qinling > >>> * Quality Assurance > >>> * Sahara > >>> * Searchlight > >>> * Senlin > >>> * Tacker > >>> > >>> The projects have merged the IPv6 jobs: > >>> * Designate > >>> * Murano > >>> * Trove > >>> * Cloudkitty > >>> * Congress > >>> * Horizon > >>> * Keystone > >>> * Nova > >>> * Placement > >>> * Solum > >>> * Telemetry > >>> * Watcher > >>> * Zun > >>> > >>> The projects do not need the IPv6 job (CLI, lib, deployment projects etc ): > >>> If anything I missed and IPv6 job need, please reply otherwise I will mark their task in storyboard as invalid. > >>> > >>> * Adjutant > >>> * Documentation > >>> * I18n > >>> * Infrastructure > >>> * Loci > >>> * Openstack Charms > >>> * Openstack-Chef > >>> * Openstack-Helm > >>> * Openstackansible > >>> * Openstackclient > >>> * Openstacksdk > >>> * Oslo > >>> * Packaging-Rpm > >>> * Powervmstackers > >>> * Puppet Openstack > >>> * Rally > >>> * Release Management > >>> * Requirements > >>> * Storlets > >>> * Tripleo > >>> * Winstackers > >>> > >>> > >>> Storyboard: > >>> ========= > >>> - https://storyboard.openstack.org/#!/story/2005477 > >>> > >>> IPv6 missing support found: > >>> ===================== > >>> 1. https://review.opendev.org/#/c/673397/ > >>> 2. https://review.opendev.org/#/c/673449/ > >>> 3. https://review.opendev.org/#/c/677524/ > >>> > >>> How you can help: > >>> ============== > >>> - Each project needs to look for and review the ipv6 job patch. > >>> - Verify it works fine on ipv6 and no ipv4 used in conf etc > >>> - Any other specific scenario needs to be added as part of project IPv6 verification. > >>> - Help on debugging and fix the bug in IPv6 job is failing. > >>> > >>> Everything related to this goal can be found under this topic: > >>> Topic: https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged) > >>> > >>> How to define and run new IPv6 Job on project side: > >>> ======================================= > >>> - I prepared a wiki page to describe this section - https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing > >>> > >>> Review suggestion: > >>> ============== > >>> - Main goal of these jobs will be whether your service is able to listen on IPv6 and can communicate to any > >>> other services either OpenStack or DB or rabbitmq etc on IPv6 or not. So check your proposed job with > >>> that point of view. If anything missing, comment on patch. > >>> - One example was - I missed to configure novnc address to IPv6- https://review.opendev.org/#/c/672493/ > >>> - base script as part of 'devstack-tempest-ipv6' will do basic checks for endpoints on IPv6 and some devstack var > >>> setting. But if your project needs more specific verification then it can be added in project side job as post-run > >>> playbooks as described in wiki page[1]. 
> >>> > >>> [1] https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing > >>> > >>> -gmann > >>> > >>> > >>> > >> > >> — > >> Slawek Kaplonski > >> Senior software engineer > >> Red Hat > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > From mnaser at vexxhost.com Tue Sep 17 18:37:26 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 17 Sep 2019 14:37:26 -0400 Subject: [ansible-sig] weekly meeting summary Message-ID: Hi everyone, On Friday, September 13th we had our first ever openstack-ansible-sig meeting and here’s a summary of what we talked about. The goal of this group would be to collaborate on points that will lower the tech-debt of all our projects. We think there are a few things we can work on like spanning roles, modules, plugins, patterns, etc. We mentioned wanting to work on the os-tempest role, the config-template, the connection plugin or mitogen if we can extract only the connection stacking parts. We’ll start working on the connection plugins and try to upstream when it’s ready. owalsh has been working on the docker plugin, so he should have his docker bits in a repository soon so we can push them into the forked connection plugin. We discussed getting Kolla into working on tempest things alongside OSA and TripleO in order to make it better. We’ll also start working on the Ansible RFE regarding connection plugins.. Thanks for tuning in! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From mjturek at linux.vnet.ibm.com Tue Sep 17 18:43:36 2019 From: mjturek at linux.vnet.ibm.com (Michael Turek) Date: Tue, 17 Sep 2019 14:43:36 -0400 Subject: [kolla] State of ppc64le support In-Reply-To: <29ed66c8-3f49-8e24-ccca-ccb73bc33374@linaro.org> References: <7f3bf5dc-3f1e-6369-4c55-eb9780f05eda@linux.vnet.ibm.com> <29ed66c8-3f49-8e24-ccca-ccb73bc33374@linaro.org> Message-ID: On 9/16/19 6:07 PM, Marcin Juszkiewicz wrote: > W dniu 16.09.2019 o 18:45, Michael Turek pisze: >> Hey all, >> >> We do use kolla. Let me see if I can shed some light here. > I have to admit that I wanted to check is there anyone using Kolla on > ppc64le. Thanks for replies. > >>>         CentOS builds lack 'rabbitmq' 3.7.10 (we use external repo) and >>>         'gnocchi' binary images are not buildable due to lack of some >>>         packages >>>         (issue already reported to CentOS by TripleO team). >>> >> We should be getting the gnocchi issue fixed. See this thread >> https://lists.centos.org/pipermail/centos-devel/2019-September/017721.html > I am in that thread ;) Indeed you are! Thanks for the reply :) > >> The rabbitmq issue is confusing me. The version provided for x86_64 >> seems to be the same one provided for ppc64le, but maybe I'm missing >> something. If there's a package we need to get published, I can >> investigate. > External repo is used and we install "rabbitmq-3.7.10". One 'if ppc64le' > check and will work by using in-centos-repo version. Thank you for the patch here! > >>>         Ubuntu builds lack MariaDB 10.3 because upstream repo is broken. >>>         Packages index is provided for 'ppc64le' but no packages so we >>>         get 404 >>>         errors. >>> >> Unfortunately I'm not well versed on the gaps in Ubuntu. > I am fine with it. No one noticed == no one uses. > >> The question I have is, what do you need to maintain support? I can join >> this week's IRC meeting if that would be helpful. 
> For me a knowledge that someone is using is enough to keep it available. > Would not call it 'maintaining support' as I do builds on ppc64le once > per cycle (if at all per cycle). Understood! > >> Also, last week mnasiadka reached out to me asking if we might be able >> to turn on kolla jobs in pkvmci (our third party CI - >> https://wiki.openstack.org/wiki/ThirdPartySystems/IBMPowerKVMCI ). I >> plan to talk to our CI folks this week to see if we have capacity for this. > Some kind of CI job would be great. Even simple 'centos/source' combo. Sounds good, I'm talking with our CI folks and will keep kolla updated. > > I have two patches adding AArch64 CI but we (Linaro) have to fix our > OpenStack cluster first. All Ceph nodes use hard drives only and > probably not configured optimally. As a result we are unable to fit in > three hours required by Zuul. > From skaplons at redhat.com Tue Sep 17 19:20:19 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 17 Sep 2019 21:20:19 +0200 Subject: [goals][IPv6-Only Deployments and Testing] Week R-4 Update In-Reply-To: <16d407f8f8e.cc6cb29889596.3955287633502787392@ghanshyammann.com> References: <16d3d2203c8.b47bfe5156036.4862537349817585954@ghanshyammann.com> <5FC0E68C-6020-416B-89CF-9D077C8726B9@redhat.com> <16d4020aea1.11749982a87432.4028522842882545383@ghanshyammann.com> <8907C14B-99D2-4F38-B6DC-5D49F8413576@redhat.com> <16d407f8f8e.cc6cb29889596.3955287633502787392@ghanshyammann.com> Message-ID: <20190917192019.GD6865@t440s> Hi, On Tue, Sep 17, 2019 at 11:32:44AM -0700, Ghanshyam Mann wrote: > ---- On Tue, 17 Sep 2019 09:58:05 -0700 Slawek Kaplonski wrote ---- > > Hi, > > > > > On 17 Sep 2019, at 18:49, Ghanshyam Mann wrote: > > > > > > ---- On Mon, 16 Sep 2019 23:28:53 -0700 Slawek Kaplonski wrote ---- > > >> Hi Ghanshyam, > > >> > > >>> On 17 Sep 2019, at 04:51, Ghanshyam Mann wrote: > > >>> > > >>> Hello Everyone, > > >>> > > >>> Below is the progress on Ipv6 goal during R6 week. I started the legacy job for IPv6 deployment with duplicating the run.yaml which is > > >>> the only best way to do. > > >>> > > >>> Summary: > > >>> > > >>> The projects still need to prepare the IPv6 job: > > >>> * Ec2-Api > > >>> * Freezer > > >>> * Heat > > >>> * Ironic > > >>> * Karbor > > >>> * Kolla > > >>> * Kuryr > > >>> * Magnum > > >>> * Manila > > >>> * Masakari > > >>> * Mistral > > >>> * Murano > > >>> * Octavia > > >>> * Swift > > >>> > > >>> The projects waiting for IPv6 job patch to merge: > > >>> If patch is failing, help me to debug that otherwise review and merge. > > >>> * Barbican > > >>> * Blazar > > >>> * Cyborg > > >>> * Tricircle > > >>> * Vitrage > > >>> * Zaqar > > >>> * Cinder > > >>> * Glance > > >>> * Monasca > > >>> * Neutron > > >> > > >> I thought that Neutron is already done. Do You mean patches for some stadium projects which are still not merged? Can You give me links to such patches with failing job to make sure that I didn’t miss anything? > > > > > > Yeah, it is for neutron stadium projects, I am tracking them as neutron only. Few are up I think and other I need to prepare the jobs. I am doing that and let you know the link. > > > > Thx for confirmation :) > > I have prepared the jobs for all projects you listed here[1]. 
> > - networking-midonet: https://review.opendev.org/#/c/682707/ > - networking-bgpvpn: https://review.opendev.org/#/c/682710/ > - networking-bagpipe: https://review.opendev.org/#/c/682709/ > - neutron-dynamic-routing: https://review.opendev.org/#/c/682700/ > - networking-odl: https://review.opendev.org/#/c/673501/ > - networking-ovn: https://review.opendev.org/#/c/673488/2 > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-July/008084.html Thx for this list. I will check all of them this week. > > -gmann > > > > > > > > > > -gmann > > > > > >> > > >>> * Qinling > > >>> * Quality Assurance > > >>> * Sahara > > >>> * Searchlight > > >>> * Senlin > > >>> * Tacker > > >>> > > >>> The projects have merged the IPv6 jobs: > > >>> * Designate > > >>> * Murano > > >>> * Trove > > >>> * Cloudkitty > > >>> * Congress > > >>> * Horizon > > >>> * Keystone > > >>> * Nova > > >>> * Placement > > >>> * Solum > > >>> * Telemetry > > >>> * Watcher > > >>> * Zun > > >>> > > >>> The projects do not need the IPv6 job (CLI, lib, deployment projects etc ): > > >>> If anything I missed and IPv6 job need, please reply otherwise I will mark their task in storyboard as invalid. > > >>> > > >>> * Adjutant > > >>> * Documentation > > >>> * I18n > > >>> * Infrastructure > > >>> * Loci > > >>> * Openstack Charms > > >>> * Openstack-Chef > > >>> * Openstack-Helm > > >>> * Openstackansible > > >>> * Openstackclient > > >>> * Openstacksdk > > >>> * Oslo > > >>> * Packaging-Rpm > > >>> * Powervmstackers > > >>> * Puppet Openstack > > >>> * Rally > > >>> * Release Management > > >>> * Requirements > > >>> * Storlets > > >>> * Tripleo > > >>> * Winstackers > > >>> > > >>> > > >>> Storyboard: > > >>> ========= > > >>> - https://storyboard.openstack.org/#!/story/2005477 > > >>> > > >>> IPv6 missing support found: > > >>> ===================== > > >>> 1. https://review.opendev.org/#/c/673397/ > > >>> 2. https://review.opendev.org/#/c/673449/ > > >>> 3. https://review.opendev.org/#/c/677524/ > > >>> > > >>> How you can help: > > >>> ============== > > >>> - Each project needs to look for and review the ipv6 job patch. > > >>> - Verify it works fine on ipv6 and no ipv4 used in conf etc > > >>> - Any other specific scenario needs to be added as part of project IPv6 verification. > > >>> - Help on debugging and fix the bug in IPv6 job is failing. > > >>> > > >>> Everything related to this goal can be found under this topic: > > >>> Topic: https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged) > > >>> > > >>> How to define and run new IPv6 Job on project side: > > >>> ======================================= > > >>> - I prepared a wiki page to describe this section - https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing > > >>> > > >>> Review suggestion: > > >>> ============== > > >>> - Main goal of these jobs will be whether your service is able to listen on IPv6 and can communicate to any > > >>> other services either OpenStack or DB or rabbitmq etc on IPv6 or not. So check your proposed job with > > >>> that point of view. If anything missing, comment on patch. > > >>> - One example was - I missed to configure novnc address to IPv6- https://review.opendev.org/#/c/672493/ > > >>> - base script as part of 'devstack-tempest-ipv6' will do basic checks for endpoints on IPv6 and some devstack var > > >>> setting. 
But if your project needs more specific verification then it can be added in project side job as post-run > > >>> playbooks as described in wiki page[1]. > > >>> > > >>> [1] https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing > > >>> > > >>> -gmann > > >>> > > >>> > > >>> > > >> > > >> — > > >> Slawek Kaplonski > > >> Senior software engineer > > >> Red Hat > > > > — > > Slawek Kaplonski > > Senior software engineer > > Red Hat > > > > > > > -- Slawek Kaplonski Senior software engineer Red Hat From feilong at catalyst.net.nz Tue Sep 17 19:36:40 2019 From: feilong at catalyst.net.nz (feilong) Date: Wed, 18 Sep 2019 07:36:40 +1200 Subject: [magnum] New weekly meeting time Message-ID: Hi team, As we discussed on IRC, the new Magnum weekly meeting time will be each Wednesday 9:00AM UTC on #openstack-containers channel. -- Cheers & Best regards, Feilong Wang (王飞龙) ------------------------------------------------------ Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington ------------------------------------------------------ From mnaser at vexxhost.com Tue Sep 17 20:23:39 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 17 Sep 2019 16:23:39 -0400 Subject: [openstack-ansible] office hours Message-ID: Hi everyone, Here’s the update of what happened in this week’s OpenStack Ansible Office Hours. The upgrade jobs are going well and passing. CentOD is taking longer but we can reduce the time it takes if we skip tempest tests after Stein deployment (by changing the order of passed arguments to the ansible-playbook). Milestone was updated but we’re waiting on the Keystone bug to get resolved before merging it. We’ll have to work on Tempest in order to get that fixed with the proper testing. For Swift, we talked about editing our job templates to remove testing as we transition to Python 3. We’ll propose a patch and a revert at the same time to not forget about fixing it later. Neutron and Calico are having problems with Python3 and we discussed going through distro variables to make sure Ubuntu is using Python3 everywhere. Bind-to-mgmt needs a little more work until we can land it. We need to cut back on variables because right now there are too many. We also want to add a global variable in each role corresponding to the mgmt-addr. We think we’re getting closer to knowing what is breaking with CentOS 7.7. As for Galera, we abandoned the patch and replaced the bind addrs but we still need to figure out why it’s still breaking on the integrated repository. Thanks! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From whayutin at redhat.com Tue Sep 17 20:24:29 2019 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 17 Sep 2019 14:24:29 -0600 Subject: [tripleo][ci] gate jobs killed / reset Message-ID: Greetings, The zuul jobs in the TripleO gate queue were put out of their misery approximately at 20:14 UTC Sept 17 2019. The TripleO jobs were timing out [1] and causing the gate queue to be delayed about 24 hours [2]. We are hoping a revert [3] will restore TripleO jobs back to their usual run times. 
Please hold off on any rechecks or workflowing patches until [3] is merged and the status on #tripleo is no longer "RED" We appreciate your patience while we work through this issue, the jobs that were in the gate will be restored once we have confirmed and verified the solution. Thank you [1] https://bugs.launchpad.net/tripleo/+bug/1844446 [2] http://dashboard-ci.tripleo.org/d/YRJtmtNWk/cockpit?orgId=1&fullscreen&panelId=398 [3] https://review.opendev.org/#/c/682729/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Sep 17 20:47:50 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 17 Sep 2019 13:47:50 -0700 Subject: [goals][IPv6-Only Deployments and Testing] Week R-4 Update-2 Message-ID: <16d40fb3d1c.b6af782691765.7854634751581180193@ghanshyammann.com> Hello Everyone, Below is the latest updated on IPv6 goal. All the projects have the patch proposed now. Next step is to review then as per mentioned guidelines below or help in debugging the failure if any. Summary: The projects still need to prepare the IPv6 job: * None The projects waiting for IPv6 job patch to merge: If patch is failing, help me to debug that otherwise review and merge. * Barbican * Blazar * Cyborg * Tricircle * Vitrage * Zaqar * Cinder * Glance * Monasca * Neutron * Qinling * Sahara * Searchlight * Senlin * Tacker * Ec2-Api * Freezer * Heat * Ironic * Karbor * Kuryr * Magnum * Manila * Masakari * Mistral * Murano * Octavia (johnsom is working on this and will take over the base patch) * Swift The projects have merged the IPv6 jobs: * Designate * Murano * Trove * Cloudkitty * Congress * Horizon * Keystone * Nova * Placement * Solum * Telemetry * Watcher * Zun The projects do not need the IPv6 job (CLI, lib, deployment projects etc ): I have marked the tasks for below project as invalid. * Adjutant * Documentation * I18n * Infrastructure * Kolla * Loci * Openstack Charms * Openstack-Chef * Openstack-Helm * Openstackansible * Openstackclient * Openstacksdk * Oslo * Packaging-Rpm * Powervmstackers * Puppet Openstack * Rally * Release Management * Requirements * Storlets * Tripleo * Winstackers Storyboard: ========= - https://storyboard.openstack.org/#!/story/2005477 IPv6 missing support found: ===================== 1. https://review.opendev.org/#/c/673397/ 2. https://review.opendev.org/#/c/673449/ 3. https://review.opendev.org/#/c/677524/ How you can help: ============== - Each project needs to look for and review the ipv6 job patch. - Verify it works fine on ipv6 and no ipv4 used in conf etc - Any other specific scenario needs to be added as part of project IPv6 verification. - Help on debugging and fix the bug in IPv6 job is failing. Everything related to this goal can be found under this topic: Topic: https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged) How to define and run new IPv6 Job on project side: ======================================= - I prepared a wiki page to describe this section - https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing Review suggestion: ============== - Main goal of these jobs will be whether your service is able to listen on IPv6 and can communicate to any other services either OpenStack or DB or rabbitmq etc on IPv6 or not. So check your proposed job with that point of view. If anything missing, comment on patch. 
- One example was - I missed to configure novnc address to IPv6- https://review.opendev.org/#/c/672493/ - base script as part of 'devstack-tempest-ipv6' will do basic checks for endpoints on IPv6 and some devstack var setting. But if your project needs more specific verification then it can be added in project side job as post-run playbooks as described in wiki page[1]. [1] https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing From ken1ohmichi at gmail.com Tue Sep 17 22:35:15 2019 From: ken1ohmichi at gmail.com (Kenichi Omichi) Date: Tue, 17 Sep 2019 15:35:15 -0700 Subject: [all][interop][cinder][qa] API changes with/without microversion and Tempest verification of API interoperability In-Reply-To: References: <16d397e41f7.12873dbb126838.8168349135797367489@ghanshyammann.com> <16d398641dd.ee09dee227347.1935004124034589982@ghanshyammann.com> <16d3c861117.d3b1337055686.8802713726745370694@ghanshyammann.com> Message-ID: 2019年9月17日(火) 6:23 Eric Harney : > On 9/16/19 8:01 PM, Ghanshyam Mann wrote: > > ---- On Tue, 17 Sep 2019 02:40:36 +0900 Eric Harney < > eharney at redhat.com> wrote ---- > > > On 9/16/19 6:02 AM, Ghanshyam Mann wrote: > > > > ---- On Mon, 16 Sep 2019 18:53:58 +0900 Ghanshyam Mann < > gmann at ghanshyammann.com> wrote ---- > > > > > Hello Everyone, > > > > > > > > > > As per discussion over ML, Tempest started the JSON schema > strict validation for Volume APIs response [1]. > > > > > Because it may affect the interop certification, it was > explained to the Interop team as well as in the Board of Director > meeting[2]. > > > > > > > > > > In Train, Tempest started implementing the validation and > found an API change where the new field was added in API response without > versioning[3] (Cinder has API microversion mechanism). IMO, that was not > the correct way to change the API and as per API-WG guidelines[4] any field > added/modified/removed in API should be with microverison(means old > versions/user should not be affected by that change) and must for API > interoperability. > > > > > > > > > > With JSON schema validation, Tempest verifies the API > interoperability recommended behaviour by API-WG. But as per IRC > conversion with cinder team, we have different opinion on API > interoperability and how API should be changed under microversion > mechanism. I would like to have a conclusion on this so that Tempest can > verify or leave the Volume API for strict validation. > > > > > > > > I found the same flow chart what Sean created in Nova about "when > to bump microverison" in Cinder also which clearly say any addition to > response need new microversion. > > > > - > https://docs.openstack.org/cinder/latest/contributor/api_microversion_dev.html > > > > > > > > -gmann > > > > > > > > > > I don't believe that it is clear that a microversion bump was > required > > > for the "groups" response showing up in a GET quota-sets response, > and > > > here's why: > > > > > > This API has, since at least Havana, returned dynamic fields based on > > > quotas that are assigned to volume types. 
i.e.: > > > > > > $ cinder --debug quota-show b73b1b7e82a247038cd01a441ec5a806 > > > DEBUG:keystoneauth:RESP BODY: {"quota_set": {"per_volume_gigabytes": > -1, > > > "volumes_ceph": -1, "groups": 10, "gigabytes": 1000, > "backup_gigabytes": > > > 1000, "snapshots": 10, "volumes_enc": -1, "snapshots_enc": -1, > > > "snapshots_ceph": -1, "gigabytes_ceph": -1, "volumes": 10, > > > "gigabytes_enc": -1, "backups": 10, "id": > > > "b73b1b7e82a247038cd01a441ec5a806"}} > > > > > > "gigabytes_ceph" is in that response because there's a "ceph" volume > > > type defined, same for "gigabytes_enc", etc. > > > > > > This puts this API alongside something more like listing volume > types -- > > > you get a list of what's defined on the deployment, not a pre-baked > list > > > of defined fields. > > > > > > Complaints about the fact that "groups" being added without a > > > microversion imply that these other dynamic fields shouldn't be in > this > > > response either -- but this is how this API works. > > > > > > There's a lot of talk here about interoperability problems... what > are > > > those problems, exactly? If we ignore Ocata and just look at Train > -- > > > why is this API not problematic for interoperability there, when > > > requests on different clouds would return different data, depending > on > > > how types are configured? > > > > > > It's not clear to me that rectifying the microversion concerns around > > > the "groups" field is helpful without also understanding this piece, > > > because if the concern is that different clouds return different > fields > > > for this API -- that will still happen. We need more detail to > > > understand how to address this, and what the problem is that we are > > > trying to solve exactly. > > > > There are two things here. > > 1. API behaviour depends on backend. This has been discussed two years > back also and Tempest team along with cinder team decided not to test the > backend-specific behaviour in Tempest[1]. > > This is wrong. Nothing about what is happening in this API is > backend-specific. > > > 2. API is changed without versioning. > > > > The second one is the issue here. If any API is changed without > versioning cause the interoperability issue here. New field is being added > for older microversion also for same backend. > > > > If the concern is that different fields can be returned as part of quota > info, it's worth understanding that fixing the Ocata tempest failures > won't fix your concern, because this API still returns dynamic fields > when the deployment is using per-type quotas, even on master. > > Are those considered "changes"? Need concrete details here. > It is not difficult to answer this question. The Cinder official API document[1] says the field "groups" is always returned(not optional) in the API response. IIUC the above dynamic fields are not written on the document, right? In addition, Cinder implements microversions which clearly controls the addition of API field as [2]. But actually the field "groups" has been added without bumping a microversion against the above [2]. Then Ghanshyam raises the interoperability concern. What is the interoperability concern? The concern is if writing an application based on the API official document, the application can work on newer clouds(Pike+ in this case) but it cannot work on older clouds(Ocata-). Actually we can consider Tempest is one of applications by consuming OpenStack APIs and we implemented JSON-Schema validation based on the API official document. 
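For illustration, the strict check Tempest applies can be sketched with the jsonschema library (a simplified, abbreviated sketch written for this explanation, not the actual Tempest schema; only a few of the documented quota fields are listed):

    # Simplified sketch of strict response validation, not the real
    # Tempest schema; the documented field list is abbreviated here.
    import jsonschema

    quota_set_schema = {
        "type": "object",
        "properties": {
            "quota_set": {
                "type": "object",
                "properties": {
                    "id": {"type": "string"},
                    "volumes": {"type": "integer"},
                    "snapshots": {"type": "integer"},
                    "gigabytes": {"type": "integer"},
                    "backups": {"type": "integer"},
                    "backup_gigabytes": {"type": "integer"},
                    "per_volume_gigabytes": {"type": "integer"},
                },
                # Strict validation: any attribute the documented API does
                # not list for this microversion is rejected.
                "additionalProperties": False,
                "required": ["id", "volumes", "gigabytes"],
            },
        },
        "required": ["quota_set"],
    }

    response_body = {"quota_set": {"id": "b73b1b7e82a247038cd01a441ec5a806",
                                   "volumes": 10, "gigabytes": 1000,
                                   "groups": 10}}

    # Raises jsonschema.exceptions.ValidationError, because "groups" is
    # not part of the documented (older) response body.
    jsonschema.validate(response_body, quota_set_schema)

Note that a schema written this way would also reject the dynamic per-volume-type fields such as "gigabytes_ceph" unless they were explicitly allowed for.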
Then Tempest could not work on older clouds(stable/ocata) and it is clearly an interoperability issue. (We Tempest reviewers read the API document carefully for reviewing JSON-Schema patches, then we found Cinder document issues and fixed them :-) BTW IMO backwards compatibility is more important than interoperability. So we should not remove the field "groups" from the base microversion anyways. Thanks Kenichi Omichi --- [1]: https://docs.openstack.org/api-ref/block-storage/v3/index.html?expanded=show-quotas-for-a-project-detail#show-quotas-for-a-project [2]: https://docs.openstack.org/cinder/latest/contributor/api_microversion_dev.html#when-do-i-need-a-new-microversion (the part of " the list of attributes and data structures returned") -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Tue Sep 17 22:40:03 2019 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 17 Sep 2019 18:40:03 -0400 Subject: [tripleo][ci] gate jobs killed / reset In-Reply-To: References: Message-ID: Note that I also cleared the check for tripleo projects to accelerate the testing of our potential fixes. Hopefully we can resolve the situation really soon. On Tue, Sep 17, 2019 at 4:29 PM Wesley Hayutin wrote: > Greetings, > > The zuul jobs in the TripleO gate queue were put out of their misery > approximately at 20:14 UTC Sept 17 2019. The TripleO jobs were timing out > [1] and causing the gate queue to be delayed about 24 hours [2]. > > We are hoping a revert [3] will restore TripleO jobs back to their usual > run times. Please hold off on any rechecks or workflowing patches until > [3] is merged and the status on #tripleo is no longer "RED" > > We appreciate your patience while we work through this issue, the jobs > that were in the gate will be restored once we have confirmed and verified > the solution. > > Thank you > > > [1] https://bugs.launchpad.net/tripleo/+bug/1844446 > [2] > http://dashboard-ci.tripleo.org/d/YRJtmtNWk/cockpit?orgId=1&fullscreen&panelId=398 > [3] https://review.opendev.org/#/c/682729/ > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From logan at protiumit.com Tue Sep 17 22:47:39 2019 From: logan at protiumit.com (Logan V.) Date: Tue, 17 Sep 2019 17:47:39 -0500 Subject: [openstack-ansible] office hours In-Reply-To: References: Message-ID: I pinged the Calico team on their Slack, but have not heard back yet. However, it looks like Neil has been working on getting the py3 issues resolved in https://review.opendev.org/#/c/682338/, so hopefully that will get ironed out soon! (Thanks Neil!) If it is blocking patches in os_neutron, I think we should set the calico job to non-voting temporarily until a fix is merged to networking-calico. Thanks, Logan On Tue, Sep 17, 2019 at 3:25 PM Mohammed Naser wrote: > Neutron and Calico are having problems with Python3 and we discussed > going through distro variables to make sure Ubuntu is using Python3 > everywhere. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Wed Sep 18 01:44:35 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Wed, 18 Sep 2019 10:44:35 +0900 Subject: [i18n][mogan][neutron][swift][tc][valet] Cleaning up IRC logging for defunct channels In-Reply-To: References: <20190916221822.o5diqcqzgyvqevi4@yuggoth.org> <3003512C-E1CD-4BDE-B173-D4FF2DA0FC6B@redhat.com> Message-ID: Thanks for the feedback. 
I proposed a patch to stop gerrit notifications to #openstack-fwaas, #openstack-vpnaas and #networking-sfc based on this thread. https://review.opendev.org/682771 On Tue, Sep 17, 2019 at 9:45 PM Bernard Cafarelli wrote: > > On Tue, 17 Sep 2019 at 08:28, Slawek Kaplonski wrote: >> >> Hi, >> >> > On 17 Sep 2019, at 07:11, Akihiro Motoki wrote: >> > >> > On Tue, Sep 17, 2019 at 7:20 AM Jeremy Stanley wrote: >> >> >> >> Freenode imposes a hard limit of 120 simultaneously joined channels >> >> for any single account. We've once again reached that limit with our >> >> channel-logging meetbot. As a quick measure, I've proposed a bit of >> >> cleanup: https://review.opendev.org/682500 >> >> >> >> Analysis of IRC channel logs indicates the following have seen 5 or >> >> fewer non-bot comments posted in the past 12 months and are likely >> >> of no value to continue logging: >> >> >> >> 5 #openstack-vpnaas >> > >> > I would like to add the following channels to this list in addition to >> > #openstack-vpnaas. >> > This is what I think recently but I haven't discussed it yet with the team. >> > >> > - openstack-fwaas >> > - networking-sfc >> >> Ha, I didn’t even know that such channels exists. And from what I can say, if there are any topics related to such stadium projects, we are discussing them on #openstack-neutron channel usually. >> IMHO we can remove them too. > > Yes #networking-sfc was created almost 3 years ago when activity was higher, it was also used at some point for IRC meetings. These meetings have stopped and the channel is really quiet now. > So it sounds like a good time to formalize the folding back in neutron chan >> >> >> > >> > I see only 5~20 members in these channels constantly. >> > Developments in FWaaS and SFC are not so active, so I don't see a good >> > reason to have a separate channel. >> > They can be merged into the main neutron channel #openstack-neutron. >> > >> > Is there any guideline on how to guide users to migrate a channel to >> > another channel? >> > >> > Thanks, >> > Akihiro >> > >> > >> >> 2 #swift3 >> >> 2 #openstack-ko >> >> 1 #openstack-deployment >> >> 1 #midonet >> >> 0 #openstack-valet >> >> 0 #openstack-swg >> >> 0 #openstack-mogan >> >> >> >> Please let me know either here on the ML or with a comment on the >> >> review linked above if you have a reason to continue logging any of >> >> these channels. I'd like to merge it later this week if possible. >> >> Thanks! >> >> -- >> >> Jeremy Stanley >> > >> >> — >> Slawek Kaplonski >> Senior software engineer >> Red Hat >> >> > > > -- > Bernard Cafarelli From katonalala at gmail.com Wed Sep 18 06:02:20 2019 From: katonalala at gmail.com (Lajos Katona) Date: Wed, 18 Sep 2019 08:02:20 +0200 Subject: [goals][IPv6-Only Deployments and Testing] Week R-4 Update In-Reply-To: <5FC0E68C-6020-416B-89CF-9D077C8726B9@redhat.com> References: <16d3d2203c8.b47bfe5156036.4862537349817585954@ghanshyammann.com> <5FC0E68C-6020-416B-89CF-9D077C8726B9@redhat.com> Message-ID: Hi, For networking-odl the patch (https://review.opendev.org/673501) is still waiting for other things to be merged. We are struggling to make tempest working with the latest ODL releases, and passing again with networking-odl and ODL. Regards Lajos Slawek Kaplonski ezt írta (időpont: 2019. szept. 17., K, 8:31): > > Hi Ghanshyam, > > > On 17 Sep 2019, at 04:51, Ghanshyam Mann wrote: > > > > Hello Everyone, > > > > Below is the progress on Ipv6 goal during R6 week. 
I started the legacy job for IPv6 deployment with duplicating the run.yaml which is > > the only best way to do. > > > > Summary: > > > > The projects still need to prepare the IPv6 job: > > * Ec2-Api > > * Freezer > > * Heat > > * Ironic > > * Karbor > > * Kolla > > * Kuryr > > * Magnum > > * Manila > > * Masakari > > * Mistral > > * Murano > > * Octavia > > * Swift > > > > The projects waiting for IPv6 job patch to merge: > > If patch is failing, help me to debug that otherwise review and merge. > > * Barbican > > * Blazar > > * Cyborg > > * Tricircle > > * Vitrage > > * Zaqar > > * Cinder > > * Glance > > * Monasca > > * Neutron > > I thought that Neutron is already done. Do You mean patches for some stadium projects which are still not merged? Can You give me links to such patches with failing job to make sure that I didn’t miss anything? > > > * Qinling > > * Quality Assurance > > * Sahara > > * Searchlight > > * Senlin > > * Tacker > > > > The projects have merged the IPv6 jobs: > > * Designate > > * Murano > > * Trove > > * Cloudkitty > > * Congress > > * Horizon > > * Keystone > > * Nova > > * Placement > > * Solum > > * Telemetry > > * Watcher > > * Zun > > > > The projects do not need the IPv6 job (CLI, lib, deployment projects etc ): > > If anything I missed and IPv6 job need, please reply otherwise I will mark their task in storyboard as invalid. > > > > * Adjutant > > * Documentation > > * I18n > > * Infrastructure > > * Loci > > * Openstack Charms > > * Openstack-Chef > > * Openstack-Helm > > * Openstackansible > > * Openstackclient > > * Openstacksdk > > * Oslo > > * Packaging-Rpm > > * Powervmstackers > > * Puppet Openstack > > * Rally > > * Release Management > > * Requirements > > * Storlets > > * Tripleo > > * Winstackers > > > > > > Storyboard: > > ========= > > - https://storyboard.openstack.org/#!/story/2005477 > > > > IPv6 missing support found: > > ===================== > > 1. https://review.opendev.org/#/c/673397/ > > 2. https://review.opendev.org/#/c/673449/ > > 3. https://review.opendev.org/#/c/677524/ > > > > How you can help: > > ============== > > - Each project needs to look for and review the ipv6 job patch. > > - Verify it works fine on ipv6 and no ipv4 used in conf etc > > - Any other specific scenario needs to be added as part of project IPv6 verification. > > - Help on debugging and fix the bug in IPv6 job is failing. > > > > Everything related to this goal can be found under this topic: > > Topic: https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged) > > > > How to define and run new IPv6 Job on project side: > > ======================================= > > - I prepared a wiki page to describe this section - https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing > > > > Review suggestion: > > ============== > > - Main goal of these jobs will be whether your service is able to listen on IPv6 and can communicate to any > > other services either OpenStack or DB or rabbitmq etc on IPv6 or not. So check your proposed job with > > that point of view. If anything missing, comment on patch. > > - One example was - I missed to configure novnc address to IPv6- https://review.opendev.org/#/c/672493/ > > - base script as part of 'devstack-tempest-ipv6' will do basic checks for endpoints on IPv6 and some devstack var > > setting. But if your project needs more specific verification then it can be added in project side job as post-run > > playbooks as described in wiki page[1]. 
> > > > [1] https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing > > > > -gmann > > > > > > > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > From aj at suse.com Wed Sep 18 06:17:52 2019 From: aj at suse.com (Andreas Jaeger) Date: Wed, 18 Sep 2019 08:17:52 +0200 Subject: [i18n][mogan][neutron][swift][tc][valet] Cleaning up IRC logging for defunct channels In-Reply-To: References: <20190916221822.o5diqcqzgyvqevi4@yuggoth.org> <3003512C-E1CD-4BDE-B173-D4FF2DA0FC6B@redhat.com> Message-ID: On 18/09/2019 03.44, Akihiro Motoki wrote: > Thanks for the feedback. > I proposed a patch to stop gerrit notifications to #openstack-fwaas, > #openstack-vpnaas and #networking-sfc based on this thread. > https://review.opendev.org/682771 Do you want to retire the channels completely? Then remove from logging - a change to opendev/system-config - and remove accessbot as well. Then, you can follow https://docs.openstack.org/infra/system-config/irc.html#renaming-an-irc-channel Andreas -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg GF: Felix Imendörffer; HRB 247165 (AG München) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From amotoki at gmail.com Wed Sep 18 08:00:13 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Wed, 18 Sep 2019 17:00:13 +0900 Subject: [i18n][mogan][neutron][swift][tc][valet] Cleaning up IRC logging for defunct channels In-Reply-To: References: <20190916221822.o5diqcqzgyvqevi4@yuggoth.org> <3003512C-E1CD-4BDE-B173-D4FF2DA0FC6B@redhat.com> Message-ID: On Wed, Sep 18, 2019 at 3:17 PM Andreas Jaeger wrote: > > On 18/09/2019 03.44, Akihiro Motoki wrote: > > Thanks for the feedback. > > I proposed a patch to stop gerrit notifications to #openstack-fwaas, > > #openstack-vpnaas and #networking-sfc based on this thread. > > https://review.opendev.org/682771 > > Do you want to retire the channels completely? > > Then remove from logging - a change to opendev/system-config - and > remove accessbot as well. > > Then, you can follow > https://docs.openstack.org/infra/system-config/irc.html#renaming-an-irc-channel Yes, we can retire these channels. My understanding on the step of this retirement is: - the first step is to stop notifications and my patch will address it. - Regarding the logging and accessbot, I think we can cover it by Jeremy's change. (I think this is what you mentioned on opendev/system-config in my patch) - The renaming process can be done only by the infra root, so what I can do is to ask it to the infra root. All logins to these three channels can be redirected to #openstack-neutron channel. I don't think we need to rush it. I just try to move this forward gradually. Akihiro Motoki (amotoki) > > Andreas > -- > Andreas Jaeger aj at suse.com Twitter: jaegerandi > SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg > GF: Felix Imendörffer; HRB 247165 (AG München) > GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From hberaud at redhat.com Wed Sep 18 08:05:07 2019 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 18 Sep 2019 10:05:07 +0200 Subject: [oslo][nova] Nova causes MySQL timeouts In-Reply-To: References: <02fa1644-34a1-0fdf-9048-a668ae86de76@nemebean.com> <15ed8e56b8c8eaa3d44e1364d67b7f8f72f46728.camel@redhat.com> Message-ID: Le mar. 17 sept. 2019 à 19:55, Albert Braden a écrit : > I had not heard about the eventlet heartbeat issue. Where can I read more > about it? 
> Under apache and mod_wsgi eventlet green thread doesn't work properly. Nova faced this issue few months ago through the use of oslo.messaging and especially through the heartbeat's rabbitmq driver. The heartbeat was runned by using a green thread under apache and mod_wsgi, so after few secondes/minutes the heartbeat thread became idle and so the connection with the rabbitmq server was closed and re-opened etc... Hence, that introduced a lot of connections opened and closed between the client and the server. You can find more discuss about there: - http://lists.openstack.org/pipermail/openstack-discuss/2019-May/005822.html And the oslo.messaging fix related to this issue : - https://github.com/openstack/oslo.messaging/commit/22f240b82fffbd62be8568a7d0d3369134596ace > > The [wsgi] section of my nova.conf is default; nothing is uncommented. > > -----Original Message----- > From: Sean Mooney > Sent: Tuesday, September 17, 2019 9:50 AM > To: Albert Braden ; > openstack-discuss at lists.openstack.org > Cc: Ben Nemec ; Chris Hoge > Subject: Re: [oslo][nova] Nova causes MySQL timeouts > > On Tue, 2019-09-17 at 16:36 +0000, Albert Braden wrote: > > I thought I had figured out that the solution was to increase the MySQL > wait_timeout so that it is longer than the > > nova (and glance, neutron, etc.) connection_recycle_time (3600). I > increased my MySQL wait_timeout to 6000: > > > > root at us01odc-qa-ctrl1:~# mysqladmin variables|grep wait_timeout|grep -v > _wait > > > wait_timeout | 6000 > > > > But I still see the MySQL errors. There's no LB; we are pointing to a > single MySQL host. > > > > Sep 11 14:59:56 us01odc-qa-ctrl1 mysqld[1052956]: 2019-09-11 14:59:56 > 8016 [Warning] Aborted connection 8016 to db: > > 'nova' user: 'nova' host: 'us01odc-qa-ctrl2.internal.synopsys.com' (Got > timeout reading communication packets) > > Sep 11 14:59:57 us01odc-qa-ctrl1 mysqld[1052956]: 2019-09-11 14:59:57 > 8019 [Warning] Aborted connection 8019 to db: > > 'glance' user: 'glance' host: 'us01odc-qa-ctrl1.internal.synopsys.com' > (Got timeout reading communication packets) > > Sep 11 14:59:57 us01odc-qa-ctrl1 mysqld[1052956]: 2019-09-11 14:59:57 > 8018 [Warning] Aborted connection 8018 to db: > > 'nova_api' user: 'nova' host: 'us01odc-qa-ctrl2.internal.synopsys.com' > (Got timeout reading communication packets) > > Sep 11 15:00:50 us01odc-qa-ctrl1 mysqld[1052956]: 2019-09-11 15:00:50 > 8022 [Warning] Aborted connection 8022 to db: > > 'nova_api' user: 'nova' host: 'us01odc-qa-ctrl1.internal.synopsys.com' > (Got timeout reading communication packets) > > > > The errors come from nova, neutron, glance and keystone; it appears that > all default to 3600. So it appears that, even > > with wait_timeout > connection_recycle_time we still see mysql timeout > errors. > > > > Just for fun I tried setting the MySQL wait_timeout to 86400 and > restarting MySQL. I expected that this would pause > > the "Aborted connection" errors for 24 hours, but they started again > after an hour. So it looks like my original > > assumption was incorrect. I thought nova was keeping connections open > until the MySQL server timed them out, but now > > it appears that something else is happening. > > > > Has anyone successfully stopped these MySQL error messages? > > could this be related to the eventlet heartbeat issue we see for rabbitmq > when running the api under mod_wsgi/uwsgi? > > e.g. 
hav eyou confirmed that you wsgi serer is configure to use 1 thread > and multiple processes for concurancy > multiple thread in one process might have issues. > > -----Original Message----- > > From: Ben Nemec > > Sent: Monday, September 9, 2019 9:50 AM > > To: Chris Hoge ; > openstack-discuss at lists.openstack.org > > Subject: Re: [oslo][nova] Nova causes MySQL timeouts > > > > > > > > On 9/9/19 11:38 AM, Chris Hoge wrote: > > > In my personal experience, running Nova on a four core machine without > > > limiting the number of database connections will easily exhaust the > > > available connections to MySQL/MariaDB. Keep in mind that the limit > > > applies to every instance of a service, so if Nova starts 'm' services > > > replicated for 'n' cores with 'd' possible connections you'll be up to > > > ‘m x n x d' connections. It gets big fast. > > > > > > The default setting of '0' (that is, unlimited) does not make for a > good > > > first-run experience, IMO. > > > > We don't default to 0. We default to 5: > > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_oslo.db_stein_reference_opts.html-23database.max-5Fpool-5Fsize&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=p7bBYcuhnDR_J08MWFBj8XLiRUUV8JfruAIcl0zF234&e= > > > > > > > > > > This issue comes up every few years or so, and the consensus previously > > > is that 200-2000 connections is recommended based on your needs. Your > > > database has to be configured to handle the load and looking at the > > > configuration value across all your services and setting them > > > consistently and appropriately is important. > > > > > > > > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_pipermail_openstack-2Ddev_2015-2DApril_061808.html&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=FGLfZK5eHj7z_xL-5DJsPgHkOt_T131ugvicMvcMDbc&e= > > > > > > > Thanks, I did not recall that discussion. > > > > If I'm reading it correctly, Jay is suggesting that for MySQL we should > > just disable connection pooling. As I noted earlier, I don't think we > > expose the ability to do that in oslo.db (patches welcome!), but setting > > max_pool_size to 1 would get you pretty close. Maybe we should add that > > to the help text for the option in oslo.db? > > > > > > > > > On Sep 6, 2019, at 7:34 AM, Ben Nemec > wrote: > > > > > > > > Tagging with oslo as this sounds related to oslo.db. > > > > > > > > On 9/5/19 7:37 PM, Albert Braden wrote: > > > > > After more googling it appears that max_pool_size is a maximum > limit on the number of connections that can stay > > > > > open, and max_overflow is a maximum limit on the number of > connections that can be temporarily opened when the > > > > > pool has been consumed. It looks like the defaults are 5 and 10 > which would keep 5 connections open all the time > > > > > and allow 10 temp. > > > > > Do I need to set max_pool_size to 0 and max_overflow to the number > of connections that I want to allow? Is that > > > > > a reasonable and correct configuration? Intuitively that doesn't > seem right, to have a pool size of 0, but if > > > > > the "pool" is a group of connections that will remain open until > they time out, then maybe 0 is correct? > > > > > > > > I don't think so. According to [0] and [1], a pool_size of 0 means > unlimited. 
You could probably set it to 1 to > > > > minimize the number of connections kept open, but then I expect > you'll have overhead from having to re-open > > > > connections frequently. > > > > > > > > It sounds like you could use a NullPool to eliminate connection > pooling entirely, but I don't think we support > > > > that in oslo.db. Based on the error message you're seeing, I would > take a look at connection_recycle_time[2]. I > > > > seem to recall seeing a comment that the recycle time needs to be > shorter than any of the timeouts in the path > > > > between the service and the db (so anything like haproxy or mysql > itself). Shortening that, or lengthening > > > > intervening timeouts, might get rid of these disconnection messages. > > > > > > > > 0: > > > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_oslo.db_stein_reference_opts.html-23database.max-5Fpool-5Fsize&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=p7bBYcuhnDR_J08MWFBj8XLiRUUV8JfruAIcl0zF234&e= > > > > > > > > 1: > > > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.sqlalchemy.org_en_13_core_pooling.html-23sqlalchemy.pool.QueuePool.-5F-5Finit-5F-5F&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=_EIhQyyj1gSM0PrX7de3yJr8hNi7tD8-tnfPo2VV_LU&e= > > > > > > > > 2: > > > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_oslo.db_stein_reference_opts.html-23database.connection-5Frecycle-5Ftime&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=xDnj80EQrxXwenOLgmKEaJbF3VRIylapDgqyMs81pSY&e= > > > > > > > > > > > > > *From:* Albert Braden > > > > > *Sent:* Wednesday, September 4, 2019 10:19 AM > > > > > *To:* openstack-discuss at lists.openstack.org > > > > > *Cc:* Gaëtan Trellu > > > > > *Subject:* RE: Nova causes MySQL timeouts > > > > > We’re not setting max_pool_size nor max_overflow option presently. > I googled around and found this document: > > > > > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_keystone_stein_configuration_config-2Doptions.html&d=DwIDaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=W7apBhYbgfvGgB46HWLe-By9d_MYg6RB_eU3C2mARRY&s=NXcUpNTYGd6ZP-1oOUaQXsF7rHQ0mAt4e9uL8zzd0KA&e= > > > > > = < > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_keystone_stein_configuration_config- > > > > > > 2Doptions.html&d=DwMGaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=3eF4Bv1HRQW6gl7 > > > > > > II12rTTSKj_A9_LDISS6hU0nP-R0&s=0EGWx9qW60G1cxoPFCIv_G1-iXX20jKcC5-AwlCWk8g&e=> > > > > > Document says: > > > > > [api_database] > > > > > connection_recycle_time = 3600 (Integer) Timeout > before idle SQL connections are reaped. > > > > > max_overflow = None (Integer) If > set, use this value for max_overflow with > > > > > SQLAlchemy. > > > > > max_pool_size = None (Integer) > Maximum number of SQL connections to keep open > > > > > in a pool. > > > > > [database] > > > > > connection_recycle_time = 3600 (Integer) Timeout > before idle SQL connections are reaped. > > > > > min_pool_size = 1 > (Integer) Minimum number of SQL connections to keep > > > > > open in a pool. > > > > > max_overflow = 50 > (Integer) If set, use this value for max_overflow > > > > > with SQLAlchemy. 
> > > > > max_pool_size = None (Integer) > Maximum number of SQL connections to keep open > > > > > in a pool. > > > > > If min_pool_size is >0, would that cause at least 1 connection to > remain open until it times out? What are the > > > > > recommended values for these, to allow unused connections to close > before they time out? Is “min_pool_size = 0” > > > > > an acceptable setting? > > > > > My settings are default: > > > > > [api_database]: > > > > > #connection_recycle_time = 3600 > > > > > #max_overflow = > > > > > #max_pool_size = > > > > > [database]: > > > > > #connection_recycle_time = 3600 > > > > > #min_pool_size = 1 > > > > > #max_overflow = 50 > > > > > #max_pool_size = 5 > > > > > It’s not obvious what max_overflow does. Where can I find a > document that explains more about these settings? > > > > > *From:* Gaëtan Trellu gaetan.trellu at incloudus.com>> > > > > > *Sent:* Tuesday, September 3, 2019 1:37 PM > > > > > *To:* Albert Braden albertb at synopsys.com>> > > > > > *Cc:* openstack-discuss at lists.openstack.org openstack-discuss at lists.openstack.org> > > > > > *Subject:* Re: Nova causes MySQL timeouts > > > > > Hi Albert, > > > > > It is a configuration issue, have a look to max_pool_size and > max_overflow options under [database] section. > > > > > Keep in mind than more workers you will have more connections will > be opened on the database. > > > > > Gaetan (goldyfruit) > > > > > On Sep 3, 2019 4:31 PM, Albert Braden > wrote: > > > > > It looks like nova is keeping mysql connections open until > they time > > > > > out. How are others responding to this issue? Do you just > ignore the > > > > > mysql errors, or is it possible to change configuration so > that nova > > > > > closes and reopens connections before they time out? Or is > there a > > > > > way to stop mysql from logging these aborted connections > without > > > > > hiding real issues? > > > > > Aborted connection 10726 to db: 'nova' user: 'nova' host: > 'asdf' > > > > > (Got timeout reading communication packets) > > > > > > > > > > > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From marek.lycka at ultimum.io Wed Sep 18 08:38:05 2019 From: marek.lycka at ultimum.io (=?UTF-8?B?TWFyZWsgTHnEjWth?=) Date: Wed, 18 Sep 2019 10:38:05 +0200 Subject: [Horizon] Paging and Angular... In-Reply-To: References: Message-ID: Hello, > You seem to be missing the point. The alternative to Angular is not JQuery, it's a proper implementation on the server-side, > both in the APIs (for pagination and filtering) and Horizon itself. 
Slapping JavaScript on top of it, whether it's Angular or > JQuery, won't improve the situation. No, I understand what you're saying, I just don't think it's very realistic. Exclusively using server side templating has some issues in terms of usability and performance which might not have satisfactory solutions. For example: - Any interaction causes a roundtrip to the web server and the backend API, which reduces performance, especially with large data sets. - In some cases, datasets are composed of multiple API calls with disjunct results (see Networks [1] for an example). Paging and sorting over these datasets requires for them to be reconstructed on the webserver for every paging or sorting request and that means a full data load. For any at least somewhat sizeable datasets, this creates performance issues. - UI loses reactivity. Interfield interactions, for example, are basically impossible because of web server centered re-rendering, which reduces user comfort and usability. Moving logic to the client using JavaScript does improve the situation: - Data can be loaded in chunks asynchronously in the background without impeding the rest of Horizon - Cached data can be operated on (paged, sorted...) without additional requests to the web server and backend - Data can be shared between different horizon elements without the need for a reload - e.g. loaded networks can be used in the network index table, the Launch instance dialog and Network Topology all without loading it separately for each. Additionally, JavaScript also allows for more advanced tools, such as the Network Topology views (admittedly, they also require a fair amount of work). These are just some examples to illustrate the point. I'm sure other use cases could be found. [1] https://github.com/openstack/horizon/blob/fa804370b11519fc261f73fa90177368fde679df/openstack_dashboard/api/neutron.py#L1066 po 16. 9. 2019 v 10:43 odesílatel Radomir Dopieralski < openstack at sheep.art.pl> napsal: > You seem to be missing the point. The alternative to Angular is not > JQuery, it's a proper implementation on the server-side, both in the APIs > (for pagination and filtering) and Horizon itself. Slapping JavaScript on > top of it, whether it's Angular or JQuery, won't improve the situation. > > On Wed, Sep 11, 2019 at 5:21 PM Marek Lyčka > wrote: > >> Hi all, >> >> > We can't review your patches, because we don't understand them. For the >> patches to be merged, we >> > need more than one person, so that they can review each other's patches. >> >> Well, yes. That's what I'm trying to address. Even if another person >> appeared to review >> javascript code, it wouldn't change anything unless he had +2 and +W >> rights though. And even then, >> it wouldn't be enough, because two +2 are currently expected for the CR >> process to go ahead. >> >> > JavaScript is fine. We all know how to write and how to review >> JavaScript code, and there doesn't >> > have to be much of it — Horizon is not the kind of tool that has to bee >> all shiny and animated. It's a tool >> > for getting work done. >> >> This isn't about being shiny and animated though. This is about basic >> functionality, usability and performance. >> I did some stress testing with large datasets [1], and the >> non-angularized versions of basic functionality like >> sorting, paging and filtering in table panels are either non-existent, >> not working at all or basically unusable >> (for a multitude of reasons). 
>> >> Removing them would force reimplementations in pure JQuery and I strongly >> suspect that those >> implementations would be much messier and cost a considerable amount of >> time and effort. >> >> >AngularJS is a problem, because you can't tell what the code does just >> by looking >> >at the code, and so you can neither review nor fix it. >> >> This is clearly a matter of opinion. I find Angular code easier to deal >> with than JQuery spaghetti. >> >> > There has been a lot of work put into mixing Horizon with Angular, but >> I disagree that it has solved problems, >> > and in fact it has introduced a lot of regressions. >> >> I'm not saying the NG implementations are perfect, but they mostly work >> where it counts and can be improved >> where they do not. >> >> > Just to take a simple example, the translations are currently broken >> for en.AU and en.GB languages, >> > and date display is not localized. And nobody cares. >> >> It's difficult for me to judge which features are broken in NG and how >> much interest there is in having them >> fixed, but they can be fixed once reported. What I can say for sure is >> that I keep hitting this issue >> because of actual feature requests from actual users. See [2] for an >> example. I'm not sure implementing >> that in pure JQuery would be nearly as simple as it was in Angular. >> >> > We had automated tests before Angular. There weren't many of them, >> because we also didn't have much >> > JavaScript code. If I remember correctly, those tests were ripped out >> during the Angularization. >> >> Fair enough. >> >> > Arguably, improvements are, on average, impossible to add to Angular >> >> I disagree. Yes, pure JQuery is probably easier when dealing with very >> simple things, but once feature >> complexity increases beyond the basics, you'll very quickly find the >> features offered by the framework >> relevant - things like MVC decoupling, browser-side templating, reusable >> components, functionality injection etc. >> Again, see [2] for an example. >> >> On a side note, some horizon plugins (such as octavia-dashboard) use >> Angular extensively. Removing it >> would at the very least break them. >> >> Whatever the community decision is though, I feel like it needs to be >> made so that related issues >> can be addressed with a reasonable expectation of being reviewed and >> merged. >> >> [1] Networks, Roles and Images in the low thousands >> [2] https://review.opendev.org/#/c/618173/ >> >> pá 6. 9. 2019 v 18:44 odesílatel Dale Bewley napsal: >> >>> As an uninformed user I would just like to say Horizon is seen _as_ >>> Openstack to new users and I appreciate ever effort to improve it. >>> >>> Without discounting past work, the Horizon experience leaves much to be >>> desired and it colors the perspective on the entire platform. >>> >>> On Fri, Sep 6, 2019 at 05:01 Radomir Dopieralski >>> wrote: >>> >>>> >>>> >>>> On Fri, Sep 6, 2019 at 11:33 AM Marek Lyčka >>>> wrote: >>>> >>>>> Hi, >>>>> >>>>> > we need people familiar with Angular and Horizon's ways of using >>>>> Angular (which seem to be very >>>>> > non-standard) that would be willing to write and review code. >>>>> Unfortunately the people who originally >>>>> > introduced Angular in Horizon and designed how it is used are no >>>>> longer interested in contributing, >>>>> > and there don't seem to be any new people able to handle this. 
>>>>> >>>>> I've been working with Horizon's Angular for quite some time and don't >>>>> mind keeping at it, but >>>>> it's useless unless I can get my code merged, hence my original >>>>> message. >>>>> >>>>> As far as attracting new developers goes, I think that removing some >>>>> barriers to entry couldn't hurt - >>>>> seeing commits simply lost to time being one of them. I can see it as >>>>> being fairly demoralizing. >>>>> >>>> >>>> We can't review your patches, because we don't understand them. For the >>>> patches to be merged, we >>>> need more than one person, so that they can review each other's patches. >>>> >>>> >>>>> > Personally, I think that a better long-time strategy would be to >>>>> remove all >>>>> > Angular-based views from Horizon, and focus on maintaining one >>>>> language and one set of tools. >>>>> >>>>> Removing AngularJS wouldn't remove JavaScript from horizon. We'd still >>>>> be left with a home-brewish >>>>> framework (which is buggy as is). I don't think removing js completely >>>>> is realistic either: we'd lose >>>>> functionality and worsen user experience. I think that keeping Angular >>>>> is the better alternative: >>>>> >>>>> 1) A lot of work has already been put into Angularization, solving >>>>> many problems >>>>> 2) Unlike legacy js, Angular code is covered by automated tests >>>>> 3) Arguably, improvments are, on average, easier to add to Angular >>>>> than pure js implementations >>>>> >>>>> Whatever reservations there may be about the current implementation >>>>> can be identified and addressed, but >>>>> all in all, I think removing it at this point would be >>>>> counterproductive. >>>>> >>>> >>>> JavaScript is fine. We all know how to write and how to review >>>> JavaScript code, and there doesn't >>>> have to be much of it — Horizon is not the kind of tool that has to bee >>>> all shiny and animated. It's a tool >>>> for getting work done. AngularJS is a problem, because you can't tell >>>> what the code does just by looking >>>> at the code, and so you can neither review nor fix it. >>>> >>>> There has been a lot of work put into mixing Horizon with Angular, but >>>> I disagree that it has solved problems, >>>> and in fact it has introduced a lot of regressions. Just to take a >>>> simple example, the translations are currently >>>> broken for en.AU and en.GB languages, and date display is not >>>> localized. And nobody cares. >>>> >>>> We had automated tests before Angular. There weren't many of them, >>>> because we also didn't have much JavaScript code. >>>> If I remember correctly, those tests were ripped out during the >>>> Angularization. >>>> >>>> Arguably, improvements are, on average, impossible to add to Angular, >>>> because the code makes no sense on its own. >>>> >>>> >>>> >> >> -- >> Marek Lyčka >> Linux Developer >> >> Ultimum Technologies s.r.o. >> Na Poříčí 1047/26, 11000 Praha 1 >> Czech Republic >> >> marek.lycka at ultimum.io >> *https://ultimum.io * >> > -- Marek Lyčka Linux Developer Ultimum Technologies s.r.o. Na Poříčí 1047/26, 11000 Praha 1 Czech Republic marek.lycka at ultimum.io *https://ultimum.io * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From elfosardo at gmail.com Wed Sep 18 08:43:29 2019 From: elfosardo at gmail.com (Riccardo Pittau) Date: Wed, 18 Sep 2019 10:43:29 +0200 Subject: [neutron][drivers][ironic] FFE request - Use openstacksdk for ironic notifiers Message-ID: Hello fellow openstackers, I'd like to open an FFE request to convert the ironic events notifier from the current ironicclient to openstacksdk with the change https://review.opendev.org/682040 This requires also a bump of the versions of mock and openstacksdk in lower-constraints, included in the CR. The change has been tested in a devstack environment and passes the zuul checks. Thank you! Riccardo Pittau rpittau||elfosardo From openstack at sheep.art.pl Wed Sep 18 09:00:07 2019 From: openstack at sheep.art.pl (Radomir Dopieralski) Date: Wed, 18 Sep 2019 11:00:07 +0200 Subject: [Horizon] Paging and Angular... In-Reply-To: References: Message-ID: Those things surely are nice to have, but the health and stability of the project come first. It's a tool, people use it to get their work done, and the main goal of the project is to let them get that work done. They won't be able to do that if we continue to use a framework for which we have no support, and no people willing to do maintenance work and bug fixing. The main priority is to have a working, stable, maintained tool, and we currently don't have the resources to do that with Angular, and very little hope for the situation magically improving any time soon. (Also, downloading data about all table entries just so that you can do pagination/filtering on the client side is wasteful and doesn't scale, but that is a separate discussion.) On Wed, Sep 18, 2019 at 10:38 AM Marek Lyčka wrote: > Hello, > > > You seem to be missing the point. The alternative to Angular is not > JQuery, it's a proper implementation on the server-side, > > both in the APIs (for pagination and filtering) and Horizon itself. > Slapping JavaScript on top of it, whether it's Angular or > > JQuery, won't improve the situation. > > No, I understand what you're saying, I just don't think it's very > realistic. Exclusively using server side templating has some issues > in terms of usability and performance which might not have satisfactory > solutions. For example: > > - Any interaction causes a roundtrip to the web server and the backend > API, which reduces performance, especially with large data sets. > - In some cases, datasets are composed of multiple API calls with disjunct > results (see Networks [1] for an example). Paging and sorting over these > datasets > requires for them to be reconstructed on the webserver for every paging > or sorting request and that means a full data load. For any at least > somewhat sizeable > datasets, this creates performance issues. > - UI loses reactivity. Interfield interactions, for example, are basically > impossible because of web server centered re-rendering, which reduces user > comfort and usability. > > Moving logic to the client using JavaScript does improve the situation: > > - Data can be loaded in chunks asynchronously in the background without > impeding the rest of Horizon > - Cached data can be operated on (paged, sorted...) without additional > requests to the web server and backend > - Data can be shared between different horizon elements without the need > for a reload - e.g. loaded networks > can be used in the network index table, the Launch instance dialog and > Network Topology all without loading it > separately for each. 
> > Additionally, JavaScript also allows for more advanced tools, such as the > Network Topology views (admittedly, they also require > a fair amount of work). > > These are just some examples to illustrate the point. I'm sure other use > cases could be found. > > [1] > https://github.com/openstack/horizon/blob/fa804370b11519fc261f73fa90177368fde679df/openstack_dashboard/api/neutron.py#L1066 > > po 16. 9. 2019 v 10:43 odesílatel Radomir Dopieralski < > openstack at sheep.art.pl> napsal: > >> You seem to be missing the point. The alternative to Angular is not >> JQuery, it's a proper implementation on the server-side, both in the APIs >> (for pagination and filtering) and Horizon itself. Slapping JavaScript on >> top of it, whether it's Angular or JQuery, won't improve the situation. >> >> On Wed, Sep 11, 2019 at 5:21 PM Marek Lyčka >> wrote: >> >>> Hi all, >>> >>> > We can't review your patches, because we don't understand them. For >>> the patches to be merged, we >>> > need more than one person, so that they can review each other's >>> patches. >>> >>> Well, yes. That's what I'm trying to address. Even if another person >>> appeared to review >>> javascript code, it wouldn't change anything unless he had +2 and +W >>> rights though. And even then, >>> it wouldn't be enough, because two +2 are currently expected for the CR >>> process to go ahead. >>> >>> > JavaScript is fine. We all know how to write and how to review >>> JavaScript code, and there doesn't >>> > have to be much of it — Horizon is not the kind of tool that has to >>> bee all shiny and animated. It's a tool >>> > for getting work done. >>> >>> This isn't about being shiny and animated though. This is about basic >>> functionality, usability and performance. >>> I did some stress testing with large datasets [1], and the >>> non-angularized versions of basic functionality like >>> sorting, paging and filtering in table panels are either non-existent, >>> not working at all or basically unusable >>> (for a multitude of reasons). >>> >>> Removing them would force reimplementations in pure JQuery and I >>> strongly suspect that those >>> implementations would be much messier and cost a considerable amount of >>> time and effort. >>> >>> >AngularJS is a problem, because you can't tell what the code does just >>> by looking >>> >at the code, and so you can neither review nor fix it. >>> >>> This is clearly a matter of opinion. I find Angular code easier to deal >>> with than JQuery spaghetti. >>> >>> > There has been a lot of work put into mixing Horizon with Angular, but >>> I disagree that it has solved problems, >>> > and in fact it has introduced a lot of regressions. >>> >>> I'm not saying the NG implementations are perfect, but they mostly work >>> where it counts and can be improved >>> where they do not. >>> >>> > Just to take a simple example, the translations are currently broken >>> for en.AU and en.GB languages, >>> > and date display is not localized. And nobody cares. >>> >>> It's difficult for me to judge which features are broken in NG and how >>> much interest there is in having them >>> fixed, but they can be fixed once reported. What I can say for sure is >>> that I keep hitting this issue >>> because of actual feature requests from actual users. See [2] for an >>> example. I'm not sure implementing >>> that in pure JQuery would be nearly as simple as it was in Angular. >>> >>> > We had automated tests before Angular. There weren't many of them, >>> because we also didn't have much >>> > JavaScript code. 
If I remember correctly, those tests were ripped out >>> during the Angularization. >>> >>> Fair enough. >>> >>> > Arguably, improvements are, on average, impossible to add to Angular >>> >>> I disagree. Yes, pure JQuery is probably easier when dealing with very >>> simple things, but once feature >>> complexity increases beyond the basics, you'll very quickly find the >>> features offered by the framework >>> relevant - things like MVC decoupling, browser-side templating, reusable >>> components, functionality injection etc. >>> Again, see [2] for an example. >>> >>> On a side note, some horizon plugins (such as octavia-dashboard) use >>> Angular extensively. Removing it >>> would at the very least break them. >>> >>> Whatever the community decision is though, I feel like it needs to be >>> made so that related issues >>> can be addressed with a reasonable expectation of being reviewed and >>> merged. >>> >>> [1] Networks, Roles and Images in the low thousands >>> [2] https://review.opendev.org/#/c/618173/ >>> >>> pá 6. 9. 2019 v 18:44 odesílatel Dale Bewley napsal: >>> >>>> As an uninformed user I would just like to say Horizon is seen _as_ >>>> Openstack to new users and I appreciate ever effort to improve it. >>>> >>>> Without discounting past work, the Horizon experience leaves much to be >>>> desired and it colors the perspective on the entire platform. >>>> >>>> On Fri, Sep 6, 2019 at 05:01 Radomir Dopieralski < >>>> openstack at sheep.art.pl> wrote: >>>> >>>>> >>>>> >>>>> On Fri, Sep 6, 2019 at 11:33 AM Marek Lyčka >>>>> wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> > we need people familiar with Angular and Horizon's ways of using >>>>>> Angular (which seem to be very >>>>>> > non-standard) that would be willing to write and review code. >>>>>> Unfortunately the people who originally >>>>>> > introduced Angular in Horizon and designed how it is used are no >>>>>> longer interested in contributing, >>>>>> > and there don't seem to be any new people able to handle this. >>>>>> >>>>>> I've been working with Horizon's Angular for quite some time and >>>>>> don't mind keeping at it, but >>>>>> it's useless unless I can get my code merged, hence my original >>>>>> message. >>>>>> >>>>>> As far as attracting new developers goes, I think that removing some >>>>>> barriers to entry couldn't hurt - >>>>>> seeing commits simply lost to time being one of them. I can see it as >>>>>> being fairly demoralizing. >>>>>> >>>>> >>>>> We can't review your patches, because we don't understand them. For >>>>> the patches to be merged, we >>>>> need more than one person, so that they can review each other's >>>>> patches. >>>>> >>>>> >>>>>> > Personally, I think that a better long-time strategy would be to >>>>>> remove all >>>>>> > Angular-based views from Horizon, and focus on maintaining one >>>>>> language and one set of tools. >>>>>> >>>>>> Removing AngularJS wouldn't remove JavaScript from horizon. We'd >>>>>> still be left with a home-brewish >>>>>> framework (which is buggy as is). I don't think removing js >>>>>> completely is realistic either: we'd lose >>>>>> functionality and worsen user experience. 
I think that keeping >>>>>> Angular is the better alternative: >>>>>> >>>>>> 1) A lot of work has already been put into Angularization, solving >>>>>> many problems >>>>>> 2) Unlike legacy js, Angular code is covered by automated tests >>>>>> 3) Arguably, improvments are, on average, easier to add to Angular >>>>>> than pure js implementations >>>>>> >>>>>> Whatever reservations there may be about the current implementation >>>>>> can be identified and addressed, but >>>>>> all in all, I think removing it at this point would be >>>>>> counterproductive. >>>>>> >>>>> >>>>> JavaScript is fine. We all know how to write and how to review >>>>> JavaScript code, and there doesn't >>>>> have to be much of it — Horizon is not the kind of tool that has to >>>>> bee all shiny and animated. It's a tool >>>>> for getting work done. AngularJS is a problem, because you can't tell >>>>> what the code does just by looking >>>>> at the code, and so you can neither review nor fix it. >>>>> >>>>> There has been a lot of work put into mixing Horizon with Angular, but >>>>> I disagree that it has solved problems, >>>>> and in fact it has introduced a lot of regressions. Just to take a >>>>> simple example, the translations are currently >>>>> broken for en.AU and en.GB languages, and date display is not >>>>> localized. And nobody cares. >>>>> >>>>> We had automated tests before Angular. There weren't many of them, >>>>> because we also didn't have much JavaScript code. >>>>> If I remember correctly, those tests were ripped out during the >>>>> Angularization. >>>>> >>>>> Arguably, improvements are, on average, impossible to add to Angular, >>>>> because the code makes no sense on its own. >>>>> >>>>> >>>>> >>> >>> -- >>> Marek Lyčka >>> Linux Developer >>> >>> Ultimum Technologies s.r.o. >>> Na Poříčí 1047/26, 11000 Praha 1 >>> Czech Republic >>> >>> marek.lycka at ultimum.io >>> *https://ultimum.io * >>> >> > > -- > Marek Lyčka > Linux Developer > > Ultimum Technologies s.r.o. > Na Poříčí 1047/26, 11000 Praha 1 > Czech Republic > > marek.lycka at ultimum.io > *https://ultimum.io * > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Wed Sep 18 09:55:12 2019 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 18 Sep 2019 10:55:12 +0100 Subject: [goals][IPv6-Only Deployments and Testing] Week R-4 Update In-Reply-To: <64b36128-911f-4599-9ada-4773b1957077@www.fastmail.com> References: <16d3d2203c8.b47bfe5156036.4862537349817585954@ghanshyammann.com> <64b36128-911f-4599-9ada-4773b1957077@www.fastmail.com> Message-ID: On Tue, 17 Sep 2019 at 18:18, Clark Boylan wrote: > > On Tue, Sep 17, 2019, at 3:12 AM, Radosław Piliszek wrote: > > Hiya, > > > > Kolla is not going to get an IPv6-only job because it builds docker > > images and is not tested regarding networking (it does not do > > devstack/tempest either). > > > > Kolla-Ansible, which does the deployment, is going to get some > > IPv6-only test jobs - https://review.opendev.org/681573 > > We are testing CentOS and multinode and hence need overlay VXLAN to > > reach sensible levels of stability there - > > https://review.opendev.org/670690 > > The VXLAN patch is probably ready, awaiting review of independent > > cores. It will be refactored out later to put it in zuul plays as it > > might be useful to other projects as well. > > The IPv6 patch needs rebasing on VXLAN and some small tweaks still. > > It is worth noting that you could test with the existing overlay network tooling that the infra team provides. 
This has been proven to work over years of multinode testing. Then we could incrementally improve it to address some of the deficiencies you have pointed out with it. > > This was sort of what I was trying to get across on IRC. Rather than go and reinvent the wheel to the detriment of meeting this goal on time: instead use what is there and works. Then improve what is there over time. The reason I used a different approach is that we run a containerised Open vSwitch - using this would not be compatible with the multinode-bridge role, and would also introduce a chicken and egg. I did study the role before reinventing the wheel, and I think the result will benefit everyone if we can push it back to zuul, since it avoids the potential double hop. > > > > > Kind regards, > > Radek > > > From gr at ham.ie Wed Sep 18 12:14:38 2019 From: gr at ham.ie (Graham Hayes) Date: Wed, 18 Sep 2019 13:14:38 +0100 Subject: [tc] Results of the two TC CIVS polls Message-ID: <8d42c44c-b8cc-9120-d0a0-2b70348cbe47@ham.ie> Hi All, We had two votes running for the TC - V->Z naming, and the TC Chair. The results are as follows: Release Naming: 1. Tied: Do not change current model of geographic names - No review Name releases after major cities - https://review.opendev.org/#/c/677745/ 3. Name releases after the ICAO alphabet - https://review.opendev.org/#/c/677746/ Full results: https://civs.cs.cornell.edu/cgi-bin/results.pl?num_winners=1&id=E_0a9abdadec887a6b&algorithm=beatpath TC Chair: 1. JP Evrard - https://review.opendev.org/#/c/681285/ 2. Mohammed Naser - https://review.opendev.org/#/c/680414/ Full results: https://civs.cs.cornell.edu/cgi-bin/results.pl?num_winners=1&id=E_e6963bddd2f49d57&algorithm=beatpath Thanks all for taking the time to vote, and for the candidates and people who put forward the options for the naming poll. Thanks, Graham -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From navidsakhawat at gmail.com Wed Sep 18 12:49:26 2019 From: navidsakhawat at gmail.com (Navid Bin Sakhawat) Date: Wed, 18 Sep 2019 18:49:26 +0600 Subject: regarding rabbitmq suspicious processes. Message-ID: Hi! We are getting below status from rabbitmq. [root at controller1 ~]# rabbitmqctl eval 'rabbit_diagnostics:maybe_stuck().' 2019-09-18 18 :08:55 There are 12374 processes. 2019-09-18 18 :08:55 Investigated 2 processes this round, 5000ms to go. 2019-09-18 18 :08:56 Investigated 2 processes this round, 4500ms to go. 2019-09-18 18 :08:56 Investigated 2 processes this round, 4000ms to go. 2019-09-18 18 :08:57 Investigated 2 processes this round, 3500ms to go. 2019-09-18 18 :08:57 Investigated 2 processes this round, 3000ms to go. 2019-09-18 18 :08:58 Investigated 2 processes this round, 2500ms to go. 2019-09-18 18 :08:58 Investigated 2 processes this round, 2000ms to go. 2019-09-18 18 :08:59 Investigated 2 processes this round, 1500ms to go. 2019-09-18 18 :08:59 Investigated 2 processes this round, 1000ms to go. 2019-09-18 18 :09:00 Investigated 2 processes this round, 500ms to go. 2019-09-18 18 :09:00 Found 2 suspicious processes. 
2019-09-18 18 :09:00 [{pid,<10698.8499.10 >}, {registered_name,[]}, {current_stacktrace, [{timer,sleep,1,[{file,"timer.erl"},{line,153}]}, {rabbit_amqqueue,'-with/4-fun-0-',5, [{file,"src/rabbit_amqqueue.erl"},{line,469}]}, {rabbit_channel,handle_method,3, [{file,"src/rabbit_channel.erl"},{line,1323}]}, {rabbit_channel,handle_cast,2, [{file,"src/rabbit_channel.erl"},{line,470}]}, {gen_server2,handle_msg,2, [{file,"src/gen_server2.erl"},{line,1050}]}, {proc_lib,init_p_do_apply,3, [{file,"proc_lib.erl"},{line,247}]}]}, {initial_call,{proc_lib,init_p,5}}, {message_queue_len,2}, {links,[<10698.8496.10 >]}, {monitors, [{process,<10876.617.0>}, {process,<10698.3225.0 >}, {process,<10877.3228.0 >}]}, {monitored_by, [<10698.3225.0 >,<10877.293.0>,<10876.293.0>, <10698.8491.10 >,< 10698.21564.0 >,<10698.8670.0 >]}, {heap_size,4185}] 2019-09-18 18 :09:00 [{pid,<10698.8521.10 >}, {registered_name,[]}, {current_stacktrace, [{timer,sleep,1,[{file,"timer.erl"},{line,153}]}, {rabbit_amqqueue,'-with/4-fun-0-',5, [{file,"src/rabbit_amqqueue.erl"},{line,469}]}, {rabbit_channel,handle_method,3, [{file,"src/rabbit_channel.erl"},{line,1323}]}, {rabbit_channel,handle_cast,2, [{file,"src/rabbit_channel.erl"},{line,470}]}, {gen_server2,handle_msg,2, [{file,"src/gen_server2.erl"},{line,1050}]}, {proc_lib,init_p_do_apply,3, [{file,"proc_lib.erl"},{line,247}]}]}, {initial_call,{proc_lib,init_p,5}}, {message_queue_len,2}, {links,[<10698.8518.10 >]}, {monitors, [{process,<10698.3225.0 >}, {process,<10876.617.0>}, {process,<10877.3228.0 >}]}, {monitored_by, [<10698.8513.10 >,< 10698.3225.0 >,<10877.293.0>, <10876.293.0>,<10698.8670.0 >,<10698.21564.0 >]}, {heap_size,4185}] Regards, *Navid Bin Sakhawat* Former Manager IT E.G Salary Ltd Mobile +880161559955 *5th Floor BGMEA Complex, Kawran Bazar, Dhaka. * *Former Engineer NOC* Mos 5 Tel Ltd. Plot-59, 61, Lotus Kamal Tower-2, Level-11, Gulshan-1, Dhaka-1212. *www.mos5tel.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From Aditi.Dukle1 at ibm.com Wed Sep 18 12:55:22 2019 From: Aditi.Dukle1 at ibm.com (Aditi Dukle1) Date: Wed, 18 Sep 2019 18:25:22 +0530 Subject: [zuul3] zuul2 -> zuul3 migration Message-ID: Hi Lenny, Even I am working on zuulv2 to v3 migration for IBM PowerKVM - https://wiki.openstack.org/wiki/ThirdPartySystems/IBMPowerKVMCI and so far we have setup a test VM wherein we have installed zuulv3 and Nodepool version: 3.7.2 and have migrated 4 of our jenkins jobs to ansible to work with zuulv3. Zuulv3 doesn't have much info online apart from the official documentation but I found the following blogs quite useful while doing the migration: https://zuul-ci.org/docs/zuul/admin/quick-start.html - This is a very good installation tutorial to get a hang of zuulv3 that needs a single VM and it installs different components in docker containers. I started with this which helped me to understand how different components are configured. Also, the following articles are good read. 1. https://www.softwarefactory-project.io/cicd-workflow-offered-by-zuulnodepool-on-software-factory.html 2. https://www.softwarefactory-project.io/zuul-hands-on-part-2-your-first-gated-patch-with-zuul.html 3. https://www.softwarefactory-project.io/zuul-hands-on-part-3-use-the-zuul-jobs-library.html Openstack has migrated to zuulv3, so you can also check out their project configuration here - https://opendev.org/openstack/project-config I am planning to write my experience of migration in a blog which I will let you know once complete. 
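For a rough picture of what a migrated job ends up looking like, a Zuul v3 job is just YAML in the repo pointing at an Ansible playbook -- roughly along these lines (the job, label and playbook names below are made up for illustration, not our real configuration):

    # Illustrative sketch only: a minimal Zuul v3 job plus a project stanza.
    # The body of the old Jenkins job becomes the Ansible playbook that "run" points at.
    - job:
        name: powerkvm-tempest-example
        parent: base
        description: Example of a third-party CI job migrated from Jenkins.
        run: playbooks/powerkvm-tempest/run.yaml
        nodeset:
          nodes:
            - name: primary
              label: powerkvm-centos-node

    - project:
        check:
          jobs:
            - powerkvm-tempest-example

The pre/post steps that Jenkins handled implicitly (workspace setup, log collection) are inherited from the parent job, which is why starting from the quick-start tutorial's base job is convenient.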
Let me know if you got any doubts while setting up zuulv3 Thanks and Regards, Aditi Dukle -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Wed Sep 18 13:56:01 2019 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 18 Sep 2019 14:56:01 +0100 Subject: [kolla] Kayobe Train planning meeting Message-ID: Hi, Just as the rest of the world is wrapping up the Train release, we find ourselves having just released Stein and starting on Train development. Given the timing of CentOS 8 which we intend to support in the kolla Train release, we have some time to do some feature development in kayobe. I've set up a Doodle poll [1] with two hour slots next week to plan the next release. [1] https://doodle.com/poll/y7fakbhbkfx5hqyk Cheers, Mark From mnaser at vexxhost.com Wed Sep 18 14:23:27 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 18 Sep 2019 10:23:27 -0400 Subject: [tc] new chair Message-ID: Hi everyone, With the new TC roster, JP and I both volunteered to be chairs of the TC which resulted in a CIVS vote between the TC, where the outcome was that JP won: http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009486.html Thanks for letting me serve for the past 6 months. Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From samueldmq at gmail.com Wed Sep 18 15:09:12 2019 From: samueldmq at gmail.com (Samuel de Medeiros Queiroz) Date: Wed, 18 Sep 2019 12:09:12 -0300 Subject: Outreachy Application Deadline - Call for mentors and projects Message-ID: Hi everyone! Outreachy helps people from underrepresented groups get involved in free and open source software by matching interns with established mentors in the upstream communities. OpenStack is a participating organization in the Outreachy Dec 2019 to Mar 2020 round. If you're interested to be a mentor, please register as a mentor in the Outreachy website and publish your project ideas. According to this round's schedule , the initial application is due next week: - *Sept. 24, 2019 at 4pm UTC Initial application deadline * - *Nov. 5, 2019 at 4pm UTC Final application deadline* It is important to get projects submitted *as soon as possible* so that applicants can sign up before the *Sept. 24 deadline*. Once signed up, they will have between *Oct. 1, 2019 to Nov. 5, 2019 to contribute to the projects*. If you have any questions about becoming a mentor or want to sponsor an intern, please contact me (samueldmq at gmail.com) or Mahati Chamarthy ( mahati.chamarthy at gmail.com). Thank you, Samuel de Medeiros Queiroz -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Wed Sep 18 15:10:32 2019 From: emccormick at cirrusseven.com (Erik McCormick) Date: Wed, 18 Sep 2019 11:10:32 -0400 Subject: [ops] Shanghai Forum / Ops Day sessions Message-ID: Greetings! We are coming up on the Shanghai Summit and need to plan out a few sessions for forum submissions (yeah I know, late as always). We are also trying to see if there's enough traction to do an Ops day on Thursday after the summit. This is a bit freeform, but if there are enough attendees interested, we can make it happen. Please visit the following etherpad and suggest topics for both. +1 those you like the most. We will submit the forum sessions on Friday so there's not a lot of time for that part. 
Things for the Ops day can go on being entered until that day. https://etherpad.openstack.org/p/PVG-OPS-Forum-Brainstorming Thanks, Erik -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Wed Sep 18 15:22:55 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 18 Sep 2019 10:22:55 -0500 Subject: [tc] Results of the two TC CIVS polls In-Reply-To: <8d42c44c-b8cc-9120-d0a0-2b70348cbe47@ham.ie> References: <8d42c44c-b8cc-9120-d0a0-2b70348cbe47@ham.ie> Message-ID: <20190918152255.GB31404@sm-workstation> On Wed, Sep 18, 2019 at 01:14:38PM +0100, Graham Hayes wrote: > Hi All, > > We had two votes running for the TC - V->Z naming, and the TC Chair. > > The results are as follows: > > Release Naming: > > 1. Tied: > Do not change current model of geographic names - No review > Name releases after major cities - https://review.opendev.org/#/c/677745/ > 3. > Name releases after the ICAO alphabet - > https://review.opendev.org/#/c/677746/ > > Full results: > https://civs.cs.cornell.edu/cgi-bin/results.pl?num_winners=1&id=E_0a9abdadec887a6b&algorithm=beatpath > The scope of this vote was not clear. This was only as far as changing the naming scheme for the remaining V-X, right? Or was it meant for anything past Ussuri? If it's the former, then I am very happy to see there was not strong support to changing the current naming scheme mid-alphabet. I believe that would have been confusing (or just out right odd) to our end users. I do hope to see a plan for what we would like to see happen once we cycle back to A though. Sean From openstack at sheep.art.pl Wed Sep 18 15:26:09 2019 From: openstack at sheep.art.pl (Radomir Dopieralski) Date: Wed, 18 Sep 2019 17:26:09 +0200 Subject: [horizon] exception for the allow-users-change-expired-password blueprint Message-ID: Hello y'all, I would like to ask for an FEE for the allow-users-change-expired-password blueprint ( https://blueprints.launchpad.net/horizon/+spec/allow-users-change-expired-password). The feature is important for users, and the code is reviewed and ready to merge. There are three patches, of which one, containing the form itself, is already merged, one contains the redirect on login failure, and one contains the documentation: https://review.opendev.org/#/c/672315/ https://review.opendev.org/#/c/676167/ Thank you, -- Radomir Dopieralski -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Wed Sep 18 15:42:13 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 18 Sep 2019 10:42:13 -0500 Subject: Outreachy Application Deadline - Call for mentors and projects In-Reply-To: References: Message-ID: On 9/18/2019 10:09 AM, Samuel de Medeiros Queiroz wrote: > Hi everyone! > > Outreachy  helps people from > underrepresented groups get involved in free and open source software by > matching interns with established mentors in the upstream communities. > > OpenStack is a participating organization in the Outreachy Dec 2019 to > Mar 2020 round. If you're interested to be a mentor, please register as > a mentor in the Outreachy website and publish your project ideas. > > According to this round's schedule > , the > initial application is due next week: > > * *Sept. 24, 2019 at 4pm UTC Initial application deadline > * > * *Nov. 5, 2019 at 4pm UTC Final application deadline* > > It is important to get projects submitted *as soon as possible* so that > applicants can sign up before the *Sept. 24 deadline*. 
> > Once signed up, they will have between *Oct. 1, 2019 to Nov. 5, 2019 to > contribute to the projects*. > > If you have any questions about becoming a mentor or want to sponsor an > intern, please contact me (samueldmq at gmail.com > ) or Mahati Chamarthy > (mahati.chamarthy at gmail.com ). > > Thank you, > Samuel de Medeiros Queiroz For anyone that was paying attention to the unified CLI gaps closure thread before I had proposed a mentoring project for some US universities, the abstract is here [1]. The project was not accepted for whatever reason (it's not clear if there was not enough interest or not enough people with the proper skills), but if anyone is interested in picking that up for Outreachy I think it would still be useful for the community (devs, users and operators) and a mentee working on it to get experience hacking on an open source project. That said I don't think I'll have the time to dedicate to that now but could help as a backup/side mentor if someone else is interested in taking the primary mentoring duty. As for the number of mentees listed on that abstract that could be 1 or 2, the 2-5 range was just something that specific program was suggesting. Otherwise even if you have interns or new developers at your company and you'd like to get their feet wet working upstream on OpenStack this is a good way to do that as well. [1] https://docs.google.com/document/d/1Punt4597VtAndhkwDbG-XrfBQUcXJE-Jmzg2vMIW8Ws/edit -- Thanks, Matt From openstack at fried.cc Wed Sep 18 16:04:54 2019 From: openstack at fried.cc (Eric Fried) Date: Wed, 18 Sep 2019 11:04:54 -0500 Subject: [neutron][drivers][ironic] FFE request - Use openstacksdk for ironic notifiers In-Reply-To: References: Message-ID: <7354478b-a71d-0538-c903-de90128e5b2f@fried.cc> > I'd like to open an FFE request to convert the ironic events notifier > from the current ironicclient to openstacksdk with the change > https://review.opendev.org/682040 This is kind of none of my business, but since the existing ironic stuff was only introduced in Train [1], IMO it is important to allow this FFE so neutron doesn't have to go through the pain of supporting and deprecating the conf options (e.g. `ironic_url`) and code paths through python-ironicclient. efried [1] https://review.opendev.org/#/c/658787/ From fungi at yuggoth.org Wed Sep 18 16:46:00 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 18 Sep 2019 16:46:00 +0000 Subject: [i18n][mogan][neutron][swift][tc][valet] Cleaning up IRC logging for defunct channels In-Reply-To: References: <20190916221822.o5diqcqzgyvqevi4@yuggoth.org> <3003512C-E1CD-4BDE-B173-D4FF2DA0FC6B@redhat.com> Message-ID: <20190918164600.ygb4ascec4k2oyfc@yuggoth.org> On 2019-09-18 08:17:52 +0200 (+0200), Andreas Jaeger wrote: [...] > remove accessbot as well. [...] As we discussed in IRC, I don't think removal from accessbot is especially necessary. The accessbot simply evaluates chanserv permissions and then updates them if needed (it never actually joins any channels), so having extra channels in its configuration isn't really a problem and can simplify things if situations change and people decide they want to go back to using those channels in the future. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jungleboyj at gmail.com Wed Sep 18 17:37:26 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Wed, 18 Sep 2019 12:37:26 -0500 Subject: [cinder][FFE] Feature Freeze Exceptions agreed to in Weekly Meeting .... Message-ID: <43d85198-72fa-08ea-f63e-a32a4c0a0029@gmail.com> All, In our weekly meeting we discussed patches that required an FFE and decided to approve them there.  This note is just to document that the FFE's were approved. The patches: * https://review.opendev.org/#/c/668825/(ZhengMa) - Leverage hw accelerator in image compression * https://review.opendev.org/#/c/673013   PowerMax Driver - Volume & Snapshot Metadata * https://review.opendev.org/#/c/677945/3Par -- Add Peer Persistence Support Please let us know if there are any concerns with these planned FFEs. Thanks! Jay (irc: jungleboyj) -------------- next part -------------- An HTML attachment was scrubbed... URL: From Albert.Braden at synopsys.com Wed Sep 18 18:53:48 2019 From: Albert.Braden at synopsys.com (Albert Braden) Date: Wed, 18 Sep 2019 18:53:48 +0000 Subject: [oslo][nova] Nova causes MySQL timeouts In-Reply-To: References: <02fa1644-34a1-0fdf-9048-a668ae86de76@nemebean.com> <15ed8e56b8c8eaa3d44e1364d67b7f8f72f46728.camel@redhat.com> Message-ID: I was hopeful that this might be our issue, but we already have max_allowed_packet = 256M root at us01odc-dev2-ctrl1:~# mysqladmin variables|grep allowed | max_allowed_packet | 268435456 -----Original Message----- From: Eric Fried Sent: Tuesday, September 17, 2019 10:21 AM To: openstack-discuss at lists.openstack.org Subject: Re: [oslo][nova] Nova causes MySQL timeouts Coincidentally, I'm trying [1] via [2] based on advice from zzzeek. efried [1] https://urldefense.proofpoint.com/v2/url?u=https-3A__dba.stackexchange.com_questions_19135_mysql-2Derror-2Dreading-2Dcommunication-2Dpackets_19139-2319139&d=DwICaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=WhqrFWiq68cDG65DEDJIVr7qQiBSBt8bwY70zlZPx5Y&s=E_872jzry6thItjApAzjBLumvxG2WIAOqA2CnGEhiU4&e= [2] https://urldefense.proofpoint.com/v2/url?u=https-3A__review.opendev.org_-23_c_682661_&d=DwICaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=WhqrFWiq68cDG65DEDJIVr7qQiBSBt8bwY70zlZPx5Y&s=wYVKBtvmm1jv4bd0xtP7fsALj2SgyPnYrWSTkAtpG9U&e= From samueldmq at gmail.com Tue Sep 17 16:32:15 2019 From: samueldmq at gmail.com (Samuel de Medeiros Queiroz) Date: Tue, 17 Sep 2019 13:32:15 -0300 Subject: Outreachy Application Deadline - Call for mentors and projects Message-ID: Hi everyone! Outreachy helps people from underrepresented groups get involved in free and open source software by matching interns with established mentors in the upstream communities. OpenStack is a participating organization in the Outreachy Dec 2019 to Mar 2020 round. If you're interested to be a mentor, please register as a mentor in the Outreachy website and publish your project ideas. According to this round's schedule , the initial application is due next week: - *Sept. 24, 2019 at 4pm UTC Initial application deadline * - *Nov. 5, 2019 at 4pm UTC Final application deadline* It is important to get projects submitted *as soon as possible* so that applicants can sign up before the *Sept. 24 deadline*. Once signed up, they will have between *Oct. 1, 2019 to Nov. 5, 2019 to contribute to the projects*. 
If you have any questions about becoming a mentor or want to sponsor an intern, please contact me (samueldmq at gmail.com) or Mahati Chamarthy ( mahati.chamarthy at gmail.com). Thank you, Samuel de Medeiros Queiroz -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Tue Sep 17 19:48:35 2019 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 17 Sep 2019 14:48:35 -0500 Subject: [PTLs] [TC] OpenStack User Survey - PTL & TC Feedback Message-ID: <5D813893.9020902@openstack.org> Hi everyone - It's that time again where we share the project specific feedback from the user survey directly with you, the community There are two CSVs, one with all of the feedback[1] and another with a question key[2] so you have some context for the answers. The data in these spreadsheets is 100% anonymized and contains NO information around the actual deployments. These are simply answers to the project-specific and TC questions that were added to the end of the survey. If you are a PTL or TC member and would like specific/expanded information around a particular comment, please contact allison at openstack.org or jimmy at openstack.org. We will do our best to accommodate, provided it doesn't violate the privacy of the submitter. Please let me know if you have questions/concerns. Cheers, Jimmy [1] 2019 User Survey Data - Deployment PTL Comments.csv [2] 2019 User Survey Data - PTL Answer Key.csv -------------- next part -------------- A non-text attachment was scrubbed... Name: 2019 User Survey Data - Deployment PTL Comments.csv Type: text/csv Size: 207022 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 2019 User Survey Data - PTL Answer Key.csv Type: text/csv Size: 3659 bytes Desc: not available URL: From bansalnehal26 at gmail.com Wed Sep 18 14:59:58 2019 From: bansalnehal26 at gmail.com (Nehal Bansal) Date: Wed, 18 Sep 2019 20:29:58 +0530 Subject: [Tacker] {TOSCA] Regarding attaching persistent volume to a VDU in Tosca Template Message-ID: Hi, I want to keep a backup of the data that my VNF creates. For this, I wanted to create a persistent volume such that if a VNF gets deleted and a new one gets launched, it can access the data created by the previous VNF. Is there a way to do this? Thanks. Nehal -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Wed Sep 18 19:45:22 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 18 Sep 2019 21:45:22 +0200 Subject: [neutron][drivers][ironic] FFE request - Use openstacksdk for ironic notifiers In-Reply-To: <7354478b-a71d-0538-c903-de90128e5b2f@fried.cc> References: <7354478b-a71d-0538-c903-de90128e5b2f@fried.cc> Message-ID: <20190918194522.GB9740@t440s> Hi, Personally I think we can go with this is You will implement it now. As per discussion on IRC, Ironic code which will use those notifications isn't really ready yet, and will not be for Train. So even if something would possible go wrong (but won't for sure ;)) we shouldn't break Ironic. 
On Wed, Sep 18, 2019 at 11:04:54AM -0500, Eric Fried wrote: > > I'd like to open an FFE request to convert the ironic events notifier > > from the current ironicclient to openstacksdk with the change > > https://review.opendev.org/682040 > > This is kind of none of my business, but since the existing ironic stuff > was only introduced in Train [1], IMO it is important to allow this FFE > so neutron doesn't have to go through the pain of supporting and > deprecating the conf options (e.g. `ironic_url`) and code paths through > python-ironicclient. Thx. I agree. That's another good point to accept this FFE. > > efried > > [1] https://review.opendev.org/#/c/658787/ > -- Slawek Kaplonski Senior software engineer Red Hat From gr at ham.ie Wed Sep 18 19:48:37 2019 From: gr at ham.ie (Graham Hayes) Date: Wed, 18 Sep 2019 20:48:37 +0100 Subject: [tc] Results of the two TC CIVS polls In-Reply-To: <20190918152255.GB31404@sm-workstation> References: <8d42c44c-b8cc-9120-d0a0-2b70348cbe47@ham.ie> <20190918152255.GB31404@sm-workstation> Message-ID: On 18/09/2019 16:22, Sean McGinnis wrote: > On Wed, Sep 18, 2019 at 01:14:38PM +0100, Graham Hayes wrote: >> Hi All, >> >> We had two votes running for the TC - V->Z naming, and the TC Chair. >> >> The results are as follows: >> >> Release Naming: >> >> 1. Tied: >> Do not change current model of geographic names - No review >> Name releases after major cities - https://review.opendev.org/#/c/677745/ >> 3. >> Name releases after the ICAO alphabet - >> https://review.opendev.org/#/c/677746/ >> >> Full results: >> https://civs.cs.cornell.edu/cgi-bin/results.pl?num_winners=1&id=E_0a9abdadec887a6b&algorithm=beatpath >> > > The scope of this vote was not clear. This was only as far as changing the > naming scheme for the remaining V-X, right? Or was it meant for anything past > Ussuri? OpenStack Release Naming for V-Z was the title of the poll. The intent was we would sort out V-Z, and then worry about the post Z names. This is proving to be harder than expected as you can see - so I am not sure what the route forward is. > > If it's the former, then I am very happy to see there was not strong support to > changing the current naming scheme mid-alphabet. I believe that would have been > confusing (or just out right odd) to our end users. Yes, this was driven by the lack of people willing to commit to running the poll, and people who had issues with how the TC ran the last one. Personally, I don't see a huge issue with it, (barring the issue with potentially not having a venue city to choose from for one of the 2020 and one 2021 release). > I do hope to see a plan for what we would like to see happen once we cycle back > to A though. Yes - I think that is an interesting discussion, and something that should start now - we do need to know what to expect :) > Sean > Graham From sean.mcginnis at gmx.com Wed Sep 18 20:57:55 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 18 Sep 2019 15:57:55 -0500 Subject: [tc] Results of the two TC CIVS polls In-Reply-To: References: <8d42c44c-b8cc-9120-d0a0-2b70348cbe47@ham.ie> <20190918152255.GB31404@sm-workstation> Message-ID: <20190918205755.GB11893@sm-workstation> > > > > If it's the former, then I am very happy to see there was not strong support to > > changing the current naming scheme mid-alphabet. I believe that would have been > > confusing (or just out right odd) to our end users. 
> > Yes, this was driven by the lack of people willing to commit to running > the poll, and people who had issues with how the TC ran the last one. > > Personally, I don't see a huge issue with it, (barring the issue with > potentially not having a venue city to choose from for one of the 2020 > and one 2021 release). > FWIW, I do think that was a little of an exceptional circumstance that can easily be addressed by writing down a process for when we need to make sure to get specific steps done in order to be ready in time. And have safeguards in place for someone else to step in if the current person leading it gets pulled away by outside factors. The U lettering aligning with China was just an unfortunate circumstance. But I think there's also another question that is not asked/answered by this poll. Assuming we keep with the naming scheme, that doesn't mean we need to choose those names in the same fashion we have been doing it. I'd propose we just get a collection of place names that have some kind (any kind) of meaning to folks in the community, all the way through Z. Then for each cycle, let the community vote on which names for the given letter they prefer. Even if we don't change though, I'm a little optimistic that we won't have as much trouble picking potential names like we did trying to map U to Chinese names. Anyway, my 2 yuan. Sean From whayutin at redhat.com Wed Sep 18 21:19:22 2019 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 18 Sep 2019 15:19:22 -0600 Subject: [tripleo][ci] gate jobs killed / reset In-Reply-To: References: Message-ID: On Tue, Sep 17, 2019 at 4:40 PM Emilien Macchi wrote: > Note that I also cleared the check for tripleo projects to accelerate the > testing of our potential fixes. > Hopefully we can resolve the situation really soon. > > On Tue, Sep 17, 2019 at 4:29 PM Wesley Hayutin > wrote: > >> Greetings, >> >> The zuul jobs in the TripleO gate queue were put out of their misery >> approximately at 20:14 UTC Sept 17 2019. The TripleO jobs were timing out >> [1] and causing the gate queue to be delayed about 24 hours [2]. >> >> We are hoping a revert [3] will restore TripleO jobs back to their usual >> run times. Please hold off on any rechecks or workflowing patches until >> [3] is merged and the status on #tripleo is no longer "RED" >> >> We appreciate your patience while we work through this issue, the jobs >> that were in the gate will be restored once we have confirmed and verified >> the solution. >> >> Thank you >> >> >> [1] https://bugs.launchpad.net/tripleo/+bug/1844446 >> [2] >> http://dashboard-ci.tripleo.org/d/YRJtmtNWk/cockpit?orgId=1&fullscreen&panelId=398 >> [3] https://review.opendev.org/#/c/682729/ >> > > > -- > Emilien Macchi > Thanks for your continued patience re: the tripleo gate. We're currently waiting on a couple patches to land. https://review.opendev.org/#/c/682905/ https://review.opendev.org/#/c/682731 or https://review.opendev.org/#/c/682945/ Also.. fyi, one can clearly see the performance regression here [1] [1] http://dashboard-ci.tripleo.org/d/si1tipHZk/jobs-exploration?orgId=1&from=now-90d&to=now&fullscreen&panelId=16 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From miguel at mlavalle.com Wed Sep 18 21:30:42 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Wed, 18 Sep 2019 16:30:42 -0500 Subject: [neutron][drivers][ironic] FFE request - Use openstacksdk for ironic notifiers In-Reply-To: <20190918194522.GB9740@t440s> References: <7354478b-a71d-0538-c903-de90128e5b2f@fried.cc> <20190918194522.GB9740@t440s> Message-ID: Hi, This FFE is approved Thanks On Wed, Sep 18, 2019 at 2:45 PM Slawek Kaplonski wrote: > Hi, > > Personally I think we can go with this is You will implement it now. > As per discussion on IRC, Ironic code which will use those notifications > isn't > really ready yet, and will not be for Train. So even if something would > possible > go wrong (but won't for sure ;)) we shouldn't break Ironic. > > On Wed, Sep 18, 2019 at 11:04:54AM -0500, Eric Fried wrote: > > > I'd like to open an FFE request to convert the ironic events notifier > > > from the current ironicclient to openstacksdk with the change > > > https://review.opendev.org/682040 > > > > This is kind of none of my business, but since the existing ironic stuff > > was only introduced in Train [1], IMO it is important to allow this FFE > > so neutron doesn't have to go through the pain of supporting and > > deprecating the conf options (e.g. `ironic_url`) and code paths through > > python-ironicclient. > > Thx. I agree. That's another good point to accept this FFE. > > > > > efried > > > > [1] https://review.opendev.org/#/c/658787/ > > > > -- > Slawek Kaplonski > Senior software engineer > Red Hat > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevinzs2048 at gmail.com Wed Sep 18 22:59:57 2019 From: kevinzs2048 at gmail.com (Shuai Zhao) Date: Thu, 19 Sep 2019 06:59:57 +0800 Subject: [neutron]IPv6 Prefix Delegation could not be activated in newest version Neutron In-Reply-To: References: Message-ID: any update? Shuai Zhao 于 2019年9月17日周二 上午9:24写道: > Hi All, > I'm working on validate the IPv6 PD in newest Neutron. > What I want is to offer the Global Unified address to the VM and I find PD > is the good solutions for me. > > I follow the guide *https://docs.openstack.org/neutron/latest/admin/config-ipv6.html > *to > setup PD and dibbler-server and devstack, but I find I could not to trigger > the PD process. > *The Dibbler server print nothing when attach the Subnet to router has > external gateway.* All procedure has recorded to the bug: > https://bugs.launchpad.net/neutron/+bug/1844123. > > Thanks for your action and help in advance! > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonanderson at uchicago.edu Thu Sep 19 01:34:42 2019 From: jasonanderson at uchicago.edu (Jason Anderson) Date: Thu, 19 Sep 2019 01:34:42 +0000 Subject: [ironic] Tips on testing custom hardware manager? Message-ID: <06723f39-ec67-c98b-9e2d-c9b375d568e8@uchicago.edu> Hi all, I am hoping to get some tips on how to test out a custom hardware manager. One of my colleagues is working on a project that involves implementing a custom in-band cleaning step, which we are implementing by creating our own ramdisk image that includes an extra library, which is necessary for the clean step. We already have created the image and ensured it has IPA installed and that all seems to work fine (in that, it executes on the node and we see our code running--and failing!) 
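For reference, the clean step in question lives in a custom IPA hardware manager along these lines (a rough sketch only -- the class, step and library names here are placeholders rather than our actual code):

    # Rough sketch of a custom hardware manager exposing one in-band clean step.
    # ExampleDeviceManager / erase_example_device are placeholder names.
    from ironic_python_agent import hardware
    from oslo_log import log

    LOG = log.getLogger(__name__)

    class ExampleDeviceManager(hardware.HardwareManager):
        HARDWARE_MANAGER_NAME = 'ExampleDeviceManager'
        HARDWARE_MANAGER_VERSION = '1.0'

        def evaluate_hardware_support(self):
            # Advertise support so this manager's steps are collected
            # alongside the generic hardware manager's steps.
            return hardware.HardwareSupport.SERVICE_PROVIDER

        def get_clean_steps(self, node, ports):
            # priority 0 = the step only runs when explicitly requested via
            # manual cleaning; a positive priority would add it to automated cleaning.
            return [{'step': 'erase_example_device',
                     'priority': 0,
                     'interface': 'deploy',
                     'reboot_requested': False,
                     'abortable': True}]

        def erase_example_device(self, node, ports):
            LOG.info('Running custom clean step on node %s', node['uuid'])
            # ... call into the extra library baked into the ramdisk here ...

(The manager class is registered under the ironic_python_agent.hardware_managers entry point in the ramdisk image, and the step is triggered through manual cleaning, e.g. openstack baremetal node clean --clean-steps '[{"interface": "deploy", "step": "erase_example_device"}]'.)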
The issue we are having is that we encounter some issues in our fully integrated environment (such as the provisioning network having different networking rules) and replicating this environment in some local development context is very difficult. Right now our workflow is really onerous as a result: my colleague has to rebuild the ramdisk image, re-upload it to Glance, update the test Ironic node to reference that image, then perform a rebuild. One cycle of this takes a while as you can imagine. I was wondering: is it possible to somehow interrupt or give a larger window for some interactive debugging? The amount of time we have to run some quick tests/debugging is small because the deploy will time out and cancel itself or it will proceed and fail. Thusfar I haven't found any documentation or written experience on this admittedly niche task. Perhaps somebody has already gone down this road and can advise on some tips? It would be much appreciated! Cheers, -- Jason Anderson Chameleon DevOps Lead Consortium for Advanced Science and Engineering, The University of Chicago Mathematics & Computer Science Division, Argonne National Laboratory -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Thu Sep 19 02:48:09 2019 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 18 Sep 2019 22:48:09 -0400 Subject: [tripleo][ci] gate jobs killed / reset In-Reply-To: References: Message-ID: Status: We have identified that the 2 major issues are: - Inflight validations taking too much time. They were enabled by default, we changed that: https://review.opendev.org/#/c/683001/ https://review.opendev.org/#/c/682905/ https://review.opendev.org/#/c/682943 They are now disabled by default and also disabled in tripleo-ci-centos-7-containers-multinode - tripleo-container-image-prepare now takes 20 min instead of 10 min before, because of the re-authentication logic that was introduced a few weeks ago. It's proposed to be reverted now: https://review.opendev.org/#/c/682945/ as we haven't found another solution for now. We have restored the patches. You can now do recheck and approve to gate but please stay aware of the situation, by checking the IRC topic on #tripleo and monitoring the zuul queue: http://zuul.openstack.org/ Thanks to infra for force-merging the patches we urgently needed; hopefully this stays exceptional and we don't face this situation again soon. We need to reduce the container image prepare to safely stay under the 3 hours for tripleo-ci-centos-7-containers-multinode. On Wed, Sep 18, 2019 at 5:19 PM Wesley Hayutin wrote: > > > On Tue, Sep 17, 2019 at 4:40 PM Emilien Macchi wrote: > >> Note that I also cleared the check for tripleo projects to accelerate the >> testing of our potential fixes. >> Hopefully we can resolve the situation really soon. >> >> On Tue, Sep 17, 2019 at 4:29 PM Wesley Hayutin >> wrote: >> >>> Greetings, >>> >>> The zuul jobs in the TripleO gate queue were put out of their misery >>> approximately at 20:14 UTC Sept 17 2019. The TripleO jobs were timing out >>> [1] and causing the gate queue to be delayed about 24 hours [2]. >>> >>> We are hoping a revert [3] will restore TripleO jobs back to their usual >>> run times. Please hold off on any rechecks or workflowing patches until >>> [3] is merged and the status on #tripleo is no longer "RED" >>> >>> We appreciate your patience while we work through this issue, the jobs >>> that were in the gate will be restored once we have confirmed and verified >>> the solution. 
>>> >>> Thank you >>> >>> >>> [1] https://bugs.launchpad.net/tripleo/+bug/1844446 >>> [2] >>> http://dashboard-ci.tripleo.org/d/YRJtmtNWk/cockpit?orgId=1&fullscreen&panelId=398 >>> [3] https://review.opendev.org/#/c/682729/ >>> >> >> >> -- >> Emilien Macchi >> > > Thanks for your continued patience re: the tripleo gate. > > We're currently waiting on a couple patches to land. > https://review.opendev.org/#/c/682905/ > https://review.opendev.org/#/c/682731 or > https://review.opendev.org/#/c/682945/ > > Also.. fyi, one can clearly see the performance regression here [1] > > [1] > http://dashboard-ci.tripleo.org/d/si1tipHZk/jobs-exploration?orgId=1&from=now-90d&to=now&fullscreen&panelId=16 > > > > > > > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at sheep.art.pl Thu Sep 19 07:04:24 2019 From: openstack at sheep.art.pl (Radomir Dopieralski) Date: Thu, 19 Sep 2019 09:04:24 +0200 Subject: Fwd: [horizon] exception for the allow-users-change-expired-password blueprint In-Reply-To: References: Message-ID: Hello y'all, I would like to ask for an FEE for the allow-users-change-expired-password blueprint ( https://blueprints.launchpad.net/horizon/+spec/allow-users-change-expired-password). The feature is important for users, and the code is reviewed and ready to merge. There are three patches, of which one, containing the form itself, is already merged, one contains the redirect on login failure, and one contains the documentation: https://review.opendev.org/#/c/672315/ https://review.opendev.org/#/c/676167/ Thank you, -- Radomir Dopieralski -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Sep 19 08:04:37 2019 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 19 Sep 2019 10:04:37 +0200 Subject: [tc] new chair In-Reply-To: References: Message-ID: Mohammed Naser wrote: > Hi everyone, > > With the new TC roster, JP and I both volunteered to be chairs of the > TC which resulted in a CIVS vote between the TC, where the outcome was > that JP won: > > http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009486.html > > Thanks for letting me serve for the past 6 months. Thanks for your leadership, Mohammed! I know from experience the TC chair role is a lot of extra work on top of normal TC member duties, so thank you for helping there. I hope the TC will continue to benefit from your first-hand experience as a user-contributor to OpenStack for a long time! JP: thanks for volunteering to take the helm. The previous chairs are available in case you need help with anything :) -- Thierry Carrez (ttx) From thierry at openstack.org Thu Sep 19 08:12:40 2019 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 19 Sep 2019 10:12:40 +0200 Subject: [tc] Results of the two TC CIVS polls In-Reply-To: References: <8d42c44c-b8cc-9120-d0a0-2b70348cbe47@ham.ie> <20190918152255.GB31404@sm-workstation> Message-ID: <29546210-f9e4-e313-25f5-d2af8ff35cc8@openstack.org> Graham Hayes wrote: > On 18/09/2019 16:22, Sean McGinnis wrote: >> On Wed, Sep 18, 2019 at 01:14:38PM +0100, Graham Hayes wrote: >>> Release Naming: >>> >>> 1. Tied: >>>     Do not change current model of geographic names - No review >>>     Name releases after major cities - >>> https://review.opendev.org/#/c/677745/ >>> 3. 
>>>     Name releases after the ICAO alphabet - >>> https://review.opendev.org/#/c/677746/ >>> >>> Full results: >>> https://civs.cs.cornell.edu/cgi-bin/results.pl?num_winners=1&id=E_0a9abdadec887a6b&algorithm=beatpath >> >> The scope of this vote was not clear. This was only as far as changing >> the naming scheme for the remaining V-X, right? Or was it meant for >> anything past Ussuri? > > OpenStack Release Naming for V-Z was the title of the poll. The intent > was we would sort out V-Z, and then worry about the post Z names. > > This is proving to be harder than expected as you can see - so I am not > sure what the route forward is. If the best alternate options we had can't beat "keep it the same", then I'd say it means we should keep the same naming system until we can come up with something sufficiently compelling to switch to. I still support finding a simpler way to find names (for V->Z *and* for the rollover), but this is not super-urgent -- we have a few months to find a better system. -- Thierry Carrez (ttx) From openstack at sheep.art.pl Thu Sep 19 08:19:41 2019 From: openstack at sheep.art.pl (Radomir Dopieralski) Date: Thu, 19 Sep 2019 10:19:41 +0200 Subject: [horizon] FFE request for allow-users-change-expired-password Message-ID: <20190919101941.0f6ed9da@ghostwheel> Hello everyone, I would like to request a feature freeze exception for Horizon for the allow-users-change-expired-password blueprint. The blueprint consists of three patches, of which one, implementing the actual password change form, is already merged, and the other two, implementing the redirect to that form when the password is expired, and adding documentation and release notes, have been already tested, reviewed and accepted (but can't be merged due to the freeze). Thank you, -- Radomir Dopieralski From thierry at openstack.org Thu Sep 19 09:25:27 2019 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 19 Sep 2019 11:25:27 +0200 Subject: [release][ironic] Some Ironic deliverables might need a refresh before final Train release Message-ID: Hi everyone, Quick reminder that for deliverables following the cycle-with-intermediary model, the release team will use the latest train release available on release week. The following deliverables have done a train release, but it was not refreshed in the last two months: - bifrost (last released on 2019-06-06) - ironic-inspector (last released on 2019-07-09) - ironic-python-agent (last released on 2019-07-09) - ironic (last released on 2019-06-21) You should consider making a new one very soon, so that we don't use an outdated version for the final release. Thanks in advance, -- Thierry Carrez (ttx) From francois.scheurer at everyware.ch Thu Sep 19 09:31:44 2019 From: francois.scheurer at everyware.ch (Francois Scheurer) Date: Thu, 19 Sep 2019 11:31:44 +0200 Subject: [mistral] cron triggers execution fails on identity:validate_token with non-admin users In-Reply-To: <46c4523f-8d63-4c13-898c-a636f38054f5@Spark> References: <241f5d5e-8b21-9081-c1d1-66e908047335@everyware.ch> <46c4523f-8d63-4c13-898c-a636f38054f5@Spark> Message-ID: <0632f26e-c15f-e971-5b02-7e11474e12c5@everyware.ch> Hi Renat The issue with cron triggers and identity:validate_token was fixed with the above patch. We could then use cron triggers for instance with nova.servers_create_image or cinder.volume_snapshots_create with success. But we hit another issue with cinder.backups_create . This call will stores the backup on our swift backend (ceph rgw). 
The workflow works when executed directly but it fails when executed via cron trigger: 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server ClientException: Container PUT failed: http://rgw.service.stage.i.ewcs.ch/swift/v1/AUTH_aeac4b07d8b144178c43c65f29fa9dac/volumebackups 401 Unauthorized   AccessDenied I will repost this under Subject: cron triggers execution fails with cinder.volume_snapshots_create as this is separate issue. Cheers Francois On 9/16/19 5:23 AM, Renat Akhmerov wrote: > Hi! > > Are you aware of other issues with cron triggers and trusts? I’d like > to reconcile all of that somehow. The users who I personally work with > don’t use cron triggers so I don’t have that much practical experience > with them. > > Thanks > > Renat Akhmerov > @Nokia > On 13 Sep 2019, 20:34 +0700, Francois Scheurer > , wrote: >> >> Hi Sa Pham >> >> >> Yes this is the good one. >> >> Bo Tran pointed it to me yesterday as well and it fixed the issue. >> >> See also: https://bugs.launchpad.net/mistral/+bug/1843175 >> >> Many Thanks to both of you ! >> >> >> Best Regards >> >> Francois Scheurer >> >> >> >> >> On 9/13/19 3:23 PM, Sa Pham wrote: >>> Hi Francois, >>> >>> You can try this patch: https://review.opendev.org/#/c/680858/ >>> >>> Sa Pham >>> >>> On Thu, Sep 12, 2019 at 11:49 PM Francois Scheurer >>> >> > wrote: >>> >>> Hello >>> >>> >>> >>> Apparently other people have the same issue and cannot use cron >>> triggers anymore: >>> >>> https://bugs.launchpad.net/mistral/+bug/1843175 >>> >>> >>> We also tried with following patch installed but the same error >>> persists: >>> >>> https://opendev.org/openstack/mistral/commit/6102c5251e29c1efe73c92935a051feff0f649c7?style=split >>> >>> >>> >>> Cheers >>> >>> Francois >>> >>> >>> >>> >>> On 9/9/19 6:23 PM, Francois Scheurer wrote: >>>> >>>> Dear All >>>> >>>> >>>> We are using Mistral 7.0.1.1 with  Openstack Rocky. (with >>>> federated users) >>>> >>>> We can create and execute a workflow via horizon, but cron >>>> triggers always fail with this error: >>>> >>>>     { >>>>         "result": >>>>             "The action raised an exception [ >>>> action_ex_id=ef878c48-d0ad-4564-9b7e-a06f07a70ded, >>>>                     action_cls='>>> 'mistral.actions.action_factory.NovaAction'>', >>>> attributes='{u'client_method_name': u'servers.find'}', >>>>                     params='{ >>>>                         u'action_region': u'ch-zh1', >>>>                         u'name': >>>> u'42724489-1912-44d1-9a59-6c7a4bebebfa' >>>>                     }' >>>>                 ] >>>>                 \n NovaAction.servers.find failed: You are not >>>> authorized to perform the requested action: >>>> identity:validate_token. (HTTP 403) (Request-ID: >>>> req-ec1aea36-c198-4307-bf01-58aca74fad33) >>>>             " >>>>     } >>>> >>>> Adding the role *admin* or *service* to the user logged in >>>> horizon is "fixing" the issue, I mean that the cron trigger >>>> then works as expected, >>>> >>>> but it would be obviously a bad idea to do this for all normal >>>> users ;-) >>>> >>>> So my question: is it a config problem on our side ? is it a >>>> known bug? or is it a feature in the sense that cron triggers >>>> are for normal users? >>>> >>>> >>>> After digging in the keystone debug logs (see at the end >>>> below), I found that RBAC check identity:validate_token an deny >>>> the authorization. 
>>>> >>>> But according to the policy.json (in keystone and in horizon), >>>> rule:owner should be enough to grant it...: >>>> >>>>             "identity:validate_token": >>>> "rule:service_admin_or_owner", >>>>                 "service_admin_or_owner": >>>> "rule:service_or_admin or rule:owner", >>>>                     "service_or_admin": "rule:admin_required or >>>> rule:service_role", >>>>                         "service_role": "role:service", >>>>                     "owner": "user_id:%(user_id)s or >>>> user_id:%(target.token.user_id)s", >>>> >>>> Thank you in advance for your help. >>>> >>>> >>>> Best Regards >>>> >>>> Francois Scheurer >>>> >>>> >>>> >>>> >>>> Keystone logs: >>>> >>>>         2019-09-05 09:38:00.902 29 DEBUG >>>> keystone.policy.backends.rules >>>> [req-1a276b9d-8276-4ec3-b516-f51f86cd1df6 fsc fscproject - >>>> testdom testdom] >>>>             enforce identity:validate_token: >>>>             { >>>>                'service_project_id':None, >>>>                'service_user_id':None, >>>>                'service_user_domain_id':None, >>>>                'service_project_domain_id':None, >>>>                'trustor_id':None, >>>>                'user_domain_id':u'testdom', >>>>                'domain_id':None, >>>>                'trust_id':u'mytrustid', >>>>                'project_domain_id':u'testdom', >>>>                'service_roles':[], >>>>                'group_ids':[], >>>>                'user_id':u'fsc', >>>>                'roles':[ >>>>                   u'_member_', >>>>                   u'creator', >>>>                   u'reader', >>>>                   u'heat_stack_owner', >>>>                   u'member', >>>>                   u'load-balancer_member'], >>>>                'system_scope':None, >>>>                'trustee_id':None, >>>>                'domain_name':None, >>>>                'is_admin_project':True, >>>>                'token':>>> (audit_id=0LAsW_0dQMWXh2cTZTLcWA, >>>> audit_chain_id=[u'0LAsW_0dQMWXh2cTZTLcWA']) at 0x7f208f4a3bd0>, >>>>                'project_id':u'fscproject' >>>>             } enforce >>>> /var/lib/kolla/venv/local/lib/python2.7/site-packages/keystone/policy/backends/rules.py:33 >>>>         2019-09-05 09:38:00.920 29 WARNING keystone.common.wsgi >>>> [req-1a276b9d-8276-4ec3-b516-f51f86cd1df6 fsc fscproject - >>>> testdom testdom] >>>>             You are not authorized to perform the requested >>>> action: identity:validate_token.: *ForbiddenAction: You are not >>>> authorized to perform the requested action: >>>> identity:validate_token.* >>>> >>>> >>>> -- >>>> >>>> >>>> EveryWare AG >>>> François Scheurer >>>> Senior Systems Engineer >>>> Zurlindenstrasse 52a >>>> CH-8003 Zürich >>>> >>>> tel: +41 44 466 60 00 >>>> fax: +41 44 466 60 10 >>>> mail:francois.scheurer at everyware.ch >>>> web:http://www.everyware.ch >>> >>> -- >>> >>> >>> EveryWare AG >>> François Scheurer >>> Senior Systems Engineer >>> Zurlindenstrasse 52a >>> CH-8003 Zürich >>> >>> tel: +41 44 466 60 00 >>> fax: +41 44 466 60 10 >>> mail:francois.scheurer at everyware.ch >>> web:http://www.everyware.ch >>> >>> >>> >>> -- >>> Sa Pham Dang >>> Master Student - Soongsil University >>> Kakaotalk: sapd95 >>> Skype: great_bn >>> >>> >> -- >> >> >> EveryWare AG >> François Scheurer >> Senior Systems Engineer >> Zurlindenstrasse 52a >> CH-8003 Zürich >> >> tel: +41 44 466 60 00 >> fax: +41 44 466 60 10 >> mail:francois.scheurer at everyware.ch >> web:http://www.everyware.ch -- EveryWare AG François Scheurer Senior Systems Engineer 
Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: francois.scheurer at everyware.ch web: http://www.everyware.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From francois.scheurer at everyware.ch Thu Sep 19 09:43:38 2019 From: francois.scheurer at everyware.ch (Francois Scheurer) Date: Thu, 19 Sep 2019 11:43:38 +0200 Subject: cron triggers execution fails with cinder.volume_snapshots_create Message-ID: <21a0f692-aa42-d81d-8968-5524e8596e19@everyware.ch> Dear All We are using Mistral with  Openstack Rocky. (with federated users) We could then use cron triggers for instance with nova.servers_create_image or cinder.volume_snapshots_create with success. But we hit an issue with cinder.backups_create . This call will stores the backup on our swift backend (ceph rgw). The workflow works when executed directly but it fails when executed via cron trigger: 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server ClientException: Container PUT failed: http://rgw.service.stage.i.ewcs.ch/swift/v1/AUTH_aeac4b07d8b144178c43c65f29fa9dac/volumebackups 401 Unauthorized   AccessDenied See details below. Cheers Francois 2019-09-17 10:46:02.436 8 INFO cinder.backup.manager [req-3b5104f4-4aca-489f-86e0-78c5523d6faa 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] Create backup started, backup: 901e1781-02ad-46d5-8ddf-e5410670cf9f volume: c0022411-59a4-4c7c-9474-c7ea8ccc7691. 2019-09-17 10:46:02.746 20 INFO cinder.api.openstack.wsgi [req-69a86fd7-b478-4e26-9692-a8416c41459a 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] GET http://cinder.service.stage.i.ewcs.ch:8776/v2/aeac4b07d8b144178c43c65f29fa9dac/backups/901e1781-02ad-46d5-8ddf-e5410670cf9f 2019-09-17 10:46:02.764 20 INFO cinder.api.openstack.wsgi [req-69a86fd7-b478-4e26-9692-a8416c41459a 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] http://cinder.service.stage.i.ewcs.ch:8776/v2/aeac4b07d8b144178c43c65f29fa9dac/backups/901e1781-02ad-46d5-8ddf-e5410670cf9f returned with HTTP 200 2019-09-17 10:46:03 +0200] "GET /v3/f099965b37ac41489e9cac8c9d208711/os-services HTTP/1.1" 200 2819 18532 "-" "Go-http-client/1.1" 2019-09-17 10:46:03 +0200] "GET /v3/f099965b37ac41489e9cac8c9d208711/snapshots HTTP/1.1" 200 17 23618 "-" "Go-http-client/1.1" 2019-09-17 10:46:03.098 22 INFO cinder.api.openstack.wsgi [req-ec93b942-2dc9-4505-8656-680bd661fc71 b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - default default] GET http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/volumes 2019-09-17 10:46:03.150 22 INFO cinder.volume.api [req-ec93b942-2dc9-4505-8656-680bd661fc71 b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - default default] Get all volumes completed successfully. 
2019-09-17 10:46:03.152 22 INFO cinder.api.openstack.wsgi [req-ec93b942-2dc9-4505-8656-680bd661fc71 b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - default default] http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/volumes returned with HTTP 200 2019-09-17 10:46:03.162 18 INFO cinder.api.openstack.wsgi [req-3e1ce449-305e-4e1f-9b51-aa56da6e2076 b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - default default] GET http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/os-services 2019-09-17 10:46:03.172 18 INFO cinder.api.openstack.wsgi [req-3e1ce449-305e-4e1f-9b51-aa56da6e2076 b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - default default] http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/os-services returned with HTTP 200 2019-09-17 10:46:03.182 19 INFO cinder.api.openstack.wsgi [req-b726191c-3710-477a-b7a0-961b74f9233f b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - default default] GET http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/snapshots 2019-09-17 10:46:03.197 19 INFO cinder.api.openstack.wsgi [req-b726191c-3710-477a-b7a0-961b74f9233f b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - default default] http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/snapshots returned with HTTP 200 2019-09-17 10:46:03.197 19 INFO cinder.volume.api [req-b726191c-3710-477a-b7a0-961b74f9233f b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - default default] Get all snapshots completed successfully. 2019-09-17 10:46:03.878 30 INFO cinder.volume.manager [req-3b5104f4-4aca-489f-86e0-78c5523d6faa 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] Initialize volume connection completed successfully. 2019-09-17 10:46:04.468 30 INFO cinder.volume.manager [req-3b5104f4-4aca-489f-86e0-78c5523d6faa 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] Terminate volume connection completed successfully. 2019-09-17 10:46:04.501 30 INFO cinder.volume.manager [req-3b5104f4-4aca-489f-86e0-78c5523d6faa 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] Remove volume export completed successfully. 
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server container = self._create_container(backup) 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server query_string=query_string) 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server raise ClientException.from_response(resp, 'Container PUT failed', body) 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message) 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server result = f(*args, **kwargs) 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server self._update_backup_error(backup, six.text_type(err)) 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server self.conn.put_container(container) 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server self.force_reraise() 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server self.put_container(backup.container) 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server service_token=self.service_token, **kwargs) 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb) 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server tpool.Proxy(device_path)) 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server updates = self._run_backup(context, backup, volume) 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server volume_size_bytes) = self._prepare_backup(backup) 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/chunkeddriver.py", line 226, in _create_container 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/chunkeddriver.py", line 327, in _prepare_backup 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/chunkeddriver.py", line 535, in backup 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/drivers/swift.py", line 315, in put_container 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/manager.py", line 414, in create_backup 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/manager.py", line 425, in create_backup 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/manager.py", line 502, in _run_backup 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 194, in _do_dispatch 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 265, in dispatch 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 163, in _process_incoming 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__ 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/osprofiler/profiler.py", line 159, in wrapper 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/swiftclient/client.py", line 1061, in put_container 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/swiftclient/client.py", line 1722, in _retry 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/swiftclient/client.py", line 1808, in put_container 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server [req-3b5104f4-4aca-489f-86e0-78c5523d6faa 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] Exception during message handling: ClientException: Container PUT failed: http://rgw.service.stage.i.ewcs.ch/swift/v1/AUTH_aeac4b07d8b144178c43c65f29fa9dac/volumebackups 401 Unauthorized   AccessDenied 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server ClientException: Container PUT failed: http://rgw.service.stage.i.ewcs.ch/swift/v1/AUTH_aeac4b07d8b144178c43c65f29fa9dac/volumebackups 401 Unauthorized   AccessDenied 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server Traceback (most recent call last): -- EveryWare AG François Scheurer Senior Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: francois.scheurer at everyware.ch web: http://www.everyware.ch -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From dtantsur at redhat.com Thu Sep 19 09:54:48 2019 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 19 Sep 2019 11:54:48 +0200 Subject: [release][ironic] Some Ironic deliverables might need a refresh before final Train release In-Reply-To: References: Message-ID: Hi Thierry, On 9/19/19 11:25 AM, Thierry Carrez wrote: > Hi everyone, > > Quick reminder that for deliverables following the cycle-with-intermediary > model, the release team will use the latest train release available on release > week. > > The following deliverables have done a train release, but it was not refreshed > in the last two months: > > - bifrost (last released on 2019-06-06) > - ironic-inspector (last released on 2019-07-09) > - ironic-python-agent (last released on 2019-07-09) > - ironic (last released on 2019-06-21) > > You should consider making a new one very soon, so that we don't use an outdated > version for the final release. Thanks for the reminder! We're working on getting these releases out ASAP, there are a few things that have to land though. I hope to get them finished by end of this week or early next week, will it work for you? Dmitry > > Thanks in advance, > From jichenjc at cn.ibm.com Thu Sep 19 10:17:03 2019 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Thu, 19 Sep 2019 10:17:03 +0000 Subject: [openstack-dev][cinder] question on cinder-volume A/A configuration Message-ID: An HTML attachment was scrubbed... 
URL: From hberaud at redhat.com Thu Sep 19 11:18:16 2019 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 19 Sep 2019 13:18:16 +0200 Subject: cron triggers execution fails with cinder.volume_snapshots_create In-Reply-To: <21a0f692-aa42-d81d-8968-5524e8596e19@everyware.ch> References: <21a0f692-aa42-d81d-8968-5524e8596e19@everyware.ch> Message-ID: Hello François, Given your error, are you sure your cron task load the right config with the right authorized user or something related? Le jeu. 19 sept. 2019 à 11:51, Francois Scheurer < francois.scheurer at everyware.ch> a écrit : > Dear All > > > We are using Mistral with Openstack Rocky. (with federated users) > We could then use cron triggers for instance with > nova.servers_create_image or cinder.volume_snapshots_create with success. > > > But we hit an issue with cinder.backups_create . > > This call will stores the backup on our swift backend (ceph rgw). > The workflow works when executed directly but it fails when executed via > cron trigger: > > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > ClientException: Container PUT failed: > > http://rgw.service.stage.i.ewcs.ch/swift/v1/AUTH_aeac4b07d8b144178c43c65f29fa9dac/volumebackups > 401 Unauthorized AccessDenied > > See details below. > > > > > > Cheers > > Francois > > > > 2019-09-17 10:46:02.436 8 INFO cinder.backup.manager > [req-3b5104f4-4aca-489f-86e0-78c5523d6faa > 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - > 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] > Create backup started, backup: 901e1781-02ad-46d5-8ddf-e5410670cf9f > volume: c0022411-59a4-4c7c-9474-c7ea8ccc7691. > 2019-09-17 10:46:02.746 20 INFO cinder.api.openstack.wsgi > [req-69a86fd7-b478-4e26-9692-a8416c41459a > 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - > 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] GET > > http://cinder.service.stage.i.ewcs.ch:8776/v2/aeac4b07d8b144178c43c65f29fa9dac/backups/901e1781-02ad-46d5-8ddf-e5410670cf9f > 2019-09-17 > > 10:46:02.764 20 INFO cinder.api.openstack.wsgi > [req-69a86fd7-b478-4e26-9692-a8416c41459a > 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - > 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] > > http://cinder.service.stage.i.ewcs.ch:8776/v2/aeac4b07d8b144178c43c65f29fa9dac/backups/901e1781-02ad-46d5-8ddf-e5410670cf9f > returned with HTTP 200 > 2019-09-17 10:46:03 +0200] "GET > /v3/f099965b37ac41489e9cac8c9d208711/os-services HTTP/1.1" 200 2819 > 18532 "-" "Go-http-client/1.1" > 2019-09-17 10:46:03 +0200] "GET > /v3/f099965b37ac41489e9cac8c9d208711/snapshots HTTP/1.1" 200 17 23618 > "-" "Go-http-client/1.1" > 2019-09-17 10:46:03.098 22 INFO cinder.api.openstack.wsgi > [req-ec93b942-2dc9-4505-8656-680bd661fc71 > b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - > default default] GET > > http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/volumes > 2019-09-17 > > 10:46:03.150 22 INFO cinder.volume.api > [req-ec93b942-2dc9-4505-8656-680bd661fc71 > b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - > default default] Get all volumes completed successfully. 
> 2019-09-17 10:46:03.152 22 INFO cinder.api.openstack.wsgi > [req-ec93b942-2dc9-4505-8656-680bd661fc71 > b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - > default default] > > http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/volumes > returned with HTTP 200 > 2019-09-17 10:46:03.162 18 INFO cinder.api.openstack.wsgi > [req-3e1ce449-305e-4e1f-9b51-aa56da6e2076 > b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - > default default] GET > > http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/os-services > 2019-09-17 > > 10:46:03.172 18 INFO cinder.api.openstack.wsgi > [req-3e1ce449-305e-4e1f-9b51-aa56da6e2076 > b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - > default default] > > http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/os-services > returned with HTTP 200 > 2019-09-17 10:46:03.182 19 INFO cinder.api.openstack.wsgi > [req-b726191c-3710-477a-b7a0-961b74f9233f > b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - > default default] GET > > http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/snapshots > 2019-09-17 > > 10:46:03.197 19 INFO cinder.api.openstack.wsgi > [req-b726191c-3710-477a-b7a0-961b74f9233f > b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - > default default] > > http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/snapshots > returned with HTTP 200 > 2019-09-17 10:46:03.197 19 INFO cinder.volume.api > [req-b726191c-3710-477a-b7a0-961b74f9233f > b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - > default default] Get all snapshots completed successfully. > 2019-09-17 10:46:03.878 30 INFO cinder.volume.manager > [req-3b5104f4-4aca-489f-86e0-78c5523d6faa > 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - > 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] > Initialize volume connection completed successfully. > 2019-09-17 10:46:04.468 30 INFO cinder.volume.manager > [req-3b5104f4-4aca-489f-86e0-78c5523d6faa > 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - > 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] > Terminate volume connection completed successfully. > 2019-09-17 10:46:04.501 30 INFO cinder.volume.manager > [req-3b5104f4-4aca-489f-86e0-78c5523d6faa > 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - > 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] > Remove volume export completed successfully. 
> 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server container = > self._create_container(backup) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > query_string=query_string) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server raise > ClientException.from_response(resp, 'Container PUT failed', body) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server res = > self.dispatcher.dispatch(message) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server result = > f(*args, **kwargs) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server result = > func(ctxt, **new_args) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server return > self._do_dispatch(endpoint, method, ctxt, args) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > self._update_backup_error(backup, six.text_type(err)) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > self.conn.put_container(container) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > self.force_reraise() > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > self.put_container(backup.container) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > service_token=self.service_token, **kwargs) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > six.reraise(self.type_, self.value, self.tb) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > tpool.Proxy(device_path)) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server updates = > self._run_backup(context, backup, volume) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > volume_size_bytes) = self._prepare_backup(backup) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/chunkeddriver.py", > > line 226, in _create_container > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/chunkeddriver.py", > > line 327, in _prepare_backup > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/chunkeddriver.py", > > line 535, in backup > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/drivers/swift.py", > > line 315, in put_container > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/manager.py", > > line 414, in create_backup > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/manager.py", > > line 425, in create_backup > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/manager.py", > > line 502, in _run_backup > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", > > line 194, in _do_dispatch > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", > > line 265, in dispatch > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", > > line 163, in _process_incoming > 
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_utils/excutils.py", > > line 196, in force_reraise > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_utils/excutils.py", > > line 220, in __exit__ > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/osprofiler/profiler.py", > > line 159, in wrapper > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/swiftclient/client.py", > > line 1061, in put_container > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/swiftclient/client.py", > > line 1722, in _retry > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/swiftclient/client.py", > > line 1808, in put_container > > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > [req-3b5104f4-4aca-489f-86e0-78c5523d6faa > 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - > 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] > Exception during message handling: ClientException: Container PUT > failed: > > http://rgw.service.stage.i.ewcs.ch/swift/v1/AUTH_aeac4b07d8b144178c43c65f29fa9dac/volumebackups > 401 Unauthorized AccessDenied > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > ClientException: Container PUT failed: > > http://rgw.service.stage.i.ewcs.ch/swift/v1/AUTH_aeac4b07d8b144178c43c65f29fa9dac/volumebackups > 401 Unauthorized AccessDenied > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server Traceback > (most recent call last): > > > -- > > > EveryWare AG > François Scheurer > Senior Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: francois.scheurer at everyware.ch > web: http://www.everyware.ch > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From e0ne at e0ne.info Thu Sep 19 11:46:36 2019 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Thu, 19 Sep 2019 14:46:36 +0300 Subject: [horizon] FFE request for allow-users-change-expired-password In-Reply-To: <20190919101941.0f6ed9da@ghostwheel> References: <20190919101941.0f6ed9da@ghostwheel> Message-ID: Hi, This code is in pretty good shape and is partially merged [1]. 
I'm OK got grant FFE for this [1] https://review.opendev.org/#/q/topic:bp/allow-users-change-expired-password+(status:open+OR+status:merged) Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Thu, Sep 19, 2019 at 11:20 AM Radomir Dopieralski wrote: > Hello everyone, > > I would like to request a feature freeze exception for Horizon for the > allow-users-change-expired-password blueprint. The blueprint consists > of three patches, of which one, implementing the actual password change > form, is already merged, and the other two, implementing the redirect > to that form when the password is expired, and adding documentation and > release notes, have been already tested, reviewed and accepted (but > can't be merged due to the freeze). > > Thank you, > -- > Radomir Dopieralski > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmendra.kushwaha at india.nec.com Thu Sep 19 11:54:53 2019 From: dharmendra.kushwaha at india.nec.com (Dharmendra Kushwaha) Date: Thu, 19 Sep 2019 11:54:53 +0000 Subject: [Tacker] {TOSCA] Regarding attaching persistent volume to a VDU in Tosca Template In-Reply-To: References: Message-ID: Hi Nehal, Yes, a volume can be attached with VNF for persistent storage. Please see for detail reference. [1]: https://github.com/openstack/tacker/blob/master/doc/source/reference/block_storage_usage_guide.rst [2]: https://github.com/openstack/tacker/blob/master/samples/tosca-templates/vnfd/tosca-vnfd-block-attach.yaml Thanks & regards Dharmendra Kushwaha ________________________________________ From: Nehal Bansal Sent: Wednesday, September 18, 2019 8:29 PM To: openstack-dev at lists.openstack.org Subject: [Tacker] {TOSCA] Regarding attaching persistent volume to a VDU in Tosca Template Hi, I want to keep a backup of the data that my VNF creates. For this, I wanted to create a persistent volume such that if a VNF gets deleted and a new one gets launched, it can access the data created by the previous VNF. Is there a way to do this? Thanks. Nehal ________________________________ The contents of this e-mail and any attachment(s) are confidential and intended for the named recipient(s) only. It shall not attach any liability on the originator or NECTI or its affiliates. Any views or opinions presented in this email are solely those of the author and may not necessarily reflect the opinions of NECTI or its affiliates. Any form of reproduction, dissemination, copying, disclosure, modification, distribution and / or publication of this message without the prior written consent of the author of this e-mail is strictly prohibited. If you have received this email in error please delete it and notify the sender immediately. From amotoki at gmail.com Thu Sep 19 12:16:44 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Thu, 19 Sep 2019 21:16:44 +0900 Subject: [horizon] FFE request for allow-users-change-expired-password In-Reply-To: References: <20190919101941.0f6ed9da@ghostwheel> Message-ID: I agree that we can grant FFE for this. Note that when we merge the patches on the allow-users-change-expired-password blueprint we also need to land https://review.opendev.org/#/c/682604/. Thanks, Akihiro Motoki (irc: amotoki) On Thu, Sep 19, 2019 at 8:50 PM Ivan Kolodyazhny wrote: > > Hi, > > This code is in pretty good shape and is partially merged [1]. 
I'm OK got grant FFE for this > > [1] https://review.opendev.org/#/q/topic:bp/allow-users-change-expired-password+(status:open+OR+status:merged) > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ > > > On Thu, Sep 19, 2019 at 11:20 AM Radomir Dopieralski wrote: >> >> Hello everyone, >> >> I would like to request a feature freeze exception for Horizon for the >> allow-users-change-expired-password blueprint. The blueprint consists >> of three patches, of which one, implementing the actual password change >> form, is already merged, and the other two, implementing the redirect >> to that form when the password is expired, and adding documentation and >> release notes, have been already tested, reviewed and accepted (but >> can't be merged due to the freeze). >> >> Thank you, >> -- >> Radomir Dopieralski >> From doug at doughellmann.com Thu Sep 19 12:40:37 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 19 Sep 2019 08:40:37 -0400 Subject: [tc] new chair In-Reply-To: References: Message-ID: <15050EA2-FACE-48E0-8D76-C969EFA1CE89@doughellmann.com> > On Sep 19, 2019, at 4:04 AM, Thierry Carrez wrote: > > Mohammed Naser wrote: >> Hi everyone, >> With the new TC roster, JP and I both volunteered to be chairs of the >> TC which resulted in a CIVS vote between the TC, where the outcome was >> that JP won: >> http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009486.html >> Thanks for letting me serve for the past 6 months. > > Thanks for your leadership, Mohammed! > > I know from experience the TC chair role is a lot of extra work on top of normal TC member duties, so thank you for helping there. > > I hope the TC will continue to benefit from your first-hand experience as a user-contributor to OpenStack for a long time! > > JP: thanks for volunteering to take the helm. The previous chairs are available in case you need help with anything :) > > -- > Thierry Carrez (ttx) > +1 to everything Thierry said. Thank you both for serving the community. Doug From dtantsur at redhat.com Thu Sep 19 13:01:14 2019 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 19 Sep 2019 15:01:14 +0200 Subject: [ironic] FFE: PXE boot retry Message-ID: Hi folks, I would like to ask a late FFE for https://storyboard.openstack.org/#!/story/2005167 - retry PXE boot on timeout. Random PXE failures have long been haunting both our consumers and our CI. This change may be a big relief for everyone. Since we're very late in the cycle, and the change is on the critical path, I'm making it off by default (except for the CI) with the intend to reconsider it in Ussuri. The patch https://review.opendev.org/#/c/683127/ has been tested locally, I will finish unit tests and a release note tomorrow (CEST) morning. Please let me know if you have any concerns or questions. Dmitry -------------- next part -------------- An HTML attachment was scrubbed... URL: From francois.scheurer at everyware.ch Thu Sep 19 13:21:27 2019 From: francois.scheurer at everyware.ch (Francois Scheurer) Date: Thu, 19 Sep 2019 15:21:27 +0200 Subject: cron triggers execution fails with cinder.volume_snapshots_create In-Reply-To: References: <21a0f692-aa42-d81d-8968-5524e8596e19@everyware.ch> Message-ID: Hello Hervé I tried again, this time defining explictitly all parameters, including action_region and snapshot_id. The results were same as previously: it works when executing the workflow directly but fails with a cron trigger. 
Or to be more precise, the cron trigger execution "succeeds" but the resulting volume backup fails : (.venv) ewfsc at ewos1-kolla1-stage:~$ openstack volume backup show -f json abe96cb1-a5e1-4035-87dd-b4292101a921 {   "status": "error",   "object_count": 0,   "fail_reason": "Container PUT failed: http://rgw.service.stage.i.ewcs.ch/swift/v1/AUTH_aeac4b07d8b144178c43c65f29fa9dac/volumebackups 401 Unauthorized   AccessDenied",   "description": null,   "name": "fsc-vol-1-img-vol-bak",   "availability_zone": "ch-zh1-az1",   "created_at": "2019-09-19T13:15:02.000000",   "volume_id": "c0022411-59a4-4c7c-9474-c7ea8ccc7691",   "updated_at": "2019-09-19T13:15:04.000000",   "data_timestamp": "2019-09-19T12:38:02.000000",   "has_dependent_backups": false,   "snapshot_id": "b4b174eb-e6d2-4f66-8070-212e3e7e6114",   "container": "volumebackups",   "size": 1,   "id": "abe96cb1-a5e1-4035-87dd-b4292101a921",   "is_incremental": false } Best Regards Francois Details: Workflow     ---     version: "2.0"     create_vol_backup:       type: direct       input:         - volume_id         - container         - name         - incremental         - force         - action_region         - snapshot_id       tasks:         create_vol_backup:           action: cinder.backups_create volume_id=<% $.volume_id %> name=<% $.name %> container=<% $.container %> incremental=<% $.incremental %> force=<% $.force %> action_region=<% $.action_region%> snapshot_id=<% $.snapshot_id %>           publish:             backup_id: <% task(create_vol_backup).result %>             create_state: SUCCESS           publish-on-error:             create_state: ERROR Input     {         "volume_id": "c0022411-59a4-4c7c-9474-c7ea8ccc7691",         "container": "volumebackups",         "name": "fsc-vol-1-img-vol-bak",         "incremental": "false",         "force": "true",         "action_region": "ch-zh1",         "snapshot_id": "b4b174eb-e6d2-4f66-8070-212e3e7e6114"     } Params     {         "namespace": "",         "env": {},         "task_name": "create_vol_backup_task"     } On 9/19/19 2:28 PM, Francois Scheurer wrote: > > Hi Herve > > > Thank you for your reply. > > I am using the same input & params as when executing the workflow > directly from horizon (successfully): > >     { >         "incremental": "false", >         "force": "true", >         "name": "fsc-create-vol-backup", >         "volume_id": "c0022411-59a4-4c7c-9474-c7ea8ccc7691" >     } > >     { >         "namespace": "", >         "env": {}, >         "task_name": "create_vol_backup_task" >     } > > Maybe I need some additional params when executing via cron? > > I will try specfying the objectstore container explicitly. > > > Best Regards > > Francois > > > > > On 9/19/19 1:18 PM, Herve Beraud wrote: >> Hello François, >> >> Given your error, are you sure your cron task load the right config >> with the right authorized user or something related? >> >> Le jeu. 19 sept. 2019 à 11:51, Francois Scheurer >> > > a écrit : >> >> Dear All >> >> >> We are using Mistral with  Openstack Rocky. (with federated users) >> We could then use cron triggers for instance with >> nova.servers_create_image or cinder.volume_snapshots_create with >> success. >> >> >> But we hit an issue with cinder.backups_create . >> >> This call will stores the backup on our swift backend (ceph rgw). 
>> The workflow works when executed directly but it fails when >> executed via >> cron trigger: >> >> 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server >> ClientException: Container PUT failed: >> http://rgw.service.stage.i.ewcs.ch/swift/v1/AUTH_aeac4b07d8b144178c43c65f29fa9dac/volumebackups >> >> 401 Unauthorized   AccessDenied >> >> See details below. >> >> >> >> >> >> Cheers >> >> Francois >> -- EveryWare AG François Scheurer Senior Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: francois.scheurer at everyware.ch web: http://www.everyware.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From opensrloo at gmail.com Thu Sep 19 14:10:57 2019 From: opensrloo at gmail.com (Ruby Loo) Date: Thu, 19 Sep 2019 10:10:57 -0400 Subject: [ironic] FFE: PXE boot retry In-Reply-To: References: Message-ID: Hi Dmitry, I'm fine with the FFE; it'll have minimal impact (and low risk of failure when turned off). I think we need 2 cores to agree to 'sponsor' (ie review) the feature. I am not sure we should turn it on by default, but we can discuss that in Ussuri. (Or maybe it'll depends on what the default timeout value might be...) Oh. I'm ok with reviewing. When's the cut-off date by which this needs to land? --ruby On Thu, Sep 19, 2019 at 9:04 AM Dmitry Tantsur wrote: > Hi folks, > > I would like to ask a late FFE for > https://storyboard.openstack.org/#!/story/2005167 - retry PXE boot on > timeout. Random PXE failures have long been haunting both our consumers and > our CI. This change may be a big relief for everyone. Since we're very late > in the cycle, and the change is on the critical path, I'm making it off by > default (except for the CI) with the intend to reconsider it in Ussuri. The > patch https://review.opendev.org/#/c/683127/ has been tested locally, I > will finish unit tests and a release note tomorrow (CEST) morning. > > Please let me know if you have any concerns or questions. > > Dmitry > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Sep 19 14:22:37 2019 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 19 Sep 2019 16:22:37 +0200 Subject: [release][ironic] Some Ironic deliverables might need a refresh before final Train release In-Reply-To: References: Message-ID: <48fb78be-2763-4fb1-5d08-ec12220b0668@openstack.org> Dmitry Tantsur wrote: > On 9/19/19 11:25 AM, Thierry Carrez wrote: >> Hi everyone, >> >> Quick reminder that for deliverables following the cycle-with-intermediary >> model, the release team will use the latest train release available on release >> week. >> >> The following deliverables have done a train release, but it was not refreshed >> in the last two months: >> >> - bifrost (last released on 2019-06-06) >> - ironic-inspector (last released on 2019-07-09) >> - ironic-python-agent (last released on 2019-07-09) >> - ironic (last released on 2019-06-21) >> >> You should consider making a new one very soon, so that we don't use an outdated >> version for the final release. > > Thanks for the reminder! We're working on getting these releases out ASAP, there > are a few things that have to land though. I hope to get them finished by end of > this week or early next week, will it work for you? Sure, that works! 
You actually have until the end of the month to update. After that we'll have to fallback on the existing release. -- Thierry Carrez (ttx) From hberaud at redhat.com Thu Sep 19 14:31:13 2019 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 19 Sep 2019 16:31:13 +0200 Subject: cron triggers execution fails with cinder.volume_snapshots_create In-Reply-To: References: <21a0f692-aa42-d81d-8968-5524e8596e19@everyware.ch> Message-ID: Thanks François for your reply, Have you seen the original authentication error during the running Le jeu. 19 sept. 2019 à 15:22, Francois Scheurer < francois.scheurer at everyware.ch> a écrit : > Hello Hervé > > > I tried again, this time defining explictitly all parameters, including > action_region and snapshot_id. > > The results were same as previously: it works when executing the workflow > directly but fails with a cron trigger. > > Or to be more precise, the cron trigger execution "succeeds" but the > resulting volume backup fails : > Thanks François for your reply, Have you seen the original authentication error during this execution? If not then I guess you missed some params during your first tries which introduced the authentication issue. I guess then that the volume backup fails is another issue, not related to the first authentication issue... > > (.venv) ewfsc at ewos1-kolla1-stage:~$ openstack volume backup show -f json > abe96cb1-a5e1-4035-87dd-b4292101a921 > { > "status": "error", > "object_count": 0, > "fail_reason": "Container PUT failed: > http://rgw.service.stage.i.ewcs.ch/swift/v1/AUTH_aeac4b07d8b144178c43c65f29fa9dac/volumebackups > 401 Unauthorized AccessDenied", > "description": null, > "name": "fsc-vol-1-img-vol-bak", > "availability_zone": "ch-zh1-az1", > "created_at": "2019-09-19T13:15:02.000000", > "volume_id": "c0022411-59a4-4c7c-9474-c7ea8ccc7691", > "updated_at": "2019-09-19T13:15:04.000000", > "data_timestamp": "2019-09-19T12:38:02.000000", > "has_dependent_backups": false, > "snapshot_id": "b4b174eb-e6d2-4f66-8070-212e3e7e6114", > "container": "volumebackups", > "size": 1, > "id": "abe96cb1-a5e1-4035-87dd-b4292101a921", > "is_incremental": false > } > > Best Regards > > Francois > > > Details: > > Workflow > --- > version: "2.0" > create_vol_backup: > type: direct > input: > - volume_id > - container > - name > - incremental > - force > - action_region > - snapshot_id > > tasks: > create_vol_backup: > action: cinder.backups_create volume_id=<% $.volume_id %> > name=<% $.name %> container=<% $.container %> incremental=<% $.incremental > %> force=<% $.force %> action_region=<% $.action_region%> snapshot_id=<% > $.snapshot_id %> > publish: > backup_id: <% task(create_vol_backup).result %> > create_state: SUCCESS > publish-on-error: > create_state: ERROR > > Input > { > "volume_id": "c0022411-59a4-4c7c-9474-c7ea8ccc7691", > "container": "volumebackups", > "name": "fsc-vol-1-img-vol-bak", > "incremental": "false", > "force": "true", > "action_region": "ch-zh1", > "snapshot_id": "b4b174eb-e6d2-4f66-8070-212e3e7e6114" > } > > Params > { > "namespace": "", > "env": {}, > "task_name": "create_vol_backup_task" > } > > > On 9/19/19 2:28 PM, Francois Scheurer wrote: > > Hi Herve > > > Thank you for your reply. 
> > I am using the same input & params as when executing the workflow directly > from horizon (successfully): > > { > "incremental": "false", > "force": "true", > "name": "fsc-create-vol-backup", > "volume_id": "c0022411-59a4-4c7c-9474-c7ea8ccc7691" > } > > { > "namespace": "", > "env": {}, > "task_name": "create_vol_backup_task" > } > > Maybe I need some additional params when executing via cron? > > I will try specfying the objectstore container explicitly. > > > Best Regards > > Francois > > > > > On 9/19/19 1:18 PM, Herve Beraud wrote: > > Hello François, > > Given your error, are you sure your cron task load the right config with > the right authorized user or something related? > > Le jeu. 19 sept. 2019 à 11:51, Francois Scheurer < > francois.scheurer at everyware.ch> a écrit : > >> Dear All >> >> >> We are using Mistral with Openstack Rocky. (with federated users) >> We could then use cron triggers for instance with >> nova.servers_create_image or cinder.volume_snapshots_create with success. >> >> >> But we hit an issue with cinder.backups_create . >> >> This call will stores the backup on our swift backend (ceph rgw). >> The workflow works when executed directly but it fails when executed via >> cron trigger: >> >> 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server >> ClientException: Container PUT failed: >> >> http://rgw.service.stage.i.ewcs.ch/swift/v1/AUTH_aeac4b07d8b144178c43c65f29fa9dac/volumebackups >> 401 Unauthorized AccessDenied >> >> See details below. >> >> >> >> >> >> Cheers >> >> Francois >> > -- > > > EveryWare AG > François Scheurer > Senior Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: francois.scheurer at everyware.ch > web: http://www.everyware.ch > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From francois.scheurer at everyware.ch Thu Sep 19 12:28:40 2019 From: francois.scheurer at everyware.ch (Francois Scheurer) Date: Thu, 19 Sep 2019 14:28:40 +0200 Subject: cron triggers execution fails with cinder.volume_snapshots_create In-Reply-To: References: <21a0f692-aa42-d81d-8968-5524e8596e19@everyware.ch> Message-ID: Hi Herve Thank you for your reply. I am using the same input & params as when executing the workflow directly from horizon (successfully):     {         "incremental": "false",         "force": "true",         "name": "fsc-create-vol-backup",         "volume_id": "c0022411-59a4-4c7c-9474-c7ea8ccc7691"     }     {         "namespace": "",         "env": {},         "task_name": "create_vol_backup_task"     } Maybe I need some additional params when executing via cron? 
I will try specfying the objectstore container explicitly. Best Regards Francois On 9/19/19 1:18 PM, Herve Beraud wrote: > Hello François, > > Given your error, are you sure your cron task load the right config > with the right authorized user or something related? > > Le jeu. 19 sept. 2019 à 11:51, Francois Scheurer > > a écrit : > > Dear All > > > We are using Mistral with  Openstack Rocky. (with federated users) > We could then use cron triggers for instance with > nova.servers_create_image or cinder.volume_snapshots_create with > success. > > > But we hit an issue with cinder.backups_create . > > This call will stores the backup on our swift backend (ceph rgw). > The workflow works when executed directly but it fails when > executed via > cron trigger: > > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > ClientException: Container PUT failed: > http://rgw.service.stage.i.ewcs.ch/swift/v1/AUTH_aeac4b07d8b144178c43c65f29fa9dac/volumebackups > > 401 Unauthorized   AccessDenied > > See details below. > > > > > > Cheers > > Francois > > > > 2019-09-17 10:46:02.436 8 INFO cinder.backup.manager > [req-3b5104f4-4aca-489f-86e0-78c5523d6faa > 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - > 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] > Create backup started, backup: 901e1781-02ad-46d5-8ddf-e5410670cf9f > volume: c0022411-59a4-4c7c-9474-c7ea8ccc7691. > 2019-09-17 10:46:02.746 20 INFO cinder.api.openstack.wsgi > [req-69a86fd7-b478-4e26-9692-a8416c41459a > 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - > 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] > GET > http://cinder.service.stage.i.ewcs.ch:8776/v2/aeac4b07d8b144178c43c65f29fa9dac/backups/901e1781-02ad-46d5-8ddf-e5410670cf9f > 2019-09-17 > > 10:46:02.764 20 INFO cinder.api.openstack.wsgi > [req-69a86fd7-b478-4e26-9692-a8416c41459a > 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - > 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] > http://cinder.service.stage.i.ewcs.ch:8776/v2/aeac4b07d8b144178c43c65f29fa9dac/backups/901e1781-02ad-46d5-8ddf-e5410670cf9f > > returned with HTTP 200 > 2019-09-17 10:46:03 +0200] "GET > /v3/f099965b37ac41489e9cac8c9d208711/os-services HTTP/1.1" 200 2819 > 18532 "-" "Go-http-client/1.1" > 2019-09-17 10:46:03 +0200] "GET > /v3/f099965b37ac41489e9cac8c9d208711/snapshots HTTP/1.1" 200 17 23618 > "-" "Go-http-client/1.1" > 2019-09-17 10:46:03.098 22 INFO cinder.api.openstack.wsgi > [req-ec93b942-2dc9-4505-8656-680bd661fc71 > b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - > default default] GET > http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/volumes > 2019-09-17 > > 10:46:03.150 22 INFO cinder.volume.api > [req-ec93b942-2dc9-4505-8656-680bd661fc71 > b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - > default default] Get all volumes completed successfully. 
> 2019-09-17 10:46:03.152 22 INFO cinder.api.openstack.wsgi > [req-ec93b942-2dc9-4505-8656-680bd661fc71 > b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - > default default] > http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/volumes > > returned with HTTP 200 > 2019-09-17 10:46:03.162 18 INFO cinder.api.openstack.wsgi > [req-3e1ce449-305e-4e1f-9b51-aa56da6e2076 > b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - > default default] GET > http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/os-services > 2019-09-17 > > 10:46:03.172 18 INFO cinder.api.openstack.wsgi > [req-3e1ce449-305e-4e1f-9b51-aa56da6e2076 > b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - > default default] > http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/os-services > > returned with HTTP 200 > 2019-09-17 10:46:03.182 19 INFO cinder.api.openstack.wsgi > [req-b726191c-3710-477a-b7a0-961b74f9233f > b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - > default default] GET > http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/snapshots > 2019-09-17 > > 10:46:03.197 19 INFO cinder.api.openstack.wsgi > [req-b726191c-3710-477a-b7a0-961b74f9233f > b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - > default default] > http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/snapshots > > returned with HTTP 200 > 2019-09-17 10:46:03.197 19 INFO cinder.volume.api > [req-b726191c-3710-477a-b7a0-961b74f9233f > b141574ee71f49a0b53a05ae968576c5 f099965b37ac41489e9cac8c9d208711 - > default default] Get all snapshots completed successfully. > 2019-09-17 10:46:03.878 30 INFO cinder.volume.manager > [req-3b5104f4-4aca-489f-86e0-78c5523d6faa > 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - > 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] > Initialize volume connection completed successfully. > 2019-09-17 10:46:04.468 30 INFO cinder.volume.manager > [req-3b5104f4-4aca-489f-86e0-78c5523d6faa > 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - > 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] > Terminate volume connection completed successfully. > 2019-09-17 10:46:04.501 30 INFO cinder.volume.manager > [req-3b5104f4-4aca-489f-86e0-78c5523d6faa > 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - > 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] > Remove volume export completed successfully. 
> 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server container = > self._create_container(backup) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > query_string=query_string) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server raise > ClientException.from_response(resp, 'Container PUT failed', body) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server res = > self.dispatcher.dispatch(message) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server result = > f(*args, **kwargs) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server result = > func(ctxt, **new_args) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server return > self._do_dispatch(endpoint, method, ctxt, args) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > self._update_backup_error(backup, six.text_type(err)) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > self.conn.put_container(container) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > self.force_reraise() > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > self.put_container(backup.container) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > service_token=self.service_token, **kwargs) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > six.reraise(self.type_, self.value, self.tb) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > tpool.Proxy(device_path)) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server updates = > self._run_backup(context, backup, volume) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > volume_size_bytes) = self._prepare_backup(backup) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/chunkeddriver.py", > > line 226, in _create_container > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/chunkeddriver.py", > > line 327, in _prepare_backup > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/chunkeddriver.py", > > line 535, in backup > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/drivers/swift.py", > > line 315, in put_container > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/manager.py", > > line 414, in create_backup > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/manager.py", > > line 425, in create_backup > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/manager.py", > > line 502, in _run_backup > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", > > line 194, in _do_dispatch > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", > > line 265, in dispatch > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", > > line 163, in _process_incoming > 
2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_utils/excutils.py", > > line 196, in force_reraise > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_utils/excutils.py", > > line 220, in __exit__ > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/osprofiler/profiler.py", > > line 159, in wrapper > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/swiftclient/client.py", > > line 1061, in put_container > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/swiftclient/client.py", > > line 1722, in _retry > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/swiftclient/client.py", > > line 1808, in put_container > > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > [req-3b5104f4-4aca-489f-86e0-78c5523d6faa > 3e9b1a4fe95048a3b98fb5abebd44f6c aeac4b07d8b144178c43c65f29fa9dac - > 18b20663b571455c8da31fde994d031a 18b20663b571455c8da31fde994d031a] > Exception during message handling: ClientException: Container PUT > failed: > http://rgw.service.stage.i.ewcs.ch/swift/v1/AUTH_aeac4b07d8b144178c43c65f29fa9dac/volumebackups > > 401 Unauthorized   AccessDenied > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > ClientException: Container PUT failed: > http://rgw.service.stage.i.ewcs.ch/swift/v1/AUTH_aeac4b07d8b144178c43c65f29fa9dac/volumebackups > > 401 Unauthorized   AccessDenied > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server Traceback > (most recent call last): > > > -- > > > EveryWare AG > François Scheurer > Senior Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: francois.scheurer at everyware.ch > > web: http://www.everyware.ch > > > > -- > Hervé Beraud > Senior Software Engineer > Red Hat - Openstack Oslo > irc: hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > -- EveryWare AG François Scheurer Senior Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: francois.scheurer at everyware.ch web: http://www.everyware.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From rajatdhasmana at gmail.com Thu Sep 19 15:11:07 2019 From: rajatdhasmana at gmail.com (Rajat Dhasmana) Date: Thu, 19 Sep 2019 20:41:07 +0530 Subject: [cinder][FFE] Feature Freeze Exceptions Message-ID: Hi, I would like to request FFE for the following cinder feature : - Untyped to Default Volume Type: https://review.opendev.org/#/c/639180 Thanks and Regards Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Thu Sep 19 15:14:06 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Thu, 19 Sep 2019 10:14:06 -0500 Subject: [cinder][FFE] Feature Freeze Exceptions In-Reply-To: References: Message-ID: <7b75aaf0-3550-f1b2-750b-17007bc592a7@gmail.com> Rajat, We have discussed this one in the past and it looks like it just missed reviews.  We did want to get this in place so I am granting the FFE. Thanks! Jay On 9/19/2019 10:11 AM, Rajat Dhasmana wrote: > Hi, > > I would like to request FFE for the following cinder feature : > > * Untyped to Default Volume Type: > https://review.opendev.org/#/c/639180 > > > > Thanks and Regards > Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... URL: From witold.bedyk at suse.com Thu Sep 19 15:20:48 2019 From: witold.bedyk at suse.com (Witek Bedyk) Date: Thu, 19 Sep 2019 17:20:48 +0200 Subject: [monasca] Ussuri release planning meeting Message-ID: <4dca22de-8fde-6b88-c9f2-c4ed2fc55f3c@suse.com> Hello everybody, as yesterday discussed in the team meeting I'd like to find the best time for the planning meeting for the next release cycle. I though about two 4 hours long time slots. Please fill in your preferences in this Doodle survey: https://doodle.com/poll/ugxr89tqmkfwa5r7 Please also leave any feedback about the times in the etherpad. Please let me know if you would like to move the meeting to some other time or make it shorter (or longer). 
https://etherpad.openstack.org/p/monasca-planning-ussuri Best greetings Witek From ignaziocassano at gmail.com Thu Sep 19 15:42:48 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 19 Sep 2019 17:42:48 +0200 Subject: [oslo] run outlasted interval Message-ID: Hello, I have an openstack queens installation on centos 7 and starting from last night I got some warnings in several service: nova conductor reports: 2019-09-19 17:37:54.091 161110 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 29.88 sec 2019-09-19 17:37:54.092 161108 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 29.88 sec 2019-09-19 17:37:54.093 161118 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 29.88 sec 2019-09-19 17:37:54.095 161116 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 29.88 sec 2019-09-19 17:37:54.096 161109 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 29.88 sec 2019-09-19 17:37:54.097 161107 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 29.88 sec openvswitch agent reports: 019-09-19 17:16:14.499 26115 WARNING oslo.service.loopingcall [-] Function 'neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent._report_state' run outlasted interval by 3.57 sec 2019-09-19 17:23:44.200 26115 WARNING oslo.service.loopingcall [-] Function 'neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent._report_state' run outlasted interval by 29.71 sec 2019-09-19 17:37:54.077 26115 WARNING oslo.service.loopingcall [-] Function 'neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent._report_state' run outlasted interval by 9.91 sec Please, anyone could help me ? Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From rajatdhasmana at gmail.com Thu Sep 19 15:41:48 2019 From: rajatdhasmana at gmail.com (Rajat Dhasmana) Date: Thu, 19 Sep 2019 21:11:48 +0530 Subject: [cinder][FFE] Feature Freeze Exceptions In-Reply-To: <7b75aaf0-3550-f1b2-750b-17007bc592a7@gmail.com> References: <7b75aaf0-3550-f1b2-750b-17007bc592a7@gmail.com> Message-ID: Hi Jay, Thanks for the approval, as discussed i would also like to request FFE for the following Cinder NEC Driver patches : - NEC Driver: allow more than 4 iSCSI portals : https://review.opendev.org/#/c/668088/ - NEC Driver: Support revert to snapshot : https://review.opendev.org/#/c/675083/ - NEC Driver: Storage assist retype and a bugfix : https://review.opendev.org/#/c/674586/ Regards Rajat Dhasmana On Thu, Sep 19, 2019 at 8:48 PM Jay Bryant wrote: > Rajat, > > We have discussed this one in the past and it looks like it just missed > reviews. We did want to get this in place so I am granting the FFE. > > Thanks! > > Jay > On 9/19/2019 10:11 AM, Rajat Dhasmana wrote: > > Hi, > > I would like to request FFE for the following cinder feature : > > > - Untyped to Default Volume Type: https://review.opendev.org/#/c/639180 > > > > Thanks and Regards > Rajat Dhasmana > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From openstack at nemebean.com Thu Sep 19 15:48:02 2019 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 19 Sep 2019 10:48:02 -0500 Subject: [oslo][nova][neutron] run outlasted interval In-Reply-To: References: Message-ID: Adding Nova and Neutron tags as I don't think this is an Oslo problem. What I believe those log messages are saying is that the thing from Nova/Neutron that Oslo called took longer to run than it should have. On 9/19/19 10:42 AM, Ignazio Cassano wrote: > Hello, I have an openstack queens installation on centos 7 and starting > from last night I got some warnings in several service: > > nova conductor reports: > > 2019-09-19 17:37:54.091 161110 WARNING oslo.service.loopingcall [-] > Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run > outlasted interval by 29.88 sec > 2019-09-19 17:37:54.092 161108 WARNING oslo.service.loopingcall [-] > Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run > outlasted interval by 29.88 sec > 2019-09-19 17:37:54.093 161118 WARNING oslo.service.loopingcall [-] > Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run > outlasted interval by 29.88 sec > 2019-09-19 17:37:54.095 161116 WARNING oslo.service.loopingcall [-] > Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run > outlasted interval by 29.88 sec > 2019-09-19 17:37:54.096 161109 WARNING oslo.service.loopingcall [-] > Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run > outlasted interval by 29.88 sec > 2019-09-19 17:37:54.097 161107 WARNING oslo.service.loopingcall [-] > Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run > outlasted interval by 29.88 sec > > openvswitch agent reports: > 019-09-19 17:16:14.499 26115 WARNING oslo.service.loopingcall [-] > Function > 'neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent._report_state' > run outlasted interval by 3.57 sec > 2019-09-19 17:23:44.200 26115 WARNING oslo.service.loopingcall [-] > Function > 'neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent._report_state' > run outlasted interval by 29.71 sec > 2019-09-19 17:37:54.077 26115 WARNING oslo.service.loopingcall [-] > Function > 'neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent._report_state' > run outlasted interval by 9.91 sec > > Please, anyone could help me ? > Regards > Ignazio > > From arne.wiebalck at cern.ch Thu Sep 19 15:50:07 2019 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Thu, 19 Sep 2019 17:50:07 +0200 Subject: [ironic] Tips on testing custom hardware manager? In-Reply-To: <06723f39-ec67-c98b-9e2d-c9b375d568e8@uchicago.edu> References: <06723f39-ec67-c98b-9e2d-c9b375d568e8@uchicago.edu> Message-ID: <3b588cef-563e-78a3-d471-d2a6cff3184b@cern.ch> Jason, One thing we do is having the image pull in the custom hardware manager branch via git: we build the image once and make changes on the branch which is then pulled in on the next iteration. As this avoids rebuilding/uploading the image for every change, our dev cycle has become much shorter. Another thing we do for debugging our custom hardware manager is to add (debug) steps to it. These steps wait for certain file to appear before moving on: the IPA will basically spin in this step until we log in and touch the flag file. With one or two steps like this we can set "breakpoints" to check things while developing our hardware manager. 
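For illustration, a minimal sketch of what such a "breakpoint" clean step can
look like in a custom hardware manager. This is an illustrative example only:
the class name, step name and flag-file path below are made up, it assumes the
ironic-python-agent HardwareManager interface, and you may need to raise the
conductor-side cleaning timeouts so the paused step is not aborted while you
are logged in.

# example_debug_manager.py -- hypothetical sketch, for illustration only
import os
import time

from ironic_python_agent import hardware


class ExampleDebugHardwareManager(hardware.HardwareManager):
    """Adds a clean step that spins until a flag file is touched."""

    HARDWARE_MANAGER_NAME = 'ExampleDebugHardwareManager'
    HARDWARE_MANAGER_VERSION = '1.0'

    def evaluate_hardware_support(self):
        # Load this manager in addition to the generic one.
        return hardware.HardwareSupport.SERVICE_PROVIDER

    def get_clean_steps(self, node, ports):
        # Higher priorities run first: pick a priority just above the
        # step you want to pause before.
        return [{
            'step': 'wait_for_flag_file',
            'priority': 95,
            'interface': 'deploy',
            'reboot_requested': False,
            'abortable': True,
        }]

    def wait_for_flag_file(self, node, ports):
        # The "breakpoint": log in to the ramdisk, inspect whatever you
        # need, then `touch /tmp/ipa-continue` to let cleaning resume.
        while not os.path.exists('/tmp/ipa-continue'):
            time.sleep(10)

The manager is built into the ramdisk and registered through the usual
setuptools entry point for IPA hardware managers; with one such step before
and another after the step under test you get a window on both sides.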
HTH, Arne On 19.09.19 03:34, Jason Anderson wrote: > Hi all, > > I am hoping to get some tips on how to test out a custom hardware > manager. One of my colleagues is working on a project that involves > implementing a custom in-band cleaning step, which we are implementing > by creating our own ramdisk image that includes an extra library, which > is necessary for the clean step. We already have created the image and > ensured it has IPA installed and that all seems to work fine (in that, > it executes on the node and we see our code running--and failing!) > > The issue we are having is that we encounter some issues in our fully > integrated environment (such as the provisioning network having > different networking rules) and replicating this environment in some > local development context is very difficult. Right now our workflow is > really onerous as a result: my colleague has to rebuild the ramdisk > image, re-upload it to Glance, update the test Ironic node to reference > that image, then perform a rebuild. One cycle of this takes a while as > you can imagine. I was wondering: is it possible to somehow interrupt or > give a larger window for some interactive debugging? The amount of time > we have to run some quick tests/debugging is small because the deploy > will time out and cancel itself or it will proceed and fail. > > Thusfar I haven't found any documentation or written experience on this > admittedly niche task. Perhaps somebody has already gone down this road > and can advise on some tips? It would be much appreciated! > > Cheers, > > -- > Jason Anderson > > Chameleon DevOps Lead > *Consortium for Advanced Science and Engineering, The University of Chicago* > *Mathematics & Computer Science Division, Argonne National Laboratory* From juliaashleykreger at gmail.com Thu Sep 19 15:53:20 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 19 Sep 2019 08:53:20 -0700 Subject: [ironic] FFE: PXE boot retry In-Reply-To: References: Message-ID: I'm good with this change FFE. I've looked through most of the submitted patch and it LGTM thus far minus unit tests. As for Ruby's question, I think ASAP given CI issues. -Julia On Thu, Sep 19, 2019 at 7:17 AM Ruby Loo wrote: > > Hi Dmitry, > > I'm fine with the FFE; it'll have minimal impact (and low risk of failure when turned off). I think we need 2 cores to agree to 'sponsor' (ie review) the feature. > > I am not sure we should turn it on by default, but we can discuss that in Ussuri. (Or maybe it'll depends on what the default timeout value might be...) > > Oh. I'm ok with reviewing. > > When's the cut-off date by which this needs to land? > > --ruby > > > On Thu, Sep 19, 2019 at 9:04 AM Dmitry Tantsur wrote: >> >> Hi folks, >> >> I would like to ask a late FFE for https://storyboard.openstack.org/#!/story/2005167 - retry PXE boot on timeout. Random PXE failures have long been haunting both our consumers and our CI. This change may be a big relief for everyone. Since we're very late in the cycle, and the change is on the critical path, I'm making it off by default (except for the CI) with the intend to reconsider it in Ussuri. The patch https://review.opendev.org/#/c/683127/ has been tested locally, I will finish unit tests and a release note tomorrow (CEST) morning. >> >> Please let me know if you have any concerns or questions. 
>> >> Dmitry From dtantsur at redhat.com Thu Sep 19 15:54:12 2019 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 19 Sep 2019 17:54:12 +0200 Subject: [ironic] FFE: PXE boot retry In-Reply-To: References: Message-ID: I don't risk talking about timing given the CI state.. but early next week the latest. On Thu, Sep 19, 2019 at 4:11 PM Ruby Loo wrote: > Hi Dmitry, > > I'm fine with the FFE; it'll have minimal impact (and low risk of failure > when turned off). I think we need 2 cores to agree to 'sponsor' (ie review) > the feature. > > I am not sure we should turn it on by default, but we can discuss that in > Ussuri. (Or maybe it'll depends on what the default timeout value might > be...) > > Oh. I'm ok with reviewing. > > When's the cut-off date by which this needs to land? > > --ruby > > > On Thu, Sep 19, 2019 at 9:04 AM Dmitry Tantsur > wrote: > >> Hi folks, >> >> I would like to ask a late FFE for >> https://storyboard.openstack.org/#!/story/2005167 - retry PXE boot on >> timeout. Random PXE failures have long been haunting both our consumers and >> our CI. This change may be a big relief for everyone. Since we're very late >> in the cycle, and the change is on the critical path, I'm making it off by >> default (except for the CI) with the intend to reconsider it in Ussuri. The >> patch https://review.opendev.org/#/c/683127/ has been tested locally, I >> will finish unit tests and a release note tomorrow (CEST) morning. >> >> Please let me know if you have any concerns or questions. >> >> Dmitry >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Thu Sep 19 16:01:08 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 19 Sep 2019 11:01:08 -0500 Subject: [oslo][nova][neutron] run outlasted interval In-Reply-To: References: Message-ID: On 9/19/2019 10:48 AM, Ben Nemec wrote: > Adding Nova and Neutron tags as I don't think this is an Oslo problem. > What I believe those log messages are saying is that the thing from > Nova/Neutron that Oslo called took longer to run than it should have. Right, check the usage on the hosts running those services to see if something like CPU is maxed out. -- Thanks, Matt From ignaziocassano at gmail.com Thu Sep 19 16:02:21 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 19 Sep 2019 18:02:21 +0200 Subject: [oslo][nova][neutron] run outlasted interval In-Reply-To: References: Message-ID: Hello, thanks for your answr. I have another installatiom with same release, number of nodes and controllers and the warning does not appear. Insering debug=true in nova.conf did not help. Ignazio Il gio 19 set 2019, 17:48 Ben Nemec ha scritto: > Adding Nova and Neutron tags as I don't think this is an Oslo problem. > What I believe those log messages are saying is that the thing from > Nova/Neutron that Oslo called took longer to run than it should have. 
> > On 9/19/19 10:42 AM, Ignazio Cassano wrote: > > Hello, I have an openstack queens installation on centos 7 and starting > > from last night I got some warnings in several service: > > > > nova conductor reports: > > > > 2019-09-19 17:37:54.091 161110 WARNING oslo.service.loopingcall [-] > > Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run > > outlasted interval by 29.88 sec > > 2019-09-19 17:37:54.092 161108 WARNING oslo.service.loopingcall [-] > > Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run > > outlasted interval by 29.88 sec > > 2019-09-19 17:37:54.093 161118 WARNING oslo.service.loopingcall [-] > > Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run > > outlasted interval by 29.88 sec > > 2019-09-19 17:37:54.095 161116 WARNING oslo.service.loopingcall [-] > > Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run > > outlasted interval by 29.88 sec > > 2019-09-19 17:37:54.096 161109 WARNING oslo.service.loopingcall [-] > > Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run > > outlasted interval by 29.88 sec > > 2019-09-19 17:37:54.097 161107 WARNING oslo.service.loopingcall [-] > > Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run > > outlasted interval by 29.88 sec > > > > openvswitch agent reports: > > 019-09-19 17:16:14.499 26115 WARNING oslo.service.loopingcall [-] > > Function > > > 'neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent._report_state' > > > run outlasted interval by 3.57 sec > > 2019-09-19 17:23:44.200 26115 WARNING oslo.service.loopingcall [-] > > Function > > > 'neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent._report_state' > > > run outlasted interval by 29.71 sec > > 2019-09-19 17:37:54.077 26115 WARNING oslo.service.loopingcall [-] > > Function > > > 'neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent._report_state' > > > run outlasted interval by 9.91 sec > > > > Please, anyone could help me ? > > Regards > > Ignazio > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Thu Sep 19 16:02:59 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Thu, 19 Sep 2019 11:02:59 -0500 Subject: [cinder][FFE] Feature Freeze Exceptions In-Reply-To: References: <7b75aaf0-3550-f1b2-750b-17007bc592a7@gmail.com> Message-ID: <41b3620c-5ed0-23dc-f0f4-180d5cee6f6c@gmail.com> Rajat, You are welcome. Ok, as discussed the patches below are all limited to your driver and it appears that reviews were slow.  We can provide an exception for these as well. Thanks! Jay On 9/19/2019 10:41 AM, Rajat Dhasmana wrote: > Hi Jay, > > Thanks for the approval, as discussed i would also like to request FFE > for the following Cinder NEC Driver patches : > > * NEC Driver: allow more than 4 iSCSI portals : > https://review.opendev.org/#/c/668088/ > * NEC Driver: Support revert to snapshot : > https://review.opendev.org/#/c/675083/ > * NEC Driver: Storage assist retype and a bugfix : > https://review.opendev.org/#/c/674586/ > > > Regards > Rajat Dhasmana > > On Thu, Sep 19, 2019 at 8:48 PM Jay Bryant > wrote: > > Rajat, > > We have discussed this one in the past and it looks like it just > missed reviews.  We did want to get this in place so I am granting > the FFE. > > Thanks! 
> > Jay > > On 9/19/2019 10:11 AM, Rajat Dhasmana wrote: >> Hi, >> >> I would like to request FFE for the following cinder feature : >> >> * Untyped to Default Volume Type: >> https://review.opendev.org/#/c/639180 >> >> >> >> Thanks and Regards >> Rajat Dhasmana > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Thu Sep 19 16:08:03 2019 From: gagehugo at gmail.com (Gage Hugo) Date: Thu, 19 Sep 2019 11:08:03 -0500 Subject: [Security] Security SIG Newsletter Message-ID: Past couple weeks were a bit slow, but this week has some updates! #Date: 19 Sept 2019 - Security SIG Meeting Info: http://eavesdrop.openstack.org/#Security_SIG_meeting - Weekly on Thursday at 1500 UTC in #openstack-meeting - Agenda: https://etherpad.openstack.org/p/security-agenda - https://security.openstack.org/ - https://wiki.openstack.org/wiki/Security-SIG #Meeting Notes - Summary: http://eavesdrop.openstack.org/meetings/security/2019/security.2019-09-19-15.00.html - Discussed the recently public big here: https://bugs.launchpad.net/horizon/+bug/1842930 - Current path forward is to clear up documentation to warn about this and provide info about caching settings. - nickthetait is currently working on https://bugs.launchpad.net/ossp-security-documentation/+bug/1703353 - Will create a page describing the usage and functionality of the audit middleware & CADF notifications #VMT Reports - A full list of publicly marked security issues can be found here: https://bugs.launchpad.net/ossa/ - Deleted user still can delete volumes in Horizon: https://bugs.launchpad.net/horizon/+bug/1842930 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Thu Sep 19 16:28:27 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 19 Sep 2019 18:28:27 +0200 Subject: [oslo][nova][neutron] run outlasted interval In-Reply-To: References: Message-ID: Hello, I'll check tomorrow. Regards Ignazio Il gio 19 set 2019, 18:05 Matt Riedemann ha scritto: > On 9/19/2019 10:48 AM, Ben Nemec wrote: > > Adding Nova and Neutron tags as I don't think this is an Oslo problem. > > What I believe those log messages are saying is that the thing from > > Nova/Neutron that Oslo called took longer to run than it should have. > > Right, check the usage on the hosts running those services to see if > something like CPU is maxed out. > > -- > > Thanks, > > Matt > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu Sep 19 17:19:31 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 19 Sep 2019 12:19:31 -0500 Subject: [all][release] Proposed release schedule for Ussuri Message-ID: <20190919171931.GA17509@sm-workstation> Hey everyone, Now that we have a date for the next event [0], it might be a good time to finalize the release cycle schedule for the Ussuri release. I have put up a proposal for the schedule here: https://review.opendev.org/#/c/679822/ Please take a look and give any feedback if you see any issues with where the milestone dates align with any major holidays or other external factors that we should consider. For your viewing convenience, here is the rendered output from the docs job: https://openstack.fortnebula.com:13808/v1/AUTH_e8fd161dc34c421a979a9e6421f823e9/zuul_opendev_logs_c4c/679822/3/check/openstack-tox-docs/c4cbc92/docs/ussuri/schedule.html Thanks! 
Sean [0] http://lists.openstack.org/pipermail/foundation/2019-September/002794.html From mark at stackhpc.com Thu Sep 19 17:25:15 2019 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 19 Sep 2019 18:25:15 +0100 Subject: [kolla] Kayobe Train planning meeting In-Reply-To: References: Message-ID: On Wed, 18 Sep 2019, 14:56 Mark Goddard, wrote: > Hi, > > Just as the rest of the world is wrapping up the Train release, we > find ourselves having just released Stein and starting on Train > development. Given the timing of CentOS 8 which we intend to support > in the kolla Train release, we have some time to do some feature > development in kayobe. I've set up a Doodle poll [1] with two hour > slots next week to plan the next release. > > [1] https://doodle.com/poll/y7fakbhbkfx5hqyk I added some more slots as we weren't able to find a suitable time. Please update your responses. > > > Cheers, > Mark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rfolco at redhat.com Thu Sep 19 19:03:00 2019 From: rfolco at redhat.com (Rafael Folco) Date: Thu, 19 Sep 2019 16:03:00 -0300 Subject: [tripleo] TripleO CI Summary: Sprint 36 Message-ID: Greetings, The TripleO CI team has just completed Sprint 36 / Unified Sprint 15 (Aug 29 thru Sep 18). The following is a summary of completed work during this sprint cycle: - Addressed issues for scenario{001,002} jobs for RHEL8 in the periodic pipeline. RHEL 8 scen003, scen004 have CIX issues and are being worked on by the ruck/rovers. - Merged multi-arch container tagging support and implemented changes in the promoter code to push manifests with annotated metadata for architecture in addition to the arch tagged containers strategy. - Implemented the provisioning and staging promoter setup to an RDO job for testing changes in the promoter server. Next sprint will close-out the tests for the staging promotion workflow. The planned work for the next sprint [1] are: - Complete the manifest implementation with a test strategy for not breaking promotion workflow. - Design and implement tests for verifying a full promotion workflow running on the staging environment. - Design CI jobs in zuul to build and run tests against ceph-ansible and podman pull requests in github. - Train release branching preparation work. - Prepare CentOS8 node for upcoming distro release support across TripleO CI jobs. The Ruck and Rover for this sprint are Arx Cruz (arxcruz) and Sorin Sbarnea (zbr). Please direct questions or queries to them regarding CI status or issues in #tripleo, ideally to whomever has the ‘|ruck’ suffix on their nick. Ruck/rover notes are being tracked in etherpad [2]. Thanks, rfolco [1] https://tree.taiga.io/project/tripleo-ci-board/taskboard/unified-sprint-16 [2] https://etherpad.openstack.org/p/ruckroversprint16 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu Sep 19 19:34:24 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 19 Sep 2019 14:34:24 -0500 Subject: [release] Release countdown for week R-3, September 23-27 Message-ID: <20190919193424.GA22740@sm-workstation> Development Focus ----------------- The Release Candidate (RC) deadline is next Thursday, September 26th. Work should be focused on fixing any release-critical bugs. General Information ------------------- All deliverables released under a cycle-with-rc model should have a first release candidate by the end of the week, from which a stable/train branch will be cut. 
This branch will track the Train release. Once stable/train has been created, master will will be ready to switch to Ussuri development. While master will no longer be feature-frozen, please prioritize any work necessary for completing Train plans. Release-critical bugfixes will need to be merged in the master branch first, then backported to the stable/train branch before a new release candidate can be proposed. Actions --------- Early in the week, the release team will be proposing RC1 patches for all cycle-with-rc projects, using the latest commit from master. If your team is ready to go for cutting RC1, please let us know by leaving a +1 on these patches. If there are still a few more patches needed before RC1, you can -1 the patch and update it later in the week with the new commit hash you would like to use. Remember, stable/train branches will be created with this, so you will want to make sure you have what you need included to avoid needing to backport changes from master (which will technically then be Ussuri) to this stable branch for any additional RCs before the final release. The release team will also be proposing releases for any deliverable following a cycle-with-intermediary model that has not produced any Train release so far. Upcoming Deadlines & Dates -------------------------- RC1 deadline: September 26 (R-3 week) Final RC deadline: October 10 (R-1 week) Final Train release: October 16 Forum+PTG at Shanghai summit: November 4 From whayutin at redhat.com Thu Sep 19 21:59:27 2019 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 19 Sep 2019 15:59:27 -0600 Subject: [tripleo][ci] gate jobs killed / reset In-Reply-To: References: Message-ID: On Wed, Sep 18, 2019 at 8:48 PM Emilien Macchi wrote: > Status: > > We have identified that the 2 major issues are: > > - Inflight validations taking too much time. They were enabled by default, > we changed that: > https://review.opendev.org/#/c/683001/ > https://review.opendev.org/#/c/682905/ > https://review.opendev.org/#/c/682943 > They are now disabled by default and also disabled in > tripleo-ci-centos-7-containers-multinode > > - tripleo-container-image-prepare now takes 20 min instead of 10 min > before, because of the re-authentication logic that was introduced a few > weeks ago. It's proposed to be reverted now: > https://review.opendev.org/#/c/682945/ as we haven't found another > solution for now. > > We have restored the patches. You can now do recheck and approve to gate > but please stay aware of the situation, by checking the IRC topic on > #tripleo and monitoring the zuul queue: http://zuul.openstack.org/ > > Thanks to infra for force-merging the patches we urgently needed; > hopefully this stays exceptional and we don't face this situation again > soon. > > We need to reduce the container image prepare to safely stay under the 3 > hours for tripleo-ci-centos-7-containers-multinode. > > We're still not out of the woods yet.. the gate is still not back to where it should be. tripleo-ci-centos-7-containers-multinode is still running well over 3 hours [1] We're going to see if another container registry provides better performance. 
Thanks [1] http://dashboard-ci.tripleo.org/d/si1tipHZk/jobs-exploration?orgId=1&from=now-12h&to=now&var-influxdb_filter=job_name%7C%3D%7Ctripleo-ci-centos-7-containers-multinode&var-influxdb_filter=branch%7C%3D%7Cmaster > On Wed, Sep 18, 2019 at 5:19 PM Wesley Hayutin > wrote: > >> >> >> On Tue, Sep 17, 2019 at 4:40 PM Emilien Macchi >> wrote: >> >>> Note that I also cleared the check for tripleo projects to accelerate >>> the testing of our potential fixes. >>> Hopefully we can resolve the situation really soon. >>> >>> On Tue, Sep 17, 2019 at 4:29 PM Wesley Hayutin >>> wrote: >>> >>>> Greetings, >>>> >>>> The zuul jobs in the TripleO gate queue were put out of their misery >>>> approximately at 20:14 UTC Sept 17 2019. The TripleO jobs were timing out >>>> [1] and causing the gate queue to be delayed about 24 hours [2]. >>>> >>>> We are hoping a revert [3] will restore TripleO jobs back to their >>>> usual run times. Please hold off on any rechecks or workflowing patches >>>> until [3] is merged and the status on #tripleo is no longer "RED" >>>> >>>> We appreciate your patience while we work through this issue, the jobs >>>> that were in the gate will be restored once we have confirmed and verified >>>> the solution. >>>> >>>> Thank you >>>> >>>> >>>> [1] https://bugs.launchpad.net/tripleo/+bug/1844446 >>>> [2] >>>> http://dashboard-ci.tripleo.org/d/YRJtmtNWk/cockpit?orgId=1&fullscreen&panelId=398 >>>> [3] https://review.opendev.org/#/c/682729/ >>>> >>> >>> >>> -- >>> Emilien Macchi >>> >> >> Thanks for your continued patience re: the tripleo gate. >> >> We're currently waiting on a couple patches to land. >> https://review.opendev.org/#/c/682905/ >> https://review.opendev.org/#/c/682731 or >> https://review.opendev.org/#/c/682945/ >> >> Also.. fyi, one can clearly see the performance regression here [1] >> >> [1] >> http://dashboard-ci.tripleo.org/d/si1tipHZk/jobs-exploration?orgId=1&from=now-90d&to=now&fullscreen&panelId=16 >> >> >> >> >> >> >> > > > -- > Emilien Macchi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tushar.Patil at nttdata.com Fri Sep 20 00:53:47 2019 From: Tushar.Patil at nttdata.com (Patil, Tushar) Date: Fri, 20 Sep 2019 00:53:47 +0000 Subject: [tacker] Feature Freeze Exception Request - Add VNF packages support In-Reply-To: References: , Message-ID: Hi All , Thank all for you reviewing tacker patches [1]. Dharmendra: Big thank you for co-ordinating with tosca-parser team to merge critical patches without which we wouldn't have hope to merge this feature in Train cycle. We have fixed all review comments and uploaded new patch sets. Request you to please take a look at it. Also, we have reported a LP bug in python-tackerclient [2] and also fixed it in patch [3]. [1] : https://review.opendev.org/#/q/status:open+project:openstack/tacker+branch:master+topic:bp/tosca-csar-mgmt-driver [2] https://bugs.launchpad.net/python-tackerclient/+bug/1844625 [3] : https://review.opendev.org/#/c/683203 Regards, Tushar Patil ________________________________________ From: Dharmendra Kushwaha Sent: Friday, September 13, 2019 7:10 PM To: Patil, Tushar; openstack-discuss at lists.openstack.org Subject: Re: [tacker] Feature Freeze Exception Request - Add VNF packages support Hi Tushar, Thanks for your hard effort. I had released tosce-parser1.6.0 as in [1], and lets wait [2] to get merged. Regarding tackerclient code, we already have merged it, and will release tackerclient today. 
Tacker have cycle-with-rc release model, So ok, we can wait some time for this feature(server patches). We just needs to make sure that no broken code goes in the last movement and can be tested before rc release. [1]: https://review.opendev.org/#/c/681240 [2]: https://review.opendev.org/#/c/681819 Thanks & Regards Dharmendra Kushwaha ________________________________________ From: Patil, Tushar Sent: Friday, September 13, 2019 2:24 PM To: openstack-discuss at lists.openstack.org Subject: [tacker] Feature Freeze Exception Request - Add VNF packages support Hi Dharmendra and all Core reviewers In train cycle ,we are committed to implement spec “VNF packages support for VNF onboarding” [1]. All patches [2] are uploaded on the gerrit and code review is in progress but as we have dependency on tosca-parser library, patches are not yet merged. Now, tosca-parser library new version 1.6.0. is released but we are waiting for patch [3] to merge which will update the constraints of tosca-parser to 1.6.0 in requirements project. Once that happens, we will make changes to the tacker patch [4] to update the lower constraints of tosca-parser to 1.6.0 which will run all functional and unit tests added for this feature successfully on the CI job. I would like to request feature freeze exception for “VNF packages support for VNF onboarding” [1]. We will make sure all the review comments on the patches will be fixed promptly so that we can merge them as soon as possible. [1] : https://review.opendev.org/#/c/582930/ [2] : https://review.opendev.org/#/q/topic:bp/tosca-csar-mgmt-driver+(status:open+OR+status:merged) [3] : https://review.opendev.org/#/c/681819/ [4]: https://review.opendev.org/#/c/675600/ Thanks, tpatil Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. ________________________________ The contents of this e-mail and any attachment(s) are confidential and intended for the named recipient(s) only. It shall not attach any liability on the originator or NECTI or its affiliates. Any views or opinions presented in this email are solely those of the author and may not necessarily reflect the opinions of NECTI or its affiliates. Any form of reproduction, dissemination, copying, disclosure, modification, distribution and / or publication of this message without the prior written consent of the author of this e-mail is strictly prohibited. If you have received this email in error please delete it and notify the sender immediately. Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. 
From berndbausch at gmail.com Fri Sep 20 01:39:02 2019 From: berndbausch at gmail.com (Bernd Bausch) Date: Fri, 20 Sep 2019 10:39:02 +0900 Subject: [openstack-dev][cinder] question on cinder-volume A/A configuration In-Reply-To: References: Message-ID: On 2019/09/19 7:17 PM, Chen CH Ji wrote: > compute node1 and node 2 both use this backend and I can see only 1 > compute services It's not quite clear to me what you mean by "backend" for compute nodes 1 and 2. But see my guess below. > [root at controller ~]# cinder service-list > +------------------+----------------+------+---------+-------+----------------------------+-----------------+ > | Binary           | Host           | Zone | Status  | State | > Updated_at                 | Disabled Reason | > +------------------+----------------+------+---------+-------+----------------------------+-----------------+ > | cinder-scheduler | controller     | nova | enabled | up  | > 2019-09-19T09:16:21.000000 | -               | > | cinder-volume    | FC at POWERMAX_FC | nova | enabled | up  | > 2019-09-19T09:16:30.000000 | -               | > +------------------+----------------+------+---------+-------+----------------------------+-----------------+ You say that you can see only one compute service, but here you are listing Cinder services, not Nova services. > and now I am creating 5 instances from nova at same time (boot from > volume) , the scheduler will report those error time to time like > following ,but actually the 2 services on both 2 compute nodes runs > fine .. I guess that you are running cinder-volume on both compute nodes, and your problem is that only one of the cinder-volume services is up. Is that correct? If I am guessing correctly, one of the two cinder-volume services is unable to reach cinder-api or is not running fine or not running at all. As a result, cinder-api is not aware of it and doesn't list it. > 2019-09-19 17:53:10.951 20916 WARNING cinder.scheduler.host_manager > [req-19e722e8-1523-4121-8987-3cb450a8038e > 071294a19fa8463788822565e0927fce f43175c07dc8415899d6b350dbede772 - > default default] volume service is down. (host: FC at POWERMAX_FC) Where do you see this warning message? It looks like this particular service is not running fine. If my guess is correct, I would expect to see additional information in the log of the problematic cinder-volume service. Bernd. -------------- next part -------------- An HTML attachment was scrubbed... URL: From honjo.rikimaru at ntt-tx.co.jp Fri Sep 20 05:52:43 2019 From: honjo.rikimaru at ntt-tx.co.jp (Rikimaru Honjo) Date: Fri, 20 Sep 2019 14:52:43 +0900 Subject: [cinder][tooz]Lock-files are remained Message-ID: Hi, I'm using Queens cinder with the following setting. --------------------------------- [coordination] backend_url = file://$state_path --------------------------------- As a result, the files like the following were remained under the state path after some operations.[1] cinder-63dacb3d-bd4d-42bb-88fe-6e4180164765-delete_volume cinder-32c426af-82b4-41de-b637-7d76fed69e83-delete_snapshot In my understanding, these are lock-files created for synchronization by tooz. But, these lock-files were not deleted after finishing operations. Is this behaviour correct? [1] e.g. 
Delete volume, Delete snapshot -- _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ Rikimaru Honjo E-mail:honjo.rikimaru at ntt-tx.co.jp From ignaziocassano at gmail.com Fri Sep 20 07:09:48 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 20 Sep 2019 09:09:48 +0200 Subject: [oslo][nova][neutron] run outlasted interval In-Reply-To: References: Message-ID: Hello, no cpu problems on my 3 controllers. The warning appears on all controllers. I tried to reboot one controller at time , but he warning continues to appear :-( Ignazio Il giorno gio 19 set 2019 alle ore 18:05 Matt Riedemann ha scritto: > On 9/19/2019 10:48 AM, Ben Nemec wrote: > > Adding Nova and Neutron tags as I don't think this is an Oslo problem. > > What I believe those log messages are saying is that the thing from > > Nova/Neutron that Oslo called took longer to run than it should have. > > Right, check the usage on the hosts running those services to see if > something like CPU is maxed out. > > -- > > Thanks, > > Matt > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Fri Sep 20 10:51:32 2019 From: geguileo at redhat.com (Gorka Eguileor) Date: Fri, 20 Sep 2019 12:51:32 +0200 Subject: cron triggers execution fails with cinder.volume_snapshots_create In-Reply-To: <21a0f692-aa42-d81d-8968-5524e8596e19@everyware.ch> References: <21a0f692-aa42-d81d-8968-5524e8596e19@everyware.ch> Message-ID: <20190920105132.2oc6igmnkehq65wy@localhost> On 19/09, Francois Scheurer wrote: > Dear All > > > We are using Mistral with  Openstack Rocky. (with federated users) > We could then use cron triggers for instance with nova.servers_create_image > or cinder.volume_snapshots_create with success. > > > But we hit an issue with cinder.backups_create . > > This call will stores the backup on our swift backend (ceph rgw). > The workflow works when executed directly but it fails when executed via > cron trigger: > > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server ClientException: > Container PUT failed: http://rgw.service.stage.i.ewcs.ch/swift/v1/AUTH_aeac4b07d8b144178c43c65f29fa9dac/volumebackups > 401 Unauthorized   AccessDenied > > See details below. > Hi, This makes no sense, the Swift connection credentials don't depend on the OpenStack user calling the service, they are internal to the Backup service. If after this error you can still create a backup manually, then the backup service works fine and the swiftclient as well (since we rely on it not failing the create call on for an container). I would start by checking on the Swift logs to see why this request was rejected and the manual one isn't. Cheers, Gorka. > > > > > Cheers > > Francois > > > > 2019-09-17 10:46:02.436 8 INFO cinder.backup.manager > [req-3b5104f4-4aca-489f-86e0-78c5523d6faa 3e9b1a4fe95048a3b98fb5abebd44f6c > aeac4b07d8b144178c43c65f29fa9dac - 18b20663b571455c8da31fde994d031a > 18b20663b571455c8da31fde994d031a] Create backup started, backup: > 901e1781-02ad-46d5-8ddf-e5410670cf9f volume: > c0022411-59a4-4c7c-9474-c7ea8ccc7691. 
> 2019-09-17 10:46:02.746 20 INFO cinder.api.openstack.wsgi > [req-69a86fd7-b478-4e26-9692-a8416c41459a 3e9b1a4fe95048a3b98fb5abebd44f6c > aeac4b07d8b144178c43c65f29fa9dac - 18b20663b571455c8da31fde994d031a > 18b20663b571455c8da31fde994d031a] GET http://cinder.service.stage.i.ewcs.ch:8776/v2/aeac4b07d8b144178c43c65f29fa9dac/backups/901e1781-02ad-46d5-8ddf-e5410670cf9f > 2019-09-17 10:46:02.764 20 INFO cinder.api.openstack.wsgi > [req-69a86fd7-b478-4e26-9692-a8416c41459a 3e9b1a4fe95048a3b98fb5abebd44f6c > aeac4b07d8b144178c43c65f29fa9dac - 18b20663b571455c8da31fde994d031a > 18b20663b571455c8da31fde994d031a] http://cinder.service.stage.i.ewcs.ch:8776/v2/aeac4b07d8b144178c43c65f29fa9dac/backups/901e1781-02ad-46d5-8ddf-e5410670cf9f > returned with HTTP 200 > 2019-09-17 10:46:03 +0200] "GET > /v3/f099965b37ac41489e9cac8c9d208711/os-services HTTP/1.1" 200 2819 18532 > "-" "Go-http-client/1.1" > 2019-09-17 10:46:03 +0200] "GET > /v3/f099965b37ac41489e9cac8c9d208711/snapshots HTTP/1.1" 200 17 23618 "-" > "Go-http-client/1.1" > 2019-09-17 10:46:03.098 22 INFO cinder.api.openstack.wsgi > [req-ec93b942-2dc9-4505-8656-680bd661fc71 b141574ee71f49a0b53a05ae968576c5 > f099965b37ac41489e9cac8c9d208711 - default default] GET http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/volumes > 2019-09-17 10:46:03.150 22 INFO cinder.volume.api > [req-ec93b942-2dc9-4505-8656-680bd661fc71 b141574ee71f49a0b53a05ae968576c5 > f099965b37ac41489e9cac8c9d208711 - default default] Get all volumes > completed successfully. > 2019-09-17 10:46:03.152 22 INFO cinder.api.openstack.wsgi > [req-ec93b942-2dc9-4505-8656-680bd661fc71 b141574ee71f49a0b53a05ae968576c5 > f099965b37ac41489e9cac8c9d208711 - default default] http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/volumes > returned with HTTP 200 > 2019-09-17 10:46:03.162 18 INFO cinder.api.openstack.wsgi > [req-3e1ce449-305e-4e1f-9b51-aa56da6e2076 b141574ee71f49a0b53a05ae968576c5 > f099965b37ac41489e9cac8c9d208711 - default default] GET http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/os-services > 2019-09-17 10:46:03.172 18 INFO cinder.api.openstack.wsgi > [req-3e1ce449-305e-4e1f-9b51-aa56da6e2076 b141574ee71f49a0b53a05ae968576c5 > f099965b37ac41489e9cac8c9d208711 - default default] http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/os-services > returned with HTTP 200 > 2019-09-17 10:46:03.182 19 INFO cinder.api.openstack.wsgi > [req-b726191c-3710-477a-b7a0-961b74f9233f b141574ee71f49a0b53a05ae968576c5 > f099965b37ac41489e9cac8c9d208711 - default default] GET http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/snapshots > 2019-09-17 10:46:03.197 19 INFO cinder.api.openstack.wsgi > [req-b726191c-3710-477a-b7a0-961b74f9233f b141574ee71f49a0b53a05ae968576c5 > f099965b37ac41489e9cac8c9d208711 - default default] http://cinder.service.stage.ewcs.ch/v3/f099965b37ac41489e9cac8c9d208711/snapshots > returned with HTTP 200 > 2019-09-17 10:46:03.197 19 INFO cinder.volume.api > [req-b726191c-3710-477a-b7a0-961b74f9233f b141574ee71f49a0b53a05ae968576c5 > f099965b37ac41489e9cac8c9d208711 - default default] Get all snapshots > completed successfully. > 2019-09-17 10:46:03.878 30 INFO cinder.volume.manager > [req-3b5104f4-4aca-489f-86e0-78c5523d6faa 3e9b1a4fe95048a3b98fb5abebd44f6c > aeac4b07d8b144178c43c65f29fa9dac - 18b20663b571455c8da31fde994d031a > 18b20663b571455c8da31fde994d031a] Initialize volume connection completed > successfully. 
> 2019-09-17 10:46:04.468 30 INFO cinder.volume.manager > [req-3b5104f4-4aca-489f-86e0-78c5523d6faa 3e9b1a4fe95048a3b98fb5abebd44f6c > aeac4b07d8b144178c43c65f29fa9dac - 18b20663b571455c8da31fde994d031a > 18b20663b571455c8da31fde994d031a] Terminate volume connection completed > successfully. > 2019-09-17 10:46:04.501 30 INFO cinder.volume.manager > [req-3b5104f4-4aca-489f-86e0-78c5523d6faa 3e9b1a4fe95048a3b98fb5abebd44f6c > aeac4b07d8b144178c43c65f29fa9dac - 18b20663b571455c8da31fde994d031a > 18b20663b571455c8da31fde994d031a] Remove volume export completed > successfully. > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server container = > self._create_container(backup) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > query_string=query_string) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server raise > ClientException.from_response(resp, 'Container PUT failed', body) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server     res = > self.dispatcher.dispatch(message) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server result = f(*args, > **kwargs) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server result = > func(ctxt, **new_args) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server return > self._do_dispatch(endpoint, method, ctxt, args) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > self._update_backup_error(backup, six.text_type(err)) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > self.conn.put_container(container) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > self.force_reraise() > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > self.put_container(backup.container) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > service_token=self.service_token, **kwargs) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > six.reraise(self.type_, self.value, self.tb) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > tpool.Proxy(device_path)) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server updates = > self._run_backup(context, backup, volume) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server volume_size_bytes) > = self._prepare_backup(backup) > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/chunkeddriver.py", > line 226, in _create_container > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/chunkeddriver.py", > line 327, in _prepare_backup > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/chunkeddriver.py", > line 535, in backup > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/drivers/swift.py", > line 315, in put_container > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/manager.py", > line 414, in create_backup > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/manager.py", > line 425, in create_backup > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/cinder/backup/manager.py", > line 502, in 
_run_backup > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", > line 194, in _do_dispatch > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", > line 265, in dispatch > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", > line 163, in _process_incoming > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_utils/excutils.py", > line 196, in force_reraise > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_utils/excutils.py", > line 220, in __exit__ > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/osprofiler/profiler.py", > line 159, in wrapper > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/swiftclient/client.py", > line 1061, in put_container > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/swiftclient/client.py", > line 1722, in _retry > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/swiftclient/client.py", > line 1808, in put_container > > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server > [req-3b5104f4-4aca-489f-86e0-78c5523d6faa 3e9b1a4fe95048a3b98fb5abebd44f6c > aeac4b07d8b144178c43c65f29fa9dac - 18b20663b571455c8da31fde994d031a > 18b20663b571455c8da31fde994d031a] Exception during message handling: > ClientException: Container PUT failed: http://rgw.service.stage.i.ewcs.ch/swift/v1/AUTH_aeac4b07d8b144178c43c65f29fa9dac/volumebackups > 401 Unauthorized   AccessDenied > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server ClientException: > Container PUT failed: http://rgw.service.stage.i.ewcs.ch/swift/v1/AUTH_aeac4b07d8b144178c43c65f29fa9dac/volumebackups > 401 Unauthorized   AccessDenied > 2019-09-17 10:46:04.525 8 ERROR oslo_messaging.rpc.server Traceback (most > recent call last): > > > -- > > > EveryWare AG > François Scheurer > Senior Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: francois.scheurer at everyware.ch > web: http://www.everyware.ch From geguileo at redhat.com Fri Sep 20 10:56:25 2019 From: geguileo at redhat.com (Gorka Eguileor Gimeno) Date: Fri, 20 Sep 2019 12:56:25 +0200 Subject: [openstack-dev][cinder] question on cinder-volume A/A configuration In-Reply-To: References: Message-ID: On Thu, Sep 19, 2019 at 12:19 PM Chen CH Ji wrote: > Hi stackers > I have a 1 controller + 2 compute nodes settings, same > configuration works fine on Ocata version but failed to run on Stein > version , I am using > > compute node1 and node 2 both use this backend and I can see only 1 > compute services > > > [root at controller ~]# cinder service-list > > +------------------+----------------+------+---------+-------+----------------------------+-----------------+ > | Binary | Host | Zone | Status | State | Updated_at > | Disabled Reason | > > +------------------+----------------+------+---------+-------+----------------------------+-----------------+ > | 
cinder-scheduler | controller | nova | enabled | up | > 2019-09-19T09:16:21.000000 | - | > | cinder-volume | FC at POWERMAX_FC | nova | enabled | up | > 2019-09-19T09:16:30.000000 | - | > > +------------------+----------------+------+---------+-------+----------------------------+-----------------+ > Hi, The subject mentions cinder-volume A/A, but then you say you only have 1 controller node and there's only 1 scheduler and API service, so how is it running as A/A? > cinder.conf > > [backend] > backend_host=FC at POWERMAX_FC > > > and now I am creating 5 instances from nova at same time (boot from > volume) , the scheduler will report those error time to time like following > ,but actually the 2 services on both 2 compute nodes runs fine .. > > 2019-09-19 17:53:10.951 20916 WARNING cinder.scheduler.host_manager > [req-19e722e8-1523-4121-8987-3cb450a8038e 071294a19fa8463788822565e0927fce > f43175c07dc8415899d6b350dbede772 - default default] volume service is down. > (host: FC at POWERMAX_FC) > You should check the cinder-volume logs to see the frequency of the stats reports, since slow stats reports will make the scheduler think the nodes are down. If you had multiple controller nodes it could be out of sync clocks. Cheers, Gorka. > > those settings works fine on O and P version, so any doc / suggestion on > how to configure A/A cinder volume on S version? > > > > > Ji Chen > z Infrastructure as a Service architect > Phone: 10-82451493 > E-mail: jichenjc at cn.ibm.com > > From mark at stackhpc.com Fri Sep 20 12:20:36 2019 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 20 Sep 2019 13:20:36 +0100 Subject: [kolla] Kayobe Train planning meeting In-Reply-To: References: Message-ID: On Thu, 19 Sep 2019 at 18:25, Mark Goddard wrote: > > > > On Wed, 18 Sep 2019, 14:56 Mark Goddard, wrote: >> >> Hi, >> >> Just as the rest of the world is wrapping up the Train release, we >> find ourselves having just released Stein and starting on Train >> development. Given the timing of CentOS 8 which we intend to support >> in the kolla Train release, we have some time to do some feature >> development in kayobe. I've set up a Doodle poll [1] with two hour >> slots next week to plan the next release. >> >> [1] https://doodle.com/poll/y7fakbhbkfx5hqyk > > > I added some more slots as we weren't able to find a suitable time. Please update your responses. We had a tie in the end, and I selected Tuesday 24th September 14:00 UTC - 16:00 UTC. We'll meet via Google meet: https://meet.google.com/ncb-axnh-sgu Etherpad is here: https://etherpad.openstack.org/p/kayobe-train-planning. Please keep adding discussion topics. Mark >> >> >> >> Cheers, >> Mark From francois.scheurer at everyware.ch Fri Sep 20 12:36:41 2019 From: francois.scheurer at everyware.ch (Francois Scheurer) Date: Fri, 20 Sep 2019 14:36:41 +0200 Subject: cron triggers execution fails with cinder.volume_snapshots_create In-Reply-To: <20190920105132.2oc6igmnkehq65wy@localhost> References: <21a0f692-aa42-d81d-8968-5524e8596e19@everyware.ch> <20190920105132.2oc6igmnkehq65wy@localhost> Message-ID: <410df12a-46b6-2a07-1e28-6c5aadaf8e53@everyware.ch> Dear Gorka and Hervé Thanks for your hints. I have set the debug log level on radosgw. I will retest now and post here the results. 
Cheers Francois -- EveryWare AG François Scheurer Senior Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: francois.scheurer at everyware.ch web: http://www.everyware.ch -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From geguileo at redhat.com Fri Sep 20 12:46:51 2019 From: geguileo at redhat.com (Gorka Eguileor) Date: Fri, 20 Sep 2019 14:46:51 +0200 Subject: cron triggers execution fails with cinder.volume_snapshots_create In-Reply-To: <410df12a-46b6-2a07-1e28-6c5aadaf8e53@everyware.ch> References: <21a0f692-aa42-d81d-8968-5524e8596e19@everyware.ch> <20190920105132.2oc6igmnkehq65wy@localhost> <410df12a-46b6-2a07-1e28-6c5aadaf8e53@everyware.ch> Message-ID: <20190920124651.fxf3d2eqbgi5rbc4@localhost> On 20/09, Francois Scheurer wrote: > Dear Gorka and Hervé > > > Thanks for your hints. > > I have set the debug log level on radosgw. > > I will retest now and post here the results. > > > Cheers > > Francois Hi, Sorry, I may have missed something in the conversation, weren't you using Swift? I think you need to see the Swift logs as well, since that's the API service that complained about the authorization. Cheers, Gorka. > > > > > -- > > > EveryWare AG > François Scheurer > Senior Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: francois.scheurer at everyware.ch > web: http://www.everyware.ch From francois.scheurer at everyware.ch Fri Sep 20 13:40:21 2019 From: francois.scheurer at everyware.ch (Francois Scheurer) Date: Fri, 20 Sep 2019 15:40:21 +0200 Subject: cron triggers execution fails with cinder.volume_snapshots_create In-Reply-To: <20190920124651.fxf3d2eqbgi5rbc4@localhost> References: <21a0f692-aa42-d81d-8968-5524e8596e19@everyware.ch> <20190920105132.2oc6igmnkehq65wy@localhost> <410df12a-46b6-2a07-1e28-6c5aadaf8e53@everyware.ch> <20190920124651.fxf3d2eqbgi5rbc4@localhost> Message-ID: <69596a39-3d05-40ff-d757-ff62b8a1608c@everyware.ch> Hi Gorka We have a swift endpoint set up on opentstack, which points to our ceph radosgw backend Radosgw provides s3 & swift. So the swift logs are here actually the radosgw logs. Cheers Francois On 9/20/19 2:46 PM, Gorka Eguileor wrote: > On 20/09, Francois Scheurer wrote: >> Dear Gorka and Hervé >> >> >> Thanks for your hints. >> >> I have set the debug log level on radosgw. >> >> I will retest now and post here the results. >> >> >> Cheers >> >> Francois > Hi, > > Sorry, I may have missed something in the conversation, weren't you > using Swift? > > I think you need to see the Swift logs as well, since that's the API > service that complained about the authorization. > > Cheers, > Gorka. > >> >> >> >> -- >> >> >> EveryWare AG >> François Scheurer >> Senior Systems Engineer >> Zurlindenstrasse 52a >> CH-8003 Zürich >> >> tel: +41 44 466 60 00 >> fax: +41 44 466 60 10 >> mail: francois.scheurer at everyware.ch >> web: http://www.everyware.ch > -- EveryWare AG François Scheurer Senior Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: francois.scheurer at everyware.ch web: http://www.everyware.ch -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From emccormick at cirrusseven.com Fri Sep 20 13:53:18 2019 From: emccormick at cirrusseven.com (Erik McCormick) Date: Fri, 20 Sep 2019 09:53:18 -0400 Subject: [ops] Shanghai Forum / Ops Day sessions In-Reply-To: References: Message-ID: Last Call for Forum Topics before I submit the few we have. Entries after today will (hopefully) be used for Ops Day. On Wed, Sep 18, 2019 at 11:10 AM Erik McCormick wrote: > Greetings! > > We are coming up on the Shanghai Summit and need to plan out a few > sessions for forum submissions (yeah I know, late as always). We are also > trying to see if there's enough traction to do an Ops day on Thursday after > the summit. This is a bit freeform, but if there are enough attendees > interested, we can make it happen. > > Please visit the following etherpad and suggest topics for both. +1 those > you like the most. We will submit the forum sessions on Friday so there's > not a lot of time for that part. Things for the Ops day can go on being > entered until that day. > > https://etherpad.openstack.org/p/PVG-OPS-Forum-Brainstorming > > Thanks, > Erik > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Fri Sep 20 14:02:19 2019 From: geguileo at redhat.com (Gorka Eguileor) Date: Fri, 20 Sep 2019 16:02:19 +0200 Subject: cron triggers execution fails with cinder.volume_snapshots_create In-Reply-To: <69596a39-3d05-40ff-d757-ff62b8a1608c@everyware.ch> References: <21a0f692-aa42-d81d-8968-5524e8596e19@everyware.ch> <20190920105132.2oc6igmnkehq65wy@localhost> <410df12a-46b6-2a07-1e28-6c5aadaf8e53@everyware.ch> <20190920124651.fxf3d2eqbgi5rbc4@localhost> <69596a39-3d05-40ff-d757-ff62b8a1608c@everyware.ch> Message-ID: <20190920140219.jdb2k2t4w5m3a7rr@localhost> On 20/09, Francois Scheurer wrote: > Hi Gorka > > > We have a swift endpoint set up on opentstack, which points to our ceph > radosgw backend > > Radosgw provides s3 & swift. > > So the swift logs are here actually the radosgw logs. > Hi, OK, thanks for the clarification. Then I assume you prefer the Swift backup driver over the Ceph one because you are using one of the OpenStack releases that had trouble with Incremental Backups on the Ceph backup driver. Cheers, Gorka. > > Cheers > > Francois > > > > On 9/20/19 2:46 PM, Gorka Eguileor wrote: > > On 20/09, Francois Scheurer wrote: > > > Dear Gorka and Hervé > > > > > > > > > Thanks for your hints. > > > > > > I have set the debug log level on radosgw. > > > > > > I will retest now and post here the results. > > > > > > > > > Cheers > > > > > > Francois > > Hi, > > > > Sorry, I may have missed something in the conversation, weren't you > > using Swift? > > > > I think you need to see the Swift logs as well, since that's the API > > service that complained about the authorization. > > > > Cheers, > > Gorka. 
> > > > > > > > > > > > > > -- > > > > > > > > > EveryWare AG > > > François Scheurer > > > Senior Systems Engineer > > > Zurlindenstrasse 52a > > > CH-8003 Zürich > > > > > > tel: +41 44 466 60 00 > > > fax: +41 44 466 60 10 > > > mail: francois.scheurer at everyware.ch > > > web: http://www.everyware.ch > > > -- > > > EveryWare AG > François Scheurer > Senior Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: francois.scheurer at everyware.ch > web: http://www.everyware.ch From jeremyfreudberg at gmail.com Fri Sep 20 14:03:27 2019 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Fri, 20 Sep 2019 10:03:27 -0400 Subject: [sahara] Sahara meeting frequency change Message-ID: For any interested parties not already aware, the Sahara meeting now takes place every two weeks instead of every week. The next meeting will take place October 3, 2019. More information and the latest ICS file reflecting this change are found here: http://eavesdrop.openstack.org/#OpenStack_Data_Processing_(Sahara)_Team_Meeting Thanks, Jeremy From eharney at redhat.com Fri Sep 20 14:10:25 2019 From: eharney at redhat.com (Eric Harney) Date: Fri, 20 Sep 2019 10:10:25 -0400 Subject: [cinder][tooz]Lock-files are remained In-Reply-To: References: Message-ID: <88881fd9-22f3-a4df-c5a9-e5346255ef4b@redhat.com> On 9/20/19 1:52 AM, Rikimaru Honjo wrote: > Hi, > > I'm using Queens cinder with the following setting. > > --------------------------------- > [coordination] > backend_url = file://$state_path > --------------------------------- > > As a result, the files like the following were remained under the state > path after some operations.[1] > > cinder-63dacb3d-bd4d-42bb-88fe-6e4180164765-delete_volume > cinder-32c426af-82b4-41de-b637-7d76fed69e83-delete_snapshot > > In my understanding, these are lock-files created for synchronization by > tooz. > But, these lock-files were not deleted after finishing operations. > Is this behaviour correct? > > [1] > e.g. Delete volume, Delete snapshot This is a known bug that's described here: https://github.com/harlowja/fasteners/issues/26 (The fasteners library is used by tooz, which is used by Cinder for managing these lock files.) There's an old Cinder bug for it here: https://bugs.launchpad.net/cinder/+bug/1432387 but that's marked as "Won't Fix" because Cinder needs it to be fixed in the underlying libraries. Thanks, Eric From mriedemos at gmail.com Fri Sep 20 14:42:41 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 20 Sep 2019 09:42:41 -0500 Subject: A reminder about our upstream backport policy Message-ID: <43fce573-d870-a6a4-0261-55c71aaaf121@gmail.com> I have noticed several stable/rocky-only docs fixes proposed lately across several projects and when I review them I find that what is being fixed is still broken on master and stable/stein, meaning once someone upgrades from rocky to stein or train they are just going to have to re-fix that same issue. 
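To be concrete, the intended flow is to land the change on master (and any newer stable branch) first and then walk it back one branch at a time with a clean cherry-pick; the branch names and commit id below are placeholders:

    # after the fix has merged on master
    git checkout -t origin/stable/stein -b backport-stein
    git cherry-pick -x <master-commit-sha>   # -x records the original commit id
    git review stable/stein
    # repeat for stable/rocky once the stein backport is up, so no branch is skipped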
All of the developers are from the same vendor who is targeting a rocky-based release, and while it's great that they put an emphasis on using the upstream docs for product code and work on fixing those docs upstream, we do have backport processes in place for working on stable branches, in this case specifically: https://docs.openstack.org/project-team-guide/stable-branches.html#processes So let's please follow the upstream process so we don't have to continue to fix the same things each release because backports weren't done properly the first time. -- Thanks, Matt From francois.scheurer at everyware.ch Fri Sep 20 15:32:24 2019 From: francois.scheurer at everyware.ch (Francois Scheurer) Date: Fri, 20 Sep 2019 17:32:24 +0200 Subject: cron triggers execution fails with cinder.volume_snapshots_create In-Reply-To: <20190920140219.jdb2k2t4w5m3a7rr@localhost> References: <21a0f692-aa42-d81d-8968-5524e8596e19@everyware.ch> <20190920105132.2oc6igmnkehq65wy@localhost> <410df12a-46b6-2a07-1e28-6c5aadaf8e53@everyware.ch> <20190920124651.fxf3d2eqbgi5rbc4@localhost> <69596a39-3d05-40ff-d757-ff62b8a1608c@everyware.ch> <20190920140219.jdb2k2t4w5m3a7rr@localhost> Message-ID: <021261ae-f1ce-343e-1695-f13f6c8082b9@everyware.ch> Hi Gorka >Then I assume you prefer the Swift backup driver over the Ceph one >because you are using one of the OpenStack releases that had trouble >with Incremental Backups on the Ceph backup driver. You are probably right. But I cannot answer that because I was not involve in that decision. Ok in the radosgw logs I see this: 2019-09-20 15:40:06.805529 7f19edb9b700 20 token_id=gAAAAABdhNauRvNev5P90ovX7_cb5_4MkY1tg5JHFpAH8JL-_0vDs06lHW5F9Iphua7fxCWTxxdL-0fRzhR8We_nN6Hx9z3FTWcTXLUMtIUPe0WMKQgW6JkUTP8RwSjAfF4W04OztEg3VAUGN_5gWRlBX-KT9uypnEszadG1yA7gpjkCokNnD8oaIeE6arvs_EjfJib51rao 2019-09-20 15:40:06.805664 7f19edb9b700 20 sending request to https://keystone.service.stage.ewcs.ch/v3/auth/tokens 2019-09-20 15:40:06.805803 7f19edb9b700 20 ssl verification is set to off 2019-09-20 15:40:07.235356 7f19edb9b700 20 sending request to https://keystone.service.stage.ewcs.ch/v3/auth/tokens 2019-09-20 15:40:07.235404 7f19edb9b700 20 ssl verification is set to off 2019-09-20 15:40:07.267091 7f19edb9b700  5 Failed keystone auth from https://keystone.service.stage.ewcs.ch/v3/auth/tokens with 404 BTW: our radosgw is configured to delegate user authentication to keystone. In keystone logs I see this: 2019-09-20 15:40:07.218 24 INFO keystone.token.provider [req-21b2f11c-9e67-4487-af05-420acfb65ace - - - - -] Token being processed: token.user_id [f7c7296949f84a4387c5172808a0965b], token.expires_at[2019-09-21T13:40:07.000000Z], token.audit_ids[[u'hFweMPCrSO2D00rNcRNECw']], token.methods[[u'password']], token.system[None], token.domain_id[None], token.project_id[4120792f50bc4cf2b4f97c4546462f06], token.trust_id[None], token.federated_groups[None], token.identity_provider_id[None], token.protocol_id[None], token.access_token_id[None],token.application_credential_id[None]. 
2019-09-20 15:40:07.257 21 INFO keystone.common.wsgi [req-9f858abb-68f9-42cf-b71a-f1cafca91844 f7c7296949f84a4387c5172808a0965b 4120792f50bc4cf2b4f97c4546462f06 - default default] GET http://keystone.service.stage.ewcs.ch/v3/auth/tokens 2019-09-20 15:40:07.265 21 WARNING keystone.common.wsgi [req-9f858abb-68f9-42cf-b71a-f1cafca91844 f7c7296949f84a4387c5172808a0965b 4120792f50bc4cf2b4f97c4546462f06 - default default] Could not find trust: 934ed82d2b14413899023da0bee6a953.: TrustNotFound: Could not find trust: 934ed82d2b14413899023da0bee6a953. So what happens is following: 1. when the user creates the cron trigger, mistral creates a trust 2. when the cron trigger executes the workflow, openstack create a volume snapshot (a rbd image) then copy it to swift (rgw) then delete the snapshot 3. when the execution finishes, if the cron trigger has no remaining executions scheduled, then mistral remove the cron trigger and the trust The problem is a racing issue: apprently the copying of the snapshot to swift run in the background and mistral removes the trust before the operation completes... That explains the error in keystone and also the cron trigger execution result which is "success" even if the resulting backup is actually "failed". To test this theory I set up the same cron trigger with more than one scheduled execution and the backups were suddenly created correctly ;-). So something need to be done on the code to deal with this racing issue. In the meantime, I will try to put a sleep action after the 'create backup' action. Best Regards Francois On 9/20/19 4:02 PM, Gorka Eguileor wrote: > On 20/09, Francois Scheurer wrote: >> Hi Gorka >> >> >> We have a swift endpoint set up on opentstack, which points to our ceph >> radosgw backend >> >> Radosgw provides s3 & swift. >> >> So the swift logs are here actually the radosgw logs. >> > Hi, > > OK, thanks for the clarification. > > Then I assume you prefer the Swift backup driver over the Ceph one > because you are using one of the OpenStack releases that had trouble > with Incremental Backups on the Ceph backup driver. > > Cheers, > Gorka. > > >> Cheers >> >> Francois >> >> >> >> On 9/20/19 2:46 PM, Gorka Eguileor wrote: >>> On 20/09, Francois Scheurer wrote: >>>> Dear Gorka and Hervé >>>> >>>> >>>> Thanks for your hints. >>>> >>>> I have set the debug log level on radosgw. >>>> >>>> I will retest now and post here the results. >>>> >>>> >>>> Cheers >>>> >>>> Francois >>> Hi, >>> >>> Sorry, I may have missed something in the conversation, weren't you >>> using Swift? >>> >>> I think you need to see the Swift logs as well, since that's the API >>> service that complained about the authorization. >>> >>> Cheers, >>> Gorka. >>> >>>> >>>> >>>> -- >>>> >>>> >>>> EveryWare AG >>>> François Scheurer >>>> Senior Systems Engineer >>>> Zurlindenstrasse 52a >>>> CH-8003 Zürich >>>> >>>> tel: +41 44 466 60 00 >>>> fax: +41 44 466 60 10 >>>> mail: francois.scheurer at everyware.ch >>>> web: http://www.everyware.ch >> -- >> >> >> EveryWare AG >> François Scheurer >> Senior Systems Engineer >> Zurlindenstrasse 52a >> CH-8003 Zürich >> >> tel: +41 44 466 60 00 >> fax: +41 44 466 60 10 >> mail: francois.scheurer at everyware.ch >> web: http://www.everyware.ch > -- EveryWare AG François Scheurer Senior Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: francois.scheurer at everyware.ch web: http://www.everyware.ch -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From mriedemos at gmail.com Fri Sep 20 15:52:27 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 20 Sep 2019 10:52:27 -0500 Subject: [nova] The test of NUMA aware live migration In-Reply-To: <6A5C6F83-F6A9-4DE1-A859-B787E3490AC6@99cloud.net> References: <6A5C6F83-F6A9-4DE1-A859-B787E3490AC6@99cloud.net> Message-ID: <10e25785-4271-9f19-db15-0c31ea7543ee@gmail.com> On 9/17/2019 7:44 AM, wang.ya wrote: > But if add the property “hw:cpu_policy='dedicated'”, it will not correct > after serval live migrations. > > Which means the live migrate can be success, but the vCPU pin are not > correct(two instance have serval same vCPU pin on same host). > Is the race you're describing the same issue reported in this bug? https://bugs.launchpad.net/nova/+bug/1829349 Also, what is the max_concurrent_live_migrations config option set to? That defaults to 1 but I'm wondering if you've changed it at all. -- Thanks, Matt From sean.mcginnis at gmx.com Fri Sep 20 16:16:18 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 20 Sep 2019 11:16:18 -0500 Subject: [RelMgmt] Moving release team meeting for the week of Sept 23 Message-ID: <20190920161618.GA8920@sm-workstation> Hey everyone, Due to travel conflicts next week, rather than cancelling the weekly meeting during the RC week we will be moving it to Friday at 15:00 UTC. This is a one time change of the meeting time. Afterwards we will return to our normal schedule of Thursday, 16:00 UTC. Let me know if there are any questions or concerns. Sean From Ryan.Liang at dell.com Fri Sep 20 13:39:17 2019 From: Ryan.Liang at dell.com (Ryan.Liang at dell.com) Date: Fri, 20 Sep 2019 13:39:17 +0000 Subject: [cinder][FFE] Feature Freeze Exceptions In-Reply-To: References: <3fa786af13af4f2582beded802a8e472@KULX13MDC131.APAC.DELL.COM> Message-ID: <8ec2659315184166a542b5cc6a968f50@KULX13MDC131.APAC.DELL.COM> Sorry. Sending to the mailing list. Hi, I’d like to request FFE for the following Cinder Unity driver review: Unity: Add replication support. https://review.opendev.org/#/c/633451/ This change is limited to our Unity driver. Thanks, -Ryan From: Sean McGinnis Sent: Friday, September 20, 2019 9:02 PM To: Liang, Ryan Cc: Jay Bryant; Sun, Hao; Karthik, Rajini Subject: Re: [cinder][FFE] Feature Freeze Exceptions [EXTERNAL EMAIL] Hey Ryan, I think that should be fine, but you need to post this to the mailing list, not directly to Jay. Sean On Fri, Sep 20, 2019 at 1:13 AM > wrote: Hi Jay and Sean, I’d like to request FFE for the following Cinder Unity driver review: Unity: Add replication support. https://review.opendev.org/#/c/633451/ This change is limited to our Unity driver. Thanks, -Ryan -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Fri Sep 20 17:08:57 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 20 Sep 2019 12:08:57 -0500 Subject: [Release-job-failures] release-post job for openstack/releases for ref refs/heads/master failed In-Reply-To: References: Message-ID: <20190920170857.GA15110@sm-workstation> On Fri, Sep 20, 2019 at 04:46:49PM +0000, zuul at openstack.org wrote: > Build failed. 
> > - tag-releases https://zuul.opendev.org/t/openstack/build/9413e62eae174733a7f577c6090c3059 : TIMED_OUT in 30m 56s > - publish-tox-docs-static https://zuul.opendev.org/t/openstack/build/None : SKIPPED > The tagging job failed here because I tried to include too many new branch creation requests in one patch. This job will be reenqueued so the processing runs again. The automation is smart enough to skip over the ones that have already been branched, so it should be able to just pick up where it left off. This was half way through processing the python-designateclient branching. The branch was created, and one of the two automatic stable patches were submitted before the job was killed. I have manually submitted the one that was missed here: https://review.opendev.org/#/c/683592/ Sean From miguel at mlavalle.com Fri Sep 20 18:23:35 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Fri, 20 Sep 2019 13:23:35 -0500 Subject: [neutron] [stable] Proposing Slawek Kaplonski to Neutron Stable core Message-ID: Hi Stackers, In order to strengthen our stable core team, I want to nominate Slawek Kaplonski to it. Over the past years he has made countless contributions in all areas of the project, has been a member of the core team since two years ago and is the incoming PTL for the U cycle ( https://governance.openstack.org/election/results/ussuri/ptl.html). I will keep this nomination open for a week as customary. Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Fri Sep 20 18:46:22 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 20 Sep 2019 13:46:22 -0500 Subject: [neutron] [stable] Proposing Slawek Kaplonski to Neutron Stable core In-Reply-To: References: Message-ID: On 9/20/2019 1:23 PM, Miguel Lavalle wrote: > In order to strengthen our stable core team, I want to nominate Slawek > Kaplonski to it. Over the past years he has made countless contributions > in all areas of the project, has been a member of the core team since > two years ago and is the incoming PTL for the U cycle > (https://governance.openstack.org/election/results/ussuri/ptl.html). As most probably know by now, being core/PTL on master doesn't translate necessarily to understanding the stable review policy [1] but looking at Slawek's stable branch reviews it looks like he's been very active already [2]. So +1 from me. [1] https://docs.openstack.org/project-team-guide/stable-branches.html [2] https://review.opendev.org/#/q/reviewedby:skaplons%2540redhat.com+branch:%255Estable%255C/.* -- Thanks, Matt From sean.mcginnis at gmx.com Fri Sep 20 19:25:18 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 20 Sep 2019 14:25:18 -0500 Subject: [neutron] [stable] Proposing Slawek Kaplonski to Neutron Stable core In-Reply-To: References: Message-ID: <20190920192518.GA20118@sm-workstation> On Fri, Sep 20, 2019 at 01:46:22PM -0500, Matt Riedemann wrote: > On 9/20/2019 1:23 PM, Miguel Lavalle wrote: > > In order to strengthen our stable core team, I want to nominate Slawek > > Kaplonski to it. Over the past years he has made countless contributions > > in all areas of the project, has been a member of the core team since > > two years ago and is the incoming PTL for the U cycle > > (https://governance.openstack.org/election/results/ussuri/ptl.html). 
> > As most probably know by now, being core/PTL on master doesn't translate > necessarily to understanding the stable review policy [1] but looking at > Slawek's stable branch reviews it looks like he's been very active already > [2]. So +1 from me. > > [1] https://docs.openstack.org/project-team-guide/stable-branches.html > [2] https://review.opendev.org/#/q/reviewedby:skaplons%2540redhat.com+branch:%255Estable%255C/.* > Spot checking some of the stable reviews, everything looks good. +1 from me me from a stable perspective. Sean From haleyb.dev at gmail.com Fri Sep 20 20:05:13 2019 From: haleyb.dev at gmail.com (Brian Haley) Date: Fri, 20 Sep 2019 16:05:13 -0400 Subject: [neutron] [stable] Proposing Slawek Kaplonski to Neutron Stable core In-Reply-To: References: Message-ID: <1f095f55-29cf-51e1-f406-93834a208855@gmail.com> On 9/20/19 2:23 PM, Miguel Lavalle wrote: > Hi Stackers, > > In order to strengthen our stable core team, I want to nominate Slawek > Kaplonski to it. Over the past years he has made countless contributions > in all areas of the project, has been a member of the core team since > two years ago and is the incoming PTL for the U cycle > (https://governance.openstack.org/election/results/ussuri/ptl.html). > > I will keep this nomination open for a week as customary. > > Best regards > > Miguel +1 from me from a neutron perspective. -Brian From fungi at yuggoth.org Fri Sep 20 20:29:51 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 20 Sep 2019 20:29:51 +0000 Subject: [i18n][tc] The future of I18n In-Reply-To: <462cad35-832e-c5b0-8971-a97f386f78e0@openstack.org> References: <0ffa02d3-fef5-8fc3-1925-5c663b6c967d@openstack.org> <20190906133759.obgszlvqexgam5n3@csail.mit.edu> <817c9cf8-ca12-146b-af49-3f4345402888@gmail.com> <462cad35-832e-c5b0-8971-a97f386f78e0@openstack.org> Message-ID: <20190920202951.e3vi33rfnezleici@yuggoth.org> On 2019-09-09 12:30:56 +0200 (+0200), Thierry Carrez wrote: [...] > Note that SIG members are considered ATCs (just like project team > members) and can vote in the TC election... so there would be no > difference really (except I18n SIG members would no longer have to > formally vote for a PTL). [...] We've asserted in the past that this should be the case, but no work was done to implement support for it in our technical election tooling once SIGs became an official kind of governance structure. I have taken this as a cue to go ahead and add it, so once this merges it will *actually* be true: https://review.opendev.org/683727 -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jasonanderson at uchicago.edu Fri Sep 20 21:33:32 2019 From: jasonanderson at uchicago.edu (Jason Anderson) Date: Fri, 20 Sep 2019 21:33:32 +0000 Subject: [ironic] Tips on testing custom hardware manager? In-Reply-To: <3b588cef-563e-78a3-d471-d2a6cff3184b@cern.ch> References: <06723f39-ec67-c98b-9e2d-c9b375d568e8@uchicago.edu> <3b588cef-563e-78a3-d471-d2a6cff3184b@cern.ch> Message-ID: Hi Arne, Thank you for those tips. While the Git-based solution wouldn't work for us due to our networking rules (can't pull from a remote repo on the provisioning network), the flag files are very clever and we had some luck using them to get more time to check things. Cheers! 
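For anyone else wanting to try the flag-file trick, a rough sketch of what such a "breakpoint" clean step can look like in a custom hardware manager; the class name, step name, flag path and priority are all illustrative, and the exact HardwareManager interface should be checked against the IPA version in your ramdisk:

    import os
    import time

    from ironic_python_agent import hardware


    class DebugHardwareManager(hardware.HardwareManager):
        """Example manager whose only job is to pause cleaning until we log in."""

        HARDWARE_MANAGER_NAME = 'DebugHardwareManager'
        HARDWARE_MANAGER_VERSION = '1'

        def evaluate_hardware_support(self):
            # Claim support so our steps are collected; generic steps still run.
            return hardware.HardwareSupport.SERVICE_PROVIDER

        def get_clean_steps(self, node, ports):
            # Pick a priority that lands the pause where you want it
            # relative to the other clean steps.
            return [{'step': 'wait_for_flag_file',
                     'priority': 95,
                     'interface': 'deploy',
                     'reboot_requested': False,
                     'abortable': True}]

        def wait_for_flag_file(self, node, ports):
            # Spin until someone logs into the ramdisk and runs: touch /tmp/continue
            while not os.path.exists('/tmp/continue'):
                time.sleep(10)

The manager still has to be registered through the usual ironic_python_agent.hardware_managers entry point when the ramdisk image is built, like any other custom hardware manager.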
/Jason On 9/19/19 10:50 AM, Arne Wiebalck wrote: > Jason, > > One thing we do is having the image pull in the custom hardware > manager branch via git: we build the image once and make changes > on the branch which is then pulled in on the next iteration. > As this avoids rebuilding/uploading the image for every change, > our dev cycle has become much shorter. > > Another thing we do for debugging our custom hardware manager is > to add (debug) steps to it. These steps wait for certain file to > appear before moving on: the IPA will basically spin in this step > until we log in and touch the flag file. With one or two steps like > this we can set "breakpoints" to check things while developing our > hardware manager. > > HTH, >  Arne > > On 19.09.19 03:34, Jason Anderson wrote: >> Hi all, >> >> I am hoping to get some tips on how to test out a custom hardware >> manager. One of my colleagues is working on a project that involves >> implementing a custom in-band cleaning step, which we are >> implementing by creating our own ramdisk image that includes an extra >> library, which is necessary for the clean step. We already have >> created the image and ensured it has IPA installed and that all seems >> to work fine (in that, it executes on the node and we see our code >> running--and failing!) >> >> The issue we are having is that we encounter some issues in our fully >> integrated environment (such as the provisioning network having >> different networking rules) and replicating this environment in some >> local development context is very difficult. Right now our workflow >> is really onerous as a result: my colleague has to rebuild the >> ramdisk image, re-upload it to Glance, update the test Ironic node to >> reference that image, then perform a rebuild. One cycle of this takes >> a while as you can imagine. I was wondering: is it possible to >> somehow interrupt or give a larger window for some interactive >> debugging? The amount of time we have to run some quick >> tests/debugging is small because the deploy will time out and cancel >> itself or it will proceed and fail. >> >> Thusfar I haven't found any documentation or written experience on >> this admittedly niche task. Perhaps somebody has already gone down >> this road and can advise on some tips? It would be much appreciated! >> >> Cheers, >> >> -- >> Jason Anderson >> >> Chameleon DevOps Lead >> *Consortium for Advanced Science and Engineering, The University of >> Chicago* >> *Mathematics & Computer Science Division, Argonne National Laboratory* > From colleen at gazlene.net Fri Sep 20 23:10:08 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 20 Sep 2019 16:10:08 -0700 Subject: [keystone] Keystone Team Update - Week of 16 September 2019 Message-ID: <632454cd-65a5-44c7-88c9-49bb3202f2f7@www.fastmail.com> # Keystone Team Update - Week of 16 September 2019 ## News ### RC1 Status Even after discovering some gaps in our policy migrations we managed to quickly cover them and are on track to close all our RC1-targeted bugs[1] on time! [1] https://launchpad.net/keystone/+milestone/train-rc1 ## Office Hours When there are topics to cover, the keystone team holds office hours on Tuesdays at 17:00 UTC. No office hours next week due to lack of topics. Add topics you would like to see covered during office hours to the etherpad: https://etherpad.openstack.org/p/keystone-office-hours-topics ## Recently Merged Changes Search query: https://bit.ly/2pquOwT We merged 32 changes this week. 
## Changes that need Attention Search query: https://bit.ly/2tymTje There are 58 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. ## Milestone Outlook https://releases.openstack.org/train/schedule.html We're on target to release RC1 next week. ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From vkmc at redhat.com Sat Sep 21 12:54:26 2019 From: vkmc at redhat.com (Victoria Martinez de la Cruz) Date: Sat, 21 Sep 2019 09:54:26 -0300 Subject: [Women-of-openstack] Outreachy Application Deadline - Call for mentors and projects In-Reply-To: References: Message-ID: Hi all, Samuel, I'll be sending my application for mentoring today/tomorrow. Happy to hear we have an extended deadline to do this and sorry I didn't come back earlier. Adding openstack-discuss to CC. Cheers, V On Wed, Sep 18, 2019 at 12:11 PM Samuel de Medeiros Queiroz < samueldmq at gmail.com> wrote: > Hi everyone! > > Outreachy helps people from underrepresented > groups get involved in free and open source software by matching interns > with established mentors in the upstream communities. > > OpenStack is a participating organization in the Outreachy Dec 2019 to > Mar 2020 round. If you're interested to be a mentor, please register as a > mentor in the Outreachy website and publish your project ideas. > > According to this round's schedule > , the > initial application is due next week: > > - > *Sept. 24, 2019 at 4pm UTC Initial application deadline * > - *Nov. 5, 2019 at 4pm UTC Final application deadline* > > It is important to get projects submitted *as soon as possible* so that > applicants can sign up before the *Sept. 24 deadline*. > > Once signed up, they will have between *Oct. 1, 2019 to Nov. 5, 2019 to > contribute to the projects*. > > If you have any questions about becoming a mentor or want to sponsor an > intern, please contact me (samueldmq at gmail.com) or Mahati Chamarthy ( > mahati.chamarthy at gmail.com). > > Thank you, > Samuel de Medeiros Queiroz > _______________________________________________ > Women-of-openstack mailing list > Women-of-openstack at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/women-of-openstack > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vkmc at redhat.com Sat Sep 21 12:54:26 2019 From: vkmc at redhat.com (Victoria Martinez de la Cruz) Date: Sat, 21 Sep 2019 09:54:26 -0300 Subject: [Women-of-openstack] Outreachy Application Deadline - Call for mentors and projects In-Reply-To: References: Message-ID: Hi all, Samuel, I'll be sending my application for mentoring today/tomorrow. Happy to hear we have an extended deadline to do this and sorry I didn't come back earlier. Adding openstack-discuss to CC. Cheers, V On Wed, Sep 18, 2019 at 12:11 PM Samuel de Medeiros Queiroz < samueldmq at gmail.com> wrote: > Hi everyone! > > Outreachy helps people from underrepresented > groups get involved in free and open source software by matching interns > with established mentors in the upstream communities. > > OpenStack is a participating organization in the Outreachy Dec 2019 to > Mar 2020 round. If you're interested to be a mentor, please register as a > mentor in the Outreachy website and publish your project ideas. > > According to this round's schedule > , the > initial application is due next week: > > - > *Sept. 24, 2019 at 4pm UTC Initial application deadline * > - *Nov. 
5, 2019 at 4pm UTC Final application deadline* > > It is important to get projects submitted *as soon as possible* so that > applicants can sign up before the *Sept. 24 deadline*. > > Once signed up, they will have between *Oct. 1, 2019 to Nov. 5, 2019 to > contribute to the projects*. > > If you have any questions about becoming a mentor or want to sponsor an > intern, please contact me (samueldmq at gmail.com) or Mahati Chamarthy ( > mahati.chamarthy at gmail.com). > > Thank you, > Samuel de Medeiros Queiroz > _______________________________________________ > Women-of-openstack mailing list > Women-of-openstack at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/women-of-openstack > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bcafarel at redhat.com Sat Sep 21 19:39:10 2019 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Sat, 21 Sep 2019 21:39:10 +0200 Subject: [neutron] [stable] Proposing Slawek Kaplonski to Neutron Stable core In-Reply-To: References: Message-ID: On Fri, 20 Sep 2019 at 20:48, Matt Riedemann wrote: > On 9/20/2019 1:23 PM, Miguel Lavalle wrote: > > In order to strengthen our stable core team, I want to nominate Slawek > > Kaplonski to it. Over the past years he has made countless contributions > > in all areas of the project, has been a member of the core team since > > two years ago and is the incoming PTL for the U cycle > > (https://governance.openstack.org/election/results/ussuri/ptl.html). 
> > As most probably know by now, being core/PTL on master doesn't translate > necessarily to understanding the stable review policy [1] but looking at > Slawek's stable branch reviews it looks like he's been very active > already [2]. So +1 from me. > Also worth looking at is Slawek's long list of completed backports: https://review.opendev.org/#/q/author:skaplons%2540redhat.com+branch:%255Estable%255C/.* I can say as neutron stable core that, I know I will have an easy stable review when I see that author, so +1 from me > > [1] https://docs.openstack.org/project-team-guide/stable-branches.html > [2] > > https://review.opendev.org/#/q/reviewedby:skaplons%2540redhat.com+branch:%255Estable%255C/.* > -- Bernard Cafarelli -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Sun Sep 22 15:37:51 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Sun, 22 Sep 2019 10:37:51 -0500 Subject: [nova] New gate bug 1844929, timed out waiting for response from cell during scheduling Message-ID: I noticed this while looking at a grenade failure on an unrelated patch: https://bugs.launchpad.net/nova/+bug/1844929 The details are in the bug but it looks like this showed up around Sept 17 and hits mostly on FortNebula nodes but also OVH nodes. It's restricted to grenade jobs and while I don't see anything obvious in the rabbitmq logs (the only errors are about uwsgi [api] heartbeat issues), it's possible that these are slower infra nodes and we're just not waiting for something properly during the grenade upgrade. We also don't seem to have the mysql logs published during the grenade jobs which we need to fix (and recently did fix for devstack jobs [1] but grenade jobs are still using devstack-gate so log collection happens there). I didn't see any changes in nova, grenade or devstack since Sept 16 that look like they would be related to this so I'm guessing right now it's just a combination of performance on certain infra nodes (slower?) and something in grenade/nova not restarting properly or not waiting long enough for the upgrade to complete. [1] https://github.com/openstack/devstack/commit/f92c346131db2c89b930b1a23f8489419a2217dc -- Thanks, Matt From mark at stackhpc.com Sun Sep 22 16:55:03 2019 From: mark at stackhpc.com (Mark Goddard) Date: Sun, 22 Sep 2019 17:55:03 +0100 Subject: [nova] New gate bug 1844929, timed out waiting for response from cell during scheduling In-Reply-To: References: Message-ID: On Sun, 22 Sep 2019, 16:39 Matt Riedemann, wrote: > I noticed this while looking at a grenade failure on an unrelated patch: > > https://bugs.launchpad.net/nova/+bug/1844929 > > The details are in the bug but it looks like this showed up around Sept > 17 and hits mostly on FortNebula nodes but also OVH nodes. It's > restricted to grenade jobs and while I don't see anything obvious in the > rabbitmq logs (the only errors are about uwsgi [api] heartbeat issues), > it's possible that these are slower infra nodes and we're just not > waiting for something properly during the grenade upgrade. We also don't > seem to have the mysql logs published during the grenade jobs which we > need to fix (and recently did fix for devstack jobs [1] but grenade jobs > are still using devstack-gate so log collection happens there). > > I didn't see any changes in nova, grenade or devstack since Sept 16 that > look like they would be related to this so I'm guessing right now it's > just a combination of performance on certain infra nodes (slower?) 
and > something in grenade/nova not restarting properly or not waiting long > enough for the upgrade to complete. > Julia recently fixed an issue in ironic caused by a low MTU on fortnebula. May or may not be related. [1] > > https://github.com/openstack/devstack/commit/f92c346131db2c89b930b1a23f8489419a2217dc > > -- > > Thanks, > > Matt > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Mon Sep 23 07:37:46 2019 From: geguileo at redhat.com (Gorka Eguileor) Date: Mon, 23 Sep 2019 09:37:46 +0200 Subject: cron triggers execution fails with cinder.volume_snapshots_create In-Reply-To: <021261ae-f1ce-343e-1695-f13f6c8082b9@everyware.ch> References: <21a0f692-aa42-d81d-8968-5524e8596e19@everyware.ch> <20190920105132.2oc6igmnkehq65wy@localhost> <410df12a-46b6-2a07-1e28-6c5aadaf8e53@everyware.ch> <20190920124651.fxf3d2eqbgi5rbc4@localhost> <69596a39-3d05-40ff-d757-ff62b8a1608c@everyware.ch> <20190920140219.jdb2k2t4w5m3a7rr@localhost> <021261ae-f1ce-343e-1695-f13f6c8082b9@everyware.ch> Message-ID: <20190923073746.diuqub3ciyqi3duk@localhost> On 20/09, Francois Scheurer wrote: > Hi Gorka > > > >Then I assume you prefer the Swift backup driver over the Ceph one > >because you are using one of the OpenStack releases that had trouble >with > Incremental Backups on the Ceph backup driver. > > > You are probably right. But I cannot answer that because I was not involve > in that decision. > > > Ok in the radosgw logs I see this: > > > 2019-09-20 15:40:06.805529 7f19edb9b700 20 token_id=gAAAAABdhNauRvNev5P90ovX7_cb5_4MkY1tg5JHFpAH8JL-_0vDs06lHW5F9Iphua7fxCWTxxdL-0fRzhR8We_nN6Hx9z3FTWcTXLUMtIUPe0WMKQgW6JkUTP8RwSjAfF4W04OztEg3VAUGN_5gWRlBX-KT9uypnEszadG1yA7gpjkCokNnD8oaIeE6arvs_EjfJib51rao > 2019-09-20 15:40:06.805664 7f19edb9b700 20 sending request to > https://keystone.service.stage.ewcs.ch/v3/auth/tokens > 2019-09-20 15:40:06.805803 7f19edb9b700 20 ssl verification is set to off > 2019-09-20 15:40:07.235356 7f19edb9b700 20 sending request to > https://keystone.service.stage.ewcs.ch/v3/auth/tokens > 2019-09-20 15:40:07.235404 7f19edb9b700 20 ssl verification is set to off > 2019-09-20 15:40:07.267091 7f19edb9b700  5 Failed keystone auth from > https://keystone.service.stage.ewcs.ch/v3/auth/tokens with 404 > BTW: our radosgw is configured to delegate user authentication to keystone. > > In keystone logs I see this: > > 2019-09-20 15:40:07.218 24 INFO keystone.token.provider > [req-21b2f11c-9e67-4487-af05-420acfb65ace - - - - -] Token being processed: > token.user_id [f7c7296949f84a4387c5172808a0965b], > token.expires_at[2019-09-21T13:40:07.000000Z], > token.audit_ids[[u'hFweMPCrSO2D00rNcRNECw']], token.methods[[u'password']], > token.system[None], token.domain_id[None], > token.project_id[4120792f50bc4cf2b4f97c4546462f06], token.trust_id[None], > token.federated_groups[None], token.identity_provider_id[None], > token.protocol_id[None], > token.access_token_id[None],token.application_credential_id[None]. 
> 2019-09-20 15:40:07.257 21 INFO keystone.common.wsgi > [req-9f858abb-68f9-42cf-b71a-f1cafca91844 f7c7296949f84a4387c5172808a0965b > 4120792f50bc4cf2b4f97c4546462f06 - default default] GET > http://keystone.service.stage.ewcs.ch/v3/auth/tokens > 2019-09-20 15:40:07.265 21 WARNING keystone.common.wsgi > [req-9f858abb-68f9-42cf-b71a-f1cafca91844 f7c7296949f84a4387c5172808a0965b > 4120792f50bc4cf2b4f97c4546462f06 - default default] Could not find trust: > 934ed82d2b14413899023da0bee6a953.: TrustNotFound: Could not find trust: > 934ed82d2b14413899023da0bee6a953. > > > So what happens is following: > > 1. when the user creates the cron trigger, mistral creates a trust > 2. when the cron trigger executes the workflow, openstack create a > volume snapshot (a rbd image) then copy it to swift (rgw) then > delete the snapshot > 3. when the execution finishes, if the cron trigger has no remaining > executions scheduled, then mistral remove the cron trigger and the trust > > The problem is a racing issue: apprently the copying of the snapshot to > swift run in the background and mistral removes the trust before the > operation completes... > > That explains the error in keystone and also the cron trigger execution > result which is "success" even if the resulting backup is actually "failed". > > > To test this theory I set up the same cron trigger with more than one > scheduled execution and the backups were suddenly created correctly ;-). > > > So something need to be done on the code to deal with this racing issue. > > In the meantime, I will try to put a sleep action after the 'create backup' > action. > Hi, Congrats on figuring out the issue. :-) Instead of a sleep, which may get you through this issue but fall into a different one and won't return the right status code, you should probably have a loop checking the status of the backup and return a non zero status code if it ends up in "error" state. Cheers, Gorka. > > Best Regards > > Francois > > > > > > > > > > > > On 9/20/19 4:02 PM, Gorka Eguileor wrote: > > On 20/09, Francois Scheurer wrote: > > > Hi Gorka > > > > > > > > > We have a swift endpoint set up on opentstack, which points to our ceph > > > radosgw backend > > > > > > Radosgw provides s3 & swift. > > > > > > So the swift logs are here actually the radosgw logs. > > > > > Hi, > > > > OK, thanks for the clarification. > > > > Then I assume you prefer the Swift backup driver over the Ceph one > > because you are using one of the OpenStack releases that had trouble > > with Incremental Backups on the Ceph backup driver. > > > > Cheers, > > Gorka. > > > > > > > Cheers > > > > > > Francois > > > > > > > > > > > > On 9/20/19 2:46 PM, Gorka Eguileor wrote: > > > > On 20/09, Francois Scheurer wrote: > > > > > Dear Gorka and Hervé > > > > > > > > > > > > > > > Thanks for your hints. > > > > > > > > > > I have set the debug log level on radosgw. > > > > > > > > > > I will retest now and post here the results. > > > > > > > > > > > > > > > Cheers > > > > > > > > > > Francois > > > > Hi, > > > > > > > > Sorry, I may have missed something in the conversation, weren't you > > > > using Swift? > > > > > > > > I think you need to see the Swift logs as well, since that's the API > > > > service that complained about the authorization. > > > > > > > > Cheers, > > > > Gorka. 
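As a rough sketch of that kind of status check as a shell step (the backup id handling, poll interval, timeout and exit codes are only illustrative), something along these lines could replace the sleep:

    #!/bin/bash
    # Poll the backup until it settles, and fail loudly if it ends in error.
    backup_id="$1"
    for i in $(seq 1 120); do
        status=$(openstack volume backup show "$backup_id" -f value -c status)
        case "$status" in
            available) exit 0 ;;
            error)     echo "backup $backup_id failed" >&2; exit 1 ;;
        esac
        sleep 10
    done
    echo "timed out waiting for backup $backup_id" >&2
    exit 1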
> > > > > > > > > > > > > > > > > > > -- > > > > > > > > > > > > > > > EveryWare AG > > > > > François Scheurer > > > > > Senior Systems Engineer > > > > > Zurlindenstrasse 52a > > > > > CH-8003 Zürich > > > > > > > > > > tel: +41 44 466 60 00 > > > > > fax: +41 44 466 60 10 > > > > > mail: francois.scheurer at everyware.ch > > > > > web: http://www.everyware.ch > > > -- > > > > > > > > > EveryWare AG > > > François Scheurer > > > Senior Systems Engineer > > > Zurlindenstrasse 52a > > > CH-8003 Zürich > > > > > > tel: +41 44 466 60 00 > > > fax: +41 44 466 60 10 > > > mail: francois.scheurer at everyware.ch > > > web: http://www.everyware.ch > > > -- > > > EveryWare AG > François Scheurer > Senior Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: francois.scheurer at everyware.ch > web: http://www.everyware.ch > From elfosardo at gmail.com Mon Sep 23 08:32:15 2019 From: elfosardo at gmail.com (Riccardo Pittau) Date: Mon, 23 Sep 2019 10:32:15 +0200 Subject: [neutron][drivers][ironic] FFE request - Use openstacksdk for ironic notifiers In-Reply-To: References: <7354478b-a71d-0538-c903-de90128e5b2f@fried.cc> <20190918194522.GB9740@t440s> Message-ID: The patch has merged Thanks all! On Wed, 18 Sep 2019 at 23:35, Miguel Lavalle wrote: > > Hi, > > This FFE is approved > > Thanks > > On Wed, Sep 18, 2019 at 2:45 PM Slawek Kaplonski wrote: >> >> Hi, >> >> Personally I think we can go with this is You will implement it now. >> As per discussion on IRC, Ironic code which will use those notifications isn't >> really ready yet, and will not be for Train. So even if something would possible >> go wrong (but won't for sure ;)) we shouldn't break Ironic. >> >> On Wed, Sep 18, 2019 at 11:04:54AM -0500, Eric Fried wrote: >> > > I'd like to open an FFE request to convert the ironic events notifier >> > > from the current ironicclient to openstacksdk with the change >> > > https://review.opendev.org/682040 >> > >> > This is kind of none of my business, but since the existing ironic stuff >> > was only introduced in Train [1], IMO it is important to allow this FFE >> > so neutron doesn't have to go through the pain of supporting and >> > deprecating the conf options (e.g. `ironic_url`) and code paths through >> > python-ironicclient. >> >> Thx. I agree. That's another good point to accept this FFE. >> >> > >> > efried >> > >> > [1] https://review.opendev.org/#/c/658787/ >> > >> >> -- >> Slawek Kaplonski >> Senior software engineer >> Red Hat >> >> From no-reply at openstack.org Mon Sep 23 09:10:09 2019 From: no-reply at openstack.org (no-reply at openstack.org) Date: Mon, 23 Sep 2019 09:10:09 -0000 Subject: kuryr-libnetwork 4.0.0.0rc1 (train) Message-ID: Hello everyone, A new release candidate for kuryr-libnetwork for the end of the Train cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/kuryr-libnetwork/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Train release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/train release branch at: https://opendev.org/openstack/kuryr-libnetwork/log/?h=stable/train Release notes for kuryr-libnetwork can be found at: https://docs.openstack.org/releasenotes/kuryr-libnetwork/ If you find an issue that could be considered release-critical, please file it at: https://bugs.launchpad.net/kuryr-libnetwork/+bugs and tag it *train-rc-potential* to bring it to the kuryr-libnetwork release crew's attention. From no-reply at openstack.org Mon Sep 23 09:13:44 2019 From: no-reply at openstack.org (no-reply at openstack.org) Date: Mon, 23 Sep 2019 09:13:44 -0000 Subject: zun-ui 4.0.0.0rc1 (train) Message-ID: Hello everyone, A new release candidate for zun-ui for the end of the Train cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/zun-ui/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Train release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/train release branch at: https://opendev.org/openstack/zun-ui/log/?h=stable/train Release notes for zun-ui can be found at: https://docs.openstack.org/releasenotes/zun-ui/ If you find an issue that could be considered release-critical, please file it at: https://bugs.launchpad.net/zun-ui/+bugs and tag it *train-rc-potential* to bring it to the zun-ui release crew's attention. From yamamoto at midokura.com Mon Sep 23 11:02:24 2019 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Mon, 23 Sep 2019 20:02:24 +0900 Subject: [neutron] bug deputy report (the week of 2019-09-16) Message-ID: here's the list of what we got reported in the week. features (nova bp follow-up) https://bugs.launchpad.net/neutron/+bug/1844131 Boot a VM with an unaddressed port High https://bugs.launchpad.net/neutron/+bug/1844168 [L3] TooManyExternalNetworks: More than one external network exists. A proposed fix: https://review.opendev.org/#/c/682418/ Medium https://bugs.launchpad.net/neutron/+bug/1844124 Not possible to change fixed-ips if port is on routed provider network A proposed fix: https://review.opendev.org/#/c/682489/ https://bugs.launchpad.net/neutron/+bug/1844516 [neutron-tempest-plugin] SSH timeout exceptions when executing remote commands A proposed fix: https://review.opendev.org/#/c/682864/ https://bugs.launchpad.net/neutron/+bug/1844688 "radvd" daemon does not work by default in some containers A proposed fix: https://review.opendev.org/#/c/683207/ Low https://bugs.launchpad.net/neutron/+bug/1844171 Configuration parameter missing at "Configure the layer-3 agent" A merged fix: https://review.opendev.org/#/c/682645/ https://bugs.launchpad.net/neutron/+bug/1844607 log error when create neutron port with wrong subnet A proposed fix: https://review.opendev.org/#/c/683273/ Incomplete https://bugs.launchpad.net/neutron/+bug/1844123 Unable to trigger IPv6 Prefix Delegation https://bugs.launchpad.net/neutron/+bug/1844595 openstackNova(Rocky)--chinese incorrect codes for db character error https://bugs.launchpad.net/neutron/+bug/1844915 Duplicate packets with two networks connected by router Invalid https://bugs.launchpad.net/neutron/+bug/1844097 Redundant ipv6 address(SLAAC/DHCPv6 stateless) created for port It was an intended behavior and the rationale was explained to the submitter. 
From gmann at ghanshyammann.com Mon Sep 23 12:09:30 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 23 Sep 2019 07:09:30 -0500 Subject: [goals][IPv6-Only Deployments and Testing] Week R-3 Update Message-ID: <16d5e06dc1f.ad7463ae237862.4726572145847079281@ghanshyammann.com> Hello Everyone, Below is the latest updated on IPv6 goal. All the projects have the ipv6 job patch proposed now. Next step is to review then as per mentioned guidelines below or help in debugging the failure. I saw tests are failing for the few projects which need debugging and help from the project team. Summary: The projects still need to prepare the IPv6 job: * None The projects waiting for IPv6 job patch to merge: If patch is failing, help me to debug that otherwise review and merge. * Barbican * Blazar * Cyborg * Tricircle * Vitrage * Zaqar * Glance * Monasca * Neutron * Qinling * Sahara * Searchlight * Senlin * Tacker * Ec2-Api * Freezer * Heat * Ironic * Karbor * Kuryr * Magnum * Masakari * Mistral * Murano * Octavia (johnsom is working on this and will take over the base patch) The projects have merged the IPv6 jobs: * Designate * Murano * Trove * Cloudkitty * Congress * Horizon * Keystone * Nova * Placement * Solum * Telemetry * Watcher * Zun * Cinder * Manila * Swift The projects do not need the IPv6 job (CLI, lib, deployment projects etc ): I have marked the tasks for below project as invalid. * Adjutant * Documentation * I18n * Infrastructure * Kolla * Loci * Openstack Charms * Openstack-Chef * Openstack-Helm * Openstackansible * Openstackclient * Openstacksdk * Oslo * Packaging-Rpm * Powervmstackers * Puppet Openstack * Rally * Release Management * Requirements * Storlets * Tripleo * Winstackers Storyboard: ========= - https://storyboard.openstack.org/#!/story/2005477 IPv6 missing support found: ===================== 1. https://review.opendev.org/#/c/673397/ 2. https://review.opendev.org/#/c/673449/ 3. https://review.opendev.org/#/c/677524/ There are few more but need to be tracked. How you can help: ============== - Each project needs to look for and review the ipv6 job patch. - Verify it works fine on ipv6 and no ipv4 used in conf etc - Any other specific scenario needs to be added as part of project IPv6 verification. - Help on debugging and fix the bug in IPv6 job is failing. Everything related to this goal can be found under this topic: Topic: https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged) How to define and run new IPv6 Job on project side: ======================================= - I prepared a wiki page to describe this section - https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing Review suggestion: ============== - Main goal of these jobs will be whether your service is able to listen on IPv6 and can communicate to any other services either OpenStack or DB or rabbitmq etc on IPv6 or not. So check your proposed job with that point of view. If anything missing, comment on patch. - One example was - I missed to configure novnc address to IPv6- https://review.opendev.org/#/c/672493/ - base script as part of 'devstack-tempest-ipv6' will do basic checks for endpoints on IPv6 and some devstack var setting. But if your project needs more specific verification then it can be added in project side job as post-run playbooks as described in wiki page[1]. 
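As a rough illustration of what such a project-side job definition tends to look like (the job name, project and variables here are placeholders; check the wiki page in [1] below for the exact parent job and settings your project should use):

    # .zuul.yaml in the project repo (names and vars are placeholders)
    - job:
        name: myproject-devstack-ipv6
        parent: devstack-tempest-ipv6
        description: Deploy myproject listening on IPv6 only and run tempest.
        required-projects:
          - openstack/myproject
        vars:
          devstack_services:
            myproject-api: true
          tox_envlist: full

    - project:
        check:
          jobs:
            - myproject-devstack-ipv6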
[1] https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing From donny at fortnebula.com Mon Sep 23 12:37:04 2019 From: donny at fortnebula.com (Donny Davis) Date: Mon, 23 Sep 2019 08:37:04 -0400 Subject: [nova] New gate bug 1844929, timed out waiting for response from cell during scheduling In-Reply-To: References: Message-ID: It looks to me like there are specific jobs on specific providers that are not functioning correctly. I will pick on Fort Nebula for a minute. tacker-functional-devstack-multinode just doesn't seem to work, but most of the other jobs that do something similar work ok. You can see the load on Fort Nebula here, and by looking at the data I don't see any issues with it being overloaded/oversubscribed. https://grafana.fortnebula.com/d/9MMqh8HWk/openstack-utilization?orgId=2&refresh=30s&from=now-12h&to=now Also most jobs are IO/Memory bound and Fort Nebula uses local NVME for all of the Openstack Jobs.. There isn't a reasonable way to make it any faster. With that said, I would like to get to the bottom of it. It surely doesn't help anyone to have jobs be failing for non code related reasons. ~/D On Sun, Sep 22, 2019 at 12:58 PM Mark Goddard wrote: > > > On Sun, 22 Sep 2019, 16:39 Matt Riedemann, wrote: > >> I noticed this while looking at a grenade failure on an unrelated patch: >> >> https://bugs.launchpad.net/nova/+bug/1844929 >> >> The details are in the bug but it looks like this showed up around Sept >> 17 and hits mostly on FortNebula nodes but also OVH nodes. It's >> restricted to grenade jobs and while I don't see anything obvious in the >> rabbitmq logs (the only errors are about uwsgi [api] heartbeat issues), >> it's possible that these are slower infra nodes and we're just not >> waiting for something properly during the grenade upgrade. We also don't >> seem to have the mysql logs published during the grenade jobs which we >> need to fix (and recently did fix for devstack jobs [1] but grenade jobs >> are still using devstack-gate so log collection happens there). >> >> I didn't see any changes in nova, grenade or devstack since Sept 16 that >> look like they would be related to this so I'm guessing right now it's >> just a combination of performance on certain infra nodes (slower?) and >> something in grenade/nova not restarting properly or not waiting long >> enough for the upgrade to complete. >> > > Julia recently fixed an issue in ironic caused by a low MTU on fortnebula. > May or may not be related. > > [1] >> >> https://github.com/openstack/devstack/commit/f92c346131db2c89b930b1a23f8489419a2217dc >> >> -- >> >> Thanks, >> >> Matt >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Sep 23 14:13:47 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 23 Sep 2019 16:13:47 +0200 Subject: [neutron][drivers][ironic] FFE request - Use openstacksdk for ironic notifiers In-Reply-To: References: <7354478b-a71d-0538-c903-de90128e5b2f@fried.cc> <20190918194522.GB9740@t440s> Message-ID: <20190923141347.GA31769@t440s> Hi, Thx Riccardo for working on this :) On Mon, Sep 23, 2019 at 10:32:15AM +0200, Riccardo Pittau wrote: > The patch has merged > Thanks all! > > On Wed, 18 Sep 2019 at 23:35, Miguel Lavalle wrote: > > > > Hi, > > > > This FFE is approved > > > > Thanks > > > > On Wed, Sep 18, 2019 at 2:45 PM Slawek Kaplonski wrote: > >> > >> Hi, > >> > >> Personally I think we can go with this is You will implement it now. 
> >> As per discussion on IRC, Ironic code which will use those notifications isn't > >> really ready yet, and will not be for Train. So even if something would possible > >> go wrong (but won't for sure ;)) we shouldn't break Ironic. > >> > >> On Wed, Sep 18, 2019 at 11:04:54AM -0500, Eric Fried wrote: > >> > > I'd like to open an FFE request to convert the ironic events notifier > >> > > from the current ironicclient to openstacksdk with the change > >> > > https://review.opendev.org/682040 > >> > > >> > This is kind of none of my business, but since the existing ironic stuff > >> > was only introduced in Train [1], IMO it is important to allow this FFE > >> > so neutron doesn't have to go through the pain of supporting and > >> > deprecating the conf options (e.g. `ironic_url`) and code paths through > >> > python-ironicclient. > >> > >> Thx. I agree. That's another good point to accept this FFE. > >> > >> > > >> > efried > >> > > >> > [1] https://review.opendev.org/#/c/658787/ > >> > > >> > >> -- > >> Slawek Kaplonski > >> Senior software engineer > >> Red Hat > >> > >> -- Slawek Kaplonski Senior software engineer Red Hat From donny at fortnebula.com Mon Sep 23 15:03:30 2019 From: donny at fortnebula.com (Donny Davis) Date: Mon, 23 Sep 2019 11:03:30 -0400 Subject: Long, Slow Zuul Queues and Why They Happen In-Reply-To: <9aaf8782-92d1-dae7-c3b1-1a1d720bdd7f@gmail.com> References: <7fb77bf6-9c1d-4bba-87a6-41235e113009@www.fastmail.com> <9aaf8782-92d1-dae7-c3b1-1a1d720bdd7f@gmail.com> Message-ID: *These are only observations, so please keep in mind I am only trying to get to the bottom of efficiency with our limited resources.* Please feel free to correct my understanding We have some core projects which many other projects depend on - Nova, Glance, Keystone, Neutron, Cinder. etc In the CI it's equal access for any project. If feature A in non-core project depends on feature B in core project - why is feature B not prioritized ? Can we solve this issue by breaking apart the current equal access structure into something more granular? I understand that improving job efficiencies will likely result in more smaller jobs, but will that actually solve issue at the gate come this time in the cycle...every release? (as I am sure it comes up every time) More smaller jobs will result in more jobs - If the job time is cut in half, but the # of jobs is doubled we will probably still have the same issue. We have limited resources and without more providers coming online I fear this issue is only going to get worse as time goes on if we do nothing. ~/DonnyD On Fri, Sep 13, 2019 at 3:47 PM Matt Riedemann wrote: > On 9/13/2019 2:03 PM, Clark Boylan wrote: > > We've been fielding a fair bit of questions and suggestions around > Zuul's long change (and job) queues over the last week or so. As a result I > tried to put a quick FAQ type document [0] on how we schedule jobs, why we > schedule that way, and how we can improve the long queues. > > > > Hoping that gives us all a better understanding of why were are in the > current situation and ideas on how we can help to improve things. > > > > [0] > https://docs.openstack.org/infra/manual/testing.html#why-are-jobs-for-changes-queued-for-a-long-time > > Thanks for writing this up Clark. 
> > As for the current status of the gate, several nova devs have been > closely monitoring the gate since we have 3 fairly lengthy series of > feature changes approved since yesterday and we're trying to shepherd > those through but we're seeing failures and trying to react to them. > > Two issues of note this week: > > 1. http://status.openstack.org/elastic-recheck/index.html#1843615 > > I had pushed a fix for that one earlier in the week but there was a bug > in my fix which Takashi has fixed: > > https://review.opendev.org/#/c/682025/ > > That was promoted to the gate earlier today but failed on... > > 2. http://status.openstack.org/elastic-recheck/index.html#1813147 > > We have a couple of patches up for that now which might get promoted > once we are reasonably sure those are going to pass check (promote to > gate means skipping check which is risky because if it fails in the gate > we have to re-queue the gate as the doc above explains). > > As far as overall failure classifications we're pretty good there in > elastic-recheck: > > http://status.openstack.org/elastic-recheck/data/integrated_gate.html > > Meaning for the most part we know what's failing, we just need to fix > the bugs. > > One that continues to dog us (and by "us" I mean OpenStack, not just > nova) is this one: > > http://status.openstack.org/elastic-recheck/gate.html#1686542 > > The QA team's work to split apart the big tempest full jobs into > service-oriented jobs like tempest-integrated-compute should have helped > here but we're still seeing there are lots of jobs timing out which > likely means there are some really slow tests running in too many jobs > and those require investigation. It could also be devstack setup that is > taking a long time like Clark identified with OSC usage awhile back: > > > http://lists.openstack.org/pipermail/openstack-discuss/2019-July/008071.html > > If you have questions about how elastic-recheck works or how to help > investigate some of these failures, like with using > logstash.openstack.org, please reach out to me (mriedem), clarkb and/or > gmann in #openstack-qa. > > -- > > Thanks, > > Matt > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Mon Sep 23 15:10:30 2019 From: donny at fortnebula.com (Donny Davis) Date: Mon, 23 Sep 2019 11:10:30 -0400 Subject: [nova] New gate bug 1844929, timed out waiting for response from cell during scheduling In-Reply-To: References: Message-ID: It would also be helpful to give the project a way to prefer certain infra providers for certain jobs. For the most part Fort Neubla is terrible at CPU bound long running jobs... I wish I could make it better, but I cannot. Is there a method we could come up with that would allow us to exploit certain traits of a certain provider? Maybe like some additional metadata that say what the certain provider is best at doing? For example highly IO bound jobs work like gangbusters on FN because the underlying storage is very fast, but CPU bound jobs do the direct opposite. Thoughts? ~/DonnyD On Mon, Sep 23, 2019 at 8:37 AM Donny Davis wrote: > It looks to me like there are specific jobs on specific providers that are > not functioning correctly. > > I will pick on Fort Nebula for a minute. > > tacker-functional-devstack-multinode just doesn't seem to work, but most > of the other jobs that do something similar work ok. 
> > You can see the load on Fort Nebula here, and by looking at the data I > don't see any issues with it being overloaded/oversubscribed. > > https://grafana.fortnebula.com/d/9MMqh8HWk/openstack-utilization?orgId=2&refresh=30s&from=now-12h&to=now > > Also most jobs are IO/Memory bound and Fort Nebula uses local NVME for all > of the Openstack Jobs.. There isn't a reasonable way to make it any faster. > > With that said, I would like to get to the bottom of it. It surely doesn't > help anyone to have jobs be failing for non code related reasons. > > ~/D > > On Sun, Sep 22, 2019 at 12:58 PM Mark Goddard wrote: > >> >> >> On Sun, 22 Sep 2019, 16:39 Matt Riedemann, wrote: >> >>> I noticed this while looking at a grenade failure on an unrelated patch: >>> >>> https://bugs.launchpad.net/nova/+bug/1844929 >>> >>> The details are in the bug but it looks like this showed up around Sept >>> 17 and hits mostly on FortNebula nodes but also OVH nodes. It's >>> restricted to grenade jobs and while I don't see anything obvious in the >>> rabbitmq logs (the only errors are about uwsgi [api] heartbeat issues), >>> it's possible that these are slower infra nodes and we're just not >>> waiting for something properly during the grenade upgrade. We also don't >>> seem to have the mysql logs published during the grenade jobs which we >>> need to fix (and recently did fix for devstack jobs [1] but grenade jobs >>> are still using devstack-gate so log collection happens there). >>> >>> I didn't see any changes in nova, grenade or devstack since Sept 16 that >>> look like they would be related to this so I'm guessing right now it's >>> just a combination of performance on certain infra nodes (slower?) and >>> something in grenade/nova not restarting properly or not waiting long >>> enough for the upgrade to complete. >>> >> >> Julia recently fixed an issue in ironic caused by a low MTU on >> fortnebula. May or may not be related. >> >> [1] >>> >>> https://github.com/openstack/devstack/commit/f92c346131db2c89b930b1a23f8489419a2217dc >>> >>> -- >>> >>> Thanks, >>> >>> Matt >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Mon Sep 23 15:10:37 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 23 Sep 2019 08:10:37 -0700 Subject: Long, Slow Zuul Queues and Why They Happen In-Reply-To: References: <7fb77bf6-9c1d-4bba-87a6-41235e113009@www.fastmail.com> <9aaf8782-92d1-dae7-c3b1-1a1d720bdd7f@gmail.com> Message-ID: <9148eff7-f494-43ee-be70-de25ea73d231@www.fastmail.com> On Mon, Sep 23, 2019, at 8:03 AM, Donny Davis wrote: > *These are only observations, so please keep in mind I am only trying > to get to the bottom of efficiency with our limited resources.* > Please feel free to correct my understanding > > We have some core projects which many other projects depend on - Nova, > Glance, Keystone, Neutron, Cinder. etc > In the CI it's equal access for any project. > If feature A in non-core project depends on feature B in core project - > why is feature B not prioritized ? The priority queuing happens per "gate queue". The integrated gate (nova, cinder, keystone, etc) has one queue, Tripleo has another, OSA has one and so on. We do this so that important work can happen across disparate efforts. What this means is if Nova and the rest of the integrated gate has a set of priority changes they should stop approving other changes while they work to merge those priority items. 
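To make the queue sharing concrete: a project opts into a shared gate queue through its .zuul.yaml project stanza, roughly like the sketch below (the queue name is the one the integrated projects share; the job listed is only an example, not a recommendation).

    # .zuul.yaml sketch: declaring membership in the shared "integrated"
    # gate queue, so approved changes from all member projects are tested
    # together in one dependent queue
    - project:
        check:
          jobs:
            - tempest-integrated-compute
        gate:
          queue: integrated
          jobs:
            - tempest-integrated-compute

Changes in projects that declare the same queue share one dependent pipeline, which is why a gate reset in any one of them affects all of them.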
I have suggested that OpenStack needs an "air traffic controller" to help coordinate these efforts particularly around feature freeze time (I suggested it to both the QA team and release team). Any queue could use one if they wanted to. All that to say you can do this today, but it requires humans to work together and communicate what their goals are then give the CI system the correct information to act on these changes in the desired manner. > > Can we solve this issue by breaking apart the current equal access > structure into something more granular? > > I understand that improving job efficiencies will likely result in more > smaller jobs, but will that actually solve issue at the gate come this > time in the cycle...every release? (as I am sure it comes up every time) > More smaller jobs will result in more jobs - If the job time is cut in > half, but the # of jobs is doubled we will probably still have the same > issue. > > We have limited resources and without more providers coming online I > fear this issue is only going to get worse as time goes on if we do > nothing. > > ~/DonnyD > From luka.peschke at objectif-libre.com Mon Sep 23 15:20:04 2019 From: luka.peschke at objectif-libre.com (Luka Peschke) Date: Mon, 23 Sep 2019 17:20:04 +0200 Subject: [cloudkitty][requirements] Requesting a requirement freeze exception for python-cloudkittyclient Message-ID: Hello everybody, On behalf of the cloudkitty team, I'm asking for a requirements freeze exception for python-cloudkittyclient. The stable/train branch has been created from the 3.0.0 release, which matched an intermediary release of cloudkitty. Since then, support for some experimental API features has been added. We'd like to have these changes included in the Train release. They have been merged before the feature freeze, but we missed the deadline for the client release. Except for cloudkitty-dashboard, no openstack project depends on python-cloudkittyclient, so this exception shouldn't be impacting. Thanks, Luka Peschke (peschk_l) From donny at fortnebula.com Mon Sep 23 15:20:39 2019 From: donny at fortnebula.com (Donny Davis) Date: Mon, 23 Sep 2019 11:20:39 -0400 Subject: Long, Slow Zuul Queues and Why They Happen In-Reply-To: <9148eff7-f494-43ee-be70-de25ea73d231@www.fastmail.com> References: <7fb77bf6-9c1d-4bba-87a6-41235e113009@www.fastmail.com> <9aaf8782-92d1-dae7-c3b1-1a1d720bdd7f@gmail.com> <9148eff7-f494-43ee-be70-de25ea73d231@www.fastmail.com> Message-ID: In a different thread I had another possible suggestion - its probably more appropriate for this one. [1] It would also be helpful to give the project a way to prefer certain infra providers for certain jobs. For the most part Fort Neubla is terrible at CPU bound long running jobs... I wish I could make it better, but I cannot. Is there a method we could come up with that would allow us to exploit certain traits of a certain provider? Maybe like some additional metadata that say what the certain provider is best at doing? For example highly IO bound jobs work like gangbusters on FN because the underlying storage is very fast, but CPU bound jobs do the direct opposite. Thoughts? ~/DonnyD 1. 
http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009592.html On Mon, Sep 23, 2019 at 11:14 AM Clark Boylan wrote: > On Mon, Sep 23, 2019, at 8:03 AM, Donny Davis wrote: > > *These are only observations, so please keep in mind I am only trying > > to get to the bottom of efficiency with our limited resources.* > > Please feel free to correct my understanding > > > > We have some core projects which many other projects depend on - Nova, > > Glance, Keystone, Neutron, Cinder. etc > > In the CI it's equal access for any project. > > If feature A in non-core project depends on feature B in core project - > > why is feature B not prioritized ? > > The priority queuing happens per "gate queue". The integrated gate (nova, > cinder, keystone, etc) has one queue, Tripleo has another, OSA has one and > so on. We do this so that important work can happen across disparate > efforts. > > What this means is if Nova and the rest of the integrated gate has a set > of priority changes they should stop approving other changes while they > work to merge those priority items. I have suggested that OpenStack needs > an "air traffic controller" to help coordinate these efforts particularly > around feature freeze time (I suggested it to both the QA team and release > team). Any queue could use one if they wanted to. > > All that to say you can do this today, but it requires humans to work > together and communicate what their goals are then give the CI system the > correct information to act on these changes in the desired manner. > > > > > Can we solve this issue by breaking apart the current equal access > > structure into something more granular? > > > > I understand that improving job efficiencies will likely result in more > > smaller jobs, but will that actually solve issue at the gate come this > > time in the cycle...every release? (as I am sure it comes up every time) > > More smaller jobs will result in more jobs - If the job time is cut in > > half, but the # of jobs is doubled we will probably still have the same > > issue. > > > > We have limited resources and without more providers coming online I > > fear this issue is only going to get worse as time goes on if we do > > nothing. > > > > ~/DonnyD > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mthode at mthode.org Mon Sep 23 15:41:14 2019 From: mthode at mthode.org (Matthew Thode) Date: Mon, 23 Sep 2019 10:41:14 -0500 Subject: [cloudkitty][requirements] Requesting a requirement freeze exception for python-cloudkittyclient In-Reply-To: References: Message-ID: <20190923154114.qijyjg5om4omugfv@mthode.org> On 19-09-23 17:20:04, Luka Peschke wrote: > Hello everybody, > > On behalf of the cloudkitty team, I'm asking for a requirements freeze > exception for python-cloudkittyclient. The stable/train branch has been > created from the 3.0.0 release, which matched an intermediary release of > cloudkitty. Since then, support for some experimental API features has been > added. We'd like to have these changes included in the Train release. They > have been merged before the feature freeze, but we missed the deadline for > the client release. > > Except for cloudkitty-dashboard, no openstack project depends on > python-cloudkittyclient, so this exception shouldn't be impacting. > > Thanks, > > Luka Peschke (peschk_l) > > It looks fine, only the dashboard depends on the client so you are good. Consider it FFE approved. 
-- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From miguel at mlavalle.com Mon Sep 23 15:48:55 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 23 Sep 2019 10:48:55 -0500 Subject: [cinder] Implementing a read-only administrator with the default reader role and system scope Message-ID: Dear Cinder, At my employer, Verizon Media, we want to implement a read only administrator like role using the Keystone default reader role scoped at system level. In the case of Nova, we've been following with interest their efforts here: https://review.opendev.org/#/q/topic:bp/policy-defaults-refresh+(status:open+OR+status:merged). I decided to test the same approach with Cinder. The good news is that the policy.py modules are almost the same in Nova and Cinder. I then looked, though, at some API calls, like https://docs.openstack.org/api-ref/block-storage/v3/index.html#volumes-volumes. The url in these calls include project_id, which in the case of a system scoped token I don't have. My questions are: 1) Am I missing something? 2) Are there any plans to implement a revision to these APIs in the near future so we can leverage the system scope and roles like reader for policy management? 3) One short term alternative in my case is to add "wrappers" to those API calls where we want to enable the reader role with system scope, extract the project_id from the context and forward the call to the regular API. Does this make sense? Are there any caveats to this? Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From witold.bedyk at suse.com Mon Sep 23 16:05:19 2019 From: witold.bedyk at suse.com (Witek Bedyk) Date: Mon, 23 Sep 2019 18:05:19 +0200 Subject: [requirements][monasca] Request for FFE for monasca-common Message-ID: <5fcb64b9-4a76-ddab-0b35-cae66e617a4b@suse.com> Hello Requirements Team, I'm requesting FFE for monasca-common library. The new requested version 2.16.1 includes critical fix for Confluent Kafka client. The library is used only by Monasca project. https://review.opendev.org/683986 Thanks Witek From mthode at mthode.org Mon Sep 23 18:03:21 2019 From: mthode at mthode.org (Matthew Thode) Date: Mon, 23 Sep 2019 13:03:21 -0500 Subject: [requirements][monasca] Request for FFE for monasca-common In-Reply-To: <5fcb64b9-4a76-ddab-0b35-cae66e617a4b@suse.com> References: <5fcb64b9-4a76-ddab-0b35-cae66e617a4b@suse.com> Message-ID: <20190923180321.lolapu65flbdsjy7@mthode.org> On 19-09-23 18:05:19, Witek Bedyk wrote: > Hello Requirements Team, > > I'm requesting FFE for monasca-common library. The new requested version > 2.16.1 includes critical fix for Confluent Kafka client. > > The library is used only by Monasca project. > > https://review.opendev.org/683986 > > Thanks > Witek > The main question I have is if you will need to change anything in the consuming projects and re-release them (and since they are all monasca, are you ok with doing that if needed). 
+----------------------------------+----------------------------------------------------------+------+-------------------------------------+ | Repository | Filename | Line | Text | +----------------------------------+----------------------------------------------------------+------+-------------------------------------+ | openstack/monasca-agent | requirements.txt | 29 | monasca-common>=2.7.0 # Apache-2.0 | | openstack/monasca-agent | setup.cfg | 58 | monasca-common>=1.4.0 # Apache-2.0 | | openstack/monasca-api | requirements.txt | 24 | monasca-common>=2.7.0 # Apache-2.0 | | openstack/monasca-events-api | requirements.txt | 18 | monasca-common>=1.4.0 # Apache-2.0 | | openstack/monasca-log-api | requirements.txt | 16 | monasca-common>=2.7.0 # Apache-2.0 | | openstack/monasca-notification | requirements.txt | 11 | monasca-common>=2.7.0 # Apache-2.0 | | openstack/monasca-persister | requirements.txt | 8 | monasca-common>=2.16.0 # Apache-2.0 | | openstack/monasca-tempest-plugin | requirements.txt | 13 | monasca-common>=2.8.0 # Apache-2.0 | | openstack/monasca-transform | requirements.txt | 10 | monasca-common>=2.7.0 # Apache-2.0 | +----------------------------------+----------------------------------------------------------+------+-------------------------------------+ Once those questions are answered I can (dis)approve. -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mriedemos at gmail.com Mon Sep 23 18:06:12 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 23 Sep 2019 13:06:12 -0500 Subject: [nova] New gate bug 1844929, timed out waiting for response from cell during scheduling In-Reply-To: References: Message-ID: <3ec07d9e-5dfb-bc67-65a9-21cd3092e65c@gmail.com> On 9/22/2019 10:37 AM, Matt Riedemann wrote: > We also don't seem to have the mysql logs published during the grenade > jobs which we need to fix (and recently did fix for devstack jobs [1] > but grenade jobs are still using devstack-gate so log collection happens > there). Fix for mysql log collection in grenade jobs is here: https://review.opendev.org/#/c/684042/ I'm just waiting on results to make sure that works before removing the -W. -- Thanks, Matt From mriedemos at gmail.com Mon Sep 23 18:06:49 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 23 Sep 2019 13:06:49 -0500 Subject: [nova] New gate bug 1844929, timed out waiting for response from cell during scheduling In-Reply-To: References: Message-ID: <9b3e520a-1169-743f-084a-b4e73cf97220@gmail.com> On 9/22/2019 11:55 AM, Mark Goddard wrote: > Julia recently fixed an issue in ironic caused by a low MTU on > fortnebula. May or may not be related. Thanks but it looks like that was specific to ironic jobs and looking at logstash it's fixed: http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22dropped%20over-mtu%20packet%5C%22%20AND%20tags%3A%5C%22syslog.txt%5C%22&from=7d -- Thanks, Matt From openstack at nemebean.com Mon Sep 23 19:59:01 2019 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 23 Sep 2019 14:59:01 -0500 Subject: [oslo] New courtesy ping list for Ussuri Message-ID: <473f0fcb-c8c2-e8ae-812e-15575e898d66@nemebean.com> As we discussed at the beginning of the cycle, I'll be clearing the current ping list in the next few weeks. This is to prevent courtesy pinging people who are no longer active on the project. 
If you wish to continue receiving courtesy pings at the start of the Oslo meeting please add yourself to the new list on the agenda template [0]. Note that the new list is above the template, called "Courtesy ping list for Ussuri". If you add yourself again to the end of the existing list I'll assume you want to be left on though. :-) Thanks. -Ben 0: https://wiki.openstack.org/wiki/Meetings/Oslo#Agenda_Template From Richard.Pioso at dell.com Tue Sep 24 00:09:05 2019 From: Richard.Pioso at dell.com (Richard.Pioso at dell.com) Date: Tue, 24 Sep 2019 00:09:05 +0000 Subject: [ironic] FFE: Add idrac HW type Redfish virtual media boot interface Message-ID: Hi, I request a late feature freeze exception (FFE) for https://review.opendev.org/#/c/672498/ -- "Add Redfish vmedia boot interface to idrac HW type". There is high demand from operators for this feature. They would be delighted if it were included in Train. We believe it is a low risk change, because of the following: 1) It affects only the idrac hardware type. 2) The highest priority boot interfaces supported by the idrac hardware type remain so. 'ipxe' and 'pxe' continue to have the highest priority, and the new 'idrac-redfish-virtual-media' has the lowest priority. The new order from highest to lowest priority is 'ipxe', 'pxe', and 'idrac-redfish-virtual-media'. 3) The new interface is based on and almost entirely leverages an already merged interface implementation, 'redfish-virtual-media'. [1] Please let me know if you have any concerns or questions. Thank you for your consideration. Rick [1] https://review.opendev.org/#/c/638453/ From berndbausch at gmail.com Tue Sep 24 03:46:37 2019 From: berndbausch at gmail.com (Bernd Bausch) Date: Tue, 24 Sep 2019 12:46:37 +0900 Subject: [keystone] presence of policy.json breaks Keystone? Message-ID: This is on a stable Stein Devstack. Problem description: ubuntu at devstack:~$ oslopolicy-sample-generator --namespace keystone >/etc/keystone/policy.json ubuntu at devstack:~$ openstack user list Internal Server Error (HTTP 500) Note that I did not modify the policy.json file above. It's mere presence is sufficient to cause the problem. When I remove it and restart Keystone, the problem goes away. The Keystone log contains a huge stacktrace with two methods in oslopolicy/_checks.py playing ping-pong with each other until they give up with RuntimeError: maximum recursion depth exceeded. This only happens with Keystone. Nova and Cinder (which also keep policy in code) are fine. This looks like a bug, but I didn't find it in launchpad. Is there a workaround? I would like to use a modified Keystone policy in a training course. Thanks for any feedback. Bernd. From honjo.rikimaru at ntt-tx.co.jp Tue Sep 24 04:42:49 2019 From: honjo.rikimaru at ntt-tx.co.jp (Rikimaru Honjo) Date: Tue, 24 Sep 2019 13:42:49 +0900 Subject: [cinder][tooz]Lock-files are remained In-Reply-To: <88881fd9-22f3-a4df-c5a9-e5346255ef4b@redhat.com> References: <88881fd9-22f3-a4df-c5a9-e5346255ef4b@redhat.com> Message-ID: <05583de8-e593-e4e0-5d0f-05dc5e49ad5c@ntt-tx.co.jp_1> Hi Eric, On 2019/09/20 23:10, Eric Harney wrote: > On 9/20/19 1:52 AM, Rikimaru Honjo wrote: >> Hi, >> >> I'm using Queens cinder with the following setting. 
>> >> --------------------------------- >> [coordination] >> backend_url = file://$state_path >> --------------------------------- >> >> As a result, the files like the following were remained under the state path after some operations.[1] >> >> cinder-63dacb3d-bd4d-42bb-88fe-6e4180164765-delete_volume >> cinder-32c426af-82b4-41de-b637-7d76fed69e83-delete_snapshot >> >> In my understanding, these are lock-files created for synchronization by tooz. >> But, these lock-files were not deleted after finishing operations. >> Is this behaviour correct? >> >> [1] >> e.g. Delete volume, Delete snapshot > > This is a known bug that's described here: > > https://github.com/harlowja/fasteners/issues/26 > > (The fasteners library is used by tooz, which is used by Cinder for managing these lock files.) > > There's an old Cinder bug for it here: > https://bugs.launchpad.net/cinder/+bug/1432387 > > but that's marked as "Won't Fix" because Cinder needs it to be fixed in the underlying libraries. Thank you for your explanation. I understood the state. But, I have one more question. Can I think this bug doesn't affect synchronization? Best regards, > Thanks, > Eric > -- _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ Rikimaru Honjo E-mail:honjo.rikimaru at ntt-tx.co.jp From renat.akhmerov at gmail.com Tue Sep 24 06:36:20 2019 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Tue, 24 Sep 2019 13:36:20 +0700 Subject: cron triggers execution fails with cinder.volume_snapshots_create In-Reply-To: <20190923073746.diuqub3ciyqi3duk@localhost> References: <21a0f692-aa42-d81d-8968-5524e8596e19@everyware.ch> <20190920105132.2oc6igmnkehq65wy@localhost> <410df12a-46b6-2a07-1e28-6c5aadaf8e53@everyware.ch> <20190920124651.fxf3d2eqbgi5rbc4@localhost> <69596a39-3d05-40ff-d757-ff62b8a1608c@everyware.ch> <20190920140219.jdb2k2t4w5m3a7rr@localhost> <021261ae-f1ce-343e-1695-f13f6c8082b9@everyware.ch> <20190923073746.diuqub3ciyqi3duk@localhost> Message-ID: <4f779c2f-e43e-4f1a-a3a2-44a4e5515ef7@Spark> Hi! I would kindly ask you to add [mistral] into the subject of the emails related to Mistral. I just saw this thread accidentally (since I can’t read everything) and missed it in the first place. On the issue itself… So yes, the discovery you made makes perfect sense. I agree that a workflow should probably be responsible for tracking a status of an operation. We’ve discussed a more generic solution in the past for similar situations but it seems to be virtually impossible to find it. If you have some ideas, please share. We can discuss it. Thanks Renat Akhmerov @Nokia On 23 Sep 2019, 14:41 +0700, Gorka Eguileor , wrote: > On 20/09, Francois Scheurer wrote: > > Hi Gorka > > > > > > > Then I assume you prefer the Swift backup driver over the Ceph one > > > because you are using one of the OpenStack releases that had trouble >with > > Incremental Backups on the Ceph backup driver. > > > > > > You are probably right. But I cannot answer that because I was not involve > > in that decision. 
> > > > > > Ok in the radosgw logs I see this: > > > > > > 2019-09-20 15:40:06.805529 7f19edb9b700 20 token_id=gAAAAABdhNauRvNev5P90ovX7_cb5_4MkY1tg5JHFpAH8JL-_0vDs06lHW5F9Iphua7fxCWTxxdL-0fRzhR8We_nN6Hx9z3FTWcTXLUMtIUPe0WMKQgW6JkUTP8RwSjAfF4W04OztEg3VAUGN_5gWRlBX-KT9uypnEszadG1yA7gpjkCokNnD8oaIeE6arvs_EjfJib51rao > > 2019-09-20 15:40:06.805664 7f19edb9b700 20 sending request to > > https://keystone.service.stage.ewcs.ch/v3/auth/tokens > > 2019-09-20 15:40:06.805803 7f19edb9b700 20 ssl verification is set to off > > 2019-09-20 15:40:07.235356 7f19edb9b700 20 sending request to > > https://keystone.service.stage.ewcs.ch/v3/auth/tokens > > 2019-09-20 15:40:07.235404 7f19edb9b700 20 ssl verification is set to off > > 2019-09-20 15:40:07.267091 7f19edb9b700  5 Failed keystone auth from > > https://keystone.service.stage.ewcs.ch/v3/auth/tokens with 404 > > BTW: our radosgw is configured to delegate user authentication to keystone. > > > > In keystone logs I see this: > > > > 2019-09-20 15:40:07.218 24 INFO keystone.token.provider > > [req-21b2f11c-9e67-4487-af05-420acfb65ace - - - - -] Token being processed: > > token.user_id [f7c7296949f84a4387c5172808a0965b], > > token.expires_at[2019-09-21T13:40:07.000000Z], > > token.audit_ids[[u'hFweMPCrSO2D00rNcRNECw']], token.methods[[u'password']], > > token.system[None], token.domain_id[None], > > token.project_id[4120792f50bc4cf2b4f97c4546462f06], token.trust_id[None], > > token.federated_groups[None], token.identity_provider_id[None], > > token.protocol_id[None], > > token.access_token_id[None],token.application_credential_id[None]. > > 2019-09-20 15:40:07.257 21 INFO keystone.common.wsgi > > [req-9f858abb-68f9-42cf-b71a-f1cafca91844 f7c7296949f84a4387c5172808a0965b > > 4120792f50bc4cf2b4f97c4546462f06 - default default] GET > > http://keystone.service.stage.ewcs.ch/v3/auth/tokens > > 2019-09-20 15:40:07.265 21 WARNING keystone.common.wsgi > > [req-9f858abb-68f9-42cf-b71a-f1cafca91844 f7c7296949f84a4387c5172808a0965b > > 4120792f50bc4cf2b4f97c4546462f06 - default default] Could not find trust: > > 934ed82d2b14413899023da0bee6a953.: TrustNotFound: Could not find trust: > > 934ed82d2b14413899023da0bee6a953. > > > > > > So what happens is following: > > > > 1. when the user creates the cron trigger, mistral creates a trust > > 2. when the cron trigger executes the workflow, openstack create a > > volume snapshot (a rbd image) then copy it to swift (rgw) then > > delete the snapshot > > 3. when the execution finishes, if the cron trigger has no remaining > > executions scheduled, then mistral remove the cron trigger and the trust > > > > The problem is a racing issue: apprently the copying of the snapshot to > > swift run in the background and mistral removes the trust before the > > operation completes... > > > > That explains the error in keystone and also the cron trigger execution > > result which is "success" even if the resulting backup is actually "failed". > > > > > > To test this theory I set up the same cron trigger with more than one > > scheduled execution and the backups were suddenly created correctly ;-). > > > > > > So something need to be done on the code to deal with this racing issue. > > > > In the meantime, I will try to put a sleep action after the 'create backup' > > action. > > > > Hi, > > Congrats on figuring out the issue. 
:-) > > Instead of a sleep, which may get you through this issue but fall into a > different one and won't return the right status code, you should > probably have a loop checking the status of the backup and return a non > zero status code if it ends up in "error" state. > > Cheers, > Gorka. > > > > > Best Regards > > > > Francois > > > > > > > > > > > > > > > > > > > > > > > > On 9/20/19 4:02 PM, Gorka Eguileor wrote: > > > On 20/09, Francois Scheurer wrote: > > > > Hi Gorka > > > > > > > > > > > > We have a swift endpoint set up on opentstack, which points to our ceph > > > > radosgw backend > > > > > > > > Radosgw provides s3 & swift. > > > > > > > > So the swift logs are here actually the radosgw logs. > > > > > > > Hi, > > > > > > OK, thanks for the clarification. > > > > > > Then I assume you prefer the Swift backup driver over the Ceph one > > > because you are using one of the OpenStack releases that had trouble > > > with Incremental Backups on the Ceph backup driver. > > > > > > Cheers, > > > Gorka. > > > > > > > > > > Cheers > > > > > > > > Francois > > > > > > > > > > > > > > > > On 9/20/19 2:46 PM, Gorka Eguileor wrote: > > > > > On 20/09, Francois Scheurer wrote: > > > > > > Dear Gorka and Hervé > > > > > > > > > > > > > > > > > > Thanks for your hints. > > > > > > > > > > > > I have set the debug log level on radosgw. > > > > > > > > > > > > I will retest now and post here the results. > > > > > > > > > > > > > > > > > > Cheers > > > > > > > > > > > > Francois > > > > > Hi, > > > > > > > > > > Sorry, I may have missed something in the conversation, weren't you > > > > > using Swift? > > > > > > > > > > I think you need to see the Swift logs as well, since that's the API > > > > > service that complained about the authorization. > > > > > > > > > > Cheers, > > > > > Gorka. > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > > > > > > > > > > > > > EveryWare AG > > > > > > François Scheurer > > > > > > Senior Systems Engineer > > > > > > Zurlindenstrasse 52a > > > > > > CH-8003 Zürich > > > > > > > > > > > > tel: +41 44 466 60 00 > > > > > > fax: +41 44 466 60 10 > > > > > > mail: francois.scheurer at everyware.ch > > > > > > web: http://www.everyware.ch > > > > -- > > > > > > > > > > > > EveryWare AG > > > > François Scheurer > > > > Senior Systems Engineer > > > > Zurlindenstrasse 52a > > > > CH-8003 Zürich > > > > > > > > tel: +41 44 466 60 00 > > > > fax: +41 44 466 60 10 > > > > mail: francois.scheurer at everyware.ch > > > > web: http://www.everyware.ch > > > > > -- > > > > > > EveryWare AG > > François Scheurer > > Senior Systems Engineer > > Zurlindenstrasse 52a > > CH-8003 Zürich > > > > tel: +41 44 466 60 00 > > fax: +41 44 466 60 10 > > mail: francois.scheurer at everyware.ch > > web: http://www.everyware.ch > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francois.scheurer at everyware.ch Tue Sep 24 08:55:20 2019 From: francois.scheurer at everyware.ch (Francois Scheurer) Date: Tue, 24 Sep 2019 10:55:20 +0200 Subject: cron triggers execution fails with cinder.volume_snapshots_create In-Reply-To: <4f779c2f-e43e-4f1a-a3a2-44a4e5515ef7@Spark> References: <21a0f692-aa42-d81d-8968-5524e8596e19@everyware.ch> <20190920105132.2oc6igmnkehq65wy@localhost> <410df12a-46b6-2a07-1e28-6c5aadaf8e53@everyware.ch> <20190920124651.fxf3d2eqbgi5rbc4@localhost> <69596a39-3d05-40ff-d757-ff62b8a1608c@everyware.ch> <20190920140219.jdb2k2t4w5m3a7rr@localhost> <021261ae-f1ce-343e-1695-f13f6c8082b9@everyware.ch> <20190923073746.diuqub3ciyqi3duk@localhost> <4f779c2f-e43e-4f1a-a3a2-44a4e5515ef7@Spark> Message-ID: <351e8477-3391-6303-9ad8-b6df24eff945@everyware.ch> Hi Gorka and Renat Thank you for your suggestions and sorry for having forgotten the [mistral] subject prefix. >Renat: >workflow should probablybe responsible for tracking a status of an operation. >Gorka: >Instead of a sleep, which may get you through this issue but fall into a >different one and won't return the right status code, you should >probably have a loop checking the status of the backup and return a non >zero status code if it ends up in "error" state. Gorka's idea sounds good (a sketch of how it could look for the backup case follows at the end of this mail). If you look at the snapshot workflow of Jose Castro, you will find a similar snippet: #https://techblog.web.cern.ch/techblog/post/scheduled-snapshots/ #https://gitlab.cern.ch/cloud-infrastructure/mistral-workflows/raw/master/workflows/instance_snapshot.yaml | sed -e 's%action_region: "cern"%action_region: "ch-zh1"%' >instance_snapshot.yaml

    stop_instance:
      description: 'Stops the instance for consistency'
      action: nova.servers_stop
      input:
        server: <% $.instance %>
        action_region: <% $.action_region %>
      on-success:
        - wait_for_stop_instance
      on-error:
        - error_task

    wait_for_stop_instance:
      description: 'Waits until the instance is shutoff to continue'
      action: nova.servers_find
      input:
        id: <% $.instance %>
        status: 'SHUTOFF'
        action_region: <% $.action_region %>
      retry:
        delay: 5
        count: 40
      on-success:
        - check_boot_source
      on-error:
        - error_task

>We’ve discussed a more generic solution in the past for similar situations but it seems to be virtually impossible to find it. Ok, so it looks like this issue cannot be fixed with a small bugfix. It would require a feature extension. I can imagine that quite a few api calls from the different openstack modules/services are asynchronous and would require mistral to check their progress status every time in a different ad hoc manner. That would make such a new feature in mistral quite expensive to implement. It would be great if every async call returned a job_id in a standard form from each service, so mistral would be able to track them in a uniform way. This would also allow the openstack client to run in sync or async mode, according to the user's need. But such a design requirement would better have been set on day one; it is likely too late to change all openstack services... However, there is a minor enhancement that could be done: let the user specify whether a cron trigger needs to auto-delete itself after its last execution or not. Keeping expired cron triggers could be nice for: -avoiding race conditions such as the one with swift/radosgw -allowing the user to edit and reschedule an expired cron trigger What do you think?
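For reference, a rough sketch of Gorka's suggestion applied to the backup status. This assumes a cinder.backups_find action is generated in the Mistral action catalogue (analogous to nova.servers_find above) and that the backup id is published by the task that creates the backup; the task names here are placeholders only:

    # sketch only: retries until the backup reaches 'available', fails
    # (after the retries are exhausted) if it stays in another state
    wait_for_backup_available:
      description: 'Polls the backup until it reaches the available status'
      action: cinder.backups_find
      input:
        id: <% $.backup_id %>
        status: 'available'
      retry:
        delay: 30
        count: 60
      on-success:
        - next_task
      on-error:
        - error_task

With such a task the execution only finishes once the backup is really done, so the trust is not removed too early, and the execution ends up failed if the backup never leaves the "error" state, so the result reflects the real outcome.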
Best Regards Francois On 9/24/19 8:36 AM, Renat Akhmerov wrote: > Hi! > > I would kindly ask you to add [mistral] into the subject of the emails > related to Mistral. I just saw this thread accidentally (since I can’t > read everything) and missed it in the first place. > > On the issue itself… So yes, the discovery you made makes perfect > sense. I agree that a workflow should probablybe responsible for > tracking a status of an operation. We’ve discussed a more generic > solution in the past for similar situations but it seems to be > virtually impossible to find it. If you have some ideas, please share. > We can discuss it. > > > Thanks > > Renat Akhmerov > @Nokia > On 23 Sep 2019, 14:41 +0700, Gorka Eguileor , wrote: >> On 20/09, Francois Scheurer wrote: >>> Hi Gorka >>> >>> >>>> Then I assume you prefer the Swift backup driver over the Ceph one >>>> because you are using one of the OpenStack releases that had >>>> trouble >with >>> Incremental Backups on the Ceph backup driver. >>> >>> >>> You are probably right. But I cannot answer that because I was not >>> involve >>> in that decision. >>> >>> >>> Ok in the radosgw logs I see this: >>> >>> >>> 2019-09-20 15:40:06.805529 7f19edb9b700 20 >>> token_id=gAAAAABdhNauRvNev5P90ovX7_cb5_4MkY1tg5JHFpAH8JL-_0vDs06lHW5F9Iphua7fxCWTxxdL-0fRzhR8We_nN6Hx9z3FTWcTXLUMtIUPe0WMKQgW6JkUTP8RwSjAfF4W04OztEg3VAUGN_5gWRlBX-KT9uypnEszadG1yA7gpjkCokNnD8oaIeE6arvs_EjfJib51rao >>> 2019-09-20 15:40:06.805664 7f19edb9b700 20 sending request to >>> https://keystone.service.stage.ewcs.ch/v3/auth/tokens >>> 2019-09-20 15:40:06.805803 7f19edb9b700 20 ssl verification is set >>> to off >>> 2019-09-20 15:40:07.235356 7f19edb9b700 20 sending request to >>> https://keystone.service.stage.ewcs.ch/v3/auth/tokens >>> 2019-09-20 15:40:07.235404 7f19edb9b700 20 ssl verification is set >>> to off >>> 2019-09-20 15:40:07.267091 7f19edb9b700  5 Failed keystone auth from >>> https://keystone.service.stage.ewcs.ch/v3/auth/tokens with 404 >>> BTW: our radosgw is configured to delegate user authentication to >>> keystone. >>> >>> In keystone logs I see this: >>> >>> 2019-09-20 15:40:07.218 24 INFO keystone.token.provider >>> [req-21b2f11c-9e67-4487-af05-420acfb65ace - - - - -] Token being >>> processed: >>> token.user_id [f7c7296949f84a4387c5172808a0965b], >>> token.expires_at[2019-09-21T13:40:07.000000Z], >>> token.audit_ids[[u'hFweMPCrSO2D00rNcRNECw']], >>> token.methods[[u'password']], >>> token.system[None], token.domain_id[None], >>> token.project_id[4120792f50bc4cf2b4f97c4546462f06], >>> token.trust_id[None], >>> token.federated_groups[None], token.identity_provider_id[None], >>> token.protocol_id[None], >>> token.access_token_id[None],token.application_credential_id[None]. >>> 2019-09-20 15:40:07.257 21 INFO keystone.common.wsgi >>> [req-9f858abb-68f9-42cf-b71a-f1cafca91844 >>> f7c7296949f84a4387c5172808a0965b >>> 4120792f50bc4cf2b4f97c4546462f06 - default default] GET >>> http://keystone.service.stage.ewcs.ch/v3/auth/tokens >>> 2019-09-20 15:40:07.265 21 WARNING keystone.common.wsgi >>> [req-9f858abb-68f9-42cf-b71a-f1cafca91844 >>> f7c7296949f84a4387c5172808a0965b >>> 4120792f50bc4cf2b4f97c4546462f06 - default default] Could not find >>> trust: >>> 934ed82d2b14413899023da0bee6a953.: TrustNotFound: Could not find trust: >>> 934ed82d2b14413899023da0bee6a953. >>> >>> >>> So what happens is following: >>> >>> 1. when the user creates the cron trigger, mistral creates a trust >>> 2. 
when the cron trigger executes the workflow, openstack create a >>> volume snapshot (a rbd image) then copy it to swift (rgw) then >>> delete the snapshot >>> 3. when the execution finishes, if the cron trigger has no remaining >>> executions scheduled, then mistral remove the cron trigger and the trust >>> >>> The problem is a racing issue: apprently the copying of the snapshot to >>> swift run in the background and mistral removes the trust before the >>> operation completes... >>> >>> That explains the error in keystone and also the cron trigger execution >>> result which is "success" even if the resulting backup is actually >>> "failed". >>> >>> >>> To test this theory I set up the same cron trigger with more than one >>> scheduled execution and the backups were suddenly created correctly ;-). >>> >>> >>> So something need to be done on the code to deal with this racing issue. >>> >>> In the meantime, I will try to put a sleep action after the 'create >>> backup' >>> action. >>> >> >> Hi, >> >> Congrats on figuring out the issue. :-) >> >> Instead of a sleep, which may get you through this issue but fall into a >> different one and won't return the right status code, you should >> probably have a loop checking the status of the backup and return a non >> zero status code if it ends up in "error" state. >> >> Cheers, >> Gorka. >> >>> >>> Best Regards >>> >>> Francois >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> On 9/20/19 4:02 PM, Gorka Eguileor wrote: >>>> On 20/09, Francois Scheurer wrote: >>>>> Hi Gorka >>>>> >>>>> >>>>> We have a swift endpoint set up on opentstack, which points to our >>>>> ceph >>>>> radosgw backend >>>>> >>>>> Radosgw provides s3 & swift. >>>>> >>>>> So the swift logs are here actually the radosgw logs. >>>>> >>>> Hi, >>>> >>>> OK, thanks for the clarification. >>>> >>>> Then I assume you prefer the Swift backup driver over the Ceph one >>>> because you are using one of the OpenStack releases that had trouble >>>> with Incremental Backups on the Ceph backup driver. >>>> >>>> Cheers, >>>> Gorka. >>>> >>>> >>>>> Cheers >>>>> >>>>> Francois >>>>> >>>>> >>>>> >>>>> On 9/20/19 2:46 PM, Gorka Eguileor wrote: >>>>>> On 20/09, Francois Scheurer wrote: >>>>>>> Dear Gorka and Hervé >>>>>>> >>>>>>> >>>>>>> Thanks for your hints. >>>>>>> >>>>>>> I have set the debug log level on radosgw. >>>>>>> >>>>>>> I will retest now and post here the results. >>>>>>> >>>>>>> >>>>>>> Cheers >>>>>>> >>>>>>> Francois >>>>>> Hi, >>>>>> >>>>>> Sorry, I may have missed something in the conversation, weren't you >>>>>> using Swift? >>>>>> >>>>>> I think you need to see the Swift logs as well, since that's the API >>>>>> service that complained about the authorization. >>>>>> >>>>>> Cheers, >>>>>> Gorka. 
>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> >>>>>>> >>>>>>> EveryWare AG >>>>>>> François Scheurer >>>>>>> Senior Systems Engineer >>>>>>> Zurlindenstrasse 52a >>>>>>> CH-8003 Zürich >>>>>>> >>>>>>> tel: +41 44 466 60 00 >>>>>>> fax: +41 44 466 60 10 >>>>>>> mail: francois.scheurer at everyware.ch >>>>>>> web: http://www.everyware.ch >>>>> -- >>>>> >>>>> >>>>> EveryWare AG >>>>> François Scheurer >>>>> Senior Systems Engineer >>>>> Zurlindenstrasse 52a >>>>> CH-8003 Zürich >>>>> >>>>> tel: +41 44 466 60 00 >>>>> fax: +41 44 466 60 10 >>>>> mail: francois.scheurer at everyware.ch >>>>> web: http://www.everyware.ch >>>> >>> -- >>> >>> >>> EveryWare AG >>> François Scheurer >>> Senior Systems Engineer >>> Zurlindenstrasse 52a >>> CH-8003 Zürich >>> >>> tel: +41 44 466 60 00 >>> fax: +41 44 466 60 10 >>> mail: francois.scheurer at everyware.ch >>> web: http://www.everyware.ch >>> >> >> >> -- EveryWare AG François Scheurer Senior Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: francois.scheurer at everyware.ch web: http://www.everyware.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From wang.ya at 99cloud.net Tue Sep 24 09:02:02 2019 From: wang.ya at 99cloud.net (wang.ya) Date: Tue, 24 Sep 2019 17:02:02 +0800 Subject: [nova] The test of NUMA aware live migration In-Reply-To: <10e25785-4271-9f19-db15-0c31ea7543ee@gmail.com> References: <6A5C6F83-F6A9-4DE1-A859-B787E3490AC6@99cloud.net> <10e25785-4271-9f19-db15-0c31ea7543ee@gmail.com> Message-ID: I think the two issues should be similar. As I said, the first instance live migrate to host, but in resource tracker, the cache 'cn' not updated, at the moment, second instance live migrate to same host, then the vCPU pin policy broken. The issue is not reproducible every time, it need to go through multiple live migrate (I wrote a script to run live migrate automatic). I have checked the nova's config, the ' max_concurrent_live_migrations' option is default :) I've report the issue to launchpad, you can find the log in attachment: https://bugs.launchpad.net/nova/+bug/1845146 On 2019/9/20, 11:52 PM, "Matt Riedemann" wrote: On 9/17/2019 7:44 AM, wang.ya wrote: > But if add the property “hw:cpu_policy='dedicated'”, it will not correct > after serval live migrations. > > Which means the live migrate can be success, but the vCPU pin are not > correct(two instance have serval same vCPU pin on same host). > Is the race you're describing the same issue reported in this bug? https://bugs.launchpad.net/nova/+bug/1829349 Also, what is the max_concurrent_live_migrations config option set to? That defaults to 1 but I'm wondering if you've changed it at all. -- Thanks, Matt From moguimar at redhat.com Tue Sep 24 09:24:47 2019 From: moguimar at redhat.com (Moises Guimaraes de Medeiros) Date: Tue, 24 Sep 2019 11:24:47 +0200 Subject: [barbican][FFE] Feature Freeze Exception - Secret Consumers Message-ID: Hi, I'd like to request a FFE for the Secret Consumers API patches: https://review.opendev.org/#/q/topic:secret-consumers The changes are limited to a new feature described here: https://specs.openstack.org/openstack/barbican-specs/specs/train/secret-consumers.html Thanks, -- Moisés Guimarães Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From witold.bedyk at suse.com Tue Sep 24 09:39:53 2019 From: witold.bedyk at suse.com (Witek Bedyk) Date: Tue, 24 Sep 2019 11:39:53 +0200 Subject: [requirements][monasca] Request for FFE for monasca-common In-Reply-To: <20190923180321.lolapu65flbdsjy7@mthode.org> References: <5fcb64b9-4a76-ddab-0b35-cae66e617a4b@suse.com> <20190923180321.lolapu65flbdsjy7@mthode.org> Message-ID: On 9/23/19 8:03 PM, Matthew Thode wrote: > On 19-09-23 18:05:19, Witek Bedyk wrote: >> Hello Requirements Team, >> >> I'm requesting FFE for monasca-common library. The new requested version >> 2.16.1 includes critical fix for Confluent Kafka client. >> >> The library is used only by Monasca project. >> >> https://review.opendev.org/683986 >> >> Thanks >> Witek >> > > The main question I have is if you will need to change anything in the > consuming projects and re-release them (and since they are all monasca, > are you ok with doing that if needed). > > +----------------------------------+----------------------------------------------------------+------+-------------------------------------+ > | Repository | Filename | Line | Text | > +----------------------------------+----------------------------------------------------------+------+-------------------------------------+ > | openstack/monasca-agent | requirements.txt | 29 | monasca-common>=2.7.0 # Apache-2.0 | > | openstack/monasca-agent | setup.cfg | 58 | monasca-common>=1.4.0 # Apache-2.0 | > | openstack/monasca-api | requirements.txt | 24 | monasca-common>=2.7.0 # Apache-2.0 | > | openstack/monasca-events-api | requirements.txt | 18 | monasca-common>=1.4.0 # Apache-2.0 | > | openstack/monasca-log-api | requirements.txt | 16 | monasca-common>=2.7.0 # Apache-2.0 | > | openstack/monasca-notification | requirements.txt | 11 | monasca-common>=2.7.0 # Apache-2.0 | > | openstack/monasca-persister | requirements.txt | 8 | monasca-common>=2.16.0 # Apache-2.0 | > | openstack/monasca-tempest-plugin | requirements.txt | 13 | monasca-common>=2.8.0 # Apache-2.0 | > | openstack/monasca-transform | requirements.txt | 10 | monasca-common>=2.7.0 # Apache-2.0 | > +----------------------------------+----------------------------------------------------------+------+-------------------------------------+ > > Once those questions are answered I can (dis)approve. > Hi Matthew, there is no need to change lower constraints in the consuming projects as they work perfectly fine with legacy Kafka client in these versions. Also all unit tests for lower-constraints jobs pass. Only upper-constraints will have to be updated with the new version to install the version with the bugfix for Train. Thanks Witek From renat.akhmerov at gmail.com Tue Sep 24 11:16:16 2019 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Tue, 24 Sep 2019 18:16:16 +0700 Subject: [mistral][FFE] Request for FFE: skipping validation In-Reply-To: <93074c61-7edc-4320-848a-2ad318b6fc51@Spark> References: <93074c61-7edc-4320-848a-2ad318b6fc51@Spark> Message-ID: Hi, We’d like to land the patch [1] (when it’s done) in Train although it adds an API query string parameter and a new configuration option. It has a big user impact (significant performance) boost. That change is backwards compatible and doesn’t break any other release (or stable branch maintenance) policies. I’ve already discussed it with the release team in IRC and got a preliminary approval from them. And since I’m the PTL of Mistral I approve this FFE as well :) Let me know if you have any concerns/comments. 
Thanks [1] https://review.opendev.org/#/c/683344/ Renat Akhmerov @Nokia -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmendiza at redhat.com Tue Sep 24 12:57:08 2019 From: dmendiza at redhat.com (=?UTF-8?Q?Douglas_Mendiz=c3=a1bal?=) Date: Tue, 24 Sep 2019 07:57:08 -0500 Subject: [barbican][FFE] Feature Freeze Exception - Secret Consumers In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hi Moisés, Thank you for all the work on this feature. I think that we do need this FFE since your patches have been held up by failing gates that were out of your control. Thanks, - - Douglas Mendizábal Barbican PTL On 9/24/19 4:24 AM, Moises Guimaraes de Medeiros wrote: > Hi, > > I'd like to request a FFE for the Secret Consumers API patches: > > > https://review.opendev.org/#/q/topic:secret-consumers > > The changes are limited to a new feature described here: > > > https://specs.openstack.org/openstack/barbican-specs/specs/train/secre t-consumers.html > > Thanks, > > -- > > Moisés Guimarães > > Software Engineer > > Red Hat > > > -----BEGIN PGP SIGNATURE----- iQIzBAEBCAAdFiEEwcapj5oGTj2zd3XogB6WFOq/OrcFAl2KEqMACgkQgB6WFOq/ OrcK3g//SpSqqmwVy+F4oO2ImhILRWuVlsWcNvLf6YZNY3sBxNeSDVc4/pHq69RS DITJGAFrML9EE0szzbnmDntLwqiPS4Dblxd1PbksXw9qXO3Z77AkPtiban7tAUON 2gV4VbL8TBFVKfZvvJJ0o178Igo6pDkQDjBaMTriZHB/SXzECr5TXDv5XmLEZ82F B1sHqUtO1UQ4FGe91Hs+PpVCYdOdePenknOBxNxuY0K+7IJ1h7k22v+T0I2ZvKSq IDiIASJYAnk90SgGe6l8Zmc2dmRiucN4kuDTF+Fi1qcVHttMyC85PfsWS9zL5+nr Dx829M/CPgY/7xiv3XA6tb3rj2YCpvSLY7Pdgzv79AipqsVXc4RIlPa+Mzyho31S fEROGevGVF1PW2qRnlgSwlETDEz5+e/p9rCdwO4tEB4N/XUJPrum3whNMtQ5evqM vMxSVuoz4neRIlkdTPv4FeJGBj9S1Y4glIRgBnCsQ34hTZ2PW6gkyQ4LIX9guu2x BjWslldLvinp9ZZCpqD1nyjxiSu8WqtwpUQCuv6/qk17ggjeoA5+bpj0Ofp0xYuF NOC/qxAf0u0vo4sK5QPGAs3t5IpfAC6KRDYbQKCsi4lz5ou4PP7YTOz/KvOkeh46 QttMi4MBh6PaYOjWSHlyP39RJio2yy9MsA4zC/b3zsGAjpdwBHE= =ysCi -----END PGP SIGNATURE----- From openstack at nemebean.com Tue Sep 24 13:30:16 2019 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 24 Sep 2019 08:30:16 -0500 Subject: [keystone][oslo] presence of policy.json breaks Keystone? In-Reply-To: References: Message-ID: <085e3550-d720-03e8-843e-c81bb6df2716@nemebean.com> On 9/23/19 10:46 PM, Bernd Bausch wrote: > This is on a stable Stein Devstack. Problem description: > > ubuntu at devstack:~$ oslopolicy-sample-generator --namespace keystone > >/etc/keystone/policy.json > ubuntu at devstack:~$ openstack user list > Internal Server Error (HTTP 500) > > Note that I did not modify the policy.json file above. It's mere > presence is sufficient to cause the problem. When I remove it and > restart Keystone, the problem goes away. > > The Keystone log contains a huge stacktrace with two methods in > oslopolicy/_checks.py playing ping-pong with each other until they give > up with RuntimeError: maximum recursion depth exceeded. > > This only happens with Keystone. Nova and Cinder (which also keep policy > in code) are fine. > > This looks like a bug, but I didn't find it in launchpad. Is there a > workaround? I would like to use a modified Keystone policy in a training > course. Unfortunately there are two potential bugs that you may be hitting. Fortunately they're both fixed on master. I've proposed backports of the patches to stable/stein. 
First is bad aliases created when a policy rule is deprecated but the name isn't changed: https://review.opendev.org/#/c/684316 Second is a problem with the deprecation logic that can cause an implicit loop because of how we handle overrides of deprecated policies: https://review.opendev.org/#/c/684314/ I'm guessing you're hitting one of those. This is a relatively new thing because of the migration to use scopes in Keystone, which is why you don't see it in any of the other projects. > > Thanks for any feedback. > > Bernd. > > From ignaziocassano at gmail.com Tue Sep 24 13:32:18 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 24 Sep 2019 15:32:18 +0200 Subject: [gnocchi] uWSGI error Message-ID: Hello All, I hane an openstack queens installation with centos and since some days gnocchi api serice reports: connect_to_tcp()/socket(): Too many open files [core/socket.c line 462] Tue Sep 24 08:36:24 2019 - *** uWSGI listen queue of socket "127.0.0.1:40000" (fd: 10) full !!! (101/100) *** connect_to_tcp()/socket(): Too many open files [core/socket.c line 462] connect_to_tcp()/socket(): Too many open files [core/socket.c line 462] Tue Sep 24 08:36:25 2019 - *** uWSGI listen queue of socket "127.0.0.1:40000" (fd: 10) full !!! (101/100) *** Tue Sep 24 08:36:26 2019 - *** uWSGI listen queue of socket "127.0.0.1:40000" (fd: 10) full !!! (101/100) *** Tue Sep 24 08:36:27 2019 - *** uWSGI listen queue of socket "127.0.0.1:40000" (fd: 10) full !!! (101/100) *** Tue Sep 24 08:36:28 2019 - *** uWSGI listen queue of socket "127.0.0.1:40000" (fd: 10) full !!! (101/100) *** Please, must I change some limits ? Where ? I hahe noy found any uwsgi configuration file for gnocchi api service . Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Sep 24 14:16:39 2019 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 24 Sep 2019 16:16:39 +0200 Subject: Retiring stale repositories from the OpenStack org on GitHub Message-ID: Hi everyone, The migration of our infrastructure to the Opendev domain gave us the opportunity to no longer have everything under "openstack" and stop the confusion around what is a part of OpenStack and what is just hosted on the same infrastructure. To that effect, in April we transferred non-OpenStack repositories to their own organization names on Opendev, with the non-claimed ones being kept for the time being under a "x" default organization. One consequence of that transition is that non-OpenStack repositories that were previously mirrored to GitHub under the "openstack" organization are now stale and no longer updated, which is very misleading. Those should now be retired, with a clear pointer to the original repository on Opendev. Jim and I volunteered to build tools to do handle that retirement and we are now ready to run those Thursday. This will not affect OpenStack repositories or repositories that were already retired or migrated off the OpenStack org on GitHub (think openstack-infra, opendev, airship...). That will only clean up no-longer-mirrored, stale, non-openstack repositories still present in the OpenStack GitHub organization. If you own a non-openstack repository on Opendev and would like to enable GitHub mirroring (to a GitHub org of your choice), it is possible to configure it as part of your Zuul jobs. 
You can follow instructions at: http://lists.openstack.org/pipermail/openstack-discuss/2019-April/005007.html Cheers, -- Jim and Thierry From mihalis68 at gmail.com Tue Sep 24 14:51:16 2019 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 24 Sep 2019 10:51:16 -0400 Subject: [ops] ops meetups team minutes Message-ID: The OpenStack Ops Meetups team met briefly today on IRC, minutes are below. The team is working on some sessions for the Forum in Shanghai, together with two mid-cycle meetups in 2020, possibly in London and then South Korea. We'll share more details of those when we can. We continue to meet at 10am EDT on #openstack-operators weekly. Meeting ended Tue Sep 24 14:35:02 2019 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) 10:35 AM Minutes: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2019/ops_meetup_team.2019-09-24-14.11.html 10:35 AM O<•openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/ops_meetup_team/2019/ops_meetup_team.2019-09-24-14.11.txt 10:35 AM Log: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2019/ops_meetup_team.2019-09-24-14.11.log.html Cheers Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Tue Sep 24 15:17:31 2019 From: zigo at debian.org (Thomas Goirand) Date: Tue, 24 Sep 2019 17:17:31 +0200 Subject: [keystone][oslo] presence of policy.json breaks Keystone? In-Reply-To: <085e3550-d720-03e8-843e-c81bb6df2716@nemebean.com> References: <085e3550-d720-03e8-843e-c81bb6df2716@nemebean.com> Message-ID: <00c03bd6-7437-d878-3648-2393be55f96e@debian.org> On 9/24/19 3:30 PM, Ben Nemec wrote: > > On 9/23/19 10:46 PM, Bernd Bausch wrote: >> This is on a stable Stein Devstack. Problem description: >> >> ubuntu at devstack:~$ oslopolicy-sample-generator --namespace keystone >>  >/etc/keystone/policy.json >> ubuntu at devstack:~$ openstack user list >> Internal Server Error (HTTP 500) >> >> Note that I did not modify the policy.json file above. It's mere >> presence is sufficient to cause the problem. When I remove it and >> restart Keystone, the problem goes away. >> >> The Keystone log contains a huge stacktrace with two methods in >> oslopolicy/_checks.py playing ping-pong with each other until they >> give up with RuntimeError: maximum recursion depth exceeded. >> >> This only happens with Keystone. Nova and Cinder (which also keep >> policy in code) are fine. >> >> This looks like a bug, but I didn't find it in launchpad. Is there a >> workaround? I would like to use a modified Keystone policy in a >> training course. > > Unfortunately there are two potential bugs that you may be hitting. > Fortunately they're both fixed on master. I've proposed backports of the > patches to stable/stein. > > First is bad aliases created when a policy rule is deprecated but the > name isn't changed: https://review.opendev.org/#/c/684316 > > Second is a problem with the deprecation logic that can cause an > implicit loop because of how we handle overrides of deprecated policies: > https://review.opendev.org/#/c/684314/ > > I'm guessing you're hitting one of those. This is a relatively new thing > because of the migration to use scopes in Keystone, which is why you > don't see it in any of the other projects. Hi Ben, Please do backport these patch, they are useful, and the bug is kind of annoying. 
:) Cheers, Thomas Goirand (zigo) From dtantsur at redhat.com Tue Sep 24 15:22:01 2019 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 24 Sep 2019 17:22:01 +0200 Subject: [requirements] [ironic] FFE for python-ironicclient 3.1.0 Message-ID: Hi all, we would like to request an exception to release python-ironicclient 3.1.0 from the stable/train branch. The current 3.0.0 release has several issues, one of them [1] is critical and breaks no-auth mode (used e.g. in bifrost). I'm also proposing [2] to exclude python-ironicclient 3.0.0 from requirement. A release request will be posted once [3] merges. The minor version bump is because we've made the previously existing implicit oslo.config dependency explicit [4]. I don't believe the new release will break anyone who is not broken by 3.0.0 already. Thanks, Dmitry [1] https://storyboard.openstack.org/#!/story/2006600 [2] https://review.opendev.org/#/c/684376/ [3] https://review.opendev.org/#/c/684363/ [4] https://review.opendev.org/#/c/684281/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mthode at mthode.org Tue Sep 24 15:23:07 2019 From: mthode at mthode.org (Matthew Thode) Date: Tue, 24 Sep 2019 10:23:07 -0500 Subject: [requirements][monasca] Request for FFE for monasca-common In-Reply-To: References: <5fcb64b9-4a76-ddab-0b35-cae66e617a4b@suse.com> <20190923180321.lolapu65flbdsjy7@mthode.org> Message-ID: <20190924152307.lpyvbgzhr7nzdi5n@mthode.org> On 19-09-24 11:39:53, Witek Bedyk wrote: > On 9/23/19 8:03 PM, Matthew Thode wrote: > > On 19-09-23 18:05:19, Witek Bedyk wrote: > > > Hello Requirements Team, > > > > > > I'm requesting FFE for monasca-common library. The new requested version > > > 2.16.1 includes critical fix for Confluent Kafka client. > > > > > > The library is used only by Monasca project. > > > > > > https://review.opendev.org/683986 > > > > > > Thanks > > > Witek > > > > > > > The main question I have is if you will need to change anything in the > > consuming projects and re-release them (and since they are all monasca, > > are you ok with doing that if needed). > > > > +----------------------------------+----------------------------------------------------------+------+-------------------------------------+ > > | Repository | Filename | Line | Text | > > +----------------------------------+----------------------------------------------------------+------+-------------------------------------+ > > | openstack/monasca-agent | requirements.txt | 29 | monasca-common>=2.7.0 # Apache-2.0 | > > | openstack/monasca-agent | setup.cfg | 58 | monasca-common>=1.4.0 # Apache-2.0 | > > | openstack/monasca-api | requirements.txt | 24 | monasca-common>=2.7.0 # Apache-2.0 | > > | openstack/monasca-events-api | requirements.txt | 18 | monasca-common>=1.4.0 # Apache-2.0 | > > | openstack/monasca-log-api | requirements.txt | 16 | monasca-common>=2.7.0 # Apache-2.0 | > > | openstack/monasca-notification | requirements.txt | 11 | monasca-common>=2.7.0 # Apache-2.0 | > > | openstack/monasca-persister | requirements.txt | 8 | monasca-common>=2.16.0 # Apache-2.0 | > > | openstack/monasca-tempest-plugin | requirements.txt | 13 | monasca-common>=2.8.0 # Apache-2.0 | > > | openstack/monasca-transform | requirements.txt | 10 | monasca-common>=2.7.0 # Apache-2.0 | > > +----------------------------------+----------------------------------------------------------+------+-------------------------------------+ > > > > Once those questions are answered I can (dis)approve. 
> > > > > Hi Matthew, > > there is no need to change lower constraints in the consuming projects as > they work perfectly fine with legacy Kafka client in these versions. Also > all unit tests for lower-constraints jobs pass. > > Only upper-constraints will have to be updated with the new version to > install the version with the bugfix for Train. > > Thanks > Witek > Sounds good then, ffe approved. -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mthode at mthode.org Tue Sep 24 15:26:28 2019 From: mthode at mthode.org (Matthew Thode) Date: Tue, 24 Sep 2019 10:26:28 -0500 Subject: [requirements] [ironic] FFE for python-ironicclient 3.1.0 In-Reply-To: References: Message-ID: <20190924152628.5yjhluqezeibs37x@mthode.org> On 19-09-24 17:22:01, Dmitry Tantsur wrote: > Hi all, > > we would like to request an exception to release python-ironicclient 3.1.0 > from the stable/train branch. The current 3.0.0 release has several issues, > one of them [1] is critical and breaks no-auth mode (used e.g. in bifrost). > I'm also proposing [2] to exclude python-ironicclient 3.0.0 from > requirement. > > A release request will be posted once [3] merges. The minor version bump is > because we've made the previously existing implicit oslo.config dependency > explicit [4]. I don't believe the new release will break anyone who is not > broken by 3.0.0 already. > > Thanks, > Dmitry > > [1] https://storyboard.openstack.org/#!/story/2006600 > [2] https://review.opendev.org/#/c/684376/ > [3] https://review.opendev.org/#/c/684363/ > [4] https://review.opendev.org/#/c/684281/ It looks like the following projects depend on python-ironicclient code. Will they need to update their requirements and cause a re-release (I suspect at least some of them will need to mask 3.0.0). 
+--------------------------------------------------+----------------------------------------------------------+------+----------------------------------------------------------------------------------+ | Repository | Filename | Line | Text | +--------------------------------------------------+----------------------------------------------------------+------+----------------------------------------------------------------------------------+ | openstack/congress | requirements.txt | 23 | python-ironicclient>=2.3.0 # Apache-2.0 | | openstack/fuel-qa | fuelweb_test/requirements.txt | 21 | python-ironicclient>=1.1.0 # Apache-2.0 | | openstack/ironic-inspector | requirements.txt | 18 | python-ironicclient>=2.3.0 # Apache-2.0 | | openstack/ironic-ui | requirements.txt | 6 | python-ironicclient!=2.5.2,!=2.7.1,>=2.3.0 # Apache-2.0 | | openstack/mistral | requirements.txt | 53 | python-ironicclient!=2.7.1,>=2.7.0 # Apache-2.0 | | openstack/networking-baremetal | requirements.txt | 12 | python-ironicclient>=2.3.0 # Apache-2.0 | | openstack/nova | test-requirements.txt | 16 | python-ironicclient!=2.7.1,>=2.7.0 # Apache-2.0 | | openstack/openstackclient | requirements.txt | 13 | python-ironicclient>=2.3.0 # Apache-2.0 | | openstack/python-openstackclient | test-requirements.txt | 28 | python-ironicclient>=2.3.0 # Apache-2.0 | | openstack/python-tripleoclient | requirements.txt | 12 | python-ironicclient>=2.3.0 # Apache-2.0 | | openstack/rally-openstack | requirements.txt | 22 | python-ironicclient>=2.2.0 # Apache Software License | | openstack/searchlight | requirements.txt | 61 | python-ironicclient>=2.3.0 # Apache-2.0 | | openstack/tenks | ansible/roles/ironic-enrolment/files/requirements.txt | 5 | python-ironicclient>=2.5.0 # Apache | | openstack/tripleo-common | requirements.txt | 16 | python-ironicclient>=2.3.0 # Apache-2.0 | | openstack/tripleo-validations | requirements.txt | 11 | python-ironicclient>=2.3.0 # Apache-2.0 | | openstack/upstream-institute-virtual-environment | elements/upstream-training/static/tmp/requirements.txt | 239 | python-ironicclient==2.6.0 | | openstack/watcher | requirements.txt | 42 | python-ironicclient>=2.5.0 # Apache-2.0 | | x/cisco-ironic-contrib | test-requirements.txt | 16 | python-ironicclient>=0.8.0 | | x/mogan | requirements.txt | 11 | python-ironicclient>=2.3.0 # Apache-2.0 | | x/osops-tools-contrib | ansible_requirements.txt | 48 | python-ironicclient==1.7.0 | | x/valence | requirements.txt | 25 | python-ironicclient>=2.2.0 # Apache-2.0 | +--------------------------------------------------+----------------------------------------------------------+------+----------------------------------------------------------------------------------+ -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mriedemos at gmail.com Tue Sep 24 15:28:11 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 24 Sep 2019 10:28:11 -0500 Subject: Do we have a definitive list of native pdf package dependencies for bindep? Message-ID: <82228aea-e3ce-dc51-ab8c-9f15aa0d4e28@gmail.com> I've been on a least a few pdf goal related reviews and "why can't I build docs anymore" IRC conversations now to ask if there is a definitive list anywhere of the native packages needed to build the pdf-docs tox target so we can put those in bindep.txt for each project. os-brick looks like it has a pretty expansive set of packages defined [1]. 
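As a rough illustration of what such bindep.txt entries look like, below is approximately the Debian/Ubuntu package set the gate's PDF preparation role installs, expressed with a doc profile; the authoritative list is the prepare-build-pdf-docs role linked later in this thread, and other distributions would need their own platform: selectors:

    # PDF docs build tool chain (Debian/Ubuntu names shown; indicative only)
    fonts-freefont-otf [doc platform:dpkg]
    latexmk [doc platform:dpkg]
    texlive-latex-base [doc platform:dpkg]
    texlive-fonts-recommended [doc platform:dpkg]
    texlive-xetex [doc platform:dpkg]
    xindy [doc platform:dpkg]

With a doc profile in place, running "bindep doc" locally lists whatever is still missing before attempting tox -e pdf-docs.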
Should each project be copying that? Where would a definitive list even live? Is that something that should go into the requirements repo's bindep file [2] or is that specific just to what binary packages are needed to run tests against the requirements repo? Maybe the list should live in the storyboard dashboard [3] for the goal? [1] https://github.com/openstack/os-brick/commit/132a531e1768dea2db3275da376f163adc8fbf34#diff-03625fa9d8a51df3251e367a19ecfca5 [2] https://github.com/openstack/requirements/blob/master/bindep.txt [3] https://storyboard.openstack.org/#!/board/175 -- Thanks, Matt From mriedemos at gmail.com Tue Sep 24 15:33:36 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 24 Sep 2019 10:33:36 -0500 Subject: Do we have a definitive list of native pdf package dependencies for bindep? In-Reply-To: <82228aea-e3ce-dc51-ab8c-9f15aa0d4e28@gmail.com> References: <82228aea-e3ce-dc51-ab8c-9f15aa0d4e28@gmail.com> Message-ID: <5f3409f6-c0e0-538c-2969-296a0b669da6@gmail.com> On 9/24/2019 10:28 AM, Matt Riedemann wrote: > os-brick looks like it has a pretty expansive set of packages defined > [1]. Should each project be copying that? Where would a definitive list > even live? Is that something that should go into the requirements repo's > bindep file [2] or is that specific just to what binary packages are > needed to run tests against the requirements repo? Maybe the list should > live in the storyboard dashboard [3] for the goal? > > [1] > https://github.com/openstack/os-brick/commit/132a531e1768dea2db3275da376f163adc8fbf34#diff-03625fa9d8a51df3251e367a19ecfca5 > > [2] https://github.com/openstack/requirements/blob/master/bindep.txt > [3] https://storyboard.openstack.org/#!/board/175 Should we just use this? https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/roles/prepare-build-pdf-docs/tasks/main.yaml#L7 But that's only for ubuntu bionic nodes, right? It's at least a start but people trying to build pdf-docs tox targets on CentOS, Fedora, etc likely still won't work. -- Thanks, Matt From dtantsur at redhat.com Tue Sep 24 15:34:32 2019 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 24 Sep 2019 17:34:32 +0200 Subject: [requirements] [ironic] FFE for python-ironicclient 3.1.0 In-Reply-To: <20190924152628.5yjhluqezeibs37x@mthode.org> References: <20190924152628.5yjhluqezeibs37x@mthode.org> Message-ID: Hi, On Tue, Sep 24, 2019 at 5:29 PM Matthew Thode wrote: > On 19-09-24 17:22:01, Dmitry Tantsur wrote: > > Hi all, > > > > we would like to request an exception to release python-ironicclient > 3.1.0 > > from the stable/train branch. The current 3.0.0 release has several > issues, > > one of them [1] is critical and breaks no-auth mode (used e.g. in > bifrost). > > I'm also proposing [2] to exclude python-ironicclient 3.0.0 from > > requirement. > > > > A release request will be posted once [3] merges. The minor version bump > is > > because we've made the previously existing implicit oslo.config > dependency > > explicit [4]. I don't believe the new release will break anyone who is > not > > broken by 3.0.0 already. > > > > Thanks, > > Dmitry > > > > [1] https://storyboard.openstack.org/#!/story/2006600 > > [2] https://review.opendev.org/#/c/684376/ > > [3] https://review.opendev.org/#/c/684363/ > > [4] https://review.opendev.org/#/c/684281/ > > It looks like the following projects depend on python-ironicclient code. > Will they need to update their requirements and cause a re-release (I > suspect at least some of them will need to mask 3.0.0). 
> It may be desired for some projects to mask ironicclient 3.0.0 (I'll take care of bifrost). Bumping the version should not be required, if I understand the process right. Dmitry > > > +--------------------------------------------------+----------------------------------------------------------+------+----------------------------------------------------------------------------------+ > | Repository | Filename > | Line | Text > | > > +--------------------------------------------------+----------------------------------------------------------+------+----------------------------------------------------------------------------------+ > | openstack/congress | requirements.txt > | 23 | python-ironicclient>=2.3.0 # > Apache-2.0 | > | openstack/fuel-qa | > fuelweb_test/requirements.txt | 21 | > python-ironicclient>=1.1.0 # Apache-2.0 > | > | openstack/ironic-inspector | requirements.txt > | 18 | python-ironicclient>=2.3.0 # > Apache-2.0 | > | openstack/ironic-ui | requirements.txt > | 6 | > python-ironicclient!=2.5.2,!=2.7.1,>=2.3.0 # Apache-2.0 > | > | openstack/mistral | requirements.txt > | 53 | > python-ironicclient!=2.7.1,>=2.7.0 # Apache-2.0 > | > | openstack/networking-baremetal | requirements.txt > | 12 | python-ironicclient>=2.3.0 # > Apache-2.0 | > | openstack/nova | > test-requirements.txt | 16 | > python-ironicclient!=2.7.1,>=2.7.0 # Apache-2.0 > | > | openstack/openstackclient | requirements.txt > | 13 | python-ironicclient>=2.3.0 # > Apache-2.0 | > | openstack/python-openstackclient | > test-requirements.txt | 28 | > python-ironicclient>=2.3.0 # Apache-2.0 > | > | openstack/python-tripleoclient | requirements.txt > | 12 | python-ironicclient>=2.3.0 # > Apache-2.0 | > | openstack/rally-openstack | requirements.txt > | 22 | python-ironicclient>=2.2.0 > # Apache Software License | > | openstack/searchlight | requirements.txt > | 61 | python-ironicclient>=2.3.0 # > Apache-2.0 | > | openstack/tenks | > ansible/roles/ironic-enrolment/files/requirements.txt | 5 | > python-ironicclient>=2.5.0 # Apache > | > | openstack/tripleo-common | requirements.txt > | 16 | python-ironicclient>=2.3.0 # > Apache-2.0 | > | openstack/tripleo-validations | requirements.txt > | 11 | python-ironicclient>=2.3.0 # > Apache-2.0 | > | openstack/upstream-institute-virtual-environment | > elements/upstream-training/static/tmp/requirements.txt | 239 | > python-ironicclient==2.6.0 > | > | openstack/watcher | requirements.txt > | 42 | python-ironicclient>=2.5.0 # > Apache-2.0 | > | x/cisco-ironic-contrib | > test-requirements.txt | 16 | > python-ironicclient>=0.8.0 > | > | x/mogan | requirements.txt > | 11 | python-ironicclient>=2.3.0 # > Apache-2.0 | > | x/osops-tools-contrib | > ansible_requirements.txt | 48 | > python-ironicclient==1.7.0 > | > | x/valence | requirements.txt > | 25 | python-ironicclient>=2.2.0 # > Apache-2.0 | > > +--------------------------------------------------+----------------------------------------------------------+------+----------------------------------------------------------------------------------+ > > -- > Matthew Thode > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mthode at mthode.org Tue Sep 24 15:42:54 2019 From: mthode at mthode.org (Matthew Thode) Date: Tue, 24 Sep 2019 10:42:54 -0500 Subject: [requirements] [ironic] FFE for python-ironicclient 3.1.0 In-Reply-To: References: <20190924152628.5yjhluqezeibs37x@mthode.org> Message-ID: <20190924154254.wflxmlgmkbdeqiqi@mthode.org> On 19-09-24 17:34:32, Dmitry Tantsur wrote: > Hi, > > On Tue, Sep 24, 2019 at 5:29 PM Matthew Thode wrote: > > > On 19-09-24 17:22:01, Dmitry Tantsur wrote: > > > Hi all, > > > > > > we would like to request an exception to release python-ironicclient > > 3.1.0 > > > from the stable/train branch. The current 3.0.0 release has several > > issues, > > > one of them [1] is critical and breaks no-auth mode (used e.g. in > > bifrost). > > > I'm also proposing [2] to exclude python-ironicclient 3.0.0 from > > > requirement. > > > > > > A release request will be posted once [3] merges. The minor version bump > > is > > > because we've made the previously existing implicit oslo.config > > dependency > > > explicit [4]. I don't believe the new release will break anyone who is > > not > > > broken by 3.0.0 already. > > > > > > Thanks, > > > Dmitry > > > > > > [1] https://storyboard.openstack.org/#!/story/2006600 > > > [2] https://review.opendev.org/#/c/684376/ > > > [3] https://review.opendev.org/#/c/684363/ > > > [4] https://review.opendev.org/#/c/684281/ > > > > It looks like the following projects depend on python-ironicclient code. > > Will they need to update their requirements and cause a re-release (I > > suspect at least some of them will need to mask 3.0.0). > > > > It may be desired for some projects to mask ironicclient 3.0.0 (I'll take > care of bifrost). Bumping the version should not be required, if I > understand the process right. > > Dmitry > I think that'll cause a re-release, or the next train release for that project will just have it (which could be after 'release' time. As long as those clients are happy to use 3.0.0 in train. -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From aj at suse.com Tue Sep 24 15:49:37 2019 From: aj at suse.com (Andreas Jaeger) Date: Tue, 24 Sep 2019 17:49:37 +0200 Subject: Do we have a definitive list of native pdf package dependencies for bindep? In-Reply-To: <82228aea-e3ce-dc51-ab8c-9f15aa0d4e28@gmail.com> References: <82228aea-e3ce-dc51-ab8c-9f15aa0d4e28@gmail.com> Message-ID: <1539b4d0-79fa-2d01-ec1e-e77782e48e01@suse.com> On 24/09/2019 17.28, Matt Riedemann wrote: > I've been on a least a few pdf goal related reviews and "why can't I > build docs anymore" IRC conversations now to ask if there is a > definitive list anywhere of the native packages needed to build the > pdf-docs tox target so we can put those in bindep.txt for each project. You can still build docs without changes, tox -e docs was not changed. The list of packages that the job installs is here: https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/roles/prepare-build-pdf-docs/tasks/main.yaml#L10-L16 So, adding files to bindep.txt is optional. > > os-brick looks like it has a pretty expansive set of packages defined > [1]. Should each project be copying that? Where would a definitive list The list in the prepare-build-pdf-docs role is the common subset. I'm surprised that os-brick needs more packages. > even live? 
Is that something that should go into the requirements repo's > bindep file [2] or is that specific just to what binary packages are > needed to run tests against the requirements repo? Maybe the list should > live in the storyboard dashboard [3] for the goal? If you need to add something, it needs to go in the repo, not in the requirments repo. Andreas > > [1] > https://github.com/openstack/os-brick/commit/132a531e1768dea2db3275da376f163adc8fbf34#diff-03625fa9d8a51df3251e367a19ecfca5 > > [2] https://github.com/openstack/requirements/blob/master/bindep.txt > [3] https://storyboard.openstack.org/#!/board/175 > -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg GF: Felix Imendörffer; HRB 247165 (AG München) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From gr at ham.ie Tue Sep 24 15:56:31 2019 From: gr at ham.ie (Graham Hayes) Date: Tue, 24 Sep 2019 16:56:31 +0100 Subject: [FFE][designate] Pool Manager Removal and refactor Message-ID: <7949a7d8-ae8d-6a1e-7a81-f97f27cd7b27@ham.ie> Hi All, I am granting a FFE for 3 patches: https://review.opendev.org/#/c/678432/ - Refactored service layer https://review.opendev.org/#/c/665476/22 - Removed deprecated pool-manager implementation https://review.opendev.org/#/c/657289/ - Removed deprecated powerdns 3 driver These patches remove a duplicate code paths that have been deprecated for an extended period of time, and reduce a lot of the complexity in our implementation. Please contact me if there is any issues or concern. Thanks, Graham From juliaashleykreger at gmail.com Tue Sep 24 16:54:02 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 24 Sep 2019 09:54:02 -0700 Subject: [ironic] FFE: Add idrac HW type Redfish virtual media boot interface In-Reply-To: References: Message-ID: I am in support of getting this in, but ironic is essentially at the time for our release window for 13.0.0 as we need to create the stable/train branch. To do this will force 13.1 to be the final version to train if we need to back-port this patch. I can review it tonight after my meetings, and possibly Dmitry or another core can review and if they feel that it can be approved tomorrow. I guess the deterrent for me mentally at this point is we’ve already geared documentation towards 13.0.0 being the Train release. -Julia On Mon, Sep 23, 2019 at 8:10 PM wrote: > Hi, > > I request a late feature freeze exception (FFE) for > https://review.opendev.org/#/c/672498/ -- "Add Redfish vmedia boot > interface to idrac HW type". There is high demand from operators for this > feature. They would be delighted if it were included in Train. > > We believe it is a low risk change, because of the following: > > 1) It affects only the idrac hardware type. > 2) The highest priority boot interfaces supported by the idrac hardware > type remain so. 'ipxe' and 'pxe' continue to have the highest priority, and > the new 'idrac-redfish-virtual-media' has the lowest priority. The new > order from highest to lowest priority is 'ipxe', 'pxe', and > 'idrac-redfish-virtual-media'. > 3) The new interface is based on and almost entirely leverages an already > merged interface implementation, 'redfish-virtual-media'. [1] > > Please let me know if you have any concerns or questions. Thank you for > your consideration. > > Rick > > > [1] https://review.opendev.org/#/c/638453/ > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From colleen at gazlene.net Tue Sep 24 16:55:26 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Tue, 24 Sep 2019 09:55:26 -0700 Subject: [keystone] Ussuri Virtual PTGs Message-ID: <14b79ada-b6bb-410a-b345-ef700bd58c21@www.fastmail.com> Hi team, As we've discussed in IRC and at the virtual midcycle, we'll be holding two virtual planning meetings in place of meeting in-person at the PTG in Shanghai. We'll hold one before the real event in order to prepare for the Forum, and one following it in order to disseminate information and recalibrate. I've drawn up a straw man agenda and proposed meeting dates/times in our brainstorming etherpad: https://etherpad.openstack.org/p/keystone-shanghai-ptg The proposal is to have the pre-PTG on Tuesday, October 29 at 14:00 UTC and the post-PTG on Tuesday, November 12 at 15:00 UTC. (Every time I send a scheduling poll we usually settle on Tuesday mornings anyway so I'm just cutting to the chase.) Please speak up if you want to attend but the proposed dates or times do not work for you, we still have plenty of time to play with the schedule. If no one voices concerns by Tuesday, October 8 we'll consider the dates final. The agenda and topics are also open for feedback and additional topic suggestions are still welcome. We'll plan on using jitsi.org as our conference tool but can fall back to gotomeeting if necessary. Colleen From aj at suse.com Tue Sep 24 16:56:30 2019 From: aj at suse.com (Andreas Jaeger) Date: Tue, 24 Sep 2019 18:56:30 +0200 Subject: Do we have a definitive list of native pdf package dependencies for bindep? In-Reply-To: <5f3409f6-c0e0-538c-2969-296a0b669da6@gmail.com> References: <82228aea-e3ce-dc51-ab8c-9f15aa0d4e28@gmail.com> <5f3409f6-c0e0-538c-2969-296a0b669da6@gmail.com> Message-ID: <5314157d-d991-07f3-4340-c6ffd39e0992@suse.com> On 24/09/2019 17.33, Matt Riedemann wrote: > On 9/24/2019 10:28 AM, Matt Riedemann wrote: >> os-brick looks like it has a pretty expansive set of packages defined >> [1]. Should each project be copying that? Where would a definitive >> list even live? Is that something that should go into the requirements >> repo's bindep file [2] or is that specific just to what binary >> packages are needed to run tests against the requirements repo? Maybe >> the list should live in the storyboard dashboard [3] for the goal? >> >> [1] >> https://github.com/openstack/os-brick/commit/132a531e1768dea2db3275da376f163adc8fbf34#diff-03625fa9d8a51df3251e367a19ecfca5 >> >> [2] https://github.com/openstack/requirements/blob/master/bindep.txt >> [3] https://storyboard.openstack.org/#!/board/175 > > Should we just use this? > > https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/roles/prepare-build-pdf-docs/tasks/main.yaml#L7 Yes. > > But that's only for ubuntu bionic nodes, right? It's at least a start > but people trying to build pdf-docs tox targets on CentOS, Fedora, etc > likely still won't work. https://opendev.org/openstack/openstack-manuals/src/branch/master/bindep.txt has a list for more distros if somebody wants to patch the list to handle more distros, I'll happily take them, Andreas -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 
5, D 90409 Nürnberg GF: Felix Imendörffer; HRB 247165 (AG München) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From jean-philippe at evrard.me Tue Sep 24 19:33:56 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Tue, 24 Sep 2019 21:33:56 +0200 Subject: [all][tc] What happened in OpenStack Governance last week Message-ID: Hello everyone, Here are a few things that happened recently: We introduced "comparison of Official Group Structures" [1]. It should help you navigate better our structures and process, if you want to start a new initiative. We now have a plan to reduce the amount of members of our Technical Committee going forward, down to 9 in Q3 2020 [2]. We've clarified the completion criteria for the PDF goal [3]. We've got fresh PTLs! Tetsuro Nakamura for Placement, Ian Y. Choi for I18n, and Monty Taylor for OpenStackSDK. Congratulations! The Ussuri runtimes are out [4][5]. Ussuri will be the first release to not require python2. Projects are encouraged to move forward! Reminder: Projects are allowed to test extra runtimes, like python2.7, if they want to. Note: Most, if not all, initiatives above were started by TC members. Keep in mind you don't need to be in the TC to propose patches in governance! For example, Tom Barron proposed quite a few patches recently, thank you Tom! :) Regards, Jean-Philippe Evrard (evrardjp) [1]: https://review.opendev.org/668093 [2]: https://review.opendev.org/681266 [3]: https://review.opendev.org/679654 [4]: https://governance.openstack.org/tc/reference/runtimes/ussuri.html [5]: https://review.opendev.org/679798 From jean-philippe at evrard.me Tue Sep 24 19:44:14 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Tue, 24 Sep 2019 21:44:14 +0200 Subject: [tc] Weekly update Message-ID: <31e04fedc599f2d1fa50b39be0df50ab51f6fca9.camel@evrard.me> Hello my friends, Here's what need attention for the OpenStack TC this week. 1. We should ensure we have two TC members focusing on next cycle goal selection process. As far as I know, there is no one assigned to this yet. And that's _very important_. 2. Jimmy McArthur sent us the results of the OpenStack User survey on the ML [1]. We currently haven't analyzed the information yet. Any volunteer to analyse the information (in order to extract action items) is welcomed. It would be great if we could discuss this at our next official meeting. 3. Our next meeting date needs to be decided (same time as usual). Pleas e use the framadate here [2]. 4. Our next meeting agenda needs clarifications. It would be great if you could update the wiki [3], so that I can send the invite to the ML. 5. We have a vice-chair candidate, Rico. It would be awesome if you could cast a vote on this nomination, if you didn't vote on it yet[4]. 6. There are plenty of patches that are waiting for your opinion. Have a look at [5] to [10] for example :) 7. We may have multiple patches with too many dissenting votes. I will abandon them if nobody is against that (pun intended). Thank you everyone! 
Jean-Philippe Evrard (evrardjp) [1]: http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009501.html [2]: https://framadate.org/6zASzWzX4ejkr4ae [3]: https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [4]: https://review.opendev.org/684262 [5]: https://review.opendev.org/681203 [6]: https://review.opendev.org/681260 [7]: https://review.opendev.org/681480 [8]: https://review.opendev.org/681924 [9]: https://review.opendev.org/680985 [10]: https://review.opendev.org/682380 From mriedemos at gmail.com Tue Sep 24 19:58:09 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 24 Sep 2019 14:58:09 -0500 Subject: Do we have a definitive list of native pdf package dependencies for bindep? In-Reply-To: <1539b4d0-79fa-2d01-ec1e-e77782e48e01@suse.com> References: <82228aea-e3ce-dc51-ab8c-9f15aa0d4e28@gmail.com> <1539b4d0-79fa-2d01-ec1e-e77782e48e01@suse.com> Message-ID: On 9/24/2019 10:49 AM, Andreas Jaeger wrote: > You can still build docs without changes, tox -e docs was not changed. > > The list of packages that the job installs is here: > https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/roles/prepare-build-pdf-docs/tasks/main.yaml#L10-L16 > > > > So, adding files to bindep.txt is optional. I was thinking of this specifically: https://review.opendev.org/#/c/683003/ Building docs in the gate is fine, it's handled by the docs job template, but building docs locally was failing because I didn't have that native package installed, and when I'm working on big docs changes I'm building them locally to make sure (1) the docs build works and (2) my formatting looks OK. -- Thanks, Matt From emilien at redhat.com Tue Sep 24 20:12:35 2019 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 24 Sep 2019 16:12:35 -0400 Subject: [tripleo] Deprecating paunch CLI? [EXT] In-Reply-To: <20190916082059.GA10148@sanger.ac.uk> References: <20190916082059.GA10148@sanger.ac.uk> Message-ID: On Mon, Sep 16, 2019 at 4:22 AM Dave Holland wrote: > We've found "paunch debug" useful in tracking down container issues that > we've reported to RH and then fixed, e.g. when diagnosing a too-low file > handle limit: > > paunch debug --file > /var/lib/tripleo-config/hashed-docker-container-startup-config-step_4.json > --overrides '{ "ulimit": ["nofile=9999"] }' --container neutron_l3_agent > --action run > > Will there be a way to achieve this run-with-overrides functionality > without the CLI? > Everything that will be "removed" or "replaced" in Paunch will be available elsewhere. Either in an Ansible module or still in Paunch itself. Our goal isn't to break our users but rather simplify our tooling. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From alifshit at redhat.com Tue Sep 24 20:25:35 2019 From: alifshit at redhat.com (Artom Lifshitz) Date: Tue, 24 Sep 2019 16:25:35 -0400 Subject: [nova] The test of NUMA aware live migration In-Reply-To: References: <6A5C6F83-F6A9-4DE1-A859-B787E3490AC6@99cloud.net> <10e25785-4271-9f19-db15-0c31ea7543ee@gmail.com> Message-ID: I've proposed [1], which I think should solve the issue. Could you test with that patch and let us know if the bug goes away? Thank again for helping improve this! [1] https://review.opendev.org/#/c/684409/ On Tue, Sep 24, 2019 at 5:04 AM wang.ya wrote: > > I think the two issues should be similar. 
As I said, the first instance live migrate to host, but in resource tracker, the cache 'cn' not updated, at the moment, second instance live migrate to same host, then the vCPU pin policy broken. > The issue is not reproducible every time, it need to go through multiple live migrate (I wrote a script to run live migrate automatic). > > I have checked the nova's config, the ' max_concurrent_live_migrations' option is default :) > > I've report the issue to launchpad, you can find the log in attachment: https://bugs.launchpad.net/nova/+bug/1845146 > > > On 2019/9/20, 11:52 PM, "Matt Riedemann" wrote: > > On 9/17/2019 7:44 AM, wang.ya wrote: > > But if add the property “hw:cpu_policy='dedicated'”, it will not correct > > after serval live migrations. > > > > Which means the live migrate can be success, but the vCPU pin are not > > correct(two instance have serval same vCPU pin on same host). > > > > Is the race you're describing the same issue reported in this bug? > > https://bugs.launchpad.net/nova/+bug/1829349 > > Also, what is the max_concurrent_live_migrations config option set to? > That defaults to 1 but I'm wondering if you've changed it at all. > > -- > > Thanks, > > Matt > > > > > > -- Artom Lifshitz Software Engineer, OpenStack Compute DFG From li.canwei2 at zte.com.cn Wed Sep 25 03:14:35 2019 From: li.canwei2 at zte.com.cn (li.canwei2 at zte.com.cn) Date: Wed, 25 Sep 2019 11:14:35 +0800 (CST) Subject: =?UTF-8?B?W1dhdGNoZXJdIG5vIGlyYyBtZWV0aW5nIHRvZGF5?= Message-ID: <201909251114351045368@zte.com.cn> Hi, Watcher irc meeting will be cancelled because I'm no time today. Thanks, licanwei -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Wed Sep 25 04:08:50 2019 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 25 Sep 2019 00:08:50 -0400 Subject: [tripleo] deploying tripleo on centos8 Message-ID: As you probably know, CentOS8 was published today and the TripleO team will use this platform in the CI very soon. Today I spent a bit of time trying to deploy an undercloud on centos8 and I had some issues, and I wanted to share them in case some other folks do testing as well: https://etherpad.openstack.org/p/tripleo-centos8 Note that this is really early stage testing, and some work will have to happen on the packaging side mainly. Feel free to contribute, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From ltoscano at redhat.com Wed Sep 25 07:06:52 2019 From: ltoscano at redhat.com (Luigi Toscano) Date: Wed, 25 Sep 2019 09:06:52 +0200 Subject: Do we have a definitive list of native pdf package dependencies for bindep? In-Reply-To: References: <82228aea-e3ce-dc51-ab8c-9f15aa0d4e28@gmail.com> <1539b4d0-79fa-2d01-ec1e-e77782e48e01@suse.com> Message-ID: <3349091.ndt3XGRgHi@whitebase.usersys.redhat.com> On Tuesday, 24 September 2019 21:58:09 CEST Matt Riedemann wrote: > On 9/24/2019 10:49 AM, Andreas Jaeger wrote: > > You can still build docs without changes, tox -e docs was not changed. > > > > The list of packages that the job installs is here: > > https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/roles/ > > prepare-build-pdf-docs/tasks/main.yaml#L10-L16 > > > > > > > > So, adding files to bindep.txt is optional. 
> > I was thinking of this specifically: > > https://review.opendev.org/#/c/683003/ > > Building docs in the gate is fine, it's handled by the docs job > template, but building docs locally was failing because I didn't have > that native package installed, and when I'm working on big docs changes > I'm building them locally to make sure (1) the docs build works and (2) > my formatting looks OK. I was wondering: could we find a way to not duplicate this information over and over in all repositories? It's a long list of packages which needs to be kept in sync all around. Ciao -- Luigi From manuel.sb at garvan.org.au Wed Sep 25 07:49:16 2019 From: manuel.sb at garvan.org.au (Manuel Sopena Ballesteros) Date: Wed, 25 Sep 2019 07:49:16 +0000 Subject: numa affinity question Message-ID: <9D8A2486E35F0941A60430473E29F15B017EB795C0@MXDB1.ad.garvan.unsw.edu.au> Dear openstack user group, I have a server with 2 numa nodes and I am trying to setup nova numa affinity. [root at zeus-53 ~]# numactl -H available: 2 nodes (0-1) node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 28 29 30 31 32 33 34 35 36 37 38 39 40 41 node 0 size: 262029 MB node 0 free: 2536 MB node 1 cpus: 14 15 16 17 18 19 20 21 22 23 24 25 26 27 42 43 44 45 46 47 48 49 50 51 52 53 54 55 node 1 size: 262144 MB node 1 free: 250648 MB node distances: node 0 1 0: 10 21 1: 21 10 openstack flavor create --public xlarge.numa.perf --ram 250000 --disk 700 --vcpus 25 --property hw:cpu_policy=dedicated --property hw:emulator_threads_policy=isolate --property hw:numa_nodes='1' --property pci_passthrough:alias='nvme:4' openstack server create --network hpc --flavor xlarge.numa.perf --image centos7.6-kudu-image --availability-zone nova:zeus-53.localdomain --key-name mykey kudu-1 This is the xmldump for the created vm But for some reason the second VM fails to create with the error instance-00000108 5d278c90-27ab-4ee4-aeea-e1bf36ac246a kudu-4 2019-09-25 07:20:32 250000 700 0 0 25 admin admin 256000000 256000000 25 25600 RDO OpenStack Compute 18.2.2-1.el7 00000000-0000-0000-0000-0cc47aa482cc 5d278c90-27ab-4ee4-aeea-e1bf36ac246a Virtual Machine hvm destroy restart destroy /usr/libexec/qemu-kvm