From miguel at mlavalle.com Mon Sep 2 03:26:47 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Sun, 1 Sep 2019 22:26:47 -0500 Subject: [openstack-dev] [neutron] Cancelling Neutron weekly meeting on September 2nd Message-ID: Hi Neutrinos, September 2nd is a holiday in the USA, so we will cancel our weekly meeting. We will resume on Tuesday 10th Best regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From tbechtold at suse.com Mon Sep 2 04:12:27 2019 From: tbechtold at suse.com (Thomas Bechtold) Date: Mon, 2 Sep 2019 06:12:27 +0200 Subject: [all][tc] PDF Community Goal Update In-Reply-To: References: Message-ID: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> Hi, On 8/27/19 7:58 AM, Akihiro Motoki wrote: [...] > How to get started > ------------------ > > - "How to get started" section in the PDF goal etherpad [1] explains > the minimum steps. > You can find useful examples there too. This is a bit confusing because the goal[1] mentions that there should be no extra tox target declared for the gate job. But the etherpad explains that there should be a new tox target[2]. So do we need a new tox target in the project repo? Or is that optional and just for local testing? Cheers, Tom [1] https://governance.openstack.org/tc/goals/selected/train/pdf-doc-generation.html#completion-criteria [2] https://etherpad.openstack.org/p/train-pdf-support-goal > - To build PDF docs locally, you need to install LaTex related > packages. See "To test locally" in the etherpad [1]. > - If you hit problems during PDF build, check the common problems > etherpad [2]. We are collecting knowledges there. > - If you have questions, feel free to ask #openstack-doc IRC channel. > > Also Please sign up your name to "Project volunteers" in [1]. > > Useful links > ------------ > > [1] https://etherpad.openstack.org/p/train-pdf-support-goal > [2] https://etherpad.openstack.org/p/pdf-goal-train-common-problems > [3] Ongoing reviews: > https://review.opendev.org/#/q/topic:build-pdf-docs+(status:open+OR+status:merged) > > Thanks, > Akihiro Motoki (amotoki) > > From sundar.nadathur at intel.com Mon Sep 2 04:52:24 2019 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Mon, 2 Sep 2019 04:52:24 +0000 Subject: [cyborg][election][ptl] PTL candidacy for Ussuri Message-ID: <1CC272501B5BC543A05DB90AA509DED5276073B3@fmsmsx122.amr.corp.intel.com> Hello all, I would like to announce my candidacy for the PTL role of Cyborg for the Ussuri cycle. I have been involved with Cyborg since 2018 Rocky PTG, and have had the privilege of serving as Cyborg PTL for the Train cycle. In the Train cycle, Cyborg saw some important developments. We reached an agreement on integration with Nova at the PTG, and the spec that I wrote based on that agreement has been merged. We have seen new developers join the community. We have seen existing Cyborg drivers getting updated and new Cyborg drivers being proposed. We are also in the process of developing a tempest plugin for Cyborg. In the U cycle, I'd aim to build on this foundation. While we may support a certain set of VM operations with accelerators with Nova in Train, we can expand on that set in U. We should also focus on Day 2 operations like performance monitoring and health monitoring for accelerator devices. I would like to formalize and expand on the driver addition/development process. Thank you for your support. Regards, Sundar -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cjeanner at redhat.com Mon Sep 2 06:25:03 2019 From: cjeanner at redhat.com (Cédric Jeanneret) Date: Mon, 2 Sep 2019 08:25:03 +0200 Subject: [tripleo] Proposing Damien Ciabrini as core on TripleO/HA In-Reply-To: <20190830122850.GA5248@holtby> References: <20190830122850.GA5248@holtby> Message-ID: <93c43263-2d57-ad0f-bc17-0b0620053a5b@redhat.com> Of course +1 ! On 8/30/19 2:28 PM, Michele Baldessari wrote: > Hi all, > > Damien (dciabrin on IRC) has always been very active in all HA things in > TripleO and I think it is overdue for him to have core rights on this > topic. So I'd like to propose to give him core permissions on any > HA-related code in TripleO. > > Please vote here and in a week or two we can then act on this. > > Thanks, > -- Cédric Jeanneret (He/Him/His) Software Engineer - OpenStack Platform Red Hat EMEA https://www.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From amotoki at gmail.com Mon Sep 2 07:07:19 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Mon, 2 Sep 2019 16:07:19 +0900 Subject: [all][tc] PDF Community Goal Update In-Reply-To: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> Message-ID: On Mon, Sep 2, 2019 at 1:12 PM Thomas Bechtold wrote: > > Hi, > > On 8/27/19 7:58 AM, Akihiro Motoki wrote: > > [...] > > > How to get started > > ------------------ > > > > - "How to get started" section in the PDF goal etherpad [1] explains > > the minimum steps. > > You can find useful examples there too. > > This is a bit confusing because the goal[1] mentions that there should > be no extra tox target declared for the gate job. > But the etherpad explains that there should be a new tox target[2]. > > So do we need a new tox target in the project repo? Or is that optional > and just for local testing? The new tox target in the project repo is required now. The PDF doc will be generated only when the "pdf-docs" tox target exists. When the goal was defined, the docs team thought the doc gate job could handle the PDF build without an extra tox env or zuul job configuration. However, while implementing the zuul job support it turned out that at least a new tox env or an extra zuul job configuration is required in each project to make the docs job fail when a PDF build failure is detected. As a result, we changed the approach and the new tox target is now required in each project repo. Perhaps we need to update the description of the goal definition document. Thanks, Akihiro > > Cheers, > > Tom > > [1] > https://governance.openstack.org/tc/goals/selected/train/pdf-doc-generation.html#completion-criteria > [2] https://etherpad.openstack.org/p/train-pdf-support-goal > > > - To build PDF docs locally, you need to install LaTex related > > packages. See "To test locally" in the etherpad [1]. > > - If you hit problems during PDF build, check the common problems > > etherpad [2]. We are collecting knowledges there. > > - If you have questions, feel free to ask #openstack-doc IRC channel. > > > > Also Please sign up your name to "Project volunteers" in [1].
> > > > Useful links > > ------------ > > > > [1] https://etherpad.openstack.org/p/train-pdf-support-goal > > [2] https://etherpad.openstack.org/p/pdf-goal-train-common-problems > > [3] Ongoing reviews: > > https://review.opendev.org/#/q/topic:build-pdf-docs+(status:open+OR+status:merged) > > > > Thanks, > > Akihiro Motoki (amotoki) > > > > From ccamacho at redhat.com Mon Sep 2 07:34:45 2019 From: ccamacho at redhat.com (Carlos Camacho Gonzalez) Date: Mon, 2 Sep 2019 09:34:45 +0200 Subject: [tripleo] Proposing Damien Ciabrini as core on TripleO/HA In-Reply-To: <93c43263-2d57-ad0f-bc17-0b0620053a5b@redhat.com> References: <20190830122850.GA5248@holtby> <93c43263-2d57-ad0f-bc17-0b0620053a5b@redhat.com> Message-ID: +1 On Mon, Sep 2, 2019 at 8:36 AM Cédric Jeanneret wrote: > > Of course +1 ! > > On 8/30/19 2:28 PM, Michele Baldessari wrote: > > Hi all, > > > > Damien (dciabrin on IRC) has always been very active in all HA things in > > TripleO and I think it is overdue for him to have core rights on this > > topic. So I'd like to propose to give him core permissions on any > > HA-related code in TripleO. > > > > Please vote here and in a week or two we can then act on this. > > > > Thanks, > > > > -- > Cédric Jeanneret (He/Him/His) > Software Engineer - OpenStack Platform > Red Hat EMEA > https://www.redhat.com/ > From chx769467092 at 163.com Mon Sep 2 07:43:38 2019 From: chx769467092 at 163.com (崔恒香) Date: Mon, 2 Sep 2019 15:43:38 +0800 (CST) Subject: [QA][nova][Concurrent performance] Message-ID: <40b37395.8d74.16cf0ede539.Coremail.chx769467092@163.com> Hello everyone! Why the performance of Stein concurrent creation of VM is not as good as that of Ocata? Create 250 VMs concurrently, the O version only needs 160s, but the S version needs 250s. The security_group and port_security functions are disabled. Among the 250 VMs, some have a single network card and some have multiple network cards. We are using the neutron-openvswitch-agent. Regards, Cuihx -------------- next part -------------- An HTML attachment was scrubbed... URL: From weslepeng at gmail.com Mon Sep 2 07:58:00 2019 From: weslepeng at gmail.com (Wesley Peng) Date: Mon, 2 Sep 2019 15:58:00 +0800 Subject: [QA][nova][Concurrent performance] In-Reply-To: <40b37395.8d74.16cf0ede539.Coremail.chx769467092@163.com> References: <40b37395.8d74.16cf0ede539.Coremail.chx769467092@163.com> Message-ID: Hi on 2019/9/2 15:43, 崔恒香 wrote: > Why the performance of Stein concurrent creation of VM is not as good as > that of Ocata? > Create 250 VMs concurrently, the O version only needs 160s, but the S > version needs 250s. OpenStack's execution environment is complicated. It's hard to say O is faster than S from the architecture viewpoint. Are you sure the two systems' running environments are totally the same for comparison, including the software, hardware, network connection, etc.? regards. From rico.lin.guanyu at gmail.com Mon Sep 2 08:17:19 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Mon, 2 Sep 2019 16:17:19 +0800 Subject: [heat][election][ptl] Heat PTL candidacy for Ussuri cycle Message-ID: Dear Heat members, I would like to announce my candidacy as the Heat project team leader for the Ussuri cycle. We have been suffering from a lack of people to help with reviews. So if you're reading this, please join us, and help with reviews and features. One of our current strategies is to make a worklist to target for each release and try to finish those items on time. Also, another strategy is to get better integration with other projects or even cross communities.
We have triggered some discussion with SIGs and other projects to try to figure out where we can keep this integration moving. It appears we still have a lot of work to do. My plan for the next cycle is to keep those two strategies, and extend from there if we have time. Please consider my candidacy. Thank you. Rico Lin (ricolin) -- May The Force of OpenStack Be With You, *Rico Lin* irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Mon Sep 2 08:46:55 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Mon, 02 Sep 2019 10:46:55 +0200 Subject: [election][tc] Candidacy for TC Message-ID: <0b089dd6bc0226e3652167335f8a2818300137aa.camel@evrard.me> Hello everyone, I am hereby announcing my candidacy for a renewed position on the OpenStack Technical Committee (TC). I have been following the OpenStack ecosystem since Kilo. I went through multiple companies and wore multiple hats (a cloud end-user, an OpenStack advocate in meetups and at FOSDEM, a product owner of the cloud strategy, architect of a community cloud, a deployer, a developer, a team lead, a technology analyst), which gives me a unique view on OpenStack and other adjacent communities. I am now working full time on OpenStack for SUSE. During my time at the TC, I worked on refactoring the TC health check process, worked on community goals selection, and worked on different TC activities to help both the community and my company. During my first term, I have experienced or seen pain in the OpenStack processes: project health checks, community goals, release name selection are just a series of examples. I "learnt the ropes" during that first year, but I don't think the quality of life of the OpenStack contributors has increased, which was one of my personal goals. If I get elected, I want to change the TC from the inside, and through that, change OpenStack. These are not just changes I can drive from the outside as they involve mindset changes. Without further ado, here are a few of my crazy ideas. First, I want to make the TC a birthplace for innovation, instead of being so process oriented. I have the impression our processes dulled innovation. I want to remove processes, naming conventions, and just allow people to propose ideas (if possible, the wildest and craziest ideas) to the TC. I believe this would help people feel empowered to change OpenStack. With the fact that we are more and more organising ourselves as teams of specific interest, I would like the TC to issue more official stances on how to implement OpenStack-wide changes/best-practices with the help of field experts. That would mean issuing more "goals" for the projects and defining a roadmap more actively. (I would love to see recommendations on using the latest features of the Python language to simplify our code base for example!) Finally, I would like us to try a new kind of periodic "leadership" meeting. In those meetings PTLs and SIG chairs would discuss the issues they have recently been facing. That means sharing all together, being a ground for proposing new ideas again. PTLs are too overburdened by their work and don't have the occasion to share experience/expertise/crazy ideas on tech debt reduction for example. I believe this would bring people closer together. Thank you, Jean-Philippe (evrardjp) From geguileo at redhat.com Mon Sep 2 08:52:43 2019 From: geguileo at redhat.com (Gorka Eguileor) Date: Mon, 2 Sep 2019 10:52:43 +0200 Subject: [ptl][cinder] U Cycle PTL Non-Candidacy ...
In-Reply-To: <98f3c4f3-7de8-a81b-87c7-1c4fdb2d08d4@gmail.com> References: <98f3c4f3-7de8-a81b-87c7-1c4fdb2d08d4@gmail.com> Message-ID: <20190902085243.5wxdyfrauhqridhf@localhost> Jay, Thank you for your hard work as PTL, core reviewer, and coder. I'm really glad to hear this is not a goodbye and you will be staying with us in Cinder. :-) Cheers, Gorka. On 30/08, Jay Bryant wrote: > All, > > I just wanted to communicate that I am not going to be running for another > term as Cinder's PTL. > > It has been an honor to lead the Cinder team for the last two years.  When I > started working with OpenStack nearly 6 years ago, leading the Cinder team > was one of my goals and I appreciate the team trusting me with this > responsibility for the last 4 cycles. > > I have enjoyed watching the project evolve over the last couple of years, > going from a focus on getting new features in place to a focus on ensuring > that customers get reliable storage management with an ever improving user > experience. > > Cinder's value in the storage community outside of OpenStack has been > validated as other SDS solutions have leveraged it to provide storage > management for the many vendors that Cinder supports. Cinder continues to > grow by adding things like cinderlib, making it relevant not only in > virtualized environments but also for containerized environments.  I am glad > that I have been able to help this evolution happen. > > As PTLs have done in the past, it is time for me to pursue other > opportunities in the OpenStack ecosystem and hand over the reins to a new > leader.  Cinder has a great team and will continue to do great things.  Fear > not, I am not going to go anywhere, I plan to continue to stay active in > Cinder for the foreseeable future. > > Again, thank you for the opportunity to be Cinder's PTL, it has been a great > ride! > > Sincerely, > > Jay Bryant > > (irc: jungleboyj) > > > From chx769467092 at 163.com Mon Sep 2 08:54:41 2019 From: chx769467092 at 163.com (崔恒香) Date: Mon, 2 Sep 2019 16:54:41 +0800 (CST) Subject: [QA][nova][Concurrent performance] In-Reply-To: References: <40b37395.8d74.16cf0ede539.Coremail.chx769467092@163.com> Message-ID: <1c7c5b6c.a53d.16cf12ef0ef.Coremail.chx769467092@163.com> Hi At 2019-09-02 15:58:00, "Wesley Peng" wrote: >Hi > >on 2019/9/2 15:43, 崔恒香 wrote: >> Why the performance of Stein concurrent creation of VM is not as good as >> that of Ocata? >> Create 250 VMs concurrently, the O version only needs 160s, but the S >> version needs 250s. > >OpenStack's execution environment is complicated. >It's hard to say O is faster than S from the architecture viewpoint. >Are you sure the two systems' running environments are totally the same for >comparison, including the software, hardware, network connection, etc.? We used the same set of servers: after testing version O, we reinstalled the system and then tested version S. Version O was deployed first, with Ubuntu 16.04 as the operating system. Then version S was deployed, on Ubuntu 18.04. The port_security function is disabled. But in the S version environment, a lot of flows are added on br-int. Does this slow down the creation of VMs?
The flows are as follows (for port qvo83b4285e-b5):
cookie=0x9ca1d4c6ecdcb31f, duration=257335.826s, table=0, n_packets=0, n_bytes=0, priority=10,icmp6,in_port="qvo83b4285e-b5",icmp_type=136 actions=resubmit(,24)
cookie=0x9ca1d4c6ecdcb31f, duration=257335.823s, table=0, n_packets=19, n_bytes=798, priority=10,arp,in_port="qvo83b4285e-b5" actions=resubmit(,24)
cookie=0x9ca1d4c6ecdcb31f, duration=257335.831s, table=0, n_packets=95, n_bytes=10680, priority=9,in_port="qvo83b4285e-b5" actions=resubmit(,25)
cookie=0x9ca1d4c6ecdcb31f, duration=257335.829s, table=24, n_packets=0, n_bytes=0, priority=2,icmp6,in_port="qvo83b4285e-b5",icmp_type=136,nd_target=fe80::f816:3eff:fe39:1601 actions=resubmit(,60)
cookie=0x9ca1d4c6ecdcb31f, duration=257335.826s, table=24, n_packets=19, n_bytes=798, priority=2,arp,in_port="qvo83b4285e-b5",arp_spa=30.0.1.180 actions=resubmit(,25)
cookie=0x9ca1d4c6ecdcb31f, duration=257335.841s, table=25, n_packets=114, n_bytes=11478, priority=2,in_port="qvo83b4285e-b5",dl_src=fa:16:3e:39:16:01 actions=resubmit(,60)
Regards, Cuihx -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon Sep 2 08:59:25 2019 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 2 Sep 2019 10:59:25 +0200 Subject: [oslo] Proposing Gabriele Santomaggio as oslo.messaging core In-Reply-To: References: Message-ID: +1 for me! Welcome on board Gabriele! Sorry for my late response, I was on PTO... On Wed, Aug 21, 2019 at 9:50 PM, Doug Hellmann wrote: > > > > On Aug 21, 2019, at 10:25 AM, Ben Nemec wrote: > > > > Hello Norsk, > > > > It is my pleasure to propose Gabriele Santomaggio (gsantomaggio) as a > new member of the oslo.messaging core team. He has been contributing to the > project for about a cycle now and has gotten up to speed on our development > practices. Oh, and he wrote the book on RabbitMQ[0]. :-) > > > > Obviously we think he'd make a good addition to the core team. If there > are no objections, I'll make that happen in a week. > > > > Thanks. > > > > -Ben > > > > 0: http://shop.oreilly.com/product/9781849516501.do > > > > +1 > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From weslepeng at gmail.com Mon Sep 2 09:02:12 2019 From: weslepeng at gmail.com (Wesley Peng) Date: Mon, 2 Sep 2019 17:02:12 +0800 Subject: [QA][nova][Concurrent performance] In-Reply-To: <1c7c5b6c.a53d.16cf12ef0ef.Coremail.chx769467092@163.com> References: <40b37395.8d74.16cf0ede539.Coremail.chx769467092@163.com> <1c7c5b6c.a53d.16cf12ef0ef.Coremail.chx769467092@163.com> Message-ID: <08274916-caba-4348-ed1e-4bd04c1f463d@gmail.com> Hi on 2019/9/2 16:54, 崔恒香 wrote: > The port_security function is disabled. But in the S version environment, a lot of flows are added on br-int. Does this slow down the creation of VMs? Why are these flows added on S but not on O? Too many flows will certainly drag performance down. regards.
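A quick way to quantify this, assuming br-int is the integration bridge on both deployments, is to count the flows per table on a comparable compute node under each release after booting the same workload; a rough sketch using standard Open vSwitch tooling:

    # total flow count on the integration bridge (includes one header line)
    ovs-ofctl dump-flows br-int | wc -l
    # per-table breakdown, to see which tables grew between Ocata and Stein
    ovs-ofctl dump-flows br-int | grep -o 'table=[0-9]*' | sort | uniq -c

Comparing the two breakdowns should show whether the extra per-port entries (for example the table 24/25 rules above, which look like ARP/MAC anti-spoofing flows) account for the slower boots, or whether the time is being spent elsewhere, such as in the API or scheduler.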
From mbultel at redhat.com Mon Sep 2 09:31:53 2019 From: mbultel at redhat.com (Mathieu Bultel) Date: Mon, 2 Sep 2019 05:31:53 -0400 (EDT) Subject: [tripleo] Proposing Damien Ciabrini as core on TripleO/HA In-Reply-To: References: <20190830122850.GA5248@holtby> <93c43263-2d57-ad0f-bc17-0b0620053a5b@redhat.com> Message-ID: <197210146.10371673.1567416713833.JavaMail.zimbra@redhat.com> +1 :) ----- Original Message ----- From: Carlos Camacho Gonzalez To: Cédric Jeanneret Cc: OpenStack Discuss Sent: Mon, 02 Sep 2019 03:34:45 -0400 (EDT) Subject: Re: [tripleo] Proposing Damien Ciabrini as core on TripleO/HA +1 On Mon, Sep 2, 2019 at 8:36 AM Cédric Jeanneret wrote: > > Of course +1 ! > > On 8/30/19 2:28 PM, Michele Baldessari wrote: > > Hi all, > > > > Damien (dciabrin on IRC) has always been very active in all HA things in > > TripleO and I think it is overdue for him to have core rights on this > > topic. So I'd like to propose to give him core permissions on any > > HA-related code in TripleO. > > > > Please vote here and in a week or two we can then act on this. > > > > Thanks, > > > > -- > Cédric Jeanneret (He/Him/His) > Software Engineer - OpenStack Platform > Red Hat EMEA > https://www.redhat.com/ > From zhangbailin at inspur.com Mon Sep 2 09:32:03 2019 From: zhangbailin at inspur.com (Brin Zhang(张百林)) Date: Mon, 2 Sep 2019 09:32:03 +0000 Subject: [QA][nova][Concurrent performance] Message-ID: <11d8b9b3b3534717ba2c1008aa4f56bf@inspur.com> Hi > on 2019/9/2 16:54, 崔恒香 wrote: > The port_security function is disabled. But in the S version environment, > a lot of flows are added on br-int. Does this slow down the creation of VMs? > Why are these flows added on S but not on O? True, if you want to compare the performance of Stein and Ocata, you should keep their configurations the same, otherwise it's not representative. > Too many flows will certainly drag performance down. @崔恒香 I think you can set the configuration item "vif_plugging_is_fatal=False" [1] in nova-compute.conf, and then compare the performance of the Ocata and Stein versions by creating the same quantity of servers. You can combine it with the configuration item "vif_plugging_timeout" [2]. This way, you can verify whether server creation is slow because of the network. [1] https://docs.openstack.org/nova/stein/configuration/config.html#DEFAULT.vif_plugging_is_fatal [2] https://docs.openstack.org/nova/stein/configuration/config.html#DEFAULT.vif_plugging_timeout > regards. Brin Zhang From ianyrchoi at gmail.com Mon Sep 2 09:45:53 2019 From: ianyrchoi at gmail.com (Ian Y. Choi) Date: Mon, 2 Sep 2019 18:45:53 +0900 Subject: [I18n] PTL Non-Candidacy Message-ID: <76659038-d673-3bc0-e97c-817cccdbf36a@gmail.com> Hello I18n team, As announced in [1], the U-cycle (Ussuri) PTL nomination period is now open, and less than 2 days are left. I will not run for I18n PTL for the U cycle because I am now serving as a PTL/TC election official and the role conflicts with running for PTL [2]. I ran for election official to better understand the OpenStack ecosystem and interact more with community members, not to leave the I18n team. I will still stay around the I18n team, but it would be so great if someone runs for I18n PTL in the upcoming cycle. Also, I shared the current status of the I18n team last July [3]. If you have not read this, please read it for more information.
Note that I will be at Shanghai Summit + PTG for more I18n sessions as an official team succeeding from my current PTL activities - please prioritize to participate in upcoming Summit + PTG with any kind of ways - both online and offline participation should be definitely fine. With many thanks, /Ian [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008679.html [2] https://wiki.openstack.org/wiki/Election_Officiating_Guidelines [3] http://lists.openstack.org/pipermail/openstack-i18n/2019-July/003439.html From a.settle at outlook.com Mon Sep 2 10:41:54 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Mon, 2 Sep 2019 10:41:54 +0000 Subject: [all][tc] PDF Community Goal Update In-Reply-To: References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> Message-ID: On Mon, 2019-09-02 at 16:07 +0900, Akihiro Motoki wrote: > On Mon, Sep 2, 2019 at 1:12 PM Thomas Bechtold > wrote: > > > > Hi, > > > > On 8/27/19 7:58 AM, Akihiro Motoki wrote: > > > > [...] > > > > > How to get started > > > ------------------ > > > > > > - "How to get started" section in the PDF goal etherpad [1] > > > explains > > > the minimum steps. > > > You can find useful examples there too. > > > > This is a bit confusing because the goal[1] mentions that there > > should > > be no extra tox target declared for the gate job. > > But the etherpad explains that there should be a new tox target[2]. > > > > So do we need a new tox target in the project repo? Or is that > > optional > > and just for local testing? > > The new tox target in the project repo is required now. > The PDF doc will be generated only when the "pdf-docs" tox target > does exists. > > When the goal is defined the docs team thought the doc gate job can > handle the PDF build > without extra tox env and zuul job configuration. However, during > implementing the zuul job support > it turns out at least a new tox env or an extra zuul job > configuration > is required in each project > to make the docs job fail when PDF build failure is detected. As a > result, we changes the approach > and the new tox target is now required in each project repo. > > Perhaps we need to update the description of the goal definition > document. This is something I can propose. I will update here when I have updated. Thanks, Alex > > Thanks, > Akihiro > > > > > Cheers, > > > > Tom > > > > [1] > > https://governance.openstack.org/tc/goals/selected/train/pdf-doc-ge > > neration.html#completion-criteria > > [2] https://etherpad.openstack.org/p/train-pdf-support-goal > > > > > - To build PDF docs locally, you need to install LaTex related > > > packages. See "To test locally" in the etherpad [1]. > > > - If you hit problems during PDF build, check the common problems > > > etherpad [2]. We are collecting knowledges there. > > > - If you have questions, feel free to ask #openstack-doc IRC > > > channel. > > > > > > Also Please sign up your name to "Project volunteers" in [1]. 
> > > > > > Useful links > > > ------------ > > > > > > [1] https://etherpad.openstack.org/p/train-pdf-support-goal > > > [2] https://etherpad.openstack.org/p/pdf-goal-train-common-proble > > > ms > > > [3] Ongoing reviews: > > > https://review.opendev.org/#/q/topic:build-pdf-docs+(status:open+ > > > OR+status:merged) > > > > > > Thanks, > > > Akihiro Motoki (amotoki) > > > > > > > > -- Alexandra Settle IRC: asettle From a.settle at outlook.com Mon Sep 2 10:45:26 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Mon, 2 Sep 2019 10:45:26 +0000 Subject: not running for the TC this term In-Reply-To: <96E71EEE-9BE9-45BC-B302-47CC32D51A41@doughellmann.com> References: <96E71EEE-9BE9-45BC-B302-47CC32D51A41@doughellmann.com> Message-ID: On Mon, 2019-08-26 at 08:57 -0400, Doug Hellmann wrote: > Since nominations open this week, I wanted to go ahead and let you > all know that I will not be seeking re-election to the Technical > Committee this term. This is some very sad, but unsurprising news. > > My role within Red Hat has been changing over the last year, and > while I am still working on projects related to OpenStack it is no > longer my sole focus. I will still be around, but it is better for me > to make room on the TC for someone with more time to devote to it. Good luck with your exciting new role! > > It’s hard to believe it has been 6 years since I first joined the > Technical Committee. So much has happened in our community in that > time, and I want to thank all of you for the trust you have placed in > me through it all. It has been an honor to serve and help build the > community. Thank you for all the support and energy you have put into this community, the projects, and the people. Your level-headed approach to dealing with key issues, tough conversations, and difficult decisions have had a massive affect on me and influenced so many projects and people for the better. Thank you for all that you've done - and good luck with everything that is to come. Cheers, Alex > > Thank you, > Doug > > -- Alexandra Settle IRC: asettle From a.settle at outlook.com Mon Sep 2 10:46:04 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Mon, 2 Sep 2019 10:46:04 +0000 Subject: [tc][elections] Not running for reelection to TC this term In-Reply-To: <20190826131938.ouya5phflfzqoexn@yuggoth.org> References: <20190826131938.ouya5phflfzqoexn@yuggoth.org> Message-ID: On Mon, 2019-08-26 at 13:19 +0000, Jeremy Stanley wrote: > I've been on the OpenStack Technical Committee continuously for > several years, and would like to take this opportunity to thank > everyone in the community for their support and for the honor of > being chosen to represent you. I plan to continue participating in > the community, including in TC-led activities, but am stepping back > from reelection this round for a couple of reasons. > > First, I want to provide others with an opportunity to serve our > community on the TC. I hope that by standing aside for now, others > will be encouraged to run. A regular influx of fresh opinions helps > us maintain the requisite level of diversity to engage in productive > debate. > > Second, the scheduling circumstances for this election, with the TC > and PTL activities combined, will be a bit more complicated for our > election officials. I'd prefer to stay engaged in officiating so > that we can ensure it goes as smoothly for everyone a possible. To > do this without risking a conflict of interest, I need to not be > running for office. 
> > It's quite possible I'll run again in 6 months, but for now I'm > planning to help behind the scenes instead. Best of luck to all who > decide to run for election to any of our leadership roles! So sad to hear this! But I'm glad to see you'll be around, regardless. You have been a fantastic source of knowledge and guidance. Thank you, -- Alexandra Settle IRC: asettle From a.settle at outlook.com Mon Sep 2 10:46:42 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Mon, 2 Sep 2019 10:46:42 +0000 Subject: [tc][elections] Not seeking re-election to TC In-Reply-To: References: Message-ID: On Mon, 2019-08-26 at 12:22 -0400, Julia Kreger wrote: > Greetings everyone, > > I wanted to officially let everyone know that I will not be running > for re-election to the TC. :( > > I have enjoyed serving on the TC for the past two years. Due to some > changes in my personal and professional lives, It will not be > possible > for me to serve during this next term. Totally understandable! All the best and no doubt you'll still be around :) thank you for all your help and guidance over your term. > > Thanks everyone! > > -Julia > -- Alexandra Settle IRC: asettle From a.settle at outlook.com Mon Sep 2 10:47:16 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Mon, 2 Sep 2019 10:47:16 +0000 Subject: [tc] not seeking reelection In-Reply-To: References: Message-ID: On Tue, 2019-08-27 at 09:59 -0500, Lance Bragstad wrote: Hi all, Now that the nomination period is open for TC candidates, I'd like to say that I won't be running for a second term on the TC. Sad news! My time on the TC has enriched my understanding of open-source communities and I appreciate all the time people put into helping me get up-to-speed. I wish the best of luck to folks putting their hat in the ring this week! And you were so good at it! Thank you for everything, it has always been so much fun working with you :) Thanks all, Lance -- Alexandra Settle > IRC: asettle -------------- next part -------------- An HTML attachment was scrubbed... URL: From frode.nordahl at canonical.com Mon Sep 2 11:24:04 2019 From: frode.nordahl at canonical.com (Frode Nordahl) Date: Mon, 2 Sep 2019 13:24:04 +0200 Subject: [charms] Ussuri Cycle PTL Candidacy Message-ID: Hello all, I would like to announce my candidacy as PTL for the OpenStack Charms project for the Ussuri cycle. The project has made great progress in the Train cycle under James's capable leadership. Some examples are; further Python3 stabilization and Python2 dependency removal, multi-model support was implemented in the Zaza functional test framework, improvements were made to SSH host key handling for our charmed deployment of Nova, Neutron DVR support was improved, actions to help handle cold start of a Percona Cluster and a tool for retrofitting existing cloud images for use as Octavia Amphora was implemented. We also provided preview charmed support for Masakari. For the Ussuri cycle we look to further improve existing features as well as implement new charm features and new charms. Over several cycles we have developed a good framework for our reactive charm development of OpenStack related charms. This framework has also been adopted for use by non-OpenStack components. I think it is worth taking some time to analyze which building blocks attract non-OpenStack components, and perhaps move the generally applicable parts of the framework down a layer to make it available for general consumption. 
In the spirit of Open Development this will provide us with benefits we can reap for OpenStack in the long term. Cheers, -- Frode Nordahl -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Mon Sep 2 12:39:09 2019 From: katonalala at gmail.com (Lajos Katona) Date: Mon, 2 Sep 2019 14:39:09 +0200 Subject: [neutron] Bug Deputy August 26 - September 01 Message-ID: Hi, Here is the bug deputy report for the week August 26 - September 01: As far as I see only one is without assignee (#1842150 ) - Medium - [L2][OVS] add accepted egress fdb flows (#1841622): assigned - DHCP port information incomplete during the DHCP port setup (#1841636): assigned / in progress - Pyroute2 netns.ns_pids() will fail if during the function loop, one namespace is deleted (#1841753) In progress - Make the MTU attribute not nullable (#1842261) assigned - Low - rootwrap sudo process goes into defunct state (#1841682) assigned - neutron_dynamic_routing.services.bgp.bgp_plugin.BgpPlugin DBError (#1841788) assigned neutron_dynamic_routing - ML2 mech driver sometimes receives network context without provider attributes in delete_network_postcommit (#1841967) in progress / assigned - Undecided - excessive SQL query fanout on port list with many trunk ports (#1842150) Trunk/neutron performance - Dupilicate: - Port-Forwarding can't be set to different protocol in the same IP and Port (#1841741) duplicate of https://bugs.launchpad.net/neutron/+bug/1799155 - Whislist - [L2] stop processing ports twice in ovs-agent (#1841865) - kolla-ansible - Neutron bootstrap failing on Ubuntu bionic with Cannot change column 'network_id (#1841907) fix released in kolla - Old bugs that reappeared: - openvswitch firewall flows cause flooding on integration bridge (#1732067): High, assigned - Invalid: - instance ingress bandwidth limiting doesn't works in ocata. (#1841700) - Vlan network with vlan_id outside of available ranges for physical network can be created always (#1842052) Regards Lajos -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Mon Sep 2 12:55:09 2019 From: pierre at stackhpc.com (Pierre Riteau) Date: Mon, 2 Sep 2019 14:55:09 +0200 Subject: [blazar][election][ptl] PTL candidacy for Ussuri Message-ID: Hi, I would like to submit my candidacy to serve as PTL of Blazar for the Ussuri release cycle. I served as PTL during the Stein and Train cycles and I am willing to continue in this role. The Train release cycle has been less active than previous ones, with core contributors being less available due to other commitments. To keep the project healthy, it is essential that we grow our community further. As an example, we started running an additional IRC meeting in a timezone compatible with the Americas, which has proved helpful with getting more people involved in the community. I would like to continue this effort in the upcoming cycle, release all the new features that are currently in progress, and work together to fix the main issues that are blocking further adoption of Blazar. Thank you for your support, Pierre Riteau (priteau) From james.page at canonical.com Mon Sep 2 13:27:27 2019 From: james.page at canonical.com (James Page) Date: Mon, 2 Sep 2019 14:27:27 +0100 Subject: [charms][election][ptl] PTL non-candidacy for Ussuri Message-ID: Hi All As Frode has kindly offered to pickup the PTL role this cycle I won't be putting myself forward as a PTL candidate for OpenStack Charms for the Ussuri release cycle. 
Thanks James -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmendra.kushwaha at india.nec.com Mon Sep 2 14:37:14 2019 From: dharmendra.kushwaha at india.nec.com (Dharmendra Kushwaha) Date: Mon, 2 Sep 2019 14:37:14 +0000 Subject: [tacker][election][ptl] PTL candidacy for Ussuri Message-ID: Hello Everyone, I would like to announce my candidacy again for the Tacker PTL role for the Ussuri cycle. I am Dharmendra Kushwaha, known as dkushwaha on IRC, and an active member of the Tacker community since the Mitaka release. I have served as Tacker PTL for the last two cycles. I would like to thank all who supported Tacker with their contributions in the Train cycle. It has been a great experience for me to work on the Tacker project with a very supportive team of contributors. In the Train cycle we planned limited feature-level activities and focused more on project stability, bug fixes & code coverage. The team is working on some rich features like VNF packages support, resource force delete, enhancements in containerized VNFs, and a couple of other improvements. Along with daily Tacker activities, my priorities for Tacker in the U cycle will be: * Tacker CI/CD Improvement: - Focus to introduce more functional and scenario tests. * Tacker stability & production readiness: - Focus to have more error-handling and significant logging. - More user friendly documentation. * More towards NFV-MANO rich features: - Make Tacker more ETSI compatible. * More enhancements in the VNF Forwarding Graph area. * More work on container based VNFs. You can find my complete contributions here: http://stackalytics.com/?release=all&project_type=all&metric=commits&user_id=dharmendra-kushwaha Thanks for reading and considering my candidacy. Thanks & Regards Dharmendra Kushwaha IRC: dkushwaha ________________________________ The contents of this e-mail and any attachment(s) are confidential and intended for the named recipient(s) only. It shall not attach any liability on the originator or NECTI or its affiliates. Any views or opinions presented in this email are solely those of the author and may not necessarily reflect the opinions of NECTI or its affiliates. Any form of reproduction, dissemination, copying, disclosure, modification, distribution and / or publication of this message without the prior written consent of the author of this e-mail is strictly prohibited. If you have received this email in error please delete it and notify the sender immediately. From a.settle at outlook.com Mon Sep 2 14:41:43 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Mon, 2 Sep 2019 14:41:43 +0000 Subject: [ptl] [docs] [election] PTL Candidacy for Ussuri Message-ID: Hey all, I would like to submit my candidacy for the documentation team's PTL for the Ussuri cycle. Stephen Finucane (Train PTL) will be unofficially serving alongside me in a co-PTL capacity so we can equally address documentation-related tasks and discussions. I served as the documentation PTL for Pike, and am currently serving as an elected member of the Technical Committee in the capacity of vice chair. I have been a part of the community since the beginning of 2014, and have seen the highs and the lows and continue to love working for and with this community. The definition of documentation for OpenStack has been rapidly changing and the future of the documentation team continues to evolve and change. I would like the opportunity to help guide the documentation team, and potentially finish what myself, Petr, Stephen and many others have started and carried on.
Thanks, Alex -- Alexandra Settle IRC: asettle From gmann at ghanshyammann.com Mon Sep 2 14:52:46 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 02 Sep 2019 23:52:46 +0900 Subject: [election][tc] TC Candidacy Message-ID: <16cf276c89c.ac4c7ff3132284.441171358608330433@ghanshyammann.com> Hi All, I would like to announce my candidacy for an OpenStack Technical Committee position. First of all, thanks for giving me the opportunity to serve on the Technical Committee in the previous term. It has been a learning process for me to understand the community and its technicality in a much broader way. There are a lot of things which I targeted last year but have not finished yet. I am fortunate to work in this community, which helps me to learn a lot on a daily basis. While being a TC member, I got more opportunities to talk and work with multiple projects and awesome contributors. Thank you everyone for your support and hard work. Along with my QA and Nova roles, I tried to target broader, cross-project work as my TC responsibility. Migrating the OpenStack CI/CD from Xenial to Bionic, updating the Python testing, and the current ongoing community goal of IPv6 deployment and testing are the main work as part of this. Obviously it is not necessary to be a TC member to do community-wide work, but being on the TC gives more understanding and overall benefit. For those who do not know me, let me introduce myself. I joined the OpenStack community in 2012 as an operator and became a full-time upstream contributor in 2014, around the middle of the Icehouse release. I have been PTL for the QA Program since the Rocky cycle and am an active contributor in the QA projects and Nova. Also, I have been contributing to many other projects from time to time, for example to Tempest plugins for bug fixes and Tempest compatibility changes. Along with that, I am actively involved in programs helping new contributors in OpenStack. 1. As a mentor in the Upstream Institute Training since the Barcelona Summit (Oct 2016)[1]. 2. FirstContact SIG [2] to help new contributors to onboard in OpenStack. It's always a great experience to introduce the OpenStack upstream workflow to new contributors and encourage them to start contributing. I feel that is very much needed in OpenStack. Hosting the Upstream Training in Tokyo was a great experience. TC direction has always been valuable and helps to keep common standards in OpenStack. There is always room for improvement, and that applies to the TC as well. In the last cycle, the TC started an effort to ask the community "what they expect from the TC", but I think we did not get much feedback from the community. Still, this kind of effort is really great and I think it needs to become a practice in every cycle or year. This is my personal interest and opinion: as the TC, which is there to set and govern the technical direction and common standards in OpenStack, I think we should also participate in doing more coding. Every TC member comes from some project and contributes a lot of code there. But as the TC, let's make it a practice to do more coding for community-wide efforts: getting use cases or common problems from users and trying to fix them ourselves if no one else is there. There is no restriction on doing that currently, but making it a practice will help the community. Let me list the areas I want to work on in my second TC term (a few continue from my last term's targets and a few are new): * Share Project teams' work for Common Goals: This is very important for me as a TC member and I have tried to do this to some extent.
I helped on the OpenStack gate testing migration from Xenial to Bionic, and I am also doing the IPv6 community goal in the Train cycle. My strategy is always to do things myself if there is no one else, instead of keeping things in the backlog. I will be continuing this effort as much as possible. * Users/Operators and Developers interaction: Users and operators are the most important part of any product, and improving the interaction between users and developers is much needed for any software. I still feel we are lacking in this area. There are a few projects which get user feedback from time to time. Nova is a great example, with many users and operators engaging with developers through direct contributions, meetups, the ML, etc. There are other projects which are doing well in this area too, but there are many projects that do not have much interaction with or feedback from users. I would like to try a few ideas to improve this, not just project-wise but for OpenStack overall. * TC and Developers interaction: There has been a good amount of effort to improve the interaction between PTLs and the TC in the last couple of years. The health tracker was a good example, and now the TC liaison. I would like to extend this interaction to each developer, not just PTLs. We need some practical mechanism to have frequent discussions between the TC and developers. At this stage, I do not know how to do that, but I will be working on this in my next term. One way is to help them in terms of coding, user feedback etc. and then encourage them to take part in TC engagements. Thank you for reading and considering my candidacy. Reference: * Blogs: https://ghanshyammann.com * Review: http://stackalytics.com/?release=all&metric=marks&user_id=ghanshyammann&project_type=all * Commit: http://stackalytics.com/?release=all&metric=commits&user_id=ghanshyammann&project_type=all * Foundation Profile: https://www.openstack.org/community/members/profile/6461 * IRC (Freenode): gmann [1] https://wiki.openstack.org/wiki/OpenStack_Upstream_Institute_Occasions https://wiki.openstack.org/wiki/OpenStack_Upstream_Institute [2] https://wiki.openstack.org/wiki/First_Contact_SIG - Ghanshyam Mann (gmann) From gkotton at vmware.com Mon Sep 2 14:57:09 2019 From: gkotton at vmware.com (Gary Kotton) Date: Mon, 2 Sep 2019 14:57:09 +0000 Subject: [QA][nova][Concurrent performance] In-Reply-To: References: <40b37395.8d74.16cf0ede539.Coremail.chx769467092@163.com>, Message-ID: Hi, When we did our testing a few months ago we saw the same thing with the degradation of the performance. Keystone was a notable bottleneck. Thanks Gary ________________________________ From: Wesley Peng Sent: Monday, September 2, 2019 10:58 AM To: openstack-discuss at lists.openstack.org Subject: Re: [QA][nova][Concurrent performance] Hi on 2019/9/2 15:43, 崔恒香 wrote: > Why the performance of Stein concurrent creation of VM is not as good as > that of Ocata? > Create 250 VMs concurrently, the O version only needs 160s, but the S > version needs 250s. OpenStack's execution environment is complicated. It's hard to say O is faster than S from the architecture viewpoint. Are you sure the two systems' running environments are totally the same for comparison, including the software, hardware, network connection, etc.? regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.page at canonical.com Mon Sep 2 15:51:48 2019 From: james.page at canonical.com (James Page) Date: Mon, 2 Sep 2019 16:51:48 +0100 Subject: [neutron][networking-infoblox] current status?
Message-ID: Hi networking-infoblox developers What's the current status of this driver for Neutron? I've been working on packaging for Ubuntu today and don't see a release for Stein as well as a few reviews in gerrit that have been open for a while with no activity so I was wondering whether this project still had focus from Infoblox. Cheers James -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Sep 2 17:08:57 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 2 Sep 2019 17:08:57 +0000 Subject: [all][elections][ptl][tc] Conbined PTL/TC Nominations Last Days Message-ID: <20190902170857.hrtnmn3yrcvq543i@yuggoth.org> A quick reminder that we are in the last hours for declaring PTL and TC candidacies. Nominations are open until Sep 03, 2019 23:45 UTC. If you want to stand for election, don't delay, follow the instructions at [1] to make sure the community knows your intentions. Make sure your nomination has been submitted to the openstack/election repository and approved by election officials. Election statistics[2]: Nominations started @ 2019-08-27 23:45:00 UTC Nominations end @ 2019-09-03 23:45:00 UTC Nominations duration : 7 days, 0:00:00 Nominations remaining : 1 day, 6:41:51 Nominations progress : 81.73% --------------------------------------------------- Projects[1] : 63 Projects with candidates : 39 ( 61.90%) Projects with election : 0 ( 0.00%) --------------------------------------------------- Need election : 0 () Need appointment : 24 (Adjutant Cyborg Designate Freezer Horizon I18n Infrastructure Loci Manila Monasca Nova Octavia OpenStackAnsible OpenStackSDK OpenStack_Helm Oslo Placement PowerVMStackers Rally Release_Management Requirements Telemetry Winstackers Zun) =================================================== Stats gathered @ 2019-09-02 17:03:09 UTC This means that with approximately one day left, 24 projects will be deemed leaderless. In this case the TC will oversee PTL selection as described by [3]. We also need at least three more candidates to fill the six open seats on the OpenStack Technical committee. Thank you, [1] https://governance.openstack.org/election/#how-to-submit-a-candidacy [2] Any open reviews at https://review.openstack.org/#/q/is:open+project:openstack/election have not been factored into these stats. [3] https://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html -- Jeremy Stanley, on behalf of the OpenStack Technical Election Officials -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From e0ne at e0ne.info Mon Sep 2 18:22:37 2019 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Mon, 2 Sep 2019 21:22:37 +0300 Subject: [Horizon] [stable] Adding Radomir Dopieralski to horizon-stable-maint In-Reply-To: References: Message-ID: Almost two weeks passed without any objections. I would like to ask Stable team to add Rodomir to the horizon-stable-maint group. Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Tue, Aug 20, 2019 at 4:28 PM Ivan Kolodyazhny wrote: > Hi team, > > I'd like to propose adding Radomir Dopieralski to the horizon-stable-maint > team. > He's doing good quality reviews for stable branches [1] on a regular basis > and > I think Radomir will be a good member of our small group. 
> > [1] > https://review.opendev.org/#/q/reviewer:openstack%2540sheep.art.pl+NOT+branch:master > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From e0ne at e0ne.info Mon Sep 2 18:30:11 2019 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Mon, 2 Sep 2019 21:30:11 +0300 Subject: [horizon][ptl][election] PTL Non-Candidacy Message-ID: Hi team, It has been a pleasure and a big honour to lead Horizon team for the last three cycles. Beeing a PTL is a hard full-time job. Unfortunately, my job priorities changed and I'm not feeling that I could spend enough time as a PTL for the next cycle. There are a lot of things to be done in Horizon which we started and planned. I'm not going away from the community and will continue to contribute to the project. I'm pretty sure that with a new PTL we'll have a good time to work on at least during the next U cycle. Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Sep 2 19:31:20 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 2 Sep 2019 15:31:20 -0400 Subject: [all][tc] PDF Community Goal Update In-Reply-To: References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> Message-ID: > On Sep 2, 2019, at 3:07 AM, Akihiro Motoki wrote: > > On Mon, Sep 2, 2019 at 1:12 PM Thomas Bechtold wrote: >> >> Hi, >> >> On 8/27/19 7:58 AM, Akihiro Motoki wrote: >> >> [...] >> >>> How to get started >>> ------------------ >>> >>> - "How to get started" section in the PDF goal etherpad [1] explains >>> the minimum steps. >>> You can find useful examples there too. >> >> This is a bit confusing because the goal[1] mentions that there should >> be no extra tox target declared for the gate job. >> But the etherpad explains that there should be a new tox target[2]. >> >> So do we need a new tox target in the project repo? Or is that optional >> and just for local testing? > > The new tox target in the project repo is required now. > The PDF doc will be generated only when the "pdf-docs" tox target does exists. > > When the goal is defined the docs team thought the doc gate job can > handle the PDF build > without extra tox env and zuul job configuration. However, during > implementing the zuul job support > it turns out at least a new tox env or an extra zuul job configuration > is required in each project > to make the docs job fail when PDF build failure is detected. As a > result, we changes the approach > and the new tox target is now required in each project repo. The whole point of structuring the goal the way we did was that we do not want to update every single repo this cycle so we could roll out PDF building transparently. We said we would allow the job to pass even if the PDF build failed, because this was phase 1 of making all of this work. The plan was to 1. extend the current job to make PDF building optional; 2. examine the results to see how many repos need significant work; 3. add a feature flag via a setting somewhere in the repo to control whether the job fails if PDFs cannot be built. That avoids a second doc job running in parallel, and still allows us to roll out the PDF build requirement over time when we have enough information to do so. > > Perhaps we need to update the description of the goal definition document. I don’t think it’s a good idea to change the scope of the goal at this point in the release cycle. 
> > Thanks, Akihiro > >> >> Cheers, >> >> Tom >> >> [1] >> https://governance.openstack.org/tc/goals/selected/train/pdf-doc-generation.html#completion-criteria >> [2] https://etherpad.openstack.org/p/train-pdf-support-goal >> >>> - To build PDF docs locally, you need to install LaTex related >>> packages. See "To test locally" in the etherpad [1]. >>> - If you hit problems during PDF build, check the common problems >>> etherpad [2]. We are collecting knowledges there. >>> - If you have questions, feel free to ask #openstack-doc IRC channel. >>> >>> Also Please sign up your name to "Project volunteers" in [1]. >>> >>> Useful links >>> ------------ >>> >>> [1] https://etherpad.openstack.org/p/train-pdf-support-goal >>> [2] https://etherpad.openstack.org/p/pdf-goal-train-common-problems >>> [3] Ongoing reviews: >>> https://review.opendev.org/#/q/topic:build-pdf-docs+(status:open+OR+status:merged) >>> >>> Thanks, >>> Akihiro Motoki (amotoki) From mthode at mthode.org Mon Sep 2 21:54:44 2019 From: mthode at mthode.org (Matthew Thode) Date: Mon, 2 Sep 2019 16:54:44 -0500 Subject: [requirements][election][ptl] PTL candidacy for Ussuri Message-ID: <20190902215444.v3klpjxzyvfvqnxk@mthode.org> I would like to announce my candidacy for PTL of the Requirements project for the Ussuri cycle. The following will be my goals for the cycle, in order of importance: 1. The primary goal is to keep a tight rein on global-requirements and upper-constraints updates. (Keep things working well) 2. Un-cap requirements where possible (stuff like cmd2). 3. Publish constraints and requirements to streamline the freeze process. 4. Audit global-requirements and upper-constraints for redundancies. One of the rules we have for new entrants to global-requirements and/or upper-constraints is that they be non-redundant. Keeping that rule in mind, audit the list of requirements for possible redundancies and if possible, reduce the number of requirements we manage. JSON libs are on the short list this go around. I look forward to continuing to work with you in this cycle, as your PTL or not. Thanks for your time, Matthew Thode IRC: prometheanfire -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From zhangbailin at inspur.com Tue Sep 3 01:54:43 2019 From: zhangbailin at inspur.com (Brin Zhang(张百林)) Date: Tue, 3 Sep 2019 01:54:43 +0000 Subject: re: [lists.openstack.org代发][election][tc] TC Candidacy Message-ID: <1822cfc26fe14eddb46b318af8e694af@inspur.com> I agree with gmann's campaign for the TC. He has always been concerned about the work of the OpenStack community (mainly Nova) and has provided help to contributors (including me) on different projects in the community. I think that serving on the TC will make him even more broad-minded, and the TC needs more people who contribute to the community like him. Brin From kevin at cloudnull.com Tue Sep 3 02:17:09 2019 From: kevin at cloudnull.com (Carter, Kevin) Date: Mon, 2 Sep 2019 21:17:09 -0500 Subject: [election][tc] Candidacy for TC Message-ID: Hello Everyone, Some of you I've known for years; others are reading this and wondering who I am. While I may not know everyone in our ever-expanding community, I'd love the opportunity to get to know more of you, and I'd be honored to represent this community as a member of the TC.
At this time, I would like to (re)introduce myself[6] and announce my candidacy for the upcoming TC election.

A bit about me and why OpenStack is where my heart is. I have had the pleasure of working with OpenStack since 2012 and on OpenStack since 2013. My contributions to the community have not focused on core services, so I'm sure I've never crossed paths with a lot of folks reading this. However, I've spent considerable time working on the deployment, operations, and scale components of OpenStack.

I got my start in OpenStack as an administrator of public clouds in 2012 and transitioned to Private Clouds in 2013, where I had the pleasure of joining a systems engineering and development team focused on platform operations and deployments. It was during my time in Private Cloud that I began working on the tooling that would eventually become the OpenStack-Ansible project. I was PTL of OpenStack-Ansible from its inception[2] until Liberty, and I remained a core reviewer with the project, continuing to work on every facet of the tooling, until very recently.

In 2015, at the Vancouver OpenStack summit, the simple OpenStack initiative was announced[5]. I took an active role in this effort as it aimed to promote better cross-community collaboration in a time when folks were somewhat siloed. While this particular effort never really took off, it established credibility in working with people not necessarily focused on OpenStack. In my opinion, the simple OpenStack initiative was a success as it helped pave the way for future relationships while also serving as a precursor to other efforts just now starting (e.g., the Ansible-SIG[4]).

This year, my journey charted a slightly new course. In mid-2019, I put on a Red fedora and began working on TripleO. I joined an incredible team within the Deployment Framework, and I'm looking forward to the new challenges ahead of me as I dive deeper into developing cloud tooling for the enterprise. Rest assured, I'm still working on OpenStack, I'm still trying to build a more perfect cloud, and I'm still in love with our community.

So now that you know a little bit about me, you're probably wondering why I'm running for the TC. To put it simply, I believe my experience running, building, and developing both public and private clouds puts me in a position to add a distinct voice to the TC. As a member of the TC, I would like to partner with everyone interested to help build a better, more engaged fraternity of Stackers. I also think we can do more cross-project (cross-community) collaboration. We've done some fantastic work in this space, and I'd like to take up this mantle to continue our collaborative march to success.

OpenStack has been a tremendous community to be involved with. My success as an individual is directly tied to the community, and if elected to the TC, it would be my honor to give back to the community in this new capacity. I will focus on bringing different points of view to the table. I will concentrate on collaboration. I will reach out to new and old projects alike. I will work tirelessly to assist anyone who requests my help. Finally, while it goes without saying, I will do everything I can to promote OpenStack's future success.

Thank you for your consideration.
-- Kevin Carter IRC: Cloudnull [0] https://www.stackalytics.com/?metric=commits&release=all&module=openstackansible-group&user_id=kevin.carter at rackspace.com [1] https://www.stackalytics.com/?metric=commits&release=all&module=openstackansible-group&user_id=kevin-carter [2] http://lists.openstack.org/pipermail/openstack-operators/2014-December/005683.html [3] https://www.stackalytics.com/?metric=commits&release=all&module=tripleo-group&user_id=kevin-carter [4] https://review.opendev.org/#/c/676428/1/sigs.yaml [5] https://www.ansible.com/blog/simple-openstack [6] https://www.openstack.org/community/speakers/profile/758/kevin-carter -------------- next part -------------- An HTML attachment was scrubbed... URL: From chx769467092 at 163.com Tue Sep 3 02:22:19 2019 From: chx769467092 at 163.com (=?GBK?B?tN6648/j?=) Date: Tue, 3 Sep 2019 10:22:19 +0800 (CST) Subject: [qa][nova][migrate]CPU doesn't have compatibility Message-ID: <59101c82.4fb9.16cf4ee15c0.Coremail.chx769467092@163.com> Hi This is my ERROR info(ocata): 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 5229, in check_can_live_migrate_destination 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server disk_over_commit) 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 5240, in _do_check_can_live_migrate_destination 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server block_migration, disk_over_commit) 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5670, in check_can_live_migrate_destination 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server self._compare_cpu(None, source_cpu_info, instance) 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5938, in _compare_cpu 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server raise exception.InvalidCPUInfo(reason=m % {'ret': ret, 'u': u}) 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server InvalidCPUInfo: Unacceptable CPU info: CPU doesn't have compatibility. -------------- next part -------------- An HTML attachment was scrubbed... URL: From weslepeng at gmail.com Tue Sep 3 02:30:54 2019 From: weslepeng at gmail.com (Wesley Peng) Date: Tue, 3 Sep 2019 10:30:54 +0800 Subject: [qa][nova][migrate]CPU doesn't have compatibility In-Reply-To: <59101c82.4fb9.16cf4ee15c0.Coremail.chx769467092@163.com> References: <59101c82.4fb9.16cf4ee15c0.Coremail.chx769467092@163.com> Message-ID: <45468d23-f9c2-bfd6-2021-7129db8afc07@gmail.com> on 2019/9/3 10:22, 崔恒香 wrote: > 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server  raise > exception.InvalidCPUInfo(reason=m % {'ret': ret, 'u': u}) > 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server > InvalidCPUInfo: Unacceptable CPU info: CPU doesn't have compatibility. Are you implementing a live migration? It seems you have a uncompatible CPU in peer host. regards. 
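When the hosts genuinely expose different CPU generations, one common workaround (a sketch, not something proposed in this thread) is to stop advertising the newer host's full feature set to guests by pinning a baseline CPU model in the [libvirt] section of nova.conf on every compute node. The model name below is only an example and must be one that every hypervisor in the group actually supports:

    # inspect what each hypervisor actually advertises
    virsh capabilities | grep -A 20 '<cpu>'

    # /etc/nova/nova.conf on all compute nodes (restart nova-compute afterwards)
    [libvirt]
    cpu_mode = custom
    cpu_model = SandyBridge

Guests started (or hard-rebooted) after the change pick up the baseline model and can then migrate in either direction, at the cost of hiding the newer host's extra CPU flags.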
From chx769467092 at 163.com Tue Sep 3 03:36:39 2019 From: chx769467092 at 163.com (=?GBK?B?tN6648/j?=) Date: Tue, 3 Sep 2019 11:36:39 +0800 (CST) Subject: [qa][nova][migrate]CPU doesn't have compatibility In-Reply-To: <45468d23-f9c2-bfd6-2021-7129db8afc07@gmail.com> References: <59101c82.4fb9.16cf4ee15c0.Coremail.chx769467092@163.com> <45468d23-f9c2-bfd6-2021-7129db8afc07@gmail.com> Message-ID: <4d965462.63d8.16cf53223a7.Coremail.chx769467092@163.com> At 2019-09-03 10:30:54, "Wesley Peng" wrote: > > >on 2019/9/3 10:22, 崔恒香 wrote: >> 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server raise >> exception.InvalidCPUInfo(reason=m % {'ret': ret, 'u': u}) >> 2019-09-03 10:00:46.518 25163 ERROR oslo_messaging.rpc.server >> InvalidCPUInfo: Unacceptable CPU info: CPU doesn't have compatibility. > >Are you implementing a live migration? It seems you have a uncompatible >CPU in peer host. > Yes, It's a live migration. root at compute101:~# cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c 24 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz root at compute102:~# cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c 24 Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz We can migrate the vm from compute102 to compute101 Successfully. compute101 to compute102 ERROR info: the CPU is incompatible with host CPU: Host CPU does not provide required features: f16c, rdrand, fsgsbase, smep, erms -------------- next part -------------- An HTML attachment was scrubbed... URL: From weslepeng at gmail.com Tue Sep 3 03:40:34 2019 From: weslepeng at gmail.com (Wesley Peng) Date: Tue, 3 Sep 2019 11:40:34 +0800 Subject: [qa][nova][migrate]CPU doesn't have compatibility In-Reply-To: <4d965462.63d8.16cf53223a7.Coremail.chx769467092@163.com> References: <59101c82.4fb9.16cf4ee15c0.Coremail.chx769467092@163.com> <45468d23-f9c2-bfd6-2021-7129db8afc07@gmail.com> <4d965462.63d8.16cf53223a7.Coremail.chx769467092@163.com> Message-ID: on 2019/9/3 11:36, 崔恒香 wrote: > We can migrate the vm from compute102 to compute101 Successfully. > compute101 to compute102 ERROR info: the CPU is incompatible with host > CPU: Host CPU does not provide required features: f16c, rdrand, > fsgsbase, smep, erms > > The error has said, cpu of compute101 is lower than compute102, some incompatible issues happened. To live migrate, you'd better have all hosts with the same hardwares, including cpu/mem/disk etc. regards. From rony.khan at brilliant.com.bd Tue Sep 3 06:19:17 2019 From: rony.khan at brilliant.com.bd (Md. farhad Hasan Khan) Date: Tue, 3 Sep 2019 12:19:17 +0600 Subject: Openstack IPv6 neutron confiuraton In-Reply-To: <024201d54c1c$391aa360$ab4fea20$@brilliant.com.bd> References: <57C0039B-67D9-4699-B642-70C9EF7AB733@redhat.com> <62-5d43f000-5-50968180@101299267> <024201d54c1c$391aa360$ab4fea20$@brilliant.com.bd> Message-ID: <09f901d5621f$83ea0d40$8bbe27c0$@brilliant.com.bd> Hi Jens, Thanks for your nice documentation. Thanks & B'Rgds, Farhad -----Original Message----- From: Md. Farhad Hasan Khan [mailto:rony.khan at brilliant.com.bd] Sent: Tuesday, August 6, 2019 12:00 PM To: 'Core System' Subject: FW: Openstack IPv6 neutron confiuraton -----Original Message----- From: Jens Harbott [mailto:frickler at x-ion.de] Sent: Friday, August 2, 2019 2:10 PM To: Slawek Kaplonski Cc: rony.khan at brilliant.com.bd; OpenStack Discuss Subject: Re: Openstack IPv6 neutron confiuraton On Friday, August 02, 2019 09:16 CEST, Slawek Kaplonski wrote: > Hi, > > In tenant networks IPv6 packets are going same way as IPv4 packets. 
> There is no differences between IPv4 and IPv6 AFAIK. > In https://docs.openstack.org/neutron/latest/admin/deploy-ovs.html You > can find some deployment examples and explanation when ovs mechanism > driver is used and in > https://docs.openstack.org/neutron/latest/admin/deploy-lb.html > there is similar doc for linuxbridge driver. For private networking this is true, if you want public connectivity with IPv6, you need to be aware that there is no SNAT and no floating IPs with IPv6. Instead you need to assign globally routable IPv6 addresses directly to your tenant subnets and use address-scopes plus neutron-dynamic-routing in order to make sure that these addresses get indeed routed to the internet. I have written a small guide how to do this[1], feedback is welcome. [1] https://cloudbau.github.io/openstack/neutron/networking/ipv6/2017/09/11/neutron-pike-ipv6.html > There are differences with e.g. how DHCP is handled for IPv6. Please > check https://docs.openstack.org/neutron/latest/admin/config-ipv6.html > for details. Also noting that the good reference article at the end of this doc sadly has disappeared, though you can still find it via the web archives. See also https://review.opendev.org/674018 From missile0407 at gmail.com Tue Sep 3 07:51:28 2019 From: missile0407 at gmail.com (Eddie Yen) Date: Tue, 3 Sep 2019 15:51:28 +0800 Subject: [kolla-ansible] Correct way to add/remove nodes. Message-ID: Hi, I wanna know the correct way to add/remove nodes since I can't find the completely document or tutorial about this part. Here's what I know for now. For addition: 1. Install OS and setting up network on new servers. 2. Add new server's information into /etc/hosts and inventory file 3. Do bootstrapping to these servers by using bootstrap-servers with --limit option 4. (For Ceph OSD node) Add disk label to the disks that will become OSD. 5. Deploy again. For deletion (Compute): 1. Do migration if there're VMs exist on target node. 2. Set nova-compute service down on target node. Then remove the service from nova cluster. 3. Disable all Neutron agents on target node and remove from Neutron cluster. 4. Using kolla-ansible to stop all containers on target node. 5. Cleanup all containers and left settings by using cleanup-containers and cleanup-host script. For deletion (Ceph OSD node): 1. Remove all OSDs on target node by following Ceph tutorial. 2. Using kolla-ansible to stop all containers on target node. 3. Cleanup all containers and left settings by using cleanup-containers and cleanup-host script. Now I'm not sure about Controller if there's one controller down and want to add another one into HA cluster. My thought is that add into cluster first, then delete the informations about corrupted controller. But I have no clue about the details. Only about Ceph controller (mon, rgw, mds,. etc) Does anyone has experience about this? Many thanks, Eddie. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Sep 3 08:36:30 2019 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 3 Sep 2019 10:36:30 +0200 Subject: [tc] Gradually reduce TC to 11 members over 2020 In-Reply-To: References: <18d408c7-b396-0d3f-b5d8-ae537b25e5f7@redhat.com> <20190827155742.tpleittnofdnmhe5@yuggoth.org> Message-ID: <5649a103-0bed-467b-8a0c-23a27ed56562@openstack.org> Jean-Philippe Evrard wrote: > On Tue, 2019-08-27 at 12:50 -0400, Jim Rollenhagen wrote: >> I agree even numbers are not a problem. I don't think (hope?) 
>> the existing TC would merge anything that went 7-6 anyway. > > Agreed with that. > > And because I didn't write my opinion on the topic: > - I agree on the reduction. I don't know what the sweet spot is. 9 > might be it. > - If all the candidates and the election officials are ok with reduced > seats this time, we could start doing it now. > > It seems the last point isn't obvious, so in the meantime could we > propose the plan for the reduction by proposing a governance change? To close on that: It's too late to change for the current election, however if we don't get any new candidate, then the TC would mechanically get reduced to 11 already. Based on how the election turns out, once it is over I'll propose a governance change to gradually transition to 9 or 11 members, which will affect future elections. Cheers, -- Thierry Carrez (ttx) From sfinucan at redhat.com Tue Sep 3 09:54:49 2019 From: sfinucan at redhat.com (Stephen Finucane) Date: Tue, 03 Sep 2019 10:54:49 +0100 Subject: [all][tc] PDF Community Goal Update In-Reply-To: References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> Message-ID: <9430fe6726ca53328abb588b21c1823055cdaca3.camel@redhat.com> On Mon, 2019-09-02 at 15:31 -0400, Doug Hellmann wrote: > > On Sep 2, 2019, at 3:07 AM, Akihiro Motoki wrote: [snip] > > When the goal is defined the docs team thought the doc gate job can > > handle the PDF build > > without extra tox env and zuul job configuration. However, during > > implementing the zuul job support > > it turns out at least a new tox env or an extra zuul job configuration > > is required in each project > > to make the docs job fail when PDF build failure is detected. As a > > result, we changes the approach > > and the new tox target is now required in each project repo. > > The whole point of structuring the goal the way we did was that we do > not want to update every single repo this cycle so we could roll out > PDF building transparently. We said we would allow the job to pass > even if the PDF build failed, because this was phase 1 of making all > of this work. > > The plan was to 1. extend the current job to make PDF building > optional; 2. examine the results to see how many repos need > significant work; 3. add a feature flag via a setting somewhere in > the repo to control whether the job fails if PDFs cannot be built. > That avoids a second doc job running in parallel, and still allows us > to roll out the PDF build requirement over time when we have enough > information to do so. Unfortunately when we tried to implement this we found that virtually every project we looked at required _some_ amount of tweaks just to build, let alone look sensible. This was certainly true of the big service projects (nova, neutron, cinder, ...) which all ran afoul of a bug [1] in the Sphinx LaTeX builder. Given the issues with previous approach, such as the inability to easily reproduce locally and the general "hackiness" of the thing, along with the fact that we now had to submit changes against projects anyway, a collective decision was made [2] to drop that plan and persue the 'pdfdocs' tox target approach. If we're concerned about the difficulty of closing this out this cycle, I'd be in favour of just limiting our scope. IMO, the service projects are the ones that would benefit most from PDF documentation. These are the things people actually use and they tend to have the most complete documentation. 
Libraries can be self-documenting (yes, I know), in so far as once can use introspection, existing code examples, and the 'help' built-in to piece together what information they need. We should aim to close that gap long-term, but for now requiring modifications to ensure we have _some_ PDFs sounds a lot better than requiring no modifications and having no PDFs. Cheers, Stephen [1] https://github.com/sphinx-doc/sphinx/issues/3099 [2] http://eavesdrop.openstack.org/irclogs/%23openstack-doc/%23openstack-doc.2019-08-21.log.html#t2019-08-21T13:19:01 > > Perhaps we need to update the description of the goal definition document. > > I don’t think it’s a good idea to change the scope of the goal at > this point in the release cycle. From nate.johnston at redhat.com Tue Sep 3 11:28:04 2019 From: nate.johnston at redhat.com (Nate Johnston) Date: Tue, 3 Sep 2019 07:28:04 -0400 Subject: [election][tc] Candidacy for TC Message-ID: <20190903112705.x2vani3wdjmumhlz@bishop> Hello everyone, I would like to nominate myself for a position on the OpenStack Technical Committee. I started working in OpenStack in the Kilo release. I have always been focused on the networking aspects of OpenStack, mostly from a telco perspective. I had a two-cycle absence during Ocata/Pike when my former employer made a strategic decision to de-emphasize OpenStack. But I came back a year ago, and now I am a core reviewer for the Neutron project. I have never served on the TC. I deeply love OpenStack, the community of people that have come together to make cloud technology available in a completely open way for the world. It is indisputable that OpenStack is a truly global project now. I think the work that lies ahead of us is to cement OpenStack's place as a fundamental building block upon which future technologies are built. Now that StarlingX, Zuul, and Airship are also under the OpenStack foundation I think it will be more important for some of the strategic vision for the future evolution of OpenStack to come from the TC. Here are the main things I would focus on as a member of the TC: 1.) IPv6-only cloud computing: The incredible proliferation of network addressable devices will only accelerate. Some forward-thinking enterprises are already switching over to mostly, or entirely, IPv6 networking. Here we have an established advantage, as OpenStack supported IPv6 before any of the big public clouds, and we can continue to lean into the future by providing a well tested and documented IPv6-only option. 2.) A continued focus on Edge: Edge is not just for large enterprises with hundreds of widely spread points of presence. An edge deployment could also serve a small-to-medium business with a thin presence in two remote locations. The work to drive towards an edge architecture, combined with improvements in stability and ease of use, will make OpenStack an option in new areas, and I think that will be vitally important for our future. 3.) Making the experience of both operator and developer easier. I think this can be accomplished in a number of ways: by making the systems we use to develop and test our code more similar to operational clouds by moving beyond Devstack in the gate. 4.) Dealing with the contraction of the contributor community: There is much more documentation around what happens when a project begins than for what happens when it is no longer actively maintained. 
I think there is a lot of ambiguity that we ought to clear up for our users to clearly delineate a process of stepping down support as a project loses vitality, so that we are clearly communicating what they should expect from us as a community. Thank you very much for reading, and for considering my candidacy. Nate Johnston IRC: njohnston From witold.bedyk at suse.com Tue Sep 3 11:41:40 2019 From: witold.bedyk at suse.com (Witek Bedyk) Date: Tue, 3 Sep 2019 13:41:40 +0200 Subject: [monasca][election][ptl] PTL Candidacy for Ussuri Message-ID: <46300de4-fa89-fe84-a746-fbe181a4a930@suse.com> Hello everyone, I would like to announce my candidacy to continue as the PTL of Monasca for the Ussuri release. After looking at my candidacy statement for the last cycle I would like to keep most of the defined themes. In the next release I would like to focus on the following goals: * strengthen the community and improve active participation and contribution * consolidate the project by concentrating on the core functionality (metrics, logs, events), cleaning up technical debt; in particular, I would like to continue driving the work on replacing the thresholding engine * collaborate with Telemetry project to identify and solve possible gaps and allow users to seamlessly migrate to Monasca * continue working on and promoting containerized deployment and Prometheus integration * continue to improve the documentation * collaborate with other OpenStack projects, e.g. by contributing to self-healing and auto-scaling SIGs Thank you for considering my candidacy. Best greetings Witek From fungi at yuggoth.org Tue Sep 3 12:01:01 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 3 Sep 2019 12:01:01 +0000 Subject: [all][tc] PDF Community Goal Update In-Reply-To: <9430fe6726ca53328abb588b21c1823055cdaca3.camel@redhat.com> References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> <9430fe6726ca53328abb588b21c1823055cdaca3.camel@redhat.com> Message-ID: <20190903120100.b72uecrxnan32wni@yuggoth.org> On 2019-09-03 10:54:49 +0100 (+0100), Stephen Finucane wrote: [...] > If we're concerned about the difficulty of closing this out this > cycle, I'd be in favour of just limiting our scope. [...] If the goal needs a major overhaul this late in the cycle, when projects need to be shifting their focus to release-related activities, it may be wise to defer this goal to Ussuri. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mnaser at vexxhost.com Tue Sep 3 12:31:03 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 3 Sep 2019 08:31:03 -0400 Subject: [openstack-ansible][election] PTL candidacy for Ussuri Message-ID: I'd like to announce my candidacy for OpenStack Ansible PTL. Since my last time as PTL, I mentioned the hopes of doing a few things during the cycle: # Simplifying scenarios for usage and testing We made some really good progress on this but I think our testing has improved (see below) but there's still a big wall to climb to reach a production environment. # Using integrated repository for testing and dropping role tests This effort has been largely completed. I'm very happy of the results and I'm hoping to drop the tests/ folder once we figure out the linting stuff. # Progress on switching all deployments to use Python3 This is pretty much blocked and waiting until CentOS 8 is out so we can make an all EL8 release. 
# Eventual addition of CentOS 8 option The lack of availability has impared this :( # Reduction in number of config variables (encouraging overrides) We slowly started to do this for a few roles but it seems like this is a very long term thing. # Increase cooperation with other deployment projects (i.e. TripleO) This has started to happen over a few roles and the newly formed Ansible SIG which should encompass a lot of the work. I would like us to be able to continue to catch up on our technical debt as I think at this point, OSA is pretty much feature complete for the most part so it's about doing things that make the maintainership easy moving onwards. I would also like to start dropping some of the old releases that still have open branches. We don't have anyone that works on them at the moment and it's better to end them than leave them stale. Thank you for your consideration. -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From doug at doughellmann.com Tue Sep 3 12:42:21 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 3 Sep 2019 08:42:21 -0400 Subject: [all][tc] PDF Community Goal Update In-Reply-To: <9430fe6726ca53328abb588b21c1823055cdaca3.camel@redhat.com> References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> <9430fe6726ca53328abb588b21c1823055cdaca3.camel@redhat.com> Message-ID: <160D24A7-DE66-45DA-BBB8-AFD916D00004@doughellmann.com> > On Sep 3, 2019, at 5:54 AM, Stephen Finucane wrote: > > On Mon, 2019-09-02 at 15:31 -0400, Doug Hellmann wrote: >>> On Sep 2, 2019, at 3:07 AM, Akihiro Motoki wrote: > > [snip] > >>> When the goal is defined the docs team thought the doc gate job can >>> handle the PDF build >>> without extra tox env and zuul job configuration. However, during >>> implementing the zuul job support >>> it turns out at least a new tox env or an extra zuul job configuration >>> is required in each project >>> to make the docs job fail when PDF build failure is detected. As a >>> result, we changes the approach >>> and the new tox target is now required in each project repo. >> >> The whole point of structuring the goal the way we did was that we do >> not want to update every single repo this cycle so we could roll out >> PDF building transparently. We said we would allow the job to pass >> even if the PDF build failed, because this was phase 1 of making all >> of this work. >> >> The plan was to 1. extend the current job to make PDF building >> optional; 2. examine the results to see how many repos need >> significant work; 3. add a feature flag via a setting somewhere in >> the repo to control whether the job fails if PDFs cannot be built. >> That avoids a second doc job running in parallel, and still allows us >> to roll out the PDF build requirement over time when we have enough >> information to do so. > > Unfortunately when we tried to implement this we found that virtually > every project we looked at required _some_ amount of tweaks just to > build, let alone look sensible. This was certainly true of the big > service projects (nova, neutron, cinder, ...) which all ran afoul of a > bug [1] in the Sphinx LaTeX builder. 
Given the issues with previous > approach, such as the inability to easily reproduce locally and the > general "hackiness" of the thing, along with the fact that we now had > to submit changes against projects anyway, a collective decision was > made [2] to drop that plan and persue the 'pdfdocs' tox target > approach. We wanted to avoid making a bunch of the same changes to projects just to add the PDF building instructions. If the *content* of a project’s documentation needs work, that’s different. We should make those changes. > > If we're concerned about the difficulty of closing this out this cycle, > I'd be in favour of just limiting our scope. IMO, the service projects > are the ones that would benefit most from PDF documentation. These are > the things people actually use and they tend to have the most complete > documentation. Libraries can be self-documenting (yes, I know), in so > far as once can use introspection, existing code examples, and the > 'help' built-in to piece together what information they need. We should > aim to close that gap long-term, but for now requiring modifications to > ensure we have _some_ PDFs sounds a lot better than requiring no > modifications and having no PDFs. > > Cheers, > Stephen > > [1] https://github.com/sphinx-doc/sphinx/issues/3099 > [2] http://eavesdrop.openstack.org/irclogs/%23openstack-doc/%23openstack-doc.2019-08-21.log.html#t2019-08-21T13:19:01 > >>> Perhaps we need to update the description of the goal definition document. >> >> I don’t think it’s a good idea to change the scope of the goal at >> this point in the release cycle. > > > From sean.mcginnis at gmx.com Tue Sep 3 12:54:22 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 3 Sep 2019 07:54:22 -0500 Subject: [RelMgmt][election] PTL candidacy Message-ID: <20190903125422.GA28241@sm-workstation> Greetings! I would like to submit my name to continue as the release management PTL for the Ussuri release. I think release management is one of those critical functions for a project like OpenStack that often gets overlooked or just doesn't have the level of awareness that some of the other projects have. I've been PTL or active core since the Queens release. We have a lot of the release mechanisms automated now, but we still need to keep things running smoothly and handling any of the little issues that always pop up. My day job role isn't as focused on OpenStack as it had been, but I will still be able to devote enough time to help keep the fires out and guide anyone else that would like to get ready to take over the reins. Thank you for your consideration. 
Sean McGinnis (smcginnis) From sfinucan at redhat.com Tue Sep 3 13:04:53 2019 From: sfinucan at redhat.com (Stephen Finucane) Date: Tue, 03 Sep 2019 14:04:53 +0100 Subject: [all][tc] PDF Community Goal Update In-Reply-To: <160D24A7-DE66-45DA-BBB8-AFD916D00004@doughellmann.com> References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> <9430fe6726ca53328abb588b21c1823055cdaca3.camel@redhat.com> <160D24A7-DE66-45DA-BBB8-AFD916D00004@doughellmann.com> Message-ID: <7a4f103390cb2b9e4ec107b94f2e1e0dd2c500f0.camel@redhat.com> On Tue, 2019-09-03 at 08:42 -0400, Doug Hellmann wrote: > > On Sep 3, 2019, at 5:54 AM, Stephen Finucane wrote: > > > > On Mon, 2019-09-02 at 15:31 -0400, Doug Hellmann wrote: > > > > On Sep 2, 2019, at 3:07 AM, Akihiro Motoki wrote: > > > > [snip] > > > > > > When the goal is defined the docs team thought the doc gate job can > > > > handle the PDF build > > > > without extra tox env and zuul job configuration. However, during > > > > implementing the zuul job support > > > > it turns out at least a new tox env or an extra zuul job configuration > > > > is required in each project > > > > to make the docs job fail when PDF build failure is detected. As a > > > > result, we changes the approach > > > > and the new tox target is now required in each project repo. > > > > > > The whole point of structuring the goal the way we did was that we do > > > not want to update every single repo this cycle so we could roll out > > > PDF building transparently. We said we would allow the job to pass > > > even if the PDF build failed, because this was phase 1 of making all > > > of this work. > > > > > > The plan was to 1. extend the current job to make PDF building > > > optional; 2. examine the results to see how many repos need > > > significant work; 3. add a feature flag via a setting somewhere in > > > the repo to control whether the job fails if PDFs cannot be built. > > > That avoids a second doc job running in parallel, and still allows us > > > to roll out the PDF build requirement over time when we have enough > > > information to do so. > > > > Unfortunately when we tried to implement this we found that virtually > > every project we looked at required _some_ amount of tweaks just to > > build, let alone look sensible. This was certainly true of the big > > service projects (nova, neutron, cinder, ...) which all ran afoul of a > > bug [1] in the Sphinx LaTeX builder. Given the issues with previous > > approach, such as the inability to easily reproduce locally and the > > general "hackiness" of the thing, along with the fact that we now had > > to submit changes against projects anyway, a collective decision was > > made [2] to drop that plan and persue the 'pdfdocs' tox target > > approach. > > We wanted to avoid making a bunch of the same changes to projects just to > add the PDF building instructions. If the *content* of a project’s documentation > needs work, that’s different. We should make those changes. I thought the only reason to hack the docs venv in a Zuul job was to avoid having to mass patch projects to add tox configuration? As such, if we're already having to mass patch projects because they don't build otherwise, why wouldn't we add the tox configuration? Was there another reason to pursue the zuul-only approach that I've forgotten about/never knew? Stephen > > If we're concerned about the difficulty of closing this out this cycle, > > I'd be in favour of just limiting our scope. 
IMO, the service projects > > are the ones that would benefit most from PDF documentation. These are > > the things people actually use and they tend to have the most complete > > documentation. Libraries can be self-documenting (yes, I know), in so > > far as once can use introspection, existing code examples, and the > > 'help' built-in to piece together what information they need. We should > > aim to close that gap long-term, but for now requiring modifications to > > ensure we have _some_ PDFs sounds a lot better than requiring no > > modifications and having no PDFs. > > > > Cheers, > > Stephen > > > > [1] https://github.com/sphinx-doc/sphinx/issues/3099 > > [2] http://eavesdrop.openstack.org/irclogs/%23openstack-doc/%23openstack-doc.2019-08-21.log.html#t2019-08-21T13:19:01 > > > > > > Perhaps we need to update the description of the goal definition document. > > > > > > I don’t think it’s a good idea to change the scope of the goal at > > > this point in the release cycle. From doug at doughellmann.com Tue Sep 3 13:15:05 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 3 Sep 2019 09:15:05 -0400 Subject: [all][tc] PDF Community Goal Update In-Reply-To: <7a4f103390cb2b9e4ec107b94f2e1e0dd2c500f0.camel@redhat.com> References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> <9430fe6726ca53328abb588b21c1823055cdaca3.camel@redhat.com> <160D24A7-DE66-45DA-BBB8-AFD916D00004@doughellmann.com> <7a4f103390cb2b9e4ec107b94f2e1e0dd2c500f0.camel@redhat.com> Message-ID: <6C2701AC-6305-45C6-A62D-7FF0B43DD0F2@doughellmann.com> > On Sep 3, 2019, at 9:04 AM, Stephen Finucane wrote: > > On Tue, 2019-09-03 at 08:42 -0400, Doug Hellmann wrote: >>> On Sep 3, 2019, at 5:54 AM, Stephen Finucane wrote: >>> >>> On Mon, 2019-09-02 at 15:31 -0400, Doug Hellmann wrote: >>>>> On Sep 2, 2019, at 3:07 AM, Akihiro Motoki wrote: >>> >>> [snip] >>> >>>>> When the goal is defined the docs team thought the doc gate job can >>>>> handle the PDF build >>>>> without extra tox env and zuul job configuration. However, during >>>>> implementing the zuul job support >>>>> it turns out at least a new tox env or an extra zuul job configuration >>>>> is required in each project >>>>> to make the docs job fail when PDF build failure is detected. As a >>>>> result, we changes the approach >>>>> and the new tox target is now required in each project repo. >>>> >>>> The whole point of structuring the goal the way we did was that we do >>>> not want to update every single repo this cycle so we could roll out >>>> PDF building transparently. We said we would allow the job to pass >>>> even if the PDF build failed, because this was phase 1 of making all >>>> of this work. >>>> >>>> The plan was to 1. extend the current job to make PDF building >>>> optional; 2. examine the results to see how many repos need >>>> significant work; 3. add a feature flag via a setting somewhere in >>>> the repo to control whether the job fails if PDFs cannot be built. >>>> That avoids a second doc job running in parallel, and still allows us >>>> to roll out the PDF build requirement over time when we have enough >>>> information to do so. >>> >>> Unfortunately when we tried to implement this we found that virtually >>> every project we looked at required _some_ amount of tweaks just to >>> build, let alone look sensible. This was certainly true of the big >>> service projects (nova, neutron, cinder, ...) which all ran afoul of a >>> bug [1] in the Sphinx LaTeX builder. 
Given the issues with previous >>> approach, such as the inability to easily reproduce locally and the >>> general "hackiness" of the thing, along with the fact that we now had >>> to submit changes against projects anyway, a collective decision was >>> made [2] to drop that plan and persue the 'pdfdocs' tox target >>> approach. >> >> We wanted to avoid making a bunch of the same changes to projects just to >> add the PDF building instructions. If the *content* of a project’s documentation >> needs work, that’s different. We should make those changes. > > I thought the only reason to hack the docs venv in a Zuul job was to > avoid having to mass patch projects to add tox configuration? As such, > if we're already having to mass patch projects because they don't build > otherwise, why wouldn't we add the tox configuration? Was there another > reason to pursue the zuul-only approach that I've forgotten about/never > knew? I expected to need to fix formatting (even up to the point of commenting things out, like we found with the giant config sample files). Those are content changes, and would be mostly unique across projects. I wanted to avoid a large number of roughly identical changes to add tox environments, zuul jobs, etc. because having a lot of patches like that across all the repos makes extra work for small gain, especially when we can get the same results with a small number of changes in one repository. The approach we discussed was to update the docs job to run some extra steps using scripts that lived in the openstackdocstheme repository. That shouldn’t require adding any extra software or otherwise modifying the tox environments. Did that approach not work out? Doug From kevin at cloudnull.com Tue Sep 3 13:51:00 2019 From: kevin at cloudnull.com (Carter, Kevin) Date: Tue, 3 Sep 2019 08:51:00 -0500 Subject: [tripleo] Proposing Damien Ciabrini as core on TripleO/HA In-Reply-To: <20190830122850.GA5248@holtby> References: <20190830122850.GA5248@holtby> Message-ID: +1 -- Kevin Carter IRC: Cloudnull On Fri, Aug 30, 2019 at 7:33 AM Michele Baldessari wrote: > Hi all, > > Damien (dciabrin on IRC) has always been very active in all HA things in > TripleO and I think it is overdue for him to have core rights on this > topic. So I'd like to propose to give him core permissions on any > HA-related code in TripleO. > > Please vote here and in a week or two we can then act on this. > > Thanks, > -- > Michele Baldessari > C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykarel at redhat.com Tue Sep 3 14:03:12 2019 From: ykarel at redhat.com (Yatin Karel) Date: Tue, 3 Sep 2019 19:33:12 +0530 Subject: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: <20190814192440.GA3048@sm-workstation> References: <20190814192440.GA3048@sm-workstation> Message-ID: On Thu, Aug 15, 2019 at 12:54 AM Sean McGinnis wrote: > > > > > > > Bringing this backup to see what we need to do to get the stable/ocata > > > branches ended for the TripleO projects. I'm bringing this up > > > because we have https://review.openstack.org/#/c/647009/ which is for > > > the upcoming rename but CI is broken and we have no interest in > > > continue to keep the stable/ocata branches alive (or fix ci for them). 
> > > > > So we had a discussion yesterday in TripleO meeting regarding EOL of > > Ocata and Pike Branches for TripleO projects, and there was no clarity > > regarding the process of making the branches EOL(is just pushing a > > change to openstack/releases(deliverables/ocata/.yaml) > > creating ocata-eol tag enough or something else is also needed), can > > someone from Release team point us in the right direction. > > > > > Thanks, > > > -Alex > > > > > It would appear we have additional information we should add to somewhere like: > > https://docs.openstack.org/project-team-guide/stable-branches.html > > or > > https://releases.openstack.org/#references > > I believe it really is just a matter of requesting the new tag in the > openstack/releases repo. There is a good example of this when Tony did it for > TripleO's stable/newton branch: > > https://review.opendev.org/#/c/583856/ Thanks Sean, so ocata-eol[1] and pike-eol[2] patches were proposed for TripleO and they are merged, both ocata-eol and pike-eol tags got created after the patches merged. But still stable/ocata and stable/pike branches exist. Can someone from Release Team get them cleared so there is no option left to get cherry-pick proposed to these EOL branches. If any step from TripleO maintainers is needed please guide. [1] https://review.opendev.org/#/c/677478/ [2] https://review.opendev.org/#/c/678154/ > > I think I recall there were some additional steps Tony took at the time, but I > think everything is now covered by the automated process. Tony, please correct > me if I am wrong. > > Not sure if it applies, but you may want to see if there are any Zuul jobs that > need to be cleaned up or anything of that sort. > > We do say branches will be in unmaintained in the Extended Maintenance phase > for six months before going End of Life. Looking at Ocata, that happened April > 5 of this year. Six months would put it at the beginning of October. But I > think if the team knows they will not be accepting any more patches to these > branches, then it is better to get it clearly marked as EOL so proper > expectations are set. > > Sean Thanks and Regards Yatin Karel From amotoki at gmail.com Tue Sep 3 14:12:30 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 3 Sep 2019 23:12:30 +0900 Subject: [all][tc] PDF Community Goal Update In-Reply-To: <6C2701AC-6305-45C6-A62D-7FF0B43DD0F2@doughellmann.com> References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> <9430fe6726ca53328abb588b21c1823055cdaca3.camel@redhat.com> <160D24A7-DE66-45DA-BBB8-AFD916D00004@doughellmann.com> <7a4f103390cb2b9e4ec107b94f2e1e0dd2c500f0.camel@redhat.com> <6C2701AC-6305-45C6-A62D-7FF0B43DD0F2@doughellmann.com> Message-ID: On Tue, Sep 3, 2019 at 10:18 PM Doug Hellmann wrote: > > > > > On Sep 3, 2019, at 9:04 AM, Stephen Finucane wrote: > > > > On Tue, 2019-09-03 at 08:42 -0400, Doug Hellmann wrote: > >>> On Sep 3, 2019, at 5:54 AM, Stephen Finucane wrote: > >>> > >>> On Mon, 2019-09-02 at 15:31 -0400, Doug Hellmann wrote: > >>>>> On Sep 2, 2019, at 3:07 AM, Akihiro Motoki wrote: > >>> > >>> [snip] > >>> > >>>>> When the goal is defined the docs team thought the doc gate job can > >>>>> handle the PDF build > >>>>> without extra tox env and zuul job configuration. However, during > >>>>> implementing the zuul job support > >>>>> it turns out at least a new tox env or an extra zuul job configuration > >>>>> is required in each project > >>>>> to make the docs job fail when PDF build failure is detected. 
As a > >>>>> result, we changes the approach > >>>>> and the new tox target is now required in each project repo. > >>>> > >>>> The whole point of structuring the goal the way we did was that we do > >>>> not want to update every single repo this cycle so we could roll out > >>>> PDF building transparently. We said we would allow the job to pass > >>>> even if the PDF build failed, because this was phase 1 of making all > >>>> of this work. > >>>> > >>>> The plan was to 1. extend the current job to make PDF building > >>>> optional; 2. examine the results to see how many repos need > >>>> significant work; 3. add a feature flag via a setting somewhere in > >>>> the repo to control whether the job fails if PDFs cannot be built. > >>>> That avoids a second doc job running in parallel, and still allows us > >>>> to roll out the PDF build requirement over time when we have enough > >>>> information to do so. > >>> > >>> Unfortunately when we tried to implement this we found that virtually > >>> every project we looked at required _some_ amount of tweaks just to > >>> build, let alone look sensible. This was certainly true of the big > >>> service projects (nova, neutron, cinder, ...) which all ran afoul of a > >>> bug [1] in the Sphinx LaTeX builder. Given the issues with previous > >>> approach, such as the inability to easily reproduce locally and the > >>> general "hackiness" of the thing, along with the fact that we now had > >>> to submit changes against projects anyway, a collective decision was > >>> made [2] to drop that plan and persue the 'pdfdocs' tox target > >>> approach. > >> > >> We wanted to avoid making a bunch of the same changes to projects just to > >> add the PDF building instructions. If the *content* of a project’s documentation > >> needs work, that’s different. We should make those changes. > > > > I thought the only reason to hack the docs venv in a Zuul job was to > > avoid having to mass patch projects to add tox configuration? As such, > > if we're already having to mass patch projects because they don't build > > otherwise, why wouldn't we add the tox configuration? Was there another > > reason to pursue the zuul-only approach that I've forgotten about/never > > knew? > > I expected to need to fix formatting (even up to the point of commenting things > out, like we found with the giant config sample files). Those are content changes, > and would be mostly unique across projects. > > I wanted to avoid a large number of roughly identical changes to add tox environments, > zuul jobs, etc. because having a lot of patches like that across all the repos makes > extra work for small gain, especially when we can get the same results with a small > number of changes in one repository. > > The approach we discussed was to update the docs job to run some extra steps using > scripts that lived in the openstackdocstheme repository. That shouldn’t require > adding any extra software or otherwise modifying the tox environments. Did that approach > not work out? We explored ways only to update the docs job to run extra commands to build PDF docs, but there is one problem that the job cannot know whether PDF build is ready or not. If we ignore an error from PDF build, it works for repositories which are not ready for PDF build, but we cannot prevent PDF build failure again for repositories ready for PDF build As my project team hat of neutron team, we don't want to have PDF build failure again once the PDF build starts to work. 
To avoid this, stephenfin, asettle, AJaeger and I agree that some flag to determine if the PDF build is ready or not is needed. As of now, "pdf-docs" tox env is used as the flag. Another way we considered is a variable in openstack-tox-docs job, but we cannot pass a variable to zuul project template, so we didn't use this way. If there is a more efficient way, I am happy to use it. Thanks, Akihiro From morgan.fainberg at gmail.com Tue Sep 3 14:55:45 2019 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Tue, 3 Sep 2019 07:55:45 -0700 Subject: [keystone] Weekly meeting for September 3rd 2019 In-Reply-To: References: Message-ID: This is a reminder, there is no keystone weekly meeting today, September 3rd, 2019. Have a great start to your week everyone! —Morgan On Fri, Aug 30, 2019 at 14:02 Morgan Fainberg wrote: > As of this time, we are planning to skip the keystone weekly meeting for > 2019-09-03. This is to allow for work to continue with less interruption as > well as US-based folks who have Labor Day (2019-09-02 this year) off to > continue to make progress in light of the abbreviated week. > > As always, please feel free to join us on irc (freenode) in > #openstack-keystone if you have any questions. I am also available (irc > nic: kmalloc ). > > Cheers, > --Morgan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Tue Sep 3 15:06:18 2019 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 3 Sep 2019 10:06:18 -0500 Subject: [oslo][ptl][election] PTL candidacy for Ussuri Message-ID: <168ec916-9d64-adc2-31e6-c707ba956745@nemebean.com> See my election review for more details: https://review.opendev.org/#/c/679803/1/candidates/u/Oslo/openstack%2540nemebean.com Thanks. -Ben From gcerami at redhat.com Tue Sep 3 15:35:03 2019 From: gcerami at redhat.com (Gabriele Cerami) Date: Tue, 3 Sep 2019 16:35:03 +0100 Subject: [TripleO][CI] Outage on the rdoprojects.org server is causing jobs to fail Message-ID: <20190903153503.lh2hym6sjgxuiqet@localhost> Hi, this weekend, and outage caused trunk.rdoprojects.org servers to become unreachable. As main effect, all tripleo ci jobs were unable to download and install dlrn repositories for the needed hashes and failed. The outage has been resolved yesterday but there's a problem with DNS propagation and we're still seeing DNS queries returning incorrect IPs, and as a result, jobs are not consistently passing. We would advise to limit the rechecks until we are sure the DNS results are stable. Thanks. From mriedemos at gmail.com Tue Sep 3 15:37:36 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 3 Sep 2019 10:37:36 -0500 Subject: [Horizon] [stable] Adding Radomir Dopieralski to horizon-stable-maint In-Reply-To: References: Message-ID: <2d44f14b-582b-131b-4fa8-1a9d5f0fdd96@gmail.com> On 9/2/2019 1:22 PM, Ivan Kolodyazhny wrote: > Almost two weeks passed without any objections. > > I would like to ask Stable team to add Rodomir to the > horizon-stable-maint group. Done. -- Thanks, Matt From ianyrchoi at gmail.com Tue Sep 3 15:43:32 2019 From: ianyrchoi at gmail.com (Ian Y. 
Choi) Date: Wed, 4 Sep 2019 00:43:32 +0900 Subject: [all][tc] PDF Community Goal Update In-Reply-To: References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> <9430fe6726ca53328abb588b21c1823055cdaca3.camel@redhat.com> <160D24A7-DE66-45DA-BBB8-AFD916D00004@doughellmann.com> <7a4f103390cb2b9e4ec107b94f2e1e0dd2c500f0.camel@redhat.com> <6C2701AC-6305-45C6-A62D-7FF0B43DD0F2@doughellmann.com> Message-ID: <878ebb98-3204-7ce3-8ca6-b516ae7921a2@gmail.com> Akihiro Motoki wrote on 9/3/2019 11:12 PM: > On Tue, Sep 3, 2019 at 10:18 PM Doug Hellmann wrote: >> >> >>> On Sep 3, 2019, at 9:04 AM, Stephen Finucane wrote: >>> >>> On Tue, 2019-09-03 at 08:42 -0400, Doug Hellmann wrote: >>>>> On Sep 3, 2019, at 5:54 AM, Stephen Finucane wrote: >>>>> >>>>> On Mon, 2019-09-02 at 15:31 -0400, Doug Hellmann wrote: >>>>>>> On Sep 2, 2019, at 3:07 AM, Akihiro Motoki wrote: >>>>> [snip] >>>>> >>>>>>> When the goal is defined the docs team thought the doc gate job can >>>>>>> handle the PDF build >>>>>>> without extra tox env and zuul job configuration. However, during >>>>>>> implementing the zuul job support >>>>>>> it turns out at least a new tox env or an extra zuul job configuration >>>>>>> is required in each project >>>>>>> to make the docs job fail when PDF build failure is detected. As a >>>>>>> result, we changes the approach >>>>>>> and the new tox target is now required in each project repo. >>>>>> The whole point of structuring the goal the way we did was that we do >>>>>> not want to update every single repo this cycle so we could roll out >>>>>> PDF building transparently. We said we would allow the job to pass >>>>>> even if the PDF build failed, because this was phase 1 of making all >>>>>> of this work. >>>>>> >>>>>> The plan was to 1. extend the current job to make PDF building >>>>>> optional; 2. examine the results to see how many repos need >>>>>> significant work; 3. add a feature flag via a setting somewhere in >>>>>> the repo to control whether the job fails if PDFs cannot be built. >>>>>> That avoids a second doc job running in parallel, and still allows us >>>>>> to roll out the PDF build requirement over time when we have enough >>>>>> information to do so. >>>>> Unfortunately when we tried to implement this we found that virtually >>>>> every project we looked at required _some_ amount of tweaks just to >>>>> build, let alone look sensible. This was certainly true of the big >>>>> service projects (nova, neutron, cinder, ...) which all ran afoul of a >>>>> bug [1] in the Sphinx LaTeX builder. Given the issues with previous >>>>> approach, such as the inability to easily reproduce locally and the >>>>> general "hackiness" of the thing, along with the fact that we now had >>>>> to submit changes against projects anyway, a collective decision was >>>>> made [2] to drop that plan and persue the 'pdfdocs' tox target >>>>> approach. >>>> We wanted to avoid making a bunch of the same changes to projects just to >>>> add the PDF building instructions. If the *content* of a project’s documentation >>>> needs work, that’s different. We should make those changes. >>> I thought the only reason to hack the docs venv in a Zuul job was to >>> avoid having to mass patch projects to add tox configuration? As such, >>> if we're already having to mass patch projects because they don't build >>> otherwise, why wouldn't we add the tox configuration? Was there another >>> reason to pursue the zuul-only approach that I've forgotten about/never >>> knew? 
>> I expected to need to fix formatting (even up to the point of commenting things >> out, like we found with the giant config sample files). Those are content changes, >> and would be mostly unique across projects. >> >> I wanted to avoid a large number of roughly identical changes to add tox environments, >> zuul jobs, etc. because having a lot of patches like that across all the repos makes >> extra work for small gain, especially when we can get the same results with a small >> number of changes in one repository. >> >> The approach we discussed was to update the docs job to run some extra steps using >> scripts that lived in the openstackdocstheme repository. That shouldn’t require >> adding any extra software or otherwise modifying the tox environments. Did that approach >> not work out? > We explored ways only to update the docs job to run extra commands to > build PDF docs, > but there is one problem that the job cannot know whether PDF build is > ready or not. > If we ignore an error from PDF build, it works for repositories which > are not ready for PDF build, > but we cannot prevent PDF build failure again for repositories ready > for PDF build > As my project team hat of neutron team, we don't want to have PDF > build failure again > once the PDF build starts to work. > To avoid this, stephenfin, asettle, AJaeger and I agree that some flag > to determine if the PDF build > is ready or not is needed. As of now, "pdf-docs" tox env is used as the flag. > Another way we considered is a variable in openstack-tox-docs job, but > we cannot pass a variable > to zuul project template, so we didn't use this way. > If there is a more efficient way, I am happy to use it. > > Thanks, > Akihiro > Hello, Sorry for joining in this thread late, but to I first would like to try to figure out the current status regarding the current discussion on the thread: - openstackdocstheme has docstheme-build-pdf script [1] - build-pdf-docs Zuul job in openstack-zuul-jobs pre-installs all required packages [2] - Current guidance for project repos is that 1) is to just add to latex_documents settings [3] and add pdf-docs environment for trigger [4] - Project repos additionally need to change more for successful PDF builds like adding more options on conf.py [5] and changing more on rst files to explictly options like [6] . Now my questions from comments are: a) How about checking an option in somewhere else like .zuul.yaml or using grep in docs env part, not doing grep to check the existance of "pdf-docs" tox env [3]? b) Can we call docstheme-build-pdf in openstackdocstheme [1] instead of direct Sphinx & make commands in "pdf-docs" environment [4]? c) Ultimately, would executing docstheme-build-pdf command in build-pdf-docs Zuul job with another kind of trigger like bullet a) be feasible and/or be implemented by the end of this cycle? With many thanks, /Ian [1] https://review.opendev.org/#/c/665163/ [2] https://review.opendev.org/#/c/664555/25/roles/prepare-build-pdf-docs/tasks/main.yaml at 3 [3] https://review.opendev.org/#/c/678393/4/doc/source/conf.py [4] https://review.opendev.org/#/c/678393/4/tox.ini [5] https://review.opendev.org/#/c/678747/1/doc/source/conf.py at 270 [6] https://review.opendev.org/#/c/678747/1/doc/source/index.rst at 13 From anmar.salih1 at gmail.com Tue Sep 3 15:54:07 2019 From: anmar.salih1 at gmail.com (Anmar Salih) Date: Tue, 3 Sep 2019 11:54:07 -0400 Subject: Need help trigger aodh alarm Message-ID: Hey all, I need help trigger aodh alarm to execute a simple function. 
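As a starting point, a threshold alarm that calls a webhook when it fires can be created roughly as follows. This is only a sketch: the metric name, threshold, resource id and webhook URL are placeholders and depend entirely on how ceilometer/gnocchi and the function endpoint are configured in the deployment:

    openstack alarm create \
      --name cpu-high \
      --type gnocchi_resources_threshold \
      --metric cpu_util \
      --threshold 80 \
      --comparison-operator gt \
      --aggregation-method mean \
      --granularity 300 \
      --resource-type instance \
      --resource-id <instance-uuid> \
      --alarm-action 'http://<endpoint-host>:<port>/trigger-function'

    # after generating load on the instance, watch the state transitions
    openstack alarm list
    openstack alarm-history show <alarm-uuid>

If the alarm stays in the "insufficient data" state, the metric is usually not being published for that resource, which is worth checking before debugging the alarm itself.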
I am following the instructions here but it does't work. Here are my system configurations: 1- Operating system (Ubuntu16 server) -> running on virtual machine 2- Devstack local.conf file. 3- Devstack Stein release. Note: I tried to install Devstack on Ubuntu16 desktop and Ubuntu18 desktop but no luck. This link is the error output screen I received during the installation on Ubuntu18 desktop. Thank you in advance. Best Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Tue Sep 3 16:04:36 2019 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 3 Sep 2019 11:04:36 -0500 Subject: [ptl] [docs] [election] PTL Candidacy for Ussuri In-Reply-To: References: Message-ID: Do we even still have a docs PTL position now that docs has become a SIG? On 9/2/19 9:41 AM, Alexandra Settle wrote: > Hey all, > > I would like to submit my candidacy for the documentation team's PTL > for the Ussuri cycle. > > Stephen Finucane (Train PTL) will be unofficially serving alongside me > in a co-PTL capacity so we can equally address documentation-related > tasks and discussions. > > I served as the documentation PTL for Pike, and am currently serving as > an elected member of the Technical Committee in the capacity of vice > chair. I have been a part of the community since the beginning of 2014, > and have seen the highs and the lows and continue to love working for > and with this community. > > The definition of documentation for OpenStack has been rapidly changing > and the future of the documentation team continues to evolve and > change. I would like that opportunity to help guide the documentation > team, and potentially finish what myself, Petr, Stephen and many others > have started and carried on. > > Thanks, > > Alex > From ianyrchoi at gmail.com Tue Sep 3 16:09:14 2019 From: ianyrchoi at gmail.com (Ian Y. Choi) Date: Wed, 4 Sep 2019 01:09:14 +0900 Subject: [ALL][UC] The UC Special Election results Message-ID: <1e034eb4-042c-0bb1-dd94-eb6677ee6f0e@gmail.com> Hello all, On behalf of the User Committee Elections officials, I am pleased to announce the results of the UC elections for the special election 2019 [1]. Please join me in congratulating the winner: Jaesuk Ahn! With the result from the previous UC election on last month [2], total two winners (Mohamed Elsakhawy, Jaesuk Ahn) will serve UC for one year. Thank you, - Ed & Ian [1] https://governance.openstack.org/uc/reference/uc-election-sep2019.html [2] http://lists.openstack.org/pipermail/user-committee/2019-August/002870.html From gcerami at redhat.com Tue Sep 3 16:35:18 2019 From: gcerami at redhat.com (Gabriele Cerami) Date: Tue, 3 Sep 2019 17:35:18 +0100 Subject: [TripleO][CI] Outage on the rdoprojects.org server is causing jobs to fail Message-ID: <20190903163518.zf2zlt5hvk32a4fq@localhost> Hi, this weekend, and outage caused trunk.rdoprojects.org servers to become unreachable. As main effect, all tripleo ci jobs were unable to download and install dlrn repositories for the needed hashes and failed. The outage has been resolved yesterday but there's a problem with DNS propagation and we're still seeing DNS queries returning incorrect IPs, and as a result, jobs are not consistently passing. You may see problems dowloading repos, building changes, installing packages We would advise to limit the rechecks until we are sure the DNS results are stable. Thanks. 
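As a quick way to tell whether the DNS situation described above has settled before issuing rechecks, comparing the configured resolver against an external one works; the hostname below assumes the usual RDO trunk server name and the public resolver address is only an example:

    # what the local resolver returns
    dig +short trunk.rdoproject.org
    # what a public resolver returns
    dig +short trunk.rdoproject.org @8.8.8.8

Once both consistently return the same (new) address over a few runs, rechecks should stop failing on repo downloads for this reason.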
From mark at stackhpc.com Tue Sep 3 16:49:57 2019 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 3 Sep 2019 17:49:57 +0100 Subject: [kolla] Kayobe Stein release now available Message-ID: Hi, I'm pleased to announce that the first Stein cycle release for Kayobe is now available - 6.0.0. Thanks to everyone who contributed. Release notes: https://docs.openstack.org/releasenotes/kayobe/stein.html Join us on #openstack-kolla to help make the Train cycle release even better. Cheers, Mark From aj at suse.com Tue Sep 3 17:36:28 2019 From: aj at suse.com (Andreas Jaeger) Date: Tue, 3 Sep 2019 19:36:28 +0200 Subject: [ptl] [docs] [election] PTL Candidacy for Ussuri In-Reply-To: References: Message-ID: On 03/09/2019 18.04, Ben Nemec wrote: > Do we even still have a docs PTL position now that docs has become a SIG? That transition is not yet effective, Andreas -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg GF: Felix Imendörffer; HRB 247165 (AG München) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From fungi at yuggoth.org Tue Sep 3 17:54:11 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 3 Sep 2019 17:54:11 +0000 Subject: [elections][ptl][tc][adjutant][cyborg][designate][i18n][manila][nova][openstacksdk][placement][powervmstackers][winstackers] Missing PTL/TC Candidates! Message-ID: <20190903175410.rbhkdrimut6uccex@yuggoth.org> A final reminder, we are now into the last few hours for declaring PTL and TC candidacies. Nominations are open until Sep 03, 2019 23:45 UTC. If you want to stand for election, don't delay, follow the instructions to make sure the community knows your intentions: https://governance.openstack.org/election/#how-to-submit-a-candidacy Make sure your nomination has been submitted to the openstack/election repository and approved by election officials. With approximately six hours remaining, the 10 projects tagged in the Subject line of this message will be deemed leaderless if no eligible nominees step forward. In this case the TC will directly oversee PTL selection/appointment. We also need at least one more TC candidate to have enough to fill the six open seats on the OpenStack Technical committee. Thank you, -- Jeremy Stanley, on behalf of the OpenStack Technical Election Officials -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From whayutin at redhat.com Tue Sep 3 18:14:07 2019 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 3 Sep 2019 12:14:07 -0600 Subject: [TripleO][CI] Outage on the rdoprojects.org server is causing jobs to fail In-Reply-To: <20190903163518.zf2zlt5hvk32a4fq@localhost> References: <20190903163518.zf2zlt5hvk32a4fq@localhost> Message-ID: On Tue, Sep 3, 2019 at 10:43 AM Gabriele Cerami wrote: > Hi, > > this weekend, and outage caused trunk.rdoprojects.org servers to become > unreachable. > As main effect, all tripleo ci jobs were unable to download and install > dlrn repositories for the needed hashes and failed. > > The outage has been resolved yesterday but there's a problem with DNS > propagation and we're still seeing DNS queries returning incorrect IPs, > and as a result, jobs are not consistently passing. > You may see problems dowloading repos, building changes, installing > packages > > We would advise to limit the rechecks until we are sure the DNS results > are stable. > > Thanks. 
Just adding a little more clarity. You can see the pass rate of TripleO jobs start to drop on 8/31 and recover on 9/3 in the screenshot. We have not yet fully recovered quite yet, we will update this thread when that is the case. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ci_job_pass_fail_rate.png Type: image/png Size: 52623 bytes Desc: not available URL: From sean.mcginnis at gmx.com Tue Sep 3 19:03:37 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 3 Sep 2019 14:03:37 -0500 Subject: [infra] Re: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: References: <20190814192440.GA3048@sm-workstation> Message-ID: <20190903190337.GA14785@sm-workstation> > > Thanks Sean, so ocata-eol[1] and pike-eol[2] patches were proposed for > TripleO and they are merged, both ocata-eol and pike-eol tags got > created after the patches merged. But still stable/ocata and > stable/pike branches exist. Can someone from Release Team get them > cleared so there is no option left to get cherry-pick proposed to > these EOL branches. If any step from TripleO maintainers is needed > please guide. > > [1] https://review.opendev.org/#/c/677478/ > [2] https://review.opendev.org/#/c/678154/ > The release automation can only create branches, not remove them. That is something the infra team would need to do. I can't recall how this was handled in the past. Maybe someone from infra can shed some light on how EOL'ing stable branches should be handled for the no longer needed stable/* branches. Sean From jungleboyj at gmail.com Tue Sep 3 19:07:29 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Tue, 3 Sep 2019 14:07:29 -0500 Subject: [elections][tc] Announcing Candidacy for OpenStack Technical Committee Message-ID: <847a4225-6a98-4edc-aed5-f9934ec80457@gmail.com> Dear OpenStack Community, This note is to officially announce my candidacy for the OpenStack TC. For those of you that don’t know me, allow me to introduce myself:  [1] I have been active in the OpenStack Community since early in 2013. After nine years of working on Super Computing Solutions with IBM I moved to OpenStack and started working on Neutron (Quantum back in those days) before moving over to a relatively new project called Cinder. Within IBM I worked to create the processes for IBM’s storage driver development teams to interact with and contribute to the Cinder project. I became a core member of the Cinder team in the middle of 2013 and have remained active ever since. [2] [3] I have been the Cinder PTL for the last two years, starting in the Queens release. Beyond Cinder I have sought opportunities to work across OpenStack projects. I was the liaison between the Cinder and Documentation teams. This lead to opportunities to learn how OpenStack’s documentation is developed and enabled me to help improve the Cinder team’s documentation.  I have also served for quite some time as the liaison to the Oslo team. Education and on-boarding of new team members has been a focus of my tenure in OpenStack.  I started helping to lead the OpenStack Upstream Institute at the fall 2016 Summit in Barcelona.  After Barcelona I helped to revise the education to meet the needs of future Upstream Institute sessions and have coordinated with my current employer, Lenovo, to sponsor each OpenStack Upstream Institute at Summits since the Spring 2017 Summit in Boston.  
Lenovo will even be hosting the OUI session in Shanghai!  I also created Cinder’s on-boarding education which I have presented at each Summit since the Fall 2017 Summit in Sydney.  I have sought opportunities to mentor new contributors both within my employers and from the community in general. I have a long experience working with OpenStack and a broad understanding of how the community works, I also feel that I have a breadth of technical experience that will benefit the TC.  I have experience in High Performance Computing and feel I can understand and represent the needs of the HPC community.  I have years of experience in the storage realm and can represent the unique concerns that storage vendors bring to OpenStack and the subsequent concerns that our OpenStack distributors have supporting OpenStack and its many drivers. Since moving from IBM to Lenovo my focus has changed from development of OpenStack to developing solutions that leverage OpenStack. I have enjoyed the opportunity to become a consumer of OpenStack as it has given me an opportunity to better understand the versatility and complexity of OpenStack.  I have been working with customers with an interest in both telco applications, particularly where Edge computing is concerned, as well as enterprise customers.  Given my latest work I feel that I am able to understand and represent many interests from the OpenStack community. If elected to the TC here are some of the concerns that I would like to address: * Ensure that we continue to improve our on-boarding and educational processes.  The days where people are assigned to only work on OpenStack are gone.  The easier we make it for people to successfully contribute to and leverage OpenStack, the more likely they will be to continue to contribute. * Improve documentation of OpenStack and Project processes. There have been a lot of discussions lately regarding undocumented processes.  There is a lot of tribal knowledge involved in OpenStack and this too makes it hard for new contributors to integrate. Improving the documentation/description of ‘how we build OpenStack’ is crucial. * I would like the community as a whole to seek ways to make OpenStack more consumable by our users and distributors.  I think the move to having longer lived stable branches has been a good step, but it has not resolved all the issues posed by customers that stay on older releases of OpenStack. The stable backport policies need to be readdressed to seek a solution that allows vendors to backport code for their customers, improving OpenStack’s usability without risking its stability. * I would like to continue the work that has been started to increase the visibility of the TC’s contributions to OpenStack and increase the effort to have the TC be a resource to the whole community. * I want to seek opportunities for OpenStack to continue to inter-operate with other cloud solutions.  Virtualization is not the only cloud approach available and customers, very often, do not want just one solution or the other.  OpenStack needs to continue to expand to address these concerns to remain vibrant and relevant. I hope that the thoughts above have resonated with you and appreciate you considering me for a position on the Technical Committee.  I am passionate about OpenStack and believe that we have a community like no other in the industry.  It would be a great honor to represent this community in a new capacity. 
Sincerely, Jay Bryant [1]  Foundation Profile: https://www.openstack.org/community/members/profile/8348/jay-bryant [2]  Reviews:  https://www.stackalytics.com/?user_id=jsbryant [3]  Commits: https://www.stackalytics.com/?user_id=jsbryant&metric=commits IRC (Freenode):  jungleboyj From fungi at yuggoth.org Tue Sep 3 19:22:49 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 3 Sep 2019 19:22:49 +0000 Subject: [infra] Re: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: <20190903190337.GA14785@sm-workstation> References: <20190814192440.GA3048@sm-workstation> <20190903190337.GA14785@sm-workstation> Message-ID: <20190903192248.b2mqozqobsxqgj7e@yuggoth.org> On 2019-09-03 14:03:37 -0500 (-0500), Sean McGinnis wrote: [...] > The release automation can only create branches, not remove them. > That is something the infra team would need to do. > > I can't recall how this was handled in the past. Maybe someone > from infra can shed some light on how EOL'ing stable branches > should be handled for the no longer needed stable/* branches. We've done it different ways. Sometimes it's been someone from the OpenDev/Infra sysadmins who volunteers to just delete the list of branches requested, but more recently for large batches related to EOL work we've temporarily elevated permissions for a member of the Stable Branch (now Extended Maintenance SIG?) or Release teams. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at nemebean.com Tue Sep 3 19:28:01 2019 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 3 Sep 2019 14:28:01 -0500 Subject: [ptl] [docs] [election] PTL Candidacy for Ussuri In-Reply-To: References: Message-ID: On 9/3/19 12:36 PM, Andreas Jaeger wrote: > On 03/09/2019 18.04, Ben Nemec wrote: >> Do we even still have a docs PTL position now that docs has become a SIG? > > That transition is not yet effective, Ah, didn't realize that. Thanks. From amy at demarco.com Tue Sep 3 19:35:21 2019 From: amy at demarco.com (Amy Marrich) Date: Tue, 3 Sep 2019 14:35:21 -0500 Subject: [Horizon] Help making custom theme Message-ID: For the Grace Hopper Conference's Open Source Day we're doing a Horizon based workshop for OpenStack (running Devstack Pike). The end goal is to have the attendee teams create their own OpenStack theme supporting a humanitarian effort of their choice in a few hours. I've tried modifying the material theme thinking it would be the easiest route to go but that might not be the best way to go about this.:) I've been getting some assistance from e0ne in the Horizon channel and my logo now shows up on the login page, and I had already gotten the SITE_BRAND attributes and the theme itself to show up after changing the local_settings.py. If anyone has some tips or a tutorial somewhere it would be greatly appreciated and I will gladly put together a tutorial for the repo when done. Thanks! Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... 
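For anyone following the theming question above, a rough sketch of what a minimal custom theme tends to look like is below. The theme name 'ghc', the logo filenames and the settings shown are illustrative assumptions; details shift between Horizon releases, so treat this as a starting point rather than the documented interface:

    openstack_dashboard/themes/ghc/
        static/
            _styles.scss       # extra SCSS layered on top of the inherited theme
            _variables.scss    # Bootstrap/theme variable overrides (colours, fonts)
            img/
                logo.svg           # top-navigation logo
                logo-splash.svg    # login-page logo

    # openstack_dashboard/local/local_settings.py
    SITE_BRAND = "GHC Open Source Day"
    AVAILABLE_THEMES = [
        ('default', 'Default', 'themes/default'),
        ('material', 'Material', 'themes/material'),
        ('ghc', 'GHC', 'themes/ghc'),
    ]
    DEFAULT_THEME = 'ghc'

After editing, static assets generally need to be recollected/recompressed and the web server restarted before the new theme shows up.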
URL: From fungi at yuggoth.org Tue Sep 3 19:45:46 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 3 Sep 2019 19:45:46 +0000 Subject: [Horizon] Help making custom theme In-Reply-To: References: Message-ID: <20190903194546.rug3bagpglhzdyio@yuggoth.org> On 2019-09-03 14:35:21 -0500 (-0500), Amy Marrich wrote: > For the Grace Hopper Conference's Open Source Day we're doing a > Horizon based workshop for OpenStack (running Devstack Pike). [...] I'm thrilled to see you were able to make it happen, thanks for representing our community there! Out of curiosity though, why Pike? I expect there's a really good reason you're stuck doing it on an almost two-year-old release, but I lack sufficient imagination to guess what it might be. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From amy at demarco.com Tue Sep 3 20:05:24 2019 From: amy at demarco.com (Amy Marrich) Date: Tue, 3 Sep 2019 15:05:24 -0500 Subject: [Horizon] Help making custom theme In-Reply-To: <20190903194546.rug3bagpglhzdyio@yuggoth.org> References: <20190903194546.rug3bagpglhzdyio@yuggoth.org> Message-ID: Jeremy, It's what I could get running on City Network's generously provided infrastructure. I wasn't getting the same results installing there as locally, for instance had to turn off etcd but I don't have to on my local virtual box instance. I'd get partially through master and stein installs and then errors so I kind of stopped at a good installation and quickly made a 'golden' image and moved on to working on the workshop itself. I also attempted packstack and had errors as well so it just made more sense to move on vs continuously pestering Florian.:). Note: Errors could definitely be a result of me trying to run as lean as possible and not take advantage of the resources being donated. Amy (spotz) On Tue, Sep 3, 2019 at 2:47 PM Jeremy Stanley wrote: > On 2019-09-03 14:35:21 -0500 (-0500), Amy Marrich wrote: > > For the Grace Hopper Conference's Open Source Day we're doing a > > Horizon based workshop for OpenStack (running Devstack Pike). > [...] > > I'm thrilled to see you were able to make it happen, thanks for > representing our community there! Out of curiosity though, why Pike? > I expect there's a really good reason you're stuck doing it on an > almost two-year-old release, but I lack sufficient imagination to > guess what it might be. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Tue Sep 3 20:11:12 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 3 Sep 2019 16:11:12 -0400 Subject: [openstack-ansible] weekly office hours Message-ID: Hi everyone, Here’s the update of what happened in this week’s OpenStack Ansible Office Hours. - The 42.3 clean up and job state matrix are still pending. - We decided to retire Ocata and are thinking of retiring Pike since they are not being used or maintained anymore. - We also discussed the future of OSA. We noticed a lot of operators are going more towards containers or kubernetes and the contributions and traction for OSA have decreased, so we’re wondering if we should start thinking about adopting a new direction in the future. I suggest that you read the eavesdrop for the last point and would like to ask for input. Thanks! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 
514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From fungi at yuggoth.org Tue Sep 3 20:13:09 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 3 Sep 2019 20:13:09 +0000 Subject: [Horizon] Help making custom theme In-Reply-To: References: <20190903194546.rug3bagpglhzdyio@yuggoth.org> Message-ID: <20190903201308.gfhyg6p7eybtnsuw@yuggoth.org> On 2019-09-03 15:05:24 -0500 (-0500), Amy Marrich wrote: > It's what I could get running on City Network's generously > provided infrastructure. I wasn't getting the same results > installing there as locally, for instance had to turn off etcd but > I don't have to on my local virtual box instance. I'd get > partially through master and stein installs and then errors so I > kind of stopped at a good installation and quickly made a 'golden' > image and moved on to working on the workshop itself. I also > attempted packstack and had errors as well so it just made more > sense to move on vs continuously pestering Florian.:). > > Note: Errors could definitely be a result of me trying to run as > lean as possible and not take advantage of the resources being > donated. [...] Ahh, sorry to hear it was a struggle! I'm definitely not questioning your expedient choices, just want to be sure that any bugs you encountered get tracked somewhere so we can try to fix them. Thanks! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mnaser at vexxhost.com Tue Sep 3 20:13:21 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 3 Sep 2019 16:13:21 -0400 Subject: [tc] weekly update Message-ID: Hi everyone, Here’s the update for what happened in the OpenStack TC this week. You can get more information by checking for changes in openstack/governance repository. # General changes - Added kayobe as a deliverable of the kolla project: https://review.opendev.org/#/c/669299/ Thanks! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From amy at demarco.com Tue Sep 3 20:22:39 2019 From: amy at demarco.com (Amy Marrich) Date: Tue, 3 Sep 2019 15:22:39 -0500 Subject: [Horizon] Help making custom theme In-Reply-To: <20190903201308.gfhyg6p7eybtnsuw@yuggoth.org> References: <20190903194546.rug3bagpglhzdyio@yuggoth.org> <20190903201308.gfhyg6p7eybtnsuw@yuggoth.org> Message-ID: Jeremy, Ha you should know me better then that:) I just couldn't be sure if it was something I was doing or not and I didn't want to keep bumping up RAM and cores on the instances as I do try to be a good guest:). I do think the etcd thing is interesting as I never ran into that on Rackspace or on my local VM, but it could be a difference in the Ubuntu being used even though the same version. Amy (spotz) On Tue, Sep 3, 2019 at 3:14 PM Jeremy Stanley wrote: > On 2019-09-03 15:05:24 -0500 (-0500), Amy Marrich wrote: > > It's what I could get running on City Network's generously > > provided infrastructure. I wasn't getting the same results > > installing there as locally, for instance had to turn off etcd but > > I don't have to on my local virtual box instance. I'd get > > partially through master and stein installs and then errors so I > > kind of stopped at a good installation and quickly made a 'golden' > > image and moved on to working on the workshop itself. 
I also > > attempted packstack and had errors as well so it just made more > > sense to move on vs continuously pestering Florian.:). > > > > Note: Errors could definitely be a result of me trying to run as > > lean as possible and not take advantage of the resources being > > donated. > [...] > > Ahh, sorry to hear it was a struggle! I'm definitely not questioning > your expedient choices, just want to be sure that any bugs you > encountered get tracked somewhere so we can try to fix them. Thanks! > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Albert.Braden at synopsys.com Tue Sep 3 20:31:15 2019 From: Albert.Braden at synopsys.com (Albert Braden) Date: Tue, 3 Sep 2019 20:31:15 +0000 Subject: Nova causes MySQL timeouts Message-ID: It looks like nova is keeping mysql connections open until they time out. How are others responding to this issue? Do you just ignore the mysql errors, or is it possible to change configuration so that nova closes and reopens connections before they time out? Or is there a way to stop mysql from logging these aborted connections without hiding real issues? Aborted connection 10726 to db: 'nova' user: 'nova' host: 'asdf' (Got timeout reading communication packets) -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaetan.trellu at incloudus.com Tue Sep 3 20:36:47 2019 From: gaetan.trellu at incloudus.com (=?ISO-8859-1?Q?Ga=EBtan_Trellu?=) Date: Tue, 03 Sep 2019 16:36:47 -0400 Subject: Nova causes MySQL timeouts In-Reply-To: Message-ID: An HTML attachment was scrubbed... URL: From openstack at fried.cc Tue Sep 3 21:12:19 2019 From: openstack at fried.cc (Eric Fried) Date: Tue, 3 Sep 2019 16:12:19 -0500 Subject: [nova][ptl] Eric Fried candidacy for Nova U PTL Message-ID: <918a3e7d-fbf5-51da-44bc-5e4e7501b094@fried.cc> I would be honored to continue serving as the Nova PTL in the Ussuri release [0]. Please note that I will not be in Shanghai. As Train PTL, I am working to delegate the project update [1]. If reelected for Ussuri, I intend to do the same for PTG responsibilities, including doing as much as possible via "virtual pre-PTG" on the mailing list. Being PTL for Train has been a growth experience. It has forced me to take a broader view of the project versus my previous focus on my topics of interest [2]. The flip-side is that I have had less time to devote to those things, and that has been a sacrifice. As such, I intend to be bolder about delegating this time around. In my stump speech for Train [3] I expressed a desire to grow contributor participation. I feel we have seen positive movement with new and existing non-cores showing improved code and review activity. Let's maintain the encouraging atmosphere and continue to grow in this space. However, core participation has not seen the same health, and it shows in the relatively low volume of feature work that has been accomplished to date in Train (more on this below). This has been one of my main frustrations as Nova Cat-Herder: like cats, Nova cores are mysterious beings motivated by forces beyond my ability to control. I would like to find ways to make core review activity more consistent as a step toward being able to predict more accurately what we can expect to get done in a cycle. This should make everyone's (project) managers happier, a delicious treat made with real tuna. Feature-wise, I was disappointed in the lack of progress exploiting nested resource providers. 
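Circling back to the Nova/MySQL idle-connection question earlier in this digest: the usual answer is to let oslo.db recycle pooled connections before MySQL's wait_timeout closes them server-side. A hedged sketch follows; the option is connection_recycle_time in recent oslo.db releases (idle_timeout in older ones) and the numbers are examples to adapt, not recommendations:

    # /etc/nova/nova.conf (the same [database] options exist for other services)
    [database]
    # close and reopen pooled connections well before MySQL's wait_timeout
    connection_recycle_time = 300
    max_pool_size = 10

    # MySQL side (my.cnf): keep wait_timeout comfortably above the recycle time
    # wait_timeout = 3600

With the recycle time kept below wait_timeout the clients drop their connections cleanly themselves, so the "Got timeout reading communication packets" aborts should largely disappear rather than merely being hidden.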
The Placement team worked hard to deliver the dependencies [4] to allow us to express things like subtree affinity for NUMA, but Nova missed the boat (train) due to lack of resource [5] and inability to agree on how to move forward [6]. Expressing NUMA in Placement is going to be the next major inflection point for scheduling robustness and performance; we need to get serious about making it a priority. But first we should finish what we started, closing on the many almost-there features that are looking risky for Train. We should be conservative about committing to new features until those are done. Thanks, Eric Fried (efried) (say it like "freed") [0] https://review.opendev.org/679862 [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/thread.html#8459 [2] meaning things my employer cares about, areas where I have background/expertise, and things that sound fun and/or further some mission of Nova/OpenStack [3] https://opendev.org/openstack/election/src/branch/master/candidates/train/Nova/openstack at fried.cc [4] https://docs.openstack.org/placement/latest/specs/train/approved/2005575-nested-magic-1.html [5] insert Placement joke here [6] https://review.opendev.org/#/c/650963/ From tpb at dyncloud.net Tue Sep 3 21:30:11 2019 From: tpb at dyncloud.net (Tom Barron) Date: Tue, 3 Sep 2019 17:30:11 -0400 Subject: [manila][ptl] Non-candidacy for the Ussuri Cycle Message-ID: <20190903213011.dxj5254weftgw3tp@barron.net> I want to thank the Manila community for allowing me to serve as PTL for the last three cycles, but it is time for us to change it up! I know it's late for this announcement, but I wanted first to make sure that I won't be leaving an unfilled vacancy and events conspired such that that took longer than anticipated :) So stay tuned for an announcment shortly of a nomination for a new PTL for Ussuri -- I'm sure you'll be as pleased as I am with what you'll read. I'm not going anywhere. I'll be working on Manila itself and helping the new PTL, as well as working with actual Manila deployments and the use of Manila as open infrastructure by adjacent communities. Thanks again! -- Tom Barron From cboylan at sapwetik.org Tue Sep 3 21:35:01 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 03 Sep 2019 14:35:01 -0700 Subject: review.opendev.org Outage September 16 to Perform Project Renames Message-ID: <25fb5c5d-4ee5-47c6-80d0-df1f857856d0@www.fastmail.com> Hello, We will be taking a Gerrit outage of about an hour on September 16 at 14:00 UTC to perform project renames. Please let us know if this scheduling does not work for some reason. We have tried to schedule this for a quiet time at the end of OpenStack's Train release cycle. Also, if you'd like to rename a project, now is the time to start prepping for that. Feel free to ask us any questions you have or bring up your concerns with us. Thank you for your patience, Clark From anlin.kong at gmail.com Tue Sep 3 21:45:34 2019 From: anlin.kong at gmail.com (Lingxian Kong) Date: Wed, 4 Sep 2019 09:45:34 +1200 Subject: Need help trigger aodh alarm In-Reply-To: References: Message-ID: On Wed, Sep 4, 2019 at 3:57 AM Anmar Salih wrote: > Hey all, > > I need help trigger aodh alarm to execute a simple function. I am > following the instructions here > but it does't > work. > Hi Anmar, Could you please provide more information? e.g. does Qinling webhook itself work? Is the alarm created successfully? Is the python script in the guide executed successfully? Any related error logs? 
- Best regards, Lingxian Kong Catalyst Cloud -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Tue Sep 3 22:00:59 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 3 Sep 2019 17:00:59 -0500 Subject: [qa][nova][migrate]CPU doesn't have compatibility In-Reply-To: References: <59101c82.4fb9.16cf4ee15c0.Coremail.chx769467092@163.com> <45468d23-f9c2-bfd6-2021-7129db8afc07@gmail.com> <4d965462.63d8.16cf53223a7.Coremail.chx769467092@163.com> Message-ID: On 9/2/2019 10:40 PM, Wesley Peng wrote: >> We can migrate the vm from compute102 to compute101 Successfully. >> compute101 to compute102 ERROR info: the CPU is incompatible with host >> CPU: Host CPU does not provide required features: f16c, rdrand, >> fsgsbase, smep, erms >> >> > > The error has said, cpu of compute101 is lower than compute102, some > incompatible issues happened. To live migrate, you'd better have all > hosts with the same hardwares, including cpu/mem/disk etc [1] and [2] may be helpful. [1] https://www.openstack.org/videos/summits/berlin-2018/effective-virtual-cpu-configuration-in-nova [2] https://docs.openstack.org/nova/latest/admin/configuration/hypervisor-kvm.html#specify-the-cpu-model-of-kvm-guests -- Thanks, Matt From adriant at catalyst.net.nz Tue Sep 3 23:12:06 2019 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Wed, 4 Sep 2019 11:12:06 +1200 Subject: [adjutant][ptl] Adrian Turjak as Adjutant U cycle PTL Message-ID: <2f91b121-023a-299c-aeac-8939761da50a@catalyst.net.nz> Hello OpenStackers, I'm submitting myself as the PTL for Adjutant during the U cycle. At this time I think I'm still the best suited to continue leading the project, with the best understanding of the codebase and the direction that the service is taking. The Train cycle was sadly not as productive as I'd have liked, purely because of how much big refactor work we've been in the middle of. The progress of that though has been good, and it should lay the groundwork for a very productive U cycle. The planned work for the next cycle is: - introduce partial policy support rather than relying on hardcoded decorators. - finish the long planned support for sub-project management - add project (and resource) termination logic - rework the identity manager as a pluggable construct. Cheers, Adrian Turjak From gouthampravi at gmail.com Tue Sep 3 23:47:56 2019 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Tue, 3 Sep 2019 16:47:56 -0700 Subject: [manila][ptl][election] PTL candidacy for Ussuri Message-ID: Greetings Zorillas & other Stackers, I would like to submit my candidacy to be the PTL of Manila for the Ussuri cycle. I have been a contributor to OpenStack since the Liberty release and a maintainer of Manila and its associated deliverables since the Ocata release. I have had the opportunity to work closely with, and learn from, two stellar engineers who have served as PTLs so far. I've also had the privilege of collaborating with contributors from varied backgrounds. This taught me the technical aspects of orchestrating Open Infrastructure Storage at cloud scale. I attribute the tremendous growth of the project to each of us in the project internalizing and espousing the "OpenStack Way" of upstream open-source development. My strongest qualification for this job is that I wake up excited about the problems we're solving. As an engineer I see features left to implement; as an ambassador, I see untapped use cases; as a maintainer, I see new contributors and technical debt. 
So, if you'll have me, as the PTL, I will work towards maturing Manila, tackling its technical debt, advocating its usage and sustaining its neutrality. I'll also continue doing the thing I love most: mentoring new members and preserving this well-knit community. In the near term, I propose that you and I: - Continue hard on the path to growing contributors: Stein/Train was an exciting time for us; we worked hard on this goal! We lowered the barrier of entry for new contributors by relaxing our review norms [1] and provided quick and easy tutorials [2] to bootstrap with our free and open source storage drivers, among many other things. We had an opportunity to mentor interns under Outreachy [3], Google Summer of Code [4] and the Open University of Israel [5] internship programs. Let's do more of this and ensure we have able successors. Let's also mentor reviewers and create more maintainers. - Complete integration to openstackclient/openstacksdk and evolve manila-csi by reaching feature parity to the rich feature-set we already provide. - Continue the work on reliability, availability and fault tolerance of individual components and allow for more flexible deployment scenarios. - Gather feedback from edge/telco/scientific computing consumers and address pain points. Thank you for your support, Goutham Pacha Ravi IRC: gouthamr [1] https://docs.openstack.org/manila/latest/contributor/manila-review-policy.html [2] https://docs.openstack.org/manila/latest/contributor/development-environment-devstack.html [3] https://www.outreachy.org/apply/rounds/may-2019-august-2019-outreachy-internships/#openstack-openstack-manila-integration-with-openstack-cli-os [4] https://summerofcode.withgoogle.com/projects/#5067835716403200 [5] https://review.opendev.org/#/q/committer:gilboa.nir%2540gmail.com+status:merged [6] Candidacy submission: https://review.opendev.org/679881 From gouthampravi at gmail.com Tue Sep 3 23:55:20 2019 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Tue, 3 Sep 2019 16:55:20 -0700 Subject: [manila][ptl] Non-candidacy for the Ussuri Cycle In-Reply-To: <20190903213011.dxj5254weftgw3tp@barron.net> References: <20190903213011.dxj5254weftgw3tp@barron.net> Message-ID: Thank you so much for your tireless service as our fearless leader Tom. While we'll allow you to retire as PTL for a term or few, I'm glad we'll retain you as a guide and mentor. Thanks for reposing faith in your protégées, I'm only one of many. I'm going to attempt to steal your shoes and try them out. On Tue, Sep 3, 2019 at 2:33 PM Tom Barron wrote: > I want to thank the Manila community for allowing me to serve as PTL > for the last three cycles, but it is time for us to change it up! I > know it's late for this announcement, but I wanted first to make sure > that I won't be leaving an unfilled vacancy and events conspired such > that that took longer than anticipated :) > > So stay tuned for an announcment shortly of a nomination for a new PTL > for Ussuri -- I'm sure you'll be as pleased as I am with what you'll > read. > > I'm not going anywhere. I'll be working on Manila itself and helping > the new PTL, as well as working with actual Manila deployments and the > use of Manila as open infrastructure by adjacent communities. > > Thanks again! > > -- Tom Barron > > > -------------- next part -------------- An HTML attachment was scrubbed... 
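Returning to the live-migration CPU-compatibility error discussed earlier (missing f16c, rdrand, fsgsbase, smep, erms on the destination), the approach in the links Matt posted is to pin guests to a CPU model every host can provide. A sketch of the nova.conf knobs involved is below; "SandyBridge" is only an assumed example, and the model has to be picked from what `virsh cpu-models x86_64` reports as usable on the older host:

    # /etc/nova/nova.conf on every compute node
    [libvirt]
    cpu_mode = custom
    cpu_model = SandyBridge    # lowest common model across compute101/compute102
    # newer releases also accept cpu_model_extra_flags for individual features

Note this only affects newly started or hard-rebooted guests; instances already running with a host-model CPU keep their current feature set until they are restarted.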
URL: From missile0407 at gmail.com Wed Sep 4 01:02:46 2019 From: missile0407 at gmail.com (Eddie Yen) Date: Wed, 4 Sep 2019 09:02:46 +0800 Subject: [kolla-ansible] Correct way to add/remove nodes. In-Reply-To: References: Message-ID: OK, I think I found the probably answer about removing controller. Refer from Oracle. I know there're different between kollacli and kolla-ansible, but I think the few mechanism are the same. To remove the controller that having a problem (the controller already dead caused by hardware failure for example): 1. Remove the node from inventory. 2. Do kolla-ansible reconfigure. Then check if all information has updated. To add the new controller, just do the same as addition from previous mail. For now I have not enough machines to try this, but this is what I approach. Please correct me if there's something wrong. Many thanks, Eddie. Eddie Yen 於 2019年9月3日 週二 下午3:51寫道: > Hi, > > I wanna know the correct way to add/remove nodes since I can't find the > completely document or tutorial about this part. > > Here's what I know for now. > > For addition: > 1. Install OS and setting up network on new servers. > 2. Add new server's information into /etc/hosts and inventory file > 3. Do bootstrapping to these servers by using bootstrap-servers with > --limit option > 4. (For Ceph OSD node) Add disk label to the disks that will become OSD. > 5. Deploy again. > > > For deletion (Compute): > 1. Do migration if there're VMs exist on target node. > 2. Set nova-compute service down on target node. Then remove the service > from nova cluster. > 3. Disable all Neutron agents on target node and remove from Neutron > cluster. > 4. Using kolla-ansible to stop all containers on target node. > 5. Cleanup all containers and left settings by using cleanup-containers > and cleanup-host script. > > > For deletion (Ceph OSD node): > 1. Remove all OSDs on target node by following Ceph tutorial. > 2. Using kolla-ansible to stop all containers on target node. > 3. Cleanup all containers and left settings by using cleanup-containers > and cleanup-host script. > > > > Now I'm not sure about Controller if there's one controller down and want > to add another one into HA cluster. My thought is that add into cluster > first, then delete the informations about corrupted controller. But I have > no clue about the details. Only about Ceph controller (mon, rgw, mds,. etc) > > Does anyone has experience about this? > > > Many thanks, > Eddie. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Sep 4 01:07:23 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 04 Sep 2019 10:07:23 +0900 Subject: [cyborg][election][ptl] PTL candidacy for Ussuri In-Reply-To: <1CC272501B5BC543A05DB90AA509DED5276073B3@fmsmsx122.amr.corp.intel.com> References: <1CC272501B5BC543A05DB90AA509DED5276073B3@fmsmsx122.amr.corp.intel.com> Message-ID: <16cf9cfd6af.c725c38b180568.3968014926200808575@ghanshyammann.com> Hi Sundar, I think you missed to add the nomination on gerrit. - https://governance.openstack.org/election/#how-to-submit-a-candidacy The nomination period is passed now. -gmann ---- On Mon, 02 Sep 2019 13:52:24 +0900 Nadathur, Sundar wrote ---- > > Hello all, > I would like to announce my candidacy for the PTL role of Cyborg for the Ussuri cycle. > > I have been involved with Cyborg since 2018 Rocky PTG, and have had the privilege of serving as Cyborg PTL for the Train cycle. > > In the Train cycle, Cyborg saw some important developments. 
We reached an agreement on integration with Nova at the PTG, and the spec that I wrote based on that agreement has been merged. We have seen new developers join the community. We have seen existing Cyborg drivers getting updated and new Cyborg drivers being proposed. We are also in the process of developing a tempest plugin for Cyborg. > > In the U cycle, I’d aim to build on this foundation. While we may support a certain set of VM operations with accelerators with Nova in Train, we can expand on that set in U. We should also focus on Day 2 operations like performance monitoring and health monitoring for accelerator devices. I would like to formalize and expand on the driver addition/development process. > > Thank you for your support. > > Regards, > Sundar > > From fungi at yuggoth.org Wed Sep 4 02:49:41 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 4 Sep 2019 02:49:41 +0000 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results Message-ID: <20190904024941.qaapsjuddklree26@yuggoth.org> Thank you to all candidates who put their name forward for Project Team Lead (PTL) and Technical Committee (TC) in this election. A healthy, open process breeds trust in our decision making capability thank you to all those who make this process possible. Now for the results of the PTL election process, please join me in extending congratulations to the following PTLs: * Adjutant : Adrian Turjak * Barbican : Douglas Mendizábal * Blazar : Pierre Riteau * Cinder : Brian Rosmaita * Cloudkitty : Luka Peschke * Congress : Eric Kao * Documentation : Alexandra Settle * Ec2 Api : Andrey Pavlov * Freezer : geng chc * Glance : Abhishek Kekane * Heat : Rico Lin * Horizon : Akihiro Motoki * Infrastructure : Clark Boylan * Ironic : Julia Kreger * Karbor : Pengju Jiao * Keystone : Colleen Murphy * Kolla : Mark Goddard * Kuryr : Michał Dulko * Loci : Pete Birley * Magnum : Feilong Wang * Manila : Goutham Pacha Ravi * Masakari : Sampath Priyankara * Mistral : Renat Akhmerov * Monasca : Witek Bedyk * Murano : Rong Zhu * Neutron : Sławek Kapłoński * Nova : Eric Fried * Octavia : Adam Harwell * OpenStack Charms : Frode Nordahl * Openstack Chef : Jens Harbott * OpenStack Helm : Pete Birley * OpenStackAnsible : Mohammed Naser * OpenStackClient : Dean Troyer * Oslo : Ben Nemec * Packaging Rpm : Javier Peña * Puppet OpenStack : Shengping Zhong * Qinling : Lingxian Kong * Quality Assurance : Ghanshyam Mann * Rally : Andrey Kurilin * Release Management : Sean McGinnis * Requirements : Matthew Thode * Sahara : Jeremy Freudberg * Searchlight : Trinh Nguyen * Senlin : XueFeng Liu * Solum : Rong Zhu * Storlets : Kota Tsuyuzaki * Swift : Tim Burke * Tacker : dharmendra kushwaha * Telemetry : Rong Zhu * Tricircle : chi zhang * Tripleo : Wes Hayutin * Trove : Lingxian Kong * Vitrage : Eyal Bar-Ilan * Watcher : canwei li * Zaqar : wang hao * Zun : Feng Shengqin Also please join me in congratulating the 6 newly elected members of the TC: Ghanshyam Mann (gmann) Jean-Philippe Evrard (evrardjp) Jay Bryant (jungleboyj) Kevin Carter (cloudnull) Kendall Nelson (diablo_rojo) Nate Johnston (njohnston) Full results: because there were only as many TC candidates as open seats, no poll was held and all candidates were acclaimed Elections: Election process details and results are also available here: https://governance.openstack.org/election/ -- Jeremy Stanley, on behalf of the OpenStack Technical Election Officials -------------- next part -------------- A non-text attachment was 
scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From anmar.salih1 at gmail.com Wed Sep 4 02:51:34 2019 From: anmar.salih1 at gmail.com (Anmar Salih) Date: Tue, 3 Sep 2019 22:51:34 -0400 Subject: Need help trigger aodh alarm - All the steps I went through by details. In-Reply-To: References: Message-ID: Hi Lingxian, First of all, I would like to apologize because the email is pretty long. I listed all the steps I went through just to make sure that I did everything correctly. Here are the configurations of the environment I am using: * Operating system (Ubuntu16 server) running on virtual machine. * Openstack version 3.19.0 * Aodh version 1.2.0 ( I executed *aodh --version* command and got response, so I am assuming aodh is working ) * Here is local.conf file I used to install devstack. * Here is a list for all of components I have in my environment after installation. 1- First step is to add the runtime environment by openstack runtime create --name python27 openstackqinling/python-runtime. One minute later the status of runtime switched to available. 2- Creating *hello_world.py* function ( exactly as mentioned at the website) . 3- Creating qinling function by openstack function create --runtime eaeeb0b6-4257-4f17-a336-892c3ec28a3e --entry hello_world.main --file hello_world.py . I got a response that is the function is created. Exactly as mentioned at the website. 4- Creating the webhook for the function by: openstack webhook create --function 07edc434-a4b8-424a-8d3a-af253aa31bf8 . Here is a screen capture for the response. I tried to copy and paste the webhook_url " http://192.168.1.155:7070/v1/webhooks/c5608648-bd73-478f-b452-ad1eabf93328/invoke" into my internet browser, so I got 404 not found. I am not sure if this is normal response or I have something wrong here. 5- Next step is to create an event alarm in Aodh by: aodh alarm create --name qinling-alarm --type event --alarm-action http://192.168.1.155:7070/v1/webhooks/c5608648-bd73-478f-b452-ad1eabf93328/invoke --repeat-action false --event-type compute.instance.create . The response is a little bit different than the one at the website. 6- Simulating an event trigger . 7- Downloading the script and modify the project and file id. by: curl -sSO https://raw.githubusercontent.com/lingxiankong/qinling_utils/master/aodh_notifier_simulator.py . So I have the following config and file id . 8- Executing the aodh alarm simulator by: python aodh_notifier_simulator.py . So I got this response : No handlers could be found for logger "oslo_messaging.notify.messaging" Message sent 9- Checking aodh alarm history by aodh alarm-history show ea16edb9-2000-471b-88e5-46f54208995e -f yaml . So I got this response 10- Last step is to check the function execution in qinling and here is the response . (empty bracket). I am not sure what is the problem. Best wishes. Anmar Salih. On Tue, Sep 3, 2019 at 5:45 PM Lingxian Kong wrote: > On Wed, Sep 4, 2019 at 3:57 AM Anmar Salih wrote: > >> Hey all, >> >> I need help trigger aodh alarm to execute a simple function. I am >> following the instructions here >> but it does't >> work. >> > > Hi Anmar, > > Could you please provide more information? e.g. does Qinling webhook > itself work? Is the alarm created successfully? Is the python script in the > guide executed successfully? Any related error logs? > > - > Best regards, > Lingxian Kong > Catalyst Cloud > -------------- next part -------------- An HTML attachment was scrubbed... 
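A few checks that may help narrow down where the chain described above breaks; this is a sketch under the assumption that the IDs from the steps above are still valid, not a verified recipe:

    # 1. Exercise the webhook directly: the invoke endpoint is meant to receive POSTs,
    #    so a 404 on a plain browser GET is not conclusive by itself.
    curl -X POST http://192.168.1.155:7070/v1/webhooks/c5608648-bd73-478f-b452-ad1eabf93328/invoke

    # 2. If that produces an execution, the Qinling side is fine.
    openstack function execution list

    # 3. Check whether the alarm ever fired at all.
    aodh alarm show ea16edb9-2000-471b-88e5-46f54208995e
    aodh alarm-history show ea16edb9-2000-471b-88e5-46f54208995e

    # 4. If the alarm stays untriggered after running the simulator, the aodh-listener
    #    and aodh-notifier logs (journalctl -u devstack@aodh-* on a devstack host)
    #    usually show whether the simulated event was received and matched.

Working through these in order separates "webhook/function broken" from "alarm never triggered", which are the two very different failure modes here.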
URL: From gmann at ghanshyammann.com Wed Sep 4 02:57:54 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 04 Sep 2019 11:57:54 +0900 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <20190904024941.qaapsjuddklree26@yuggoth.org> References: <20190904024941.qaapsjuddklree26@yuggoth.org> Message-ID: <16cfa35052b.edce4b75181025.6895418677759907250@ghanshyammann.com> Thanks Jeremy and all election official for another flawless job. -gmann ---- On Wed, 04 Sep 2019 11:49:41 +0900 Jeremy Stanley wrote ---- > Thank you to all candidates who put their name forward for Project > Team Lead (PTL) and Technical Committee (TC) in this election. A > healthy, open process breeds trust in our decision making capability > thank you to all those who make this process possible. > > Now for the results of the PTL election process, please join me in > extending congratulations to the following PTLs: > > * Adjutant : Adrian Turjak > * Barbican : Douglas Mendizábal > * Blazar : Pierre Riteau > * Cinder : Brian Rosmaita > * Cloudkitty : Luka Peschke > * Congress : Eric Kao > * Documentation : Alexandra Settle > * Ec2 Api : Andrey Pavlov > * Freezer : geng chc > * Glance : Abhishek Kekane > * Heat : Rico Lin > * Horizon : Akihiro Motoki > * Infrastructure : Clark Boylan > * Ironic : Julia Kreger > * Karbor : Pengju Jiao > * Keystone : Colleen Murphy > * Kolla : Mark Goddard > * Kuryr : Michał Dulko > * Loci : Pete Birley > * Magnum : Feilong Wang > * Manila : Goutham Pacha Ravi > * Masakari : Sampath Priyankara > * Mistral : Renat Akhmerov > * Monasca : Witek Bedyk > * Murano : Rong Zhu > * Neutron : Sławek Kapłoński > * Nova : Eric Fried > * Octavia : Adam Harwell > * OpenStack Charms : Frode Nordahl > * Openstack Chef : Jens Harbott > * OpenStack Helm : Pete Birley > * OpenStackAnsible : Mohammed Naser > * OpenStackClient : Dean Troyer > * Oslo : Ben Nemec > * Packaging Rpm : Javier Peña > * Puppet OpenStack : Shengping Zhong > * Qinling : Lingxian Kong > * Quality Assurance : Ghanshyam Mann > * Rally : Andrey Kurilin > * Release Management : Sean McGinnis > * Requirements : Matthew Thode > * Sahara : Jeremy Freudberg > * Searchlight : Trinh Nguyen > * Senlin : XueFeng Liu > * Solum : Rong Zhu > * Storlets : Kota Tsuyuzaki > * Swift : Tim Burke > * Tacker : dharmendra kushwaha > * Telemetry : Rong Zhu > * Tricircle : chi zhang > * Tripleo : Wes Hayutin > * Trove : Lingxian Kong > * Vitrage : Eyal Bar-Ilan > * Watcher : canwei li > * Zaqar : wang hao > * Zun : Feng Shengqin > > Also please join me in congratulating the 6 newly elected members of > the TC: > > Ghanshyam Mann (gmann) > Jean-Philippe Evrard (evrardjp) > Jay Bryant (jungleboyj) > Kevin Carter (cloudnull) > Kendall Nelson (diablo_rojo) > Nate Johnston (njohnston) > > Full results: because there were only as many TC candidates as open > seats, no poll was held and all candidates were > acclaimed > > Elections: > > Election process details and results are also available here: > https://governance.openstack.org/election/ > > -- > Jeremy Stanley, on behalf of the OpenStack Technical Election Officials > From andre at florath.net Wed Sep 4 06:07:31 2019 From: andre at florath.net (Andreas Florath) Date: Wed, 04 Sep 2019 08:07:31 +0200 Subject: [heat] Resource handling in Heat stacks Message-ID: Hello! 
Can please anybody tell me, if all resources which are created within a Heat stack belong to the stack in the way that all the resources are freed / deleted when the stack is deleted? IMHO all resources which are created during the initial creation or update of a stack, even if they are ephemeral or only internal created, must be deleted when the stack is deleted by OpenStack Heat itself. Correct? My question might see obvious, but I did not find an explicit hint in the documentation stating this. The reason for my question: I have a Heat template which uses two images to create a server (using block_device_mapping_v2). Every time I run an 'openstack stack create' and 'openstack stack delete' cycle one ephemeral volume is left over / gets not deleted. For me this sounds like a problem in OpenStack (Heat). (It looks that this is at least similar to https://review.opendev.org/#/c/341008/ which never made it into master.) Kind regards Andre From ramishra at redhat.com Wed Sep 4 06:34:40 2019 From: ramishra at redhat.com (Rabi Mishra) Date: Wed, 4 Sep 2019 12:04:40 +0530 Subject: [heat] Resource handling in Heat stacks In-Reply-To: References: Message-ID: On Wed, Sep 4, 2019 at 11:41 AM Andreas Florath wrote: > Hello! > > > Can please anybody tell me, if all resources which are created > within a Heat stack belong to the stack in the way that > all the resources are freed / deleted when the stack is deleted? > > IMHO all resources which are created during the initial creation or > update of a stack, even if they are ephemeral or only internal > created, must be deleted when the stack is deleted by OpenStack Heat > itself. Correct? > > My question might see obvious, but I did not find an explicit hint in > the documentation stating this. > > > The reason for my question: I have a Heat template which uses two > images to create a server (using block_device_mapping_v2). Every time > I run an 'openstack stack create' and 'openstack stack delete' cycle > one ephemeral volume is left over / gets not deleted. > I think it's due toe delete_on_termination[1] property of bdmv2 which is interpreted as 'False', if not specified. You can set it to 'True' to delete the volumes along with server. I've not checked if it's different from how nova api behaves though. [1] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Nova::Server-prop-block_device_mapping_v2-*-delete_on_termination > For me this sounds like a problem in OpenStack (Heat). > (It looks that this is at least similar to > https://review.opendev.org/#/c/341008/ > which never made it into master.) > > > Kind regards > > Andre > > > > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Wed Sep 4 07:29:04 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 04 Sep 2019 09:29:04 +0200 Subject: [openstack-ansible] weekly office hours In-Reply-To: References: Message-ID: On Tue, 2019-09-03 at 16:11 -0400, Mohammed Naser wrote: > Hi everyone, > > Here’s the update of what happened in this week’s OpenStack Ansible > Office Hours. > > - The 42.3 clean up and job state matrix are still pending. > - We decided to retire Ocata and are thinking of retiring Pike since > they are not being used or maintained anymore. > - We also discussed the future of OSA. 
We noticed a lot of operators > are going more towards containers or kubernetes and the contributions > and traction for OSA have decreased, so we’re wondering if we should > start thinking about adopting a new direction in the future. > > I suggest that you read the eavesdrop for the last point and would > like to ask for input. > > Thanks! > > Regards, > Mohammed > Thanks for the summary! I totally enjoy those every week. I hope some other will step up and say they like them too, or better, write those to free some of your time! :) Regards, JP From andre at florath.net Wed Sep 4 07:51:01 2019 From: andre at florath.net (Andreas Florath) Date: Wed, 04 Sep 2019 09:51:01 +0200 Subject: [heat] Resource handling in Heat stacks In-Reply-To: References: Message-ID: <0f3f727581dc68f4f1ab26ed2ef47686811dbe07.camel@florath.net> Many thanks! Works like a charm! Suggestion: document default value of 'delete_on_termination'. 😉 Kind regards Andre On Wed, 2019-09-04 at 12:04 +0530, Rabi Mishra wrote: > On Wed, Sep 4, 2019 at 11:41 AM Andreas Florath > wrote: > > Hello! > > > > > > > > > > > > Can please anybody tell me, if all resources which are created > > > > within a Heat stack belong to the stack in the way that > > > > all the resources are freed / deleted when the stack is deleted? > > > > > > > > IMHO all resources which are created during the initial creation or > > > > update of a stack, even if they are ephemeral or only internal > > > > created, must be deleted when the stack is deleted by OpenStack > > Heat > > > > itself. Correct? > > > > > > > > My question might see obvious, but I did not find an explicit hint > > in > > > > the documentation stating this. > > > > > > > > > > > > The reason for my question: I have a Heat template which uses two > > > > images to create a server (using block_device_mapping_v2). Every > > time > > > > I run an 'openstack stack create' and 'openstack stack delete' > > cycle > > > > one ephemeral volume is left over / gets not deleted. > > > > > I think it's due toe delete_on_termination[1] property of bdmv2 which > is interpreted as 'False', if not specified. You can set it to 'True' > to delete the volumes along with server. I've not checked if it's > different from how nova api behaves though. > > [1] > https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Nova::Server-prop-block_device_mapping_v2-*-delete_on_termination > > > For me this sounds like a problem in OpenStack (Heat). > > > > (It looks that this is at least similar to > > > > https://review.opendev.org/#/c/341008/ > > > > which never made it into master.) > > > > > > > > > > > > Kind regards > > > > > > > > Andre > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andres at opennodecloud.com Wed Sep 4 08:43:49 2019 From: andres at opennodecloud.com (Andres Toomsalu) Date: Wed, 4 Sep 2019 11:43:49 +0300 Subject: [ops] Introducing Waldur - open-source platform for openstack operators Message-ID: <2e1770e6-e975-8090-4e35-5ae6c434a29e@opennodecloud.com> I would like to introduce open-source Waldur platform - targeted for openstack cloud operators and implementing service delivery pipeline towards end customers. 
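To make the delete_on_termination fix from the Heat thread above concrete, a minimal HOT fragment is sketched here; the flavor, network, image names and sizes are placeholders, and the flagged property is the only point:

    resources:
      server:
        type: OS::Nova::Server
        properties:
          flavor: m1.small                  # placeholder
          networks: [{network: private}]    # placeholder
          block_device_mapping_v2:
            - image: boot-image             # placeholder
              volume_size: 10
              boot_index: 0
              delete_on_termination: true   # volume is removed together with the server
            - image: data-image             # placeholder
              volume_size: 20
              delete_on_termination: true

Without delete_on_termination the volumes Heat creates from these mappings are left behind on stack delete, which matches the leftover ephemeral volume described above.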
It uses modular approach and has the following core features: * integrated marketplace and self-service for the end users (REST API and webapp) * built-in accounting, integrations with backend billing systems and payment gateways * built-in customer support/servicedesk functionality and integrations with backend servicedesk systems (Atlassian Servicedesk for example) * built-in organisation membership and user management * multi-cloud (ie multiple openstack API endpoints) support inside customer project containers This diagram provides overview about available modules and their relations: https://waldur.com/assets/doc/waldur-diagram.pdf Waldur openstack support is fairly mature - it has been successfully used in production deployments since 2015. Its web-based self-service for end users implements somewhat opiniated openstack tenant and resource management - based on our real-world experience and some best practices delivered from it. But also Horizon access provisioning and out-of-band tenant changes sync are supported. More details about Waldur platform and its openstack support can be found here: https://waldur.com/#openstack Source code is available from OpenNode github repositories: https://github.com/opennode/waldur-mastermind https://github.com/opennode/waldur-homeport Documentation is available here: http://docs.waldur.com All the best, Andres Toomsalu andres at opennodeloud.com From wesley.peng1 at googlemail.com Wed Sep 4 08:51:09 2019 From: wesley.peng1 at googlemail.com (Wesley Peng) Date: Wed, 4 Sep 2019 16:51:09 +0800 Subject: [ops] Introducing Waldur - open-source platform for openstack operators In-Reply-To: <2e1770e6-e975-8090-4e35-5ae6c434a29e@opennodecloud.com> References: <2e1770e6-e975-8090-4e35-5ae6c434a29e@opennodecloud.com> Message-ID: on 2019/9/4 16:43, Andres Toomsalu wrote: > I would like to introduce open-source Waldur platform - targeted for > openstack cloud operators and implementing service delivery pipeline > towards end customers. It uses modular approach and has the following > core features: > > * integrated marketplace and self-service for the end users (REST API > and webapp) > * built-in accounting, integrations with backend billing systems and > payment gateways > * built-in customer support/servicedesk functionality and integrations > with backend servicedesk systems (Atlassian Servicedesk for example) > * built-in organisation membership and user management > * multi-cloud (ie multiple openstack API endpoints) support inside > customer project containers Nice to know it,thanks. btw, does it support registrar's stuff? like domain management, DNS operations, CDN setup etc. regards. From andres at opennodecloud.com Wed Sep 4 08:59:03 2019 From: andres at opennodecloud.com (Andres Toomsalu) Date: Wed, 4 Sep 2019 11:59:03 +0300 Subject: [ops] Introducing Waldur - open-source platform for openstack operators In-Reply-To: References: <2e1770e6-e975-8090-4e35-5ae6c434a29e@opennodecloud.com> Message-ID: <418a774c-1460-ea22-c853-e3869b86d6fb@opennodecloud.com> Not yet -  but as Waldur is modular and open-source there are several ways to achieve this :) Wesley Peng wrote on 04/09/2019 11:51: > > > on 2019/9/4 16:43, Andres Toomsalu wrote: >> I would like to introduce open-source Waldur platform - targeted for >> openstack cloud operators and implementing service delivery pipeline >> towards end customers. 
It uses modular approach and has the following >> core features: >> >> * integrated marketplace and self-service for the end users (REST API >> and webapp) >> * built-in accounting, integrations with backend billing systems and >> payment gateways >> * built-in customer support/servicedesk functionality and >> integrations with backend servicedesk systems (Atlassian Servicedesk >> for example) >> * built-in organisation membership and user management >> * multi-cloud (ie multiple openstack API endpoints) support inside >> customer project containers > > Nice to know it,thanks. > btw, does it support registrar's stuff? like domain management, DNS > operations, CDN setup etc. > > regards. > -- ---------------------------------------------- Andres Toomsalu,andres at opennodecloud.com http://www.opennodecloud.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From wesley.peng1 at googlemail.com Wed Sep 4 09:12:52 2019 From: wesley.peng1 at googlemail.com (Wesley Peng) Date: Wed, 4 Sep 2019 17:12:52 +0800 Subject: [ops] Introducing Waldur - open-source platform for openstack operators In-Reply-To: <2e1770e6-e975-8090-4e35-5ae6c434a29e@opennodecloud.com> References: <2e1770e6-e975-8090-4e35-5ae6c434a29e@opennodecloud.com> Message-ID: <2a90e6ac-6e46-2080-c80f-f8ecb49dbe56@googlemail.com> Hi on 2019/9/4 16:43, Andres Toomsalu wrote: > All the best, > > Andres Toomsalu > andres at opennodeloud.com Your signature's domain is typo. Should it be: opennodecloud.com, a "c" gets lost. regards. From andres at opennodecloud.com Wed Sep 4 09:16:36 2019 From: andres at opennodecloud.com (Andres Toomsalu) Date: Wed, 4 Sep 2019 12:16:36 +0300 Subject: [ops] Introducing Waldur - open-source platform for openstack operators In-Reply-To: <2a90e6ac-6e46-2080-c80f-f8ecb49dbe56@googlemail.com> References: <2e1770e6-e975-8090-4e35-5ae6c434a29e@opennodecloud.com> <2a90e6ac-6e46-2080-c80f-f8ecb49dbe56@googlemail.com> Message-ID: <11fc43e8-2faa-ead9-6f47-1a53e76b491c@opennodecloud.com>  Correct - its andres at opennodecloud.com yes. Thank you for spotting! Wesley Peng wrote on 04/09/2019 12:12: > Hi > > on 2019/9/4 16:43, Andres Toomsalu wrote: >> All the best, >> >> Andres Toomsalu >> andres at opennodeloud.com > > Your signature's domain is typo. > Should it be: opennodecloud.com, a "c" gets lost. > > regards. > From cdent+os at anticdent.org Wed Sep 4 09:32:59 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 4 Sep 2019 10:32:59 +0100 (BST) Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <20190904024941.qaapsjuddklree26@yuggoth.org> References: <20190904024941.qaapsjuddklree26@yuggoth.org> Message-ID: On Wed, 4 Sep 2019, Jeremy Stanley wrote: > Thank you to all candidates who put their name forward for Project > Team Lead (PTL) and Technical Committee (TC) in this election. A > healthy, open process breeds trust in our decision making capability > thank you to all those who make this process possible. Congratulations and thank you to the people taking on these roles. We need to talk about the fact that there was no opportunity to vote in these "elections" (PTL or TC) because there were insufficient candidates. No matter the quality of new leaders (this looks like a good group), something is amiss. 
We danced around these issue for the two years I was on the TC, but we never did anything concrete to significantly change things, carrying on doing things in the same way in a world where those ways no longer seemed to fit. We can't claim any "seem" about it any more: OpenStack governance and leadership structures do not fit and we need to figure out the necessary adjustments. I haven't got any new ideas (which is part of why I left the TC). My position has always been that with a vendor and enterprise led project like OpenStack, where those vendors and enterprises are operating in a huge market, staffing the commonwealth in a healthy fashion is their responsibility. In large part because they are responsible for making OpenStack resistant to "casual" contribution in the first place (e.g., "hardware defined software"). We get people, sometimes, but it is not healthy: i may see different cross-sections of the community than others do, but i feel like there's been a strong tone of burnout since 2012 [1] We drastically need to change the expectations we place on ourselves in terms of velocity. [1] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-09-04.log.html#t2019-09-04T00:26:35 > Ghanshyam Mann (gmann) > Jean-Philippe Evrard (evrardjp) > Jay Bryant (jungleboyj) > Kevin Carter (cloudnull) > Kendall Nelson (diablo_rojo) > Nate Johnston (njohnston) Since there was no need to vote, there was no need to campaign, which means we will be missing out on the Q&A period. I've found those very useful for understanding the issues that are present in the community and for generating ideas on what to about them. I think it is good to have that process anyway so I'll start: What do you think we, as a community, can do about the situation described above? What do you as a TC member hope to do yourself? Thanks -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From pawel.konczalski at everyware.ch Wed Sep 4 09:46:35 2019 From: pawel.konczalski at everyware.ch (Pawel Konczalski) Date: Wed, 4 Sep 2019 11:46:35 +0200 Subject: Octavia LB flavor recommendation for Amphora VMs Message-ID: Hello everyone / Octavia Team, what is your experience / recommendation for a Octavia flavor with is used to deploy Amphora VM for small / mid size setups? (RAM / Cores / HDD) BR Pawel -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5227 bytes Desc: not available URL: From mark at stackhpc.com Wed Sep 4 10:00:27 2019 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 4 Sep 2019 11:00:27 +0100 Subject: [kolla] Cancelling today's meeting Message-ID: Hi, I can't make today's meeting, and we're missing a number of cores so I'll cancel. Please get in touch on IRC if there is anything to update. Cheers, Mark From jean-philippe at evrard.me Wed Sep 4 10:27:37 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 04 Sep 2019 12:27:37 +0200 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: References: <20190904024941.qaapsjuddklree26@yuggoth.org> Message-ID: <5adcf773a6f9a4e5771eebe2e801a3ea77692e74.camel@evrard.me> On Wed, 2019-09-04 at 10:32 +0100, Chris Dent wrote: > > We need to talk about the fact that there was no opportunity to vote > in these "elections" (PTL or TC) because there were insufficient > candidates. > (snipped) I think people agreed on reducing the TC members to 9. 
This will not change things fundamentally, but will open the chance for elections. > We can't claim any "seem" about it any more: OpenStack governance > and leadership structures do not fit and we need to figure out > the necessary adjustments. I will propose a series of adjustments, but these are not crazy ideas. I would like to brainstorm that with you, as I might have some more crazy ideas. > We drastically need to change the expectations we place on ourselves > in terms of velocity. I think there are a few ideas floating around. OpenStack is more stable nowadays too. I want to bring more fun and less pressure in OpenStack. This is something the TC will need to speak with the foundation, as it might impact them (impact on events for example). Good that we have some members on the foundation onboard :) > Since there was no need to vote, there was no need to campaign, > which means we will be missing out on the Q&A period. In fact I was looking forward the Q&A. I am weirdly not considering myself elected without this! AMA :) > What do you think we, as a community, can do about the situation > described above? What do you as a TC member hope to do yourself? This is by far too big to answer in a single email, and I would prefer if we split that into a different thread(s), if you don't mind :) My candidacy letter also wants to address some of those points, but not all of them, so I am glad you're raising them. What I would like to see: changes in the TC, changes in the release cadence, tech debt reduction, make the code (more) fun to deal with, allow us to try new things. Regards, JP From allprog at gmail.com Wed Sep 4 10:32:46 2019 From: allprog at gmail.com (=?UTF-8?B?QW5kcsOhcyBLw7Z2aQ==?=) Date: Wed, 4 Sep 2019 12:32:46 +0200 Subject: Invite Oleg Ovcharuk to join the Mistral Core Team Message-ID: I would like to invite Oleg Ovcharuk to join the Mistral Core Team. Oleg has been a very active and enthusiastic contributor to the project. He has definitely earned his way into our community. Thank you, Andras -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Wed Sep 4 12:18:34 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 4 Sep 2019 14:18:34 +0200 Subject: [keystone][edge] Edge Hacking Days - September 6,9, 13 In-Reply-To: <017B8545-8153-42E0-99B1-CF3775DBD4CA@gmail.com> References: <017B8545-8153-42E0-99B1-CF3775DBD4CA@gmail.com> Message-ID: Hi, I hope you had a great Summer and ready to dive back into edge computing with some new energy! :) We have three potential days for September based on the Doodle poll: September 6, 9, 13 and we are using the same etherpad for tracking and ideas: https://etherpad.openstack.org/p/osf-edge-hacking-days As a reminder, the Edge Hacking Days initiative is a remote gathering to work on edge computing related items, such as the reference architecture work or feature development or bug fixing items in relevant OpenStack services. __Please sign up on the etherpad for the days when you are available with time slots (including your time zone) when you are planning to be around if you’re interested in joining.__ Let me know if you have any questions. Thanks and Best Regards, Ildikó > On 2019. Aug 15., at 15:40, Ildiko Vancsa wrote: > > Hi, > > It is a friendly reminder that we are having the second edge hacking days in August this Friday (August 16). 
> > The dial-in information is the same, you can find the details here: https://etherpad.openstack.org/p/osf-edge-hacking-days > > If you’re interested in joining please __add your name and the time period (with time zone) when you will be available__ on these dates. You can also add topics that you would be interested in working on. > > We will keep on working on two items: > * Keystone to Keystone federation testing in DevStack > * Building the centralized edge reference architecture on Packet HW using TripleO > > Please let me know if you have any questions. > > See you on Friday! :) > > Thanks, > Ildikó From a.settle at outlook.com Wed Sep 4 12:30:08 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Wed, 4 Sep 2019 12:30:08 +0000 Subject: [all] [tc] PDF goal change Message-ID: Hi all, Further work on the PDF generation showed that many projects have project logos as svg files with the name PROJECT.svg, those get converted to PROJECT.pdf and collide with the default name used here. We have now changed the default to "doc-PROJECT.pdf" to have a unique name. The review [1] merged yesterday, so please be aware of this change when you are working on the goal. Thanks, Alex [1] https://review.opendev.org/#/c/679777 -- Alexandra Settle IRC: asettle From gaetan.trellu at incloudus.com Wed Sep 4 13:27:23 2019 From: gaetan.trellu at incloudus.com (=?ISO-8859-1?Q?Ga=EBtan_Trellu?=) Date: Wed, 04 Sep 2019 09:27:23 -0400 Subject: Need help trigger aodh alarm - All the steps I went through by details. In-Reply-To: Message-ID: <56d312af-2b52-49e4-afbc-446162cb08c8@email.android.com> An HTML attachment was scrubbed... URL: From nate.johnston at redhat.com Wed Sep 4 13:53:49 2019 From: nate.johnston at redhat.com (Nate Johnston) Date: Wed, 4 Sep 2019 09:53:49 -0400 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <16cfa35052b.edce4b75181025.6895418677759907250@ghanshyammann.com> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <16cfa35052b.edce4b75181025.6895418677759907250@ghanshyammann.com> Message-ID: <20190904135349.3vlueuttca6quztv@bishop> On Wed, Sep 04, 2019 at 11:57:54AM +0900, Ghanshyam Mann wrote: > Thanks Jeremy and all election official for another flawless job. > > -gmann I agree - thanks to the election officials! I had a short stint as an election official and I can say, it is a far more complicated job than it appears. And their work is essential to the functioning of the community. Nate > ---- On Wed, 04 Sep 2019 11:49:41 +0900 Jeremy Stanley wrote ---- > > Thank you to all candidates who put their name forward for Project > > Team Lead (PTL) and Technical Committee (TC) in this election. A > > healthy, open process breeds trust in our decision making capability > > thank you to all those who make this process possible. 
> > > > Now for the results of the PTL election process, please join me in > > extending congratulations to the following PTLs: > > > > * Adjutant : Adrian Turjak > > * Barbican : Douglas Mendizábal > > * Blazar : Pierre Riteau > > * Cinder : Brian Rosmaita > > * Cloudkitty : Luka Peschke > > * Congress : Eric Kao > > * Documentation : Alexandra Settle > > * Ec2 Api : Andrey Pavlov > > * Freezer : geng chc > > * Glance : Abhishek Kekane > > * Heat : Rico Lin > > * Horizon : Akihiro Motoki > > * Infrastructure : Clark Boylan > > * Ironic : Julia Kreger > > * Karbor : Pengju Jiao > > * Keystone : Colleen Murphy > > * Kolla : Mark Goddard > > * Kuryr : Michał Dulko > > * Loci : Pete Birley > > * Magnum : Feilong Wang > > * Manila : Goutham Pacha Ravi > > * Masakari : Sampath Priyankara > > * Mistral : Renat Akhmerov > > * Monasca : Witek Bedyk > > * Murano : Rong Zhu > > * Neutron : Sławek Kapłoński > > * Nova : Eric Fried > > * Octavia : Adam Harwell > > * OpenStack Charms : Frode Nordahl > > * Openstack Chef : Jens Harbott > > * OpenStack Helm : Pete Birley > > * OpenStackAnsible : Mohammed Naser > > * OpenStackClient : Dean Troyer > > * Oslo : Ben Nemec > > * Packaging Rpm : Javier Peña > > * Puppet OpenStack : Shengping Zhong > > * Qinling : Lingxian Kong > > * Quality Assurance : Ghanshyam Mann > > * Rally : Andrey Kurilin > > * Release Management : Sean McGinnis > > * Requirements : Matthew Thode > > * Sahara : Jeremy Freudberg > > * Searchlight : Trinh Nguyen > > * Senlin : XueFeng Liu > > * Solum : Rong Zhu > > * Storlets : Kota Tsuyuzaki > > * Swift : Tim Burke > > * Tacker : dharmendra kushwaha > > * Telemetry : Rong Zhu > > * Tricircle : chi zhang > > * Tripleo : Wes Hayutin > > * Trove : Lingxian Kong > > * Vitrage : Eyal Bar-Ilan > > * Watcher : canwei li > > * Zaqar : wang hao > > * Zun : Feng Shengqin > > > > Also please join me in congratulating the 6 newly elected members of > > the TC: > > > > Ghanshyam Mann (gmann) > > Jean-Philippe Evrard (evrardjp) > > Jay Bryant (jungleboyj) > > Kevin Carter (cloudnull) > > Kendall Nelson (diablo_rojo) > > Nate Johnston (njohnston) > > > > Full results: because there were only as many TC candidates as open > > seats, no poll was held and all candidates were > > acclaimed > > > > Elections: > > > > Election process details and results are also available here: > > https://governance.openstack.org/election/ > > > > -- > > Jeremy Stanley, on behalf of the OpenStack Technical Election Officials > > > > From amotoki at gmail.com Wed Sep 4 14:06:11 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Wed, 4 Sep 2019 23:06:11 +0900 Subject: [all][tc] PDF Community Goal Update In-Reply-To: <878ebb98-3204-7ce3-8ca6-b516ae7921a2@gmail.com> References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> <9430fe6726ca53328abb588b21c1823055cdaca3.camel@redhat.com> <160D24A7-DE66-45DA-BBB8-AFD916D00004@doughellmann.com> <7a4f103390cb2b9e4ec107b94f2e1e0dd2c500f0.camel@redhat.com> <6C2701AC-6305-45C6-A62D-7FF0B43DD0F2@doughellmann.com> <878ebb98-3204-7ce3-8ca6-b516ae7921a2@gmail.com> Message-ID: On Wed, Sep 4, 2019 at 12:43 AM Ian Y. 
Choi wrote: > > Akihiro Motoki wrote on 9/3/2019 11:12 PM: > > On Tue, Sep 3, 2019 at 10:18 PM Doug Hellmann wrote: > >> > >> > >>> On Sep 3, 2019, at 9:04 AM, Stephen Finucane wrote: > >>> > >>> On Tue, 2019-09-03 at 08:42 -0400, Doug Hellmann wrote: > >>>>> On Sep 3, 2019, at 5:54 AM, Stephen Finucane wrote: > >>>>> > >>>>> On Mon, 2019-09-02 at 15:31 -0400, Doug Hellmann wrote: > >>>>>>> On Sep 2, 2019, at 3:07 AM, Akihiro Motoki wrote: > >>>>> [snip] > >>>>> > >>>>>>> When the goal is defined the docs team thought the doc gate job can > >>>>>>> handle the PDF build > >>>>>>> without extra tox env and zuul job configuration. However, during > >>>>>>> implementing the zuul job support > >>>>>>> it turns out at least a new tox env or an extra zuul job configuration > >>>>>>> is required in each project > >>>>>>> to make the docs job fail when PDF build failure is detected. As a > >>>>>>> result, we changes the approach > >>>>>>> and the new tox target is now required in each project repo. > >>>>>> The whole point of structuring the goal the way we did was that we do > >>>>>> not want to update every single repo this cycle so we could roll out > >>>>>> PDF building transparently. We said we would allow the job to pass > >>>>>> even if the PDF build failed, because this was phase 1 of making all > >>>>>> of this work. > >>>>>> > >>>>>> The plan was to 1. extend the current job to make PDF building > >>>>>> optional; 2. examine the results to see how many repos need > >>>>>> significant work; 3. add a feature flag via a setting somewhere in > >>>>>> the repo to control whether the job fails if PDFs cannot be built. > >>>>>> That avoids a second doc job running in parallel, and still allows us > >>>>>> to roll out the PDF build requirement over time when we have enough > >>>>>> information to do so. > >>>>> Unfortunately when we tried to implement this we found that virtually > >>>>> every project we looked at required _some_ amount of tweaks just to > >>>>> build, let alone look sensible. This was certainly true of the big > >>>>> service projects (nova, neutron, cinder, ...) which all ran afoul of a > >>>>> bug [1] in the Sphinx LaTeX builder. Given the issues with previous > >>>>> approach, such as the inability to easily reproduce locally and the > >>>>> general "hackiness" of the thing, along with the fact that we now had > >>>>> to submit changes against projects anyway, a collective decision was > >>>>> made [2] to drop that plan and persue the 'pdfdocs' tox target > >>>>> approach. > >>>> We wanted to avoid making a bunch of the same changes to projects just to > >>>> add the PDF building instructions. If the *content* of a project’s documentation > >>>> needs work, that’s different. We should make those changes. > >>> I thought the only reason to hack the docs venv in a Zuul job was to > >>> avoid having to mass patch projects to add tox configuration? As such, > >>> if we're already having to mass patch projects because they don't build > >>> otherwise, why wouldn't we add the tox configuration? Was there another > >>> reason to pursue the zuul-only approach that I've forgotten about/never > >>> knew? > >> I expected to need to fix formatting (even up to the point of commenting things > >> out, like we found with the giant config sample files). Those are content changes, > >> and would be mostly unique across projects. > >> > >> I wanted to avoid a large number of roughly identical changes to add tox environments, > >> zuul jobs, etc. 
because having a lot of patches like that across all the repos makes > >> extra work for small gain, especially when we can get the same results with a small > >> number of changes in one repository. > >> > >> The approach we discussed was to update the docs job to run some extra steps using > >> scripts that lived in the openstackdocstheme repository. That shouldn’t require > >> adding any extra software or otherwise modifying the tox environments. Did that approach > >> not work out? > > We explored ways only to update the docs job to run extra commands to > > build PDF docs, > > but there is one problem that the job cannot know whether PDF build is > > ready or not. > > If we ignore an error from PDF build, it works for repositories which > > are not ready for PDF build, > > but we cannot prevent PDF build failure again for repositories ready > > for PDF build > > As my project team hat of neutron team, we don't want to have PDF > > build failure again > > once the PDF build starts to work. > > To avoid this, stephenfin, asettle, AJaeger and I agree that some flag > > to determine if the PDF build > > is ready or not is needed. As of now, "pdf-docs" tox env is used as the flag. > > Another way we considered is a variable in openstack-tox-docs job, but > > we cannot pass a variable > > to zuul project template, so we didn't use this way. > > If there is a more efficient way, I am happy to use it. > > > > Thanks, > > Akihiro > > > Hello, > > > Sorry for joining in this thread late, but to I first would like to try > to figure out the current status regarding the current discussion on the > thread: > > - openstackdocstheme has docstheme-build-pdf script [1] > > - build-pdf-docs Zuul job in openstack-zuul-jobs pre-installs all > required packages [2] > > - Current guidance for project repos is that 1) is to just add to > latex_documents settings [3] and add pdf-docs environment for trigger [4] > > - Project repos additionally need to change more for successful PDF > builds like adding more options on conf.py [5] and changing more on rst > files to explictly options like [6] . Thanks Ian. Your understanding on the current situations is correct. Good summary, thanks. > > > Now my questions from comments are: > > a) How about checking an option in somewhere else like .zuul.yaml or > using grep in docs env part, not doing grep to check the existance of > "pdf-docs" tox env [3]? I am not sure how your suggestion works more efficiently than the current pdf-docs tox env approach. We explored an option to introduce a flag variable to the openstack-tox-docs job but we use a zuul project-template which wraps openstack-tox-docs job and another job. The current zuul project-template does not accept a variable and projects who want to specify a flag explicitly needs to copy the content of the project-template. Considering this we gave up this route. Regarding "using grep in docs env part", I haven't understood what you think, but it looks similar to the current approach. > > b) Can we call docstheme-build-pdf in openstackdocstheme [1] instead of > direct Sphinx & make commands in "pdf-docs" environment [4]? It can, but I am not sure whether we need to update the current proposed patches. The only advantage of using docstheme-build-pdf is that we don't need to change project repositories when we update the command lines in future, but it sounds a matter of taste. 
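For reference, the "pdf-docs" trigger being discussed here is just a small tox environment in each project's tox.ini. A minimal sketch is below; the exact deps, whitelisted externals and output paths vary per project, so treat it as illustrative rather than the literal text of the proposed patches:

[testenv:pdf-docs]
# Reuse the regular docs requirements; the LaTeX toolchain itself is
# expected to come from the PDF docs job or the local setup described
# in the goal etherpad.
envdir = {toxworkdir}/docs
deps = {[testenv:docs]deps}
whitelist_externals =
  make
commands =
  sphinx-build -W -b latex doc/source doc/build/pdf
  make -C doc/build/pdf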
> > c) Ultimately, would executing docstheme-build-pdf command in > build-pdf-docs Zuul job with another kind of trigger like bullet a) be > feasible and/or be implemented by the end of this cycle? We can, but again it is a matter of taste to me and most important thing is how we handle a flag to enable PDF build. Thanks, Akihiro > > > > With many thanks, > > > /Ian > > > [1] https://review.opendev.org/#/c/665163/ > > [2] > https://review.opendev.org/#/c/664555/25/roles/prepare-build-pdf-docs/tasks/main.yaml at 3 > > [3] https://review.opendev.org/#/c/678393/4/doc/source/conf.py > > [4] https://review.opendev.org/#/c/678393/4/tox.ini > > [5] https://review.opendev.org/#/c/678747/1/doc/source/conf.py at 270 > > [6] https://review.opendev.org/#/c/678747/1/doc/source/index.rst at 13 > From skaplons at redhat.com Wed Sep 4 14:37:05 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 4 Sep 2019 16:37:05 +0200 Subject: [neutron] CI issues Message-ID: <2BBD3139-A073-42D1-8A2A-A4847F9CBA4D@redhat.com> Hi neutrinos, We are currently having some issues in our gate. Please see [1], [2] and [3] for details. If Your Neutron patch failed on neutron-functional, neutron-functional-python27 or networking-ovn-tempest-dsvm-ovs-release jobs, please don’t recheck before all those issues will be solved. Recheck will not help and You will only use infra resources. [1] https://bugs.launchpad.net/neutron/+bug/1842659 [2] https://bugs.launchpad.net/neutron/+bug/1842482 [3] https://bugs.launchpad.net/bugs/1842657 — Slawek Kaplonski Senior software engineer Red Hat From dtantsur at redhat.com Wed Sep 4 15:24:14 2019 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 4 Sep 2019 17:24:14 +0200 Subject: [ironic] opensuse-15 jobs are temporary non-voting on bifrost Message-ID: <979bbec8-1f94-458a-aab0-f4d6327078ab@redhat.com> Hi all, JFYI we had to disable opensuse-15 jobs because they kept failing with repository issues. Help with debugging appreciated. Dmitry From mnaser at vexxhost.com Wed Sep 4 15:54:37 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 4 Sep 2019 11:54:37 -0400 Subject: [tc] monthly meeting agenda Message-ID: Hi everyone, Here’s the agenda for our monthly TC meeting. It will happen tomorrow (Thursday the 5th) at 1400 UTC in #openstack-tc and I will be your chair. If you can’t attend, please put your name in the “Apologies for Absence” section. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting * Follow up on past action items ** mnaser to contact Alan to mention that TC will have some presence at Shanghai leadership meeting ** ricolin update SIG guidelines to simplify the process for new SIGs ** ttx contact interested parties in a new 'large scale' SIG (help with mnaser, jroll reaching out to Verizon Media) * Active Initiatives ** mugsie to sync with dhellmann or release-team to resolve proposal bot for project-template patches ** Shanghai TC sessions: https://etherpad.openstack.org/p/PVG-TC-brainstorming ** Forum selection commitee: http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008188.html ** Make goal selection a two-step process (needs reviews at https://review.opendev.org/#/c/667932/ ) Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. 
http://vexxhost.com From mnaser at vexxhost.com Wed Sep 4 16:20:12 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 4 Sep 2019 12:20:12 -0400 Subject: [ansible-sig] weekly meetings Message-ID: Hi everyone, For those interested in getting involved, the ansible-sig meetings will be held weekly on Fridays at 2:00 pm UTC starting next week (13 September 2019). Looking forward to discussing details and ideas with all of you! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From chris at openstack.org Wed Sep 4 16:23:58 2019 From: chris at openstack.org (Chris Hoge) Date: Wed, 4 Sep 2019 09:23:58 -0700 Subject: Thank you Stackers for five amazing years! Message-ID: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> Hi everyone, After more than nine years working in cloud computing and on OpenStack, I've decided that it is time for a change and will be moving on from the OpenStack Foundation. For the last five years I've had the honor of helping to support this vibrant community, and I'm going to deeply miss being a part of it. OpenStack has been a central part of my life for so long that it's hard to imagine a work life without it. I'm proud to have helped in some small way to create a lasting project and community that has, and will continue to, transform how infrastructure is managed. September 12 will officially be my last day with the OpenStack Foundation. As I make the move away from my responsibilities, I'll be working with community members to help ensure continuity of my efforts. Thank you to everyone for building such an incredible community filled with talented, smart, funny, and kind people. You've built something special here, and we're all better for it. I'll still be involved with open source. If you ever want to get in touch, be it with questions about work I've been involved with or to talk about some exciting new tech or to just catch up over a tasty meal, I'm just a message away in all the usual places. Sincerely, Chris chris at hogepodge.com Twitter/IRC/everywhere else: @hogepodge From mnaser at vexxhost.com Wed Sep 4 16:30:36 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 4 Sep 2019 12:30:36 -0400 Subject: Thank you Stackers for five amazing years! In-Reply-To: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> References: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> Message-ID: On Wed, Sep 4, 2019 at 12:26 PM Chris Hoge wrote: > > Hi everyone, > > After more than nine years working in cloud computing and on OpenStack, I've > decided that it is time for a change and will be moving on from the OpenStack > Foundation. For the last five years I've had the honor of helping to support > this vibrant community, and I'm going to deeply miss being a part of it. > OpenStack has been a central part of my life for so long that it's hard to > imagine a work life without it. I'm proud to have helped in some small way to > create a lasting project and community that has, and will continue to, > transform how infrastructure is managed. > > September 12 will officially be my last day with the OpenStack Foundation. As I > make the move away from my responsibilities, I'll be working with community > members to help ensure continuity of my efforts. > > Thank you to everyone for building such an incredible community filled with > talented, smart, funny, and kind people. 
You've built something special here, > and we're all better for it. I'll still be involved with open source. If you > ever want to get in touch, be it with questions about work I've been involved > with or to talk about some exciting new tech or to just catch up over a tasty > meal, I'm just a message away in all the usual places. Thanks for being such a great asset in our community, your work across many different communities and involvement (specifically within the interaction across other projects, likes Kubernetes) has definitely left a long left impact! > Sincerely, > Chris > > chris at hogepodge.com > Twitter/IRC/everywhere else: @hogepodge -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From juliaashleykreger at gmail.com Wed Sep 4 16:38:55 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 4 Sep 2019 12:38:55 -0400 Subject: Thank you Stackers for five amazing years! In-Reply-To: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> References: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> Message-ID: Chris, Thank you for everything you've done! We wouldn't be here without your hard work! -Julia On Wed, Sep 4, 2019 at 12:28 PM Chris Hoge wrote: > > Hi everyone, > > After more than nine years working in cloud computing and on OpenStack, I've > decided that it is time for a change and will be moving on from the OpenStack > Foundation. For the last five years I've had the honor of helping to support > this vibrant community, and I'm going to deeply miss being a part of it. > OpenStack has been a central part of my life for so long that it's hard to > imagine a work life without it. I'm proud to have helped in some small way to > create a lasting project and community that has, and will continue to, > transform how infrastructure is managed. > > September 12 will officially be my last day with the OpenStack Foundation. As I > make the move away from my responsibilities, I'll be working with community > members to help ensure continuity of my efforts. > > Thank you to everyone for building such an incredible community filled with > talented, smart, funny, and kind people. You've built something special here, > and we're all better for it. I'll still be involved with open source. If you > ever want to get in touch, be it with questions about work I've been involved > with or to talk about some exciting new tech or to just catch up over a tasty > meal, I'm just a message away in all the usual places. > > Sincerely, > Chris > > chris at hogepodge.com > Twitter/IRC/everywhere else: @hogepodge From jungleboyj at gmail.com Wed Sep 4 17:15:01 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Wed, 4 Sep 2019 12:15:01 -0500 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: References: <20190904024941.qaapsjuddklree26@yuggoth.org> Message-ID: <3148cdbd-f232-4247-a40c-a0f8c2614df4@gmail.com> Chris, Thank you for your questions.  I agree that not having the election deprived the community of a chance to get to know the candidates better so I am happy to help out here.  :-) Hope my thoughts in-line below make sense! Jay On 9/4/2019 5:32 AM, Chris Dent wrote: > On Wed, 4 Sep 2019, Jeremy Stanley wrote: > >> Thank you to all candidates who put their name forward for Project >> Team Lead (PTL) and Technical Committee (TC) in this election. 
A >> healthy, open process breeds trust in our decision making capability >> thank you to all those who make this process possible. > > Congratulations and thank you to the people taking on these roles. > > We need to talk about the fact that there was no opportunity to vote > in these "elections" (PTL or TC) because there were insufficient > candidates. No matter the quality of new leaders (this looks like a > good group), something is amiss. We danced around these issue for > the two years I was on the TC, but we never did anything concrete to > significantly change things, carrying on doing things in the same > way in a world where those ways no longer seemed to fit. > > We can't claim any "seem" about it any more: OpenStack governance > and leadership structures do not fit and we need to figure out > the necessary adjustments. > I was surprised that we didn't have any PTL elections.  I don't know that this is all bad.  At least in the case of the Cinder team it seems to be a process that we have just kind-of internalized.  I got my chance to be PTL and was ready for a break.  I had reached out to Brian Rosmaita some time ago and had been grooming him to take over.  I had discussions with other people knew Brian was interested, so we went forward that way. I think this is a natural progression for where OpenStack is at right now.  There isn't a lot of contention over how the project needs to be  run right now.  In the future that may change and I think having our election process is important for if and when that happens. > I haven't got any new ideas (which is part of why I left the TC). > My position has always been that with a vendor and enterprise led > project like OpenStack, where those vendors and enterprises are > operating in a huge market, staffing the commonwealth in a healthy > fashion is their responsibility. In large part because they are > responsible for making OpenStack resistant to "casual" contribution > in the first place (e.g., "hardware defined software"). > > We get people, sometimes, but it is not healthy: > >     i may see different cross-sections of the community than others >     do, but i feel like there's been a strong tone of burnout since >     2012 [1] > This is a very real concern for me.  We do have a very few people who have taken over a lot of responsibility for OpenStack and are getting burned out.  We also need to have more companies start investing in OpenStack again.  We can't, however, force them to participate. I know from my last year or so at Lenovo that there are customers with real interest in OpenStack.  OpenStack is running in the real world.  I don't know if it is just working for people or if the customers are modifying it themselves and not contributing back. It would be interesting to get numbers on this.  Not sure how we can do that.  I am afraid, in the past, that the community got a reputation of being 'too hard to contribute to'.  If that perception is still hurting us now it is something that we need to address. I think that some of the lack of participation is also due to cultural differences in the geos where OpenStack has been expanding.  That is a very hard problem to address. > We drastically need to change the expectations we place on ourselves > in terms of velocity. 
> > [1] > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-09-04.log.html#t2019-09-04T00:26:35 > >>  Ghanshyam Mann (gmann) >>  Jean-Philippe Evrard (evrardjp) >>  Jay Bryant (jungleboyj) >>  Kevin Carter (cloudnull) >>  Kendall Nelson (diablo_rojo) >>  Nate Johnston (njohnston) > > Since there was no need to vote, there was no need to campaign, > which means we will be missing out on the Q&A period. I've found > those very useful for understanding the issues that are present in > the community and for generating ideas on what to about them. I > think it is good to have that process anyway so I'll start: > > What do you think we, as a community, can do about the situation > described above? What do you as a TC member hope to do yourself? > I addressed this a bit in my candidacy note.  I think that we need to continue to improve our education and on-boarding processes.  Though I don't think it is hard to contribute successfully to OpenStack, there is a lot of tribal knowledge required to be successful in OpenStack.  Documenting those things will help. I would like to work with the foundation to reach out to companies and find out why they are less likely to participate than they used to be.  People are using OpenStack ... why aren't they contributing.  Perhaps it is a question that we could add to the user survey.  I know when I had the foundation reach out to companies that were about to lose their drivers from Cinder, we got responses.  So, I think that is a path we could consider. > Thanks > From Albert.Braden at synopsys.com Wed Sep 4 17:18:43 2019 From: Albert.Braden at synopsys.com (Albert Braden) Date: Wed, 4 Sep 2019 17:18:43 +0000 Subject: Nova causes MySQL timeouts In-Reply-To: References: Message-ID: We’re not setting max_pool_size nor max_overflow option presently. I googled around and found this document: https://docs.openstack.org/keystone/stein/configuration/config-options.html Document says: [api_database] connection_recycle_time = 3600 (Integer) Timeout before idle SQL connections are reaped. max_overflow = None (Integer) If set, use this value for max_overflow with SQLAlchemy. max_pool_size = None (Integer) Maximum number of SQL connections to keep open in a pool. [database] connection_recycle_time = 3600 (Integer) Timeout before idle SQL connections are reaped. min_pool_size = 1 (Integer) Minimum number of SQL connections to keep open in a pool. max_overflow = 50 (Integer) If set, use this value for max_overflow with SQLAlchemy. max_pool_size = None (Integer) Maximum number of SQL connections to keep open in a pool. If min_pool_size is >0, would that cause at least 1 connection to remain open until it times out? What are the recommended values for these, to allow unused connections to close before they time out? Is “min_pool_size = 0” an acceptable setting? My settings are default: [api_database]: #connection_recycle_time = 3600 #max_overflow = #max_pool_size = [database]: #connection_recycle_time = 3600 #min_pool_size = 1 #max_overflow = 50 #max_pool_size = 5 It’s not obvious what max_overflow does. Where can I find a document that explains more about these settings? From: Gaëtan Trellu Sent: Tuesday, September 3, 2019 1:37 PM To: Albert Braden Cc: openstack-discuss at lists.openstack.org Subject: Re: Nova causes MySQL timeouts Hi Albert, It is a configuration issue, have a look to max_pool_size and max_overflow options under [database] section. Keep in mind than more workers you will have more connections will be opened on the database. 
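For illustration, the options in question live under the [database] (and, for the API database, [api_database]) section of nova.conf. A minimal sketch with placeholder values follows; the numbers only show the shape of the configuration and are not tuning advice. A common approach to the "Got timeout reading communication packets" messages is to keep connection_recycle_time below the MySQL wait_timeout, so pooled connections are refreshed before the server drops them:

[database]
# Placeholder connection string.
connection = mysql+pymysql://nova:SECRET@db-host/nova
# Recycle idle connections before MySQL's wait_timeout expires (example value).
connection_recycle_time = 600
# Connections kept open in the pool by each worker process (example value).
max_pool_size = 5
# Extra connections allowed beyond the pool during bursts (example value).
max_overflow = 10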
Gaetan (goldyfruit) On Sep 3, 2019 4:31 PM, Albert Braden > wrote: It looks like nova is keeping mysql connections open until they time out. How are others responding to this issue? Do you just ignore the mysql errors, or is it possible to change configuration so that nova closes and reopens connections before they time out? Or is there a way to stop mysql from logging these aborted connections without hiding real issues? Aborted connection 10726 to db: 'nova' user: 'nova' host: 'asdf' (Got timeout reading communication packets) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Wed Sep 4 17:21:03 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Wed, 4 Sep 2019 12:21:03 -0500 Subject: Thank you Stackers for five amazing years! In-Reply-To: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> References: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> Message-ID: <592533e2-c4e0-ff37-14cf-e00ebcfec832@gmail.com> Chris, Thank you for all you have done!  Sorry to see you go. Wishing you the best of luck with your future endeavors! Jay On 9/4/2019 12:23 PM, Chris Hoge wrote: > Hi everyone, > > After more than nine years working in cloud computing and on OpenStack, I've > decided that it is time for a change and will be moving on from the OpenStack > Foundation. For the last five years I've had the honor of helping to support > this vibrant community, and I'm going to deeply miss being a part of it. > OpenStack has been a central part of my life for so long that it's hard to > imagine a work life without it. I'm proud to have helped in some small way to > create a lasting project and community that has, and will continue to, > transform how infrastructure is managed. > > September 12 will officially be my last day with the OpenStack Foundation. As I > make the move away from my responsibilities, I'll be working with community > members to help ensure continuity of my efforts. > > Thank you to everyone for building such an incredible community filled with > talented, smart, funny, and kind people. You've built something special here, > and we're all better for it. I'll still be involved with open source. If you > ever want to get in touch, be it with questions about work I've been involved > with or to talk about some exciting new tech or to just catch up over a tasty > meal, I'm just a message away in all the usual places. > > Sincerely, > Chris > > chris at hogepodge.com > Twitter/IRC/everywhere else: @hogepodge From amy at demarco.com Wed Sep 4 17:28:17 2019 From: amy at demarco.com (Amy Marrich) Date: Wed, 4 Sep 2019 12:28:17 -0500 Subject: Thank you Stackers for five amazing years! In-Reply-To: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> References: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> Message-ID: Thanks for everything you've done over the years you will be missed! Amy (spotz) On Wed, Sep 4, 2019 at 11:24 AM Chris Hoge wrote: > Hi everyone, > > After more than nine years working in cloud computing and on OpenStack, > I've > decided that it is time for a change and will be moving on from the > OpenStack > Foundation. For the last five years I've had the honor of helping to > support > this vibrant community, and I'm going to deeply miss being a part of it. > OpenStack has been a central part of my life for so long that it's hard to > imagine a work life without it. 
I'm proud to have helped in some small way > to > create a lasting project and community that has, and will continue to, > transform how infrastructure is managed. > > September 12 will officially be my last day with the OpenStack Foundation. > As I > make the move away from my responsibilities, I'll be working with community > members to help ensure continuity of my efforts. > > Thank you to everyone for building such an incredible community filled with > talented, smart, funny, and kind people. You've built something special > here, > and we're all better for it. I'll still be involved with open source. If > you > ever want to get in touch, be it with questions about work I've been > involved > with or to talk about some exciting new tech or to just catch up over a > tasty > meal, I'm just a message away in all the usual places. > > Sincerely, > Chris > > chris at hogepodge.com > Twitter/IRC/everywhere else: @hogepodge > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Wed Sep 4 19:35:28 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 4 Sep 2019 12:35:28 -0700 Subject: [tc] Campaign Question: Treating the Problem, not just the symptoms- Burnout, No Polling, etc Message-ID: Hello :) Wanted to split the question Chris Dent asked here[1] into its own thread so people down the road and those tracking it now can find more easily. To kind of rephrase for everyone (Chris, correct me if I am wrong or not getting all of it): What do you think we, as a community, can do about the lack of candidates for roles like TC or PTL? How can we adjust, as a community, to make our governance structures fit better? In what wasy can we address and prevent burnout? I think JP[2] and Jay[3] already started to enumerate good ideas on the other thread, so to summarize/ expand/ add to their lists: - Reducing the number of TC members to 9, and maybe someday down to 7. When we were having polls for every election (maybe not every project) it was at a time where the electorate (and theoretically the number of possible candidates) was also huge. Since we have move past the hype curve and stabilized as a project, the number of polls we've (I say we because I used to be and still plan to help with elections) had to make have decreased. It seems to be a matter of proportions. - Continuing to improve education and onboarding process. Agreed 100%, but this should be an ongoing focus for everyone too- every contributor TC, PTL, or otherwise. The best way to get more people involved faster is a lower barrier to entry, but we all know that. Yes some things like gerrit and IRC are hard for people to get past and likely won't be changing for our community any time soon, but there are things like that with every community (I don't know if you have ever tried to push patches to k8s but their tagging of PRs is something they are working on making less complicated and better documented). Breaking down the onboarding process we have at the moment into smaller modules and clearly documenting the progression through those modules for new comers to easily find and work through is important. Also, though, having that be the only place that we, as a community, point to (meaning no duplicate information in multiple places like we have today) when new contributors have issues. - Better documentation of tribal knowledge. 
I proposed as a community goal for the U release[4], to formalize project specific onboarding information (some teams have already done this) and project specific guides for PTLs (I know we already have the broad strokes for all PTLs documented fairly well, but there's always project specific stuff) so that when there is a turn over mid release, its easier for someone new to step up. - Utilize the user survey to gather info about how/why contribution is happening or why they aren't contributing if that's the case. There are already several questions there from the TC about this topic in the survey, but perhaps they can be re-framed if we aren't getting the info we want from them. As a reminder, here they are: -- To which projects does your organization contribute maintenance resources, such as patches for bug fixes and code reviews on master or stable branches? -- What prevents you or your organization from contributing more maintenance resources, or makes contributing difficult? -- How do members of your organization contribute to OpenStack? I think the real issue is getting larger vendors of OpenStack to get their users to take the user survey. We have a pretty solid reach as it is, but there are a lot of people using OpenStack that don't take the survey that we don't know about even because they are confidential (their results can still be confidential if they take the survey). - Longer release cycle. I know this has come up a dozen or more times (and I'm a little sorry for bringing it up again), but I think OpenStack has stabilized enough that 6 months is a little short and now may finally be the time to lengthen things a bit. 9 months might be a better fit. With longer release cycles comes more time to get work done as well which I've heard has been a complaint of more part time contributors when this discussion has come up in the past. - Co-PTL type position? I've noticed and talked to several PTLs on a variety of projects that need a little extra help with the role. They either don't feel like they have all the experience they need to be PTL yet and so they want the previous PTL to help out still or maybe they want to do it, but there are enough variables in their day to day work (or lack of overlap tz wise with most of the other contributors to that project), that having a backup person to help out and backfill when they need help. - Talking to each other. I really honestly think just talking to one another could help too. When you find yourself in a conversation with someone about how unmotivated they are because they have a ton of work to do. You might offer to take something off their plate. Or help them see maybe they need to not take on anything new till some other work gets wrapped up. We are a community that succeeds together, so if you see someone burning themselves out do what you can to help lighten their load (helping directly is great, but there are plenty of other people in our community that you could call on to help too). Hopefully goes without saying, but don't burn yourself out trying to help someone else either. Some of these things are more actionable, others are still high level and need to have concrete actions tied to them, but I think there are plenty of things we can do to make progress here. 
-Kendall (diablo_rojo) [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009084.html [2] http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009087.html [3] http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009101.html [4] https://etherpad.openstack.org/p/PVG-u-series-goals -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Wed Sep 4 19:36:45 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 4 Sep 2019 12:36:45 -0700 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <3148cdbd-f232-4247-a40c-a0f8c2614df4@gmail.com> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <3148cdbd-f232-4247-a40c-a0f8c2614df4@gmail.com> Message-ID: Started a new thread to organize all this info better: http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009105.html -Kendall (diablo_rojo) On Wed, Sep 4, 2019 at 10:16 AM Jay Bryant wrote: > Chris, > > Thank you for your questions. I agree that not having the election > deprived the community of a chance to get to know the candidates better > so I am happy to help out here. :-) > > Hope my thoughts in-line below make sense! > > Jay > > On 9/4/2019 5:32 AM, Chris Dent wrote: > > On Wed, 4 Sep 2019, Jeremy Stanley wrote: > > > >> Thank you to all candidates who put their name forward for Project > >> Team Lead (PTL) and Technical Committee (TC) in this election. A > >> healthy, open process breeds trust in our decision making capability > >> thank you to all those who make this process possible. > > > > Congratulations and thank you to the people taking on these roles. > > > > We need to talk about the fact that there was no opportunity to vote > > in these "elections" (PTL or TC) because there were insufficient > > candidates. No matter the quality of new leaders (this looks like a > > good group), something is amiss. We danced around these issue for > > the two years I was on the TC, but we never did anything concrete to > > significantly change things, carrying on doing things in the same > > way in a world where those ways no longer seemed to fit. > > > > We can't claim any "seem" about it any more: OpenStack governance > > and leadership structures do not fit and we need to figure out > > the necessary adjustments. > > > I was surprised that we didn't have any PTL elections. I don't know > that this is all bad. At least in the case of the Cinder team it seems > to be a process that we have just kind-of internalized. I got my chance > to be PTL and was ready for a break. I had reached out to Brian > Rosmaita some time ago and had been grooming him to take over. I had > discussions with other people knew Brian was interested, so we went > forward that way. > > I think this is a natural progression for where OpenStack is at right > now. There isn't a lot of contention over how the project needs to be > run right now. In the future that may change and I think having our > election process is important for if and when that happens. > > > I haven't got any new ideas (which is part of why I left the TC). > > My position has always been that with a vendor and enterprise led > > project like OpenStack, where those vendors and enterprises are > > operating in a huge market, staffing the commonwealth in a healthy > > fashion is their responsibility. 
In large part because they are > > responsible for making OpenStack resistant to "casual" contribution > > in the first place (e.g., "hardware defined software"). > > > > We get people, sometimes, but it is not healthy: > > > > i may see different cross-sections of the community than others > > do, but i feel like there's been a strong tone of burnout since > > 2012 [1] > > > This is a very real concern for me. We do have a very few people who > have taken over a lot of responsibility for OpenStack and are getting > burned out. We also need to have more companies start investing in > OpenStack again. We can't, however, force them to participate. > > I know from my last year or so at Lenovo that there are customers with > real interest in OpenStack. OpenStack is running in the real world. I > don't know if it is just working for people or if the customers are > modifying it themselves and not contributing back. It would be > interesting to get numbers on this. Not sure how we can do that. I am > afraid, in the past, that the community got a reputation of being 'too > hard to contribute to'. If that perception is still hurting us now it > is something that we need to address. > > I think that some of the lack of participation is also due to cultural > differences in the geos where OpenStack has been expanding. That is a > very hard problem to address. > > > We drastically need to change the expectations we place on ourselves > > in terms of velocity. > > > > [1] > > > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-09-04.log.html#t2019-09-04T00:26:35 > > > >> Ghanshyam Mann (gmann) > >> Jean-Philippe Evrard (evrardjp) > >> Jay Bryant (jungleboyj) > >> Kevin Carter (cloudnull) > >> Kendall Nelson (diablo_rojo) > >> Nate Johnston (njohnston) > > > > Since there was no need to vote, there was no need to campaign, > > which means we will be missing out on the Q&A period. I've found > > those very useful for understanding the issues that are present in > > the community and for generating ideas on what to about them. I > > think it is good to have that process anyway so I'll start: > > > > What do you think we, as a community, can do about the situation > > described above? What do you as a TC member hope to do yourself? > > > I addressed this a bit in my candidacy note. I think that we need to > continue to improve our education and on-boarding processes. Though I > don't think it is hard to contribute successfully to OpenStack, there is > a lot of tribal knowledge required to be successful in OpenStack. > Documenting those things will help. > > I would like to work with the foundation to reach out to companies and > find out why they are less likely to participate than they used to be. > People are using OpenStack ... why aren't they contributing. Perhaps it > is a question that we could add to the user survey. I know when I had > the foundation reach out to companies that were about to lose their > drivers from Cinder, we got responses. So, I think that is a path we > could consider. > > > Thanks > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Wed Sep 4 20:53:20 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 4 Sep 2019 16:53:20 -0400 Subject: [tc] Campaign Question: Treating the Problem, not just the symptoms- Burnout, No Polling, etc In-Reply-To: References: Message-ID: <99048F8B-EE87-4A3A-A689-8F05F8EBDBBE@doughellmann.com> > On Sep 4, 2019, at 3:35 PM, Kendall Nelson wrote: > > - Talking to each other. I really honestly think just talking to one another could help too. When you find yourself in a conversation with someone about how unmotivated they are because they have a ton of work to do. You might offer to take something off their plate. Or help them see maybe they need to not take on anything new till some other work gets wrapped up. We are a community that succeeds together, so if you see someone burning themselves out do what you can to help lighten their load (helping directly is great, but there are plenty of other people in our community that you could call on to help too). Hopefully goes without saying, but don't burn yourself out trying to help someone else either. I would take this a step further, and remind everyone in leadership positions that your job is not to do things *for* anyone, but to enable others to do things *for themselves*. Open source is based on collaboration, and ensuring there is a healthy space for that collaboration is your responsibility. You are neither a free workforce nor a charity. By all means, you should help people to achieve their goals in a reasonable way by reducing barriers, simplifying processes, and making tools reusable. But do not for a minute believe that you have to do it all for them, even if you think they have a great idea. Make sure you say “yes, you should do that” more often than “yes, I will do that." Doug From mthode at mthode.org Wed Sep 4 21:23:11 2019 From: mthode at mthode.org (Matthew Thode) Date: Wed, 4 Sep 2019 16:23:11 -0500 Subject: [zaqar][requirements] - release zaqarclient please? Message-ID: <20190904212311.6ruqv3vxqopw6ohb@mthode.org> Hi Zaqar team, I tried to contact you via IRC but that didn't seem to go very well. The requirements team is looking for a release of the client so that we can move on updating jsonschema. We'd like to update it before the freeze time (starts Monday the 9th) so that all the other projects that use it can use it for a while in gate before release. held back because waiting for zaqarclient -jsonschema===3.0.2 +jsonschema===2.6.0 Is it possible to release a new version of the client (for instance as novaclient just recently did (among others))? Thanks, -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From morgan.fainberg at gmail.com Wed Sep 4 23:20:58 2019 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Wed, 4 Sep 2019 16:20:58 -0700 Subject: Thank you Stackers for five amazing years! In-Reply-To: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> References: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> Message-ID: Chris, Thanks for all the hard work and being an amazing part of this community. I hope we continue to run across each other professionally (conferences or otherwise). 
Best wishes and good luck on your new endeavors, --Morgan On Wed, Sep 4, 2019 at 9:26 AM Chris Hoge wrote: > Hi everyone, > > After more than nine years working in cloud computing and on OpenStack, > I've > decided that it is time for a change and will be moving on from the > OpenStack > Foundation. For the last five years I've had the honor of helping to > support > this vibrant community, and I'm going to deeply miss being a part of it. > OpenStack has been a central part of my life for so long that it's hard to > imagine a work life without it. I'm proud to have helped in some small way > to > create a lasting project and community that has, and will continue to, > transform how infrastructure is managed. > > September 12 will officially be my last day with the OpenStack Foundation. > As I > make the move away from my responsibilities, I'll be working with community > members to help ensure continuity of my efforts. > > Thank you to everyone for building such an incredible community filled with > talented, smart, funny, and kind people. You've built something special > here, > and we're all better for it. I'll still be involved with open source. If > you > ever want to get in touch, be it with questions about work I've been > involved > with or to talk about some exciting new tech or to just catch up over a > tasty > meal, I'm just a message away in all the usual places. > > Sincerely, > Chris > > chris at hogepodge.com > Twitter/IRC/everywhere else: @hogepodge > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Thu Sep 5 00:53:23 2019 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 04 Sep 2019 19:53:23 -0500 Subject: Open Infrastructure Summit Shanghai: Forum Submissions Open Message-ID: <5D705C83.8020203@openstack.org> Hello Everyone! We are now accepting Forum [1] submissions for the 2019 Open Infrastructure Summit in Shanghai [2]. Please submit your ideas through the Summit CFP tool [3] through September20th. Don't forget to put your brainstorming etherpad up on the Shanghai Forum page [4]. This is not a classic conference track with speakers and presentations. OSF community members (participants in development teams, operators, working groups, SIGs, and other interested individuals) discuss the topics they want to cover and get alignment on and we welcome your participation. The Forum is your opportunity to help shape the development of future project releases. More information about the Forum [1]. Keep in mind, Forum submissions are for discussions, not presentations. The timeline for submissions is as follows: Sep 4th | Formal topic submission tool opens: https://cfp.openstack.org. Sep 20th | Deadline for proposing Forum topics. Scheduling committee meeting to make draft agenda. Sep 30th | Draft Forum schedule published. Crowd sourced session conflict detection. Forum promotion begins. Oct 7th | Scheduling committee final meeting Oct 14th | Forum schedule final Nov 4-6| Forum Time! If you have questions or concerns, please reach out to speakersupport at openstack.org . Cheers, Jimmy [1] https://wiki.openstack.org/wiki/Forum [2] https://www.openstack.org/summit/shanghai-2019/ [3] https://cfp.openstack.org [4] https://wiki.openstack.org/wiki/Forum/Shanghai2019 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wesley.peng1 at googlemail.com Thu Sep 5 01:12:32 2019 From: wesley.peng1 at googlemail.com (Wesley Peng) Date: Thu, 5 Sep 2019 09:12:32 +0800 Subject: [ansible-sig] weekly meetings In-Reply-To: References: Message-ID: <7922685b-b7dd-3599-1fec-01c3cb4ce9bc@googlemail.com> Hi on 2019/9/5 0:20, Mohammed Naser wrote: > For those interested in getting involved, the ansible-sig meetings > will be held weekly on Fridays at 2:00 pm UTC starting next week (13 > September 2019). > > Looking forward to discussing details and ideas with all of you! Is it a onsite meeting? where is the location? thanks. From kevin at cloudnull.com Thu Sep 5 01:21:39 2019 From: kevin at cloudnull.com (Carter, Kevin) Date: Wed, 4 Sep 2019 20:21:39 -0500 Subject: [ansible-sig] weekly meetings In-Reply-To: <7922685b-b7dd-3599-1fec-01c3cb4ce9bc@googlemail.com> References: <7922685b-b7dd-3599-1fec-01c3cb4ce9bc@googlemail.com> Message-ID: Thanks Mohammed, I've added it to my calendar and look forward to getting started. -- Kevin Carter IRC: Cloudnull On Wed, Sep 4, 2019 at 8:17 PM Wesley Peng wrote: > Hi > > on 2019/9/5 0:20, Mohammed Naser wrote: > > For those interested in getting involved, the ansible-sig meetings > > will be held weekly on Fridays at 2:00 pm UTC starting next week (13 > > September 2019). > > > > Looking forward to discussing details and ideas with all of you! > > Is it a onsite meeting? where is the location? > This is a good question, I assume the meeting will be on IRC, on freenode, but what channel will we be using? #openstack-ansible-sig ? > > thanks. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Nitin.Uikey at nttdata.com Thu Sep 5 02:54:20 2019 From: Nitin.Uikey at nttdata.com (Uikey, Nitin) Date: Thu, 5 Sep 2019 02:54:20 +0000 Subject: [dev][tacker] Steps to setup tacker for testing VNF packages Message-ID: Hi All, Please find below the steps to set-up tacker for managing vnf packages. Steps to set-up tacker for managing vnf packages:- 1. Api-paste.ini [composite:tacker] /vnfpkgm/v1: vnfpkgmapi_v1 [composite:vnfpkgmapi_v1] use = call:tacker.auth:pipeline_factory noauth = request_id catch_errors extensions vnfpkgmapp_v1 keystone = request_id catch_errors authtoken keystonecontext extensions vnfpkgmapp_v1 [app:vnfpkgmapp_v1] paste.app_factory = tacker.api.vnfpkgm.v1.router:VnfpkgmAPIRouter.factory You can also copy api-paste.ini available in patch : https://review.opendev.org/#/c/675593 2. Configuration options changes : tacker.conf a) Periodic task to delete the vnf package artifacts from nodes and glance store. default configuration in tacker/tacker/conf/conductor.py vnf_package_delete_interval = 1800 b) Path to store extracted CSAR file on compute node default configuration in tacker/conf/vnf_package.py vnf_package_csar_path = /var/lib/tacker/vnfpackages/ vnf_package_csar_path should have Read and Write access (+rw) c) Path to store CSAR file at glance store default configuration in /devstack/lib/tacker filesystem_store_datadir = /var/lib/tacker/csar_files filesystem_store_datadir should have Read and Write access (+rw) 3. Apply python-tackerclient patches https://review.opendev.org/#/c/679956/ https://review.opendev.org/#/c/679957/ https://review.opendev.org/#/c/679958/ 4. Apply tosca parser changes https://review.opendev.org/#/c/675561/ 5. Sample CSAR file to create VNF package tacker/tacker/samples/vnf_packages/sample_vnf_pkg.zip 6. 
Commands to manage VNF packages To create a VNF package - openstack vnfpack create —user-data key=value will be generated by this command which will be used in other commands to manage VNF Package. To upload the CSAR file 1. using direct path - openstack vnfpack upload --upload-method direct-file --path 2. using web - openstack vnfpack upload --upload-method web-download --path To list all the VNF Package - openstack vnfpack list To show a VNF package details - openstack vnfpack show To delete a VNF package - openstack vnfpack delete use `openstack vnfpack --help` command for more information Regards, Nitin Uikey Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. From renat.akhmerov at gmail.com Thu Sep 5 04:32:38 2019 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Thu, 5 Sep 2019 11:32:38 +0700 Subject: Invite Oleg Ovcharuk to join the Mistral Core Team In-Reply-To: References: Message-ID: Andras, You just went one step ahead of me! I was going to promote Oleg in the end of this week :) I’m glad that we coincided at this. Thanks! I’m for it with my both hands! Renat Akhmerov @Nokia On 4 Sep 2019, 17:33 +0700, András Kövi , wrote: > I would like to invite Oleg Ovcharuk to join the Mistral Core Team. Oleg has been a very active and enthusiastic contributor to the project. He has definitely earned his way into our community. > > Thank you, > Andras -------------- next part -------------- An HTML attachment was scrubbed... URL: From Nitin.Uikey at nttdata.com Thu Sep 5 06:05:50 2019 From: Nitin.Uikey at nttdata.com (Uikey, Nitin) Date: Thu, 5 Sep 2019 06:05:50 +0000 Subject: [dev][tacker] Steps to setup tacker for testing VNF packages In-Reply-To: References: Message-ID: Hi All, Small correction. Added `default_backend = file` because `default_store` option is deprecated. >c) Path to store CSAR file at glance store >default configuration in /devstack/lib/tacker >filesystem_store_datadir = /var/lib/tacker/csar_files default_backend = file >filesystem_store_datadir should have Read and Write access (+rw) Regards, Nitin Uikey From: Uikey, Nitin Sent: Thursday, September 5, 2019 11:54 AM To: openstack-discuss at lists.openstack.org Subject: [dev][tacker] Steps to setup tacker for testing VNF packages Hi All, Please find below the steps to set-up tacker for managing vnf packages. Steps to set-up tacker for managing vnf packages:- 1. Api-paste.ini [composite:tacker] /vnfpkgm/v1: vnfpkgmapi_v1 [composite:vnfpkgmapi_v1] use = call:tacker.auth:pipeline_factory noauth = request_id catch_errors extensions vnfpkgmapp_v1 keystone = request_id catch_errors authtoken keystonecontext extensions vnfpkgmapp_v1 [app:vnfpkgmapp_v1] paste.app_factory = tacker.api.vnfpkgm.v1.router:VnfpkgmAPIRouter.factory You can also copy api-paste.ini available in patch : https://review.opendev.org/#/c/675593 2. Configuration options changes : tacker.conf a) Periodic task to delete the vnf package artifacts from nodes and glance store. 
default configuration in tacker/tacker/conf/conductor.py vnf_package_delete_interval = 1800 b) Path to store extracted CSAR file on compute node default configuration in tacker/conf/vnf_package.py vnf_package_csar_path = /var/lib/tacker/vnfpackages/ vnf_package_csar_path should have Read and Write access (+rw) c) Path to store CSAR file at glance store default configuration in /devstack/lib/tacker filesystem_store_datadir = /var/lib/tacker/csar_files filesystem_store_datadir should have Read and Write access (+rw) 3. Apply python-tackerclient patches https://review.opendev.org/#/c/679956/ https://review.opendev.org/#/c/679957/ https://review.opendev.org/#/c/679958/ 4. Apply tosca parser changes https://review.opendev.org/#/c/675561/ 5. Sample CSAR file to create VNF package tacker/tacker/samples/vnf_packages/sample_vnf_pkg.zip 6. Commands to manage VNF packages To create a VNF package - openstack vnfpack create —user-data key=value will be generated by this command which will be used in other commands to manage VNF Package. To upload the CSAR file 1. using direct path - openstack vnfpack upload --upload-method direct-file --path 2. using web - openstack vnfpack upload --upload-method web-download --path To list all the VNF Package - openstack vnfpack list To show a VNF package details - openstack vnfpack show To delete a VNF package - openstack vnfpack delete use `openstack vnfpack --help` command for more information Regards, Nitin Uikey Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. From Nitin.Uikey at nttdata.com Thu Sep 5 07:32:51 2019 From: Nitin.Uikey at nttdata.com (Uikey, Nitin) Date: Thu, 5 Sep 2019 07:32:51 +0000 Subject: [dev][tosca-parser] review of toscadefinition1.2-support code Message-ID: Dear core-reviewers, We have submitted one patch regarding `Add support for tosca definition version 1.2` under topic `toscadefinition1.2-support`. Ref: https://review.opendev.org/#/c/675561/ We are adding a new feature in tacker to implement ETSI specs. In the beginning adding interface to manage vnf packages. For this feature spec [1] to merge, we need tosca-parser patch to be merged. We would appreciate if you can take a look at the patch and give your feedback. Thank you in advance. [1] : https://review.opendev.org/#/c/582930 Regards, Nitin Uikey Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. 
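For anyone trying the VNF package steps above end to end, the same commands collected into one hedged sketch follow. The CSAR path and the <vnf-package-id> placeholder are illustrative (the create command prints the real ID), and the exact position of the ID argument in the upload commands is my assumption, so double-check with `openstack vnfpack --help`:

  openstack vnfpack create --user-data key=value          # note the returned VNF package ID
  openstack vnfpack upload --upload-method direct-file --path ./sample_vnf_pkg.zip <vnf-package-id>
  openstack vnfpack upload --upload-method web-download --path <csar-url> <vnf-package-id>   # alternative: fetch over HTTP
  openstack vnfpack list
  openstack vnfpack show <vnf-package-id>
  openstack vnfpack delete <vnf-package-id>
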
From dirk at dmllr.de Thu Sep 5 08:11:55 2019 From: dirk at dmllr.de (=?UTF-8?B?RGlyayBNw7xsbGVy?=) Date: Thu, 5 Sep 2019 10:11:55 +0200 Subject: [zaqar][requirements] - release zaqarclient please? In-Reply-To: <20190904212311.6ruqv3vxqopw6ohb@mthode.org> References: <20190904212311.6ruqv3vxqopw6ohb@mthode.org> Message-ID: Hi Matthew, thanks for raising the topic. I created a review for this, requires approval from PTL / release liason: https://review.opendev.org/#/c/679842/ Greetings, Dirk From thierry at openstack.org Thu Sep 5 09:59:22 2019 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 5 Sep 2019 11:59:22 +0200 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: References: <20190904024941.qaapsjuddklree26@yuggoth.org> Message-ID: <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> Chris Dent wrote: > [...] > We need to talk about the fact that there was no opportunity to vote > in these "elections" (PTL or TC) because there were insufficient > candidates. No matter the quality of new leaders (this looks like a > good group), something is amiss. The reality is, with less hype around OpenStack, it's just harder to justify the time you spend on "stewardship" positions. The employer does not value having their employees hold those positions as much as they used to. That affects things like finding volunteers to officiate elections, finding candidates for the TC, and also finding PTLs for every project. As far as PTL/TC elections are concerned I'd suggest two things: - reduce the number of TC members from 13 to 9 (I actually proposed that 6 months ago at the PTG but that was not as popular then). A group of 9 is a good trade-off between the difficulty to get enough people to do project stewardship and the need to get a diverse set of opinions on governance decision. - allow "PTL" role to be multi-headed, so that it is less of a superhuman and spreading the load becomes more natural. We would not elect/choose a single person, but a ticket with one or more names on it. From a governance perspective, we still need a clear contact point and a "bucket stops here" voice. But in practice we could (1) contact all heads when we contact "the PTL", and (2) consider that as long as there is no dissent between the heads, it is "the PTL voice". To actually make it work in practice I'd advise to keep the number of heads low (think 1-3). > [...] > We drastically need to change the expectations we place on ourselves > in terms of velocity. In terms of results, train cycle activity (as represented by merged commits/day) is globally down 9.6% compared to Stein. Only considering "core" projects, that's down 3.8%. So maybe we still have the same expectations, but we are definitely reducing our velocity... Would you say we need to better align our expectations with our actual speed? Or that we should reduce our expectations further, to drive velocity further down? 
-- Thierry Carrez (ttx) From cdent+os at anticdent.org Thu Sep 5 10:04:39 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 5 Sep 2019 11:04:39 +0100 (BST) Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> Message-ID: On Thu, 5 Sep 2019, Thierry Carrez wrote: > So maybe we still have the same expectations, but we are definitely reducing > our velocity... Would you say we need to better align our expectations with > our actual speed? Or that we should reduce our expectations further, to drive > velocity further down? We should slow down enough that the vendors and enterprises start to suffer. If they never notice, then it's clear we're trying too hard and can chill out. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From gmann at ghanshyammann.com Thu Sep 5 10:33:02 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 05 Sep 2019 19:33:02 +0900 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> Message-ID: <16d00fc100d.104db03dc225299.3598510759501367665@ghanshyammann.com> ---- On Thu, 05 Sep 2019 19:04:39 +0900 Chris Dent wrote ---- > On Thu, 5 Sep 2019, Thierry Carrez wrote: > > > So maybe we still have the same expectations, but we are definitely reducing > > our velocity... Would you say we need to better align our expectations with > > our actual speed? Or that we should reduce our expectations further, to drive > > velocity further down? > > We should slow down enough that the vendors and enterprises start to > suffer. If they never notice, then it's clear we're trying too hard > and can chill out. +1 on this but instead of slow down and make vendors suffer we need the proper way to notify or make them understand about the future cutoff effect on OpenStack as software. I know we have been trying every possible way but I am sure there are much more managerial steps can be taken. I expect Board of Director to come forward on this as an accountable entity. TC should raise this as high priority issue to them (in meetings, joint leadership meeting etc). I am sure this has been brought up before, can we make OpenStack membership company to have a minimum set of developers to maintain upstream. With the current situation, I think it make sense to ask them to contribute manpower also along with membership fee. But again this is more of BoD and foundation area. I agree on ttx proposal to reduce the TC number to 9 or 7, I do not think this will make any difference or slow down on any of the TC activity. 9 or 7 members are enough in TC. As long as we get PTL(even without an election) we are in a good position. This time only 7 leaderless projects (6 actually with Cyborg PTL missing to propose nomination in election repo and only on ML) are not so bad number. But yes this is a sign of taking action before it goes into more worst situation. -gmann > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent From anlin.kong at gmail.com Thu Sep 5 10:32:54 2019 From: anlin.kong at gmail.com (Lingxian Kong) Date: Thu, 5 Sep 2019 22:32:54 +1200 Subject: Thank you Stackers for five amazing years! 
In-Reply-To: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> References: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> Message-ID: Thank you for all the amazing work you've done, either in OpenStack or in k8s/cloud-provider-openstack. We will miss you! - Best regards, Lingxian Kong Catalyst Cloud On Thu, Sep 5, 2019 at 4:27 AM Chris Hoge wrote: > Hi everyone, > > After more than nine years working in cloud computing and on OpenStack, > I've > decided that it is time for a change and will be moving on from the > OpenStack > Foundation. For the last five years I've had the honor of helping to > support > this vibrant community, and I'm going to deeply miss being a part of it. > OpenStack has been a central part of my life for so long that it's hard to > imagine a work life without it. I'm proud to have helped in some small way > to > create a lasting project and community that has, and will continue to, > transform how infrastructure is managed. > > September 12 will officially be my last day with the OpenStack Foundation. > As I > make the move away from my responsibilities, I'll be working with community > members to help ensure continuity of my efforts. > > Thank you to everyone for building such an incredible community filled with > talented, smart, funny, and kind people. You've built something special > here, > and we're all better for it. I'll still be involved with open source. If > you > ever want to get in touch, be it with questions about work I've been > involved > with or to talk about some exciting new tech or to just catch up over a > tasty > meal, I'm just a message away in all the usual places. > > Sincerely, > Chris > > chris at hogepodge.com > Twitter/IRC/everywhere else: @hogepodge > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tpb at dyncloud.net Thu Sep 5 11:36:36 2019 From: tpb at dyncloud.net (Tom Barron) Date: Thu, 5 Sep 2019 07:36:36 -0400 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <16d00fc100d.104db03dc225299.3598510759501367665@ghanshyammann.com> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> <16d00fc100d.104db03dc225299.3598510759501367665@ghanshyammann.com> Message-ID: <20190905113636.qwxa4fjxnju7tmip@barron.net> On 05/09/19 19:33 +0900, Ghanshyam Mann wrote: > ---- On Thu, 05 Sep 2019 19:04:39 +0900 Chris Dent wrote ---- > > On Thu, 5 Sep 2019, Thierry Carrez wrote: > > > > > So maybe we still have the same expectations, but we are definitely reducing > > > our velocity... Would you say we need to better align our expectations with > > > our actual speed? Or that we should reduce our expectations further, to drive > > > velocity further down? > > > > We should slow down enough that the vendors and enterprises start to > > suffer. If they never notice, then it's clear we're trying too hard > > and can chill out. > >+1 on this but instead of slow down and make vendors suffer we need the proper >way to notify or make them understand about the future cutoff effect on OpenStack >as software. I know we have been trying every possible way but I am sure there are >much more managerial steps can be taken. I expect Board of Director to come forward >on this as an accountable entity. TC should raise this as high priority issue to them (in meetings, >joint leadership meeting etc). 
> >I am sure this has been brought up before, can we make OpenStack membership company >to have a minimum set of developers to maintain upstream. With the current situation, I think >it make sense to ask them to contribute manpower also along with membership fee. But again >this is more of BoD and foundation area. +1 IIUC Gold Membership in the Foundation provides voting privileges at a cost of $50-200K/year and Corporate Sponsorship provides these plus various marketing benefits at a cost of $10-25K/year. So far as I can tell there is not a requirement of a commitment of contributors and maintainers with the exception of the (currently closed) Platinum Membership, which costs $500K/year and requires at least 2 FTE equivalents contributing to OpenStack. In general I see requirements for annual cash expenditure to the Foundation, as for membership in any joint commercial enterprise, but little that ensures the availability of skilled labor for ongoing maintenance of our projects. -- Tom Barron > >I agree on ttx proposal to reduce the TC number to 9 or 7, I do not think this will make any >difference or slow down on any of the TC activity. 9 or 7 members are enough in TC. > >As long as we get PTL(even without an election) we are in a good position. This time only >7 leaderless projects (6 actually with Cyborg PTL missing to propose >nomination in election repo and only on ML) are >not so bad number. But yes this is a sign of taking action before it goes into more worst situation. > >-gmann > > > > > -- > > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > > freenode: cdent > > From smooney at redhat.com Thu Sep 5 11:41:29 2019 From: smooney at redhat.com (Sean Mooney) Date: Thu, 05 Sep 2019 12:41:29 +0100 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> Message-ID: On Thu, 2019-09-05 at 11:04 +0100, Chris Dent wrote: > On Thu, 5 Sep 2019, Thierry Carrez wrote: > > > So maybe we still have the same expectations, but we are definitely reducing > > our velocity... Would you say we need to better align our expectations with > > our actual speed? Or that we should reduce our expectations further, to drive > > velocity further down? > > We should slow down enough that the vendors and enterprises start to > suffer. If they never notice, then it's clear we're trying too hard > and can chill out. well openstack has already slowed alot. i think i dont really agree with Thierry's assertion that lack of participation is driven by vendors being less interested in openstack. i have not felt that at least in my time at redhat. when i was at intel i did feel that part of the reason that the investment that was been made was reducing was not driven by the lack of hype but by how slow adding some feature that really mattered were in openstack already. there are still feature that i had working in lab envs that were proposed upstream and are only now finally being addressed/fix that have been in flight for 4+ years. im not trying to pick on any project in particular with that comment because i have experience several multi cycle delays acorss several project either directly or via the people i work with day to day, in the time i have been working on openstack. our core teams to a lot of really good work, they do land alot of important feature and have been driving to improve the quality of the code and our documentation. 
Asking a core to also take on the durties of PTL is a lot on top of that. Until recently i assumed as i think many did that to run for PTL you had to be a core team member, not that i was really considering it in anycase but similarly many people assume to be a stable core you have to a core or to be on the technical commit you have to be well technical. part of the lack of engagement might be that not everyone knows they can tack part in some of the governance activities be they technical or organisational. i comment on TC and governace topics from time to time but i also personally feel that getting involed with either a PTL role or TC role would be a daunting task, even though i know many of the people invovled, it would still be out of my comfort zone. which is why if feel comfortable engaging with the campaigns and voting in the election but have never self nominated. spreading the load would help with that. > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent From gmann at ghanshyammann.com Thu Sep 5 11:46:43 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 05 Sep 2019 20:46:43 +0900 Subject: [goals][IPv6-Only Deployments and Testing] Week R-6 Update Message-ID: <16d013f861a.ccea0963228871.1777128122392549699@ghanshyammann.com> Hello Everyone, Below is the progress on Ipv6 goal during R6 week. I am preparing the legacy base job for IPv6 deployment. NOTE: As first step, I am going to set up the job with all IPv6 deployment setting and basic verification whether service listens on IPv6 or not. As the second step, we will add post-run script to audit the tcpdump/logs etc for unwanted IPv4 traffic. Summary: * Number of Ipv6 jobs proposed Projects: 28 * Number of pass projects: 18 ** Number of project merged out of pass project: 13 * Number of failing projects: 10 Storyboard: ========= - https://storyboard.openstack.org/#!/story/2005477 Current status: ============ 1. Cinder and devstack fix are merged for cinder IPv6 job and I did recheck on cinder patch. 2. Preparing the legacy base job with IPv6 setting - https://review.opendev.org/#/c/680233/ 3. Zun, Watcher, Telemetry(thanks to zhurong) are merged. I have proposed to run telemetry ipv6 job on Panko and Aodh gate also. 4. This week new projects ipv6 jobs patch and status: - Tacker: link: https://review.opendev.org/#/c/676918/ status: Current functional jobs are n-v so I am not sure IPv6 job will pass or not. waiting for gate result. Need Help from the project team: 1. Monasca: waiting for new kafka client patches merge - https://review.opendev.org/#/c/674814/2 2. Sahara: https://review.opendev.org/#/c/676903/ Job is failing to start the sahara service. I could not find the logs for sahara service(it shows an empty log under apache). Need help from sahara team. 3. Searchlight: https://review.opendev.org/#/c/678391/ python-searchlightclient error, Trinh will be looking into this. 4. Senlin: https://review.opendev.org/#/c/676910/ Not able to connect on auth url - https://zuul.opendev.org/t/openstack/build/0ad3b4aac0424ad78171ca7546421f5e/log/job-output.txt#43011 5. qinling: https://review.opendev.org/#/c/673506/1 logs are not there so i did recheck to get the fresh log for debugging. IPv6 missing support found: ===================== 1. https://review.opendev.org/#/c/673397/ 2. https://review.opendev.org/#/c/673449/ 3. https://review.opendev.org/#/c/677524/ How you can help: ============== - Each project needs to look for and review the ipv6 job patch. 
- Verify it works fine on ipv6 and no ipv4 used in conf etc - Any other specific scenario needs to be added as part of project IPv6 verification. - Help on debugging and fix the bug in IPv6 job is failing. Everything related to this goal can be found under this topic: Topic: https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged) How to define and run new IPv6 Job on project side: ======================================= - I prepared a wiki page to describe this section - https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing Review suggestion: ============== - Main goal of these jobs will be whether your service is able to listen on IPv6 and can communicate to any other services either OpenStack or DB or rabbitmq etc on IPv6 or not. So check your proposed job with that point of view. If anything missing, comment on patch. - One example was - I missed to configure novnc address to IPv6- https://review.opendev.org/#/c/672493/ - base script as part of 'devstack-tempest-ipv6' will do basic checks for endpoints on IPv6 and some devstack var setting. But if your project needs more specific verification then it can be added in project side job as post-run playbooks as described in wiki page[1]. [1] https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing -gmann From marek.lycka at ultimum.io Thu Sep 5 11:52:24 2019 From: marek.lycka at ultimum.io (=?UTF-8?B?TWFyZWsgTHnEjWth?=) Date: Thu, 5 Sep 2019 13:52:24 +0200 Subject: [Horizon] Paging and Angular... Message-ID: Hi all, I took apart the Horizon paging mechanism while working on [1] and have a few of findings: - Paging is unimplemented/turned off for many (if not most) panels, not just Routers and Networks - Currently, single page data loads could potentially bump up against API hard limits - Sorting is also broken in places where paging is enabled (Old images...), see [2] - The Networks table loads data via three API calls due to neutron API limitations, which makes the marker based mechanism unusable - There is at least one more minor bug which breaks pagination, there may be more While some of these things may be fixable in different hacky and/or inefficient ways, we already have Angular implementations which fix many of them and make improving and fixing the rest easier. Since Angular ports would help with other unrelated issues as well and allow us to start deprecating old code, I was wondering two things: 1) What would it take to increase the priority of Angularization in general? 2) Can the Code Review process be modified/improved to increase the chance for Angularization changes to be code reviewed and merged if they do happen? My previous attempts in this area have failed because of lack of code reviewers... Since full Angularization is still the goal for Horizon as far as I know, I'd rather spend time doing that than hacking solutions to different problems in legacy code which is slated deprecation. Best Regards, Marek [1] https://bugs.launchpad.net/horizon/+bug/1746184 [2] https://bugs.launchpad.net/horizon/+bug/1782732 -- Marek Lyčka Linux Developer Ultimum Technologies s.r.o. Na Poříčí 1047/26, 11000 Praha 1 Czech Republic marek.lycka at ultimum.io *https://ultimum.io * -------------- next part -------------- An HTML attachment was scrubbed... 
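Coming back to the IPv6-only goal update earlier in this digest: a minimal local spot-check that a service listens on IPv6 and that no IPv4 endpoints are registered could look like the sketch below. The port, the address, and the neutron example are placeholders of mine, not part of the goal jobs themselves:

  ss -6ltn | grep ':9696'        # e.g. neutron-server bound to an IPv6 address
  # any output from the next command would be an IPv4 endpoint sneaking into the catalog
  openstack endpoint list -f value -c URL | grep -E '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' || echo "no IPv4 endpoints"
  curl -g -s 'http://[2001:db8::10]:9696/' | head -1     # literal IPv6 URLs need -g and brackets
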
URL: From doug at doughellmann.com Thu Sep 5 13:24:57 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 5 Sep 2019 09:24:57 -0400 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> Message-ID: <65A0A6F8-CF5A-4403-B4D7-54B4B37A23AE@doughellmann.com> > On Sep 5, 2019, at 6:04 AM, Chris Dent wrote: > > On Thu, 5 Sep 2019, Thierry Carrez wrote: > >> So maybe we still have the same expectations, but we are definitely reducing our velocity... Would you say we need to better align our expectations with our actual speed? Or that we should reduce our expectations further, to drive velocity further down? > > We should slow down enough that the vendors and enterprises start to > suffer. If they never notice, then it's clear we're trying too hard > and can chill out. As much as I support the labor movement, I don’t think *starting* from an adversarial “we’ll show them!” position with our employers and potential contributors is the most effective way to establish the sort of change we want. It would much more likely instill the idea that this community won’t work with new contributors, which isn’t going to be any healthier than the current situation over the long term. That said, I do agree with the “chill out” approach. Do what you can and then emphasize collaboration over doing things for non-contributors, to turn them into contributors. Be honest about the need for help, and clear about what sort of help is needed, so that someone who *is* motivated can get involved. And make it easy for others to join and fulfill those needs, so the bureaucracy doesn’t demotivate them into looking for other communities to join instead. Also, accept that either approach is going to mean things will not be done, and that is OK. Look for ways to minimize the amount of effort for tasks that must be done, but let “good ideas” go. If they’re good enough, and you make it possible for others to contribute, someone will step up. But if that doesn’t happen, it should not be a source of stress for anyone. That means the “good idea” doesn’t meet the bar of economic viability. Doug From doug at doughellmann.com Thu Sep 5 14:59:06 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 5 Sep 2019 10:59:06 -0400 Subject: [winstackers][powervmstackers][tc] removing winstackers and PowerVMStackers from TC governance Message-ID: <0CCB5020-D524-4304-8682-A015AEDB7C50@doughellmann.com> Following the U cycle election there was no candidate for either the winstackers or powervmstackers team PTL role. This is the second cycle in a row where that problem has occurred for both teams, which indicates that the teams are not active in the community. During the TC meeting today [1] we discussed removing the teams from governance, so I have proposed the patches to do that [2][3]. 
Doug [1] http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-09-05-14.00.log.html#l-148 [2] https://review.opendev.org/680438 remove powervmstackers team [3] https://review.opendev.org/680439 remove winstackers team From adrianc at mellanox.com Thu Sep 5 15:10:17 2019 From: adrianc at mellanox.com (Adrian Chiris) Date: Thu, 5 Sep 2019 15:10:17 +0000 Subject: [tc][neutron] Supported Linux distributions and their kernel Message-ID: Greetings, I was wondering what is the guideline in regards to which kernels are supported by OpenStack in the various Linux distributions. Looking at [1], Taking for example latest CentOS major (7): Every "minor" version is released with a different kernel version, the oldest being released in 2014 (CentOS 7.0, kernel 3.10.0-123) and the newest released in 2018 (CentOS 7.6, kernel 3.10.0-957) While I understand that OpenStack projects are expected to support all CentOS 7.x releases. Does the same applies for the kernels they originally came out with? The reason I'm asking, is because I was working on doing some cleanup in neutron [2] for a workaround introduced because of an old kernel bug, It is unclear to me if it is safe to introduce this change. [1] https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions [2] https://review.opendev.org/#/c/677095/ Thanks, Adrian. -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Thu Sep 5 15:10:33 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 5 Sep 2019 11:10:33 -0400 Subject: [winstackers][powervmstackers][tc] removing winstackers and PowerVMStackers from TC governance In-Reply-To: <0CCB5020-D524-4304-8682-A015AEDB7C50@doughellmann.com> References: <0CCB5020-D524-4304-8682-A015AEDB7C50@doughellmann.com> Message-ID: <466A5D87-5936-4F05-91D9-36ACD680FFA4@doughellmann.com> > On Sep 5, 2019, at 10:59 AM, Doug Hellmann wrote: > > Following the U cycle election there was no candidate for either the winstackers or powervmstackers team PTL role. This is the second cycle in a row where that problem has occurred for both teams, which indicates that the teams are not active in the community. During the TC meeting today [1] we discussed removing the teams from governance, so I have proposed the patches to do that [2][3]. > > Doug > > [1] http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-09-05-14.00.log.html#l-148 > [2] https://review.opendev.org/680438 remove powervmstackers team > [3] https://review.opendev.org/680439 remove winstackers team I neglected to mention that we did consider both teams as good candidates for SIGs, but will leave it up to the contributors on those teams to propose creating the SIGs, if they choose to do so. Doug From cboylan at sapwetik.org Thu Sep 5 15:20:43 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 05 Sep 2019 08:20:43 -0700 Subject: [tc][neutron] Supported Linux distributions and their kernel In-Reply-To: References: Message-ID: <5e84afec-ca3b-4a9f-969a-69f4f748c893@www.fastmail.com> On Thu, Sep 5, 2019, at 8:10 AM, Adrian Chiris wrote: > > Greetings, > > I was wondering what is the guideline in regards to which kernels are > supported by OpenStack in the various Linux distributions. 
> > > Looking at [1], Taking for example latest CentOS major (7): > > Every “minor” version is released with a different kernel version, > > the oldest being released in 2014 (CentOS 7.0, kernel 3.10.0-123) and > the newest released in 2018 (CentOS 7.6, kernel 3.10.0-957) > > > While I understand that OpenStack projects are expected to support all > CentOS 7.x releases. It is my understanding that CentOS (and RHEL?) only support the current/latest point release of their distro [3]. We only test against that current point release. I don't expect we can be expected to support a distro release which the distro doesn't even support. All that to say I would only worry about the most recent point release. > > Does the same applies for the kernels they _originally_ came out with? > > > The reason I’m asking, is because I was working on doing some cleanup > in neutron [2] for a workaround introduced because of an old kernel bug, > > It is unclear to me if it is safe to introduce this change. > > > [1] > https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions > > [2] https://review.opendev.org/#/c/677095/ [3] https://wiki.centos.org/FAQ/General#head-dcca41e9a3d5ac4c6d900a991990fd11930867d6 From gmann at ghanshyammann.com Thu Sep 5 15:23:14 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 06 Sep 2019 00:23:14 +0900 Subject: [placement][ptl][tc] Call for Placement PTL position Message-ID: <16d0205c0b1.b18639584545.7154839133743907603@ghanshyammann.com> Hello Everyone, With Ussuri Cycle PTL election completed, we left with Placement project as leaderless[1]. In today TC meeting[2], we discussed the few possibilities and decided to reach out to the eligible candidates to serve the PTL position. We would like to know if anyone from Placement core team, Nova core team or PTL (as placement main consumer) of any other interested/related developer is interested to take the PTL position? [1] https://governance.openstack.org/election/results/ussuri/ptl.html [2] http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-09-05-14.00.log.html#l-250 -TC (gmann) From cdent+os at anticdent.org Thu Sep 5 16:20:39 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 5 Sep 2019 17:20:39 +0100 (BST) Subject: [placement][ptl][tc] Call for Placement PTL position In-Reply-To: <16d0205c0b1.b18639584545.7154839133743907603@ghanshyammann.com> References: <16d0205c0b1.b18639584545.7154839133743907603@ghanshyammann.com> Message-ID: On Fri, 6 Sep 2019, Ghanshyam Mann wrote: > With Ussuri Cycle PTL election completed, we left with Placement project as leaderless[1]. > In today TC meeting[2], we discussed the few possibilities and decided to reach out to the > eligible candidates to serve the PTL position. Thanks for being concerned about this, but it would have been useful if you included me (as the current PTL) and the rest of the Placement team in the discussion or at least confirmed plans with me before starting this seek-volunteers process. There are a few open questions we are still trying to resolve before we should jump to any decisions: * We are currently waiting to see if Tetsuro is available (he's been away for a few days). If he is, he'll be great, but we don't know yet if he can or wants to. 
* We've started, informally, discussing the option of pioneering the option of leaderless projects within Placement (we pioneer many other things there, may as well add that to the list) but without more discussion from the whole team (which can't happen because we don't have quorum of the actively involved people) and the TC it's premature. Leaderless would essentially mean consensually designating release liaisons and similar roles but no specific PTL. I think this is easily possible in a small in number, focused, and small feature-queue [1] group like Placement but would much harder in one of the larger groups like Nova. * We have several reluctant people who _can_ do it, but don't want to. Once we've explored the other ideas here and any others we can come up with, we can dredge one of those people up as a stand-in PTL, keeping the slot open. Because of [1] there's not much on the agenda for U. Since the Placement team is not planning to have an active presence at the PTG, nor planning to have much of a pre-PTG (as no one has stepped up with any feature ideas) we have some days or even weeks before it matters who the next PTL (if any) is, so if possible, let's not rush this. [1] It's been a design goal of mine from the start that Placement would quickly reach a position of stability and maturity that I liked to call "being done". By the end of Train we are expecting to be feature complete for any features that have been actively discussed in the recent past [2]. The main tasks in U will be responding to bug fixes and requests-for-explanations for the features that already exist (because people asked for them) but are not being used yet and getting the osc-placement client caught up. [2] The biggest thing that has been discussed as a "maybe we should do" for which there are no immediate plans is "resource provider sharding" or "one placement, many clouds". That's a thing we imagined people might ask for, but haven't yet, so there's little point doing it. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From mthode at mthode.org Thu Sep 5 16:25:17 2019 From: mthode at mthode.org (Matthew Thode) Date: Thu, 5 Sep 2019 11:25:17 -0500 Subject: [keystone][horizon][zaqar][tempest][requirements] library updates breaking projects Message-ID: <20190905162516.mxdxg4dl3epwwwfi@mthode.org> I emailed a while ago about problem updates and wanted to give an update. I'm hoping we can get these fixed before the freeze which is on Monday iirc. horizon This is a newer issue which e0ne and amotoki know about but no existing review to fix it. please test against https://review.opendev.org/680457 -semantic-version===2.8.1 +semantic-version===2.6.0 tempest STILL has failures I thought the following commit would fix it, but nope https://github.com/mtreinish/stestr/commit/136027c005fc437341bc23939a18a5f3314194f1 -stestr===2.5.1 +stestr===2.4.0 python-zaqarclient waiting on https://review.opendev.org/679842 may be merging today -jsonschema===3.0.2 +jsonschema===2.6.0 keystone a review is out there that seems to have tests passing https://review.opendev.org/677511/ -oauthlib===3.1.0 +oauthlib===3.0.2 -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From sean.mcginnis at gmx.com Thu Sep 5 16:57:01 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 5 Sep 2019 11:57:01 -0500 Subject: [release] Release countdown for week R-5, September 9-13 Message-ID: <20190905165701.GA29404@sm-workstation> Development Focus ----------------- We are getting close to the end of the Train cycle! Next week on September 12 is the train-3 milestone, also known as feature freeze. It's time to wrap up feature work in the services and their client libraries, and defer features that won't make it to the Ussuri cycle. General Information ------------------- This coming week is the deadline for client libraries: their last feature release needs to happen before "Client library freeze" on September 12. Only bugfix releases will be allowed beyond this point. When requesting those library releases, you can also include the stable/train branching request with the review (as an example, see the "branches" section here: https://opendev.org/openstack/releases/src/branch/master/deliverables/pike/os-brick.yaml#n2) September 12 is also the deadline for feature work in all OpenStack deliverables following the cycle-with-rc model. To help those projects produce a first release candidate in time, only bugfixes should be allowed in the master branch beyond this point. Any feature work past that deadline has to be approved by the team PTL. Finally, feature freeze is also the deadline for submitting a first version of your cycle-highlights. Cycle highlights are the raw data hat helps shape what is communicated in press releases and other release activity at the end of the cycle, avoiding direct contacts from marketing folks. See https://docs.openstack.org/project-team-guide/release-management.html#cycle-highlights for more details. Upcoming Deadlines & Dates -------------------------- Train-3 milestone (feature freeze): September 12 (R-5 week) RC1 deadline: September 26 (R-3 week) Train final release: October 16 Forum+PTG at Shanghai summit: November 4 From dirk at dmllr.de Thu Sep 5 18:09:58 2019 From: dirk at dmllr.de (=?UTF-8?B?RGlyayBNw7xsbGVy?=) Date: Thu, 5 Sep 2019 20:09:58 +0200 Subject: [ironic] opensuse-15 jobs are temporary non-voting on bifrost In-Reply-To: <979bbec8-1f94-458a-aab0-f4d6327078ab@redhat.com> References: <979bbec8-1f94-458a-aab0-f4d6327078ab@redhat.com> Message-ID: Hi Dmitry, Am Mi., 4. Sept. 2019 um 17:25 Uhr schrieb Dmitry Tantsur : > JFYI we had to disable opensuse-15 jobs because they kept failing with > repository issues. Help with debugging appreciated. The nodeset is incorrect, https://review.opendev.org/680450 should get you help started. Greetings, Dirk From bansalnehal26 at gmail.com Wed Sep 4 12:30:10 2019 From: bansalnehal26 at gmail.com (Nehal Bansal) Date: Wed, 4 Sep 2019 18:00:10 +0530 Subject: [Tacker] [Mistral] Regarding Inputs in Network Service Descriptors Message-ID: Hi, I have been trying to create a Network Service Descriptor which takes flavor, image, network_name as inputs from a parameter file and then passes it on to the VNF Descriptor but so far my attempts have been unsuccessful. Is there a standard template available because I could not find even a single one which took image_name or flavor_name from a parameter file. Thank you. Regards, Nehal Bansal -------------- next part -------------- An HTML attachment was scrubbed... 
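On Nehal's question about parameterized descriptors: I do not have a verified NSD-level example, but at the VNFD level the usual TOSCA pattern is to declare inputs, reference them with get_input, and supply a parameter file at create time. Everything below (node names, file names, and the client flags) is a from-memory sketch rather than a tested template, and the NSD-to-VNFD pass-through may need additional wiring:

  printf 'image_name: cirros-0.4.0\nflavor: m1.tiny\n' > vnf-params.yaml
  # VNFD fragment (illustrative) that consumes those inputs:
  #   topology_template:
  #     inputs:
  #       image_name: {type: string}
  #       flavor: {type: string}
  #     node_templates:
  #       VDU1:
  #         type: tosca.nodes.nfv.VDU.Tacker
  #         properties:
  #           image: { get_input: image_name }
  #           flavor: { get_input: flavor }
  openstack vnf create --vnfd-name my-vnfd --param-file vnf-params.yaml my-vnf
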
URL: From kennelson11 at gmail.com Thu Sep 5 18:48:31 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 5 Sep 2019 11:48:31 -0700 Subject: [all][PTL] Call for Cycle Highlights for Train Message-ID: Hello Everyone! As you may or may not have read last week in the release update from Sean, its time to call out 'cycle-highlights' in your deliverables! As PTLs, you probably get many pings towards the end of every release cycle by various parties (marketing, management, journalists, etc) asking for highlights of what is new and what significant changes are coming in the new release. By putting them all in the same place it makes them easy to reference because they get compiled into a pretty website like this from Rocky[1] or this one for Stein[2]. We don't need a fully fledged marketing message, just a few highlights (3-4 ideally), from each project team. *The deadline for cycle highlights is the end of the R-5 week [3] on Sept 13th.* How To Reminder: ------------------------- Simply add them to the deliverables/train/$PROJECT.yaml in the openstack/releases repo similar to this: cycle-highlights: - Introduced new service to use unused host to mine bitcoin. The formatting options for this tag are the same as what you are probably used to with Reno release notes. Also, you can check on the formatting of the output by either running locally: tox -e docs And then checking the resulting doc/build/html/train/highlights.html file or the output of the build-openstack-sphinx-docs job under html/train/ highlights.html. Thanks :) -Kendall Nelson (diablo_rojo) [1] https://releases.openstack.org/rocky/highlights.html [2] https://releases.openstack.org/stein/highlights.html [3] https://releases.openstack.org/train/schedule.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Sep 5 19:13:10 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 5 Sep 2019 19:13:10 +0000 Subject: [tc] Campaign Question: Treating the Problem, not just the symptoms- Burnout, No Polling, etc In-Reply-To: <99048F8B-EE87-4A3A-A689-8F05F8EBDBBE@doughellmann.com> References: <99048F8B-EE87-4A3A-A689-8F05F8EBDBBE@doughellmann.com> Message-ID: <20190905191310.jacwzbion5zf3jhv@yuggoth.org> On 2019-09-04 16:53:20 -0400 (-0400), Doug Hellmann wrote: > > On Sep 4, 2019, at 3:35 PM, Kendall Nelson wrote: [...] > > Hopefully goes without saying, but don't burn yourself out > > trying to help someone else either. This is the point in the flight safety demonstration where we remind passengers to affix their own oxygen masks before assisting others. > I would take this a step further, and remind everyone in > leadership positions that your job is not to do things *for* > anyone, but to enable others to do things *for themselves*. Open > source is based on collaboration, and ensuring there is a healthy > space for that collaboration is your responsibility. You are > neither a free workforce nor a charity. By all means, you should > help people to achieve their goals in a reasonable way by reducing > barriers, simplifying processes, and making tools reusable. But do > not for a minute believe that you have to do it all for them, even > if you think they have a great idea. Make sure you say “yes, you > should do that” more often than “yes, I will do that." 
And also, as has been suggested to some extent in other responses on this thread, if there are expected things go undone because there's nobody who has available time to do them, then it's a distinct possibility those things weren't necessary in the first place. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From skaplons at redhat.com Thu Sep 5 20:57:55 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 5 Sep 2019 22:57:55 +0200 Subject: [neutron] Open Infrastructure Summit Shanghai - forum topics ideas Message-ID: Hi Neutrinos, We want to collect some ideas about potential topics which can be proposed as sessions on Forum in Shanghai. I created etherpad [1]. If You have any idea for such potential topic, please add it there. If You don’t have any ideas, please also check etherpad - maybe You will be interested in one of topics proposed by others. We don’t have much time for that as deadline for CFP is 20th of September, so please don’t wait too long with writing there Your ideas :) [1] https://etherpad.openstack.org/p/neutron-shanghai-forum-brainstorming — Slawek Kaplonski Senior software engineer Red Hat From johnsomor at gmail.com Thu Sep 5 21:45:45 2019 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 5 Sep 2019 14:45:45 -0700 Subject: Octavia LB flavor recommendation for Amphora VMs In-Reply-To: References: Message-ID: Hi Pawel, For small deployments, the 1GB RAM, 1vCPU, 2GB disk (3GB with centos, etc) should work fine for you. You might even be able to drop the RAM lower if you will not be doing TLS. For example, my devstack amphora instance is allocated 1GB RAM, but is only using less than half that. (just because the flavor says 1GB it doesn't mean it uses all of that all of the time) Kernel page de-duplication will also help with actual consumption as the amphora images are mostly the same. If you are doing really large numbers of connections, and you are logging the tenant traffic flows locally, you might want to increase the available disk. Normal workloads will be fine with a smaller disk as the amphora do include log rotation. If you do not need the flow logs, there is a configuration setting to disable them. The main tuning you might want to do is setting the maximum amount of RAM it can consume. If you have a very large number of concurrent connections or are using TLS offloading, you might want to consider increasing the amount of RAM the amphora can consume. The HAProxy documentation states that it normally(non-TLS offload) uses around 32kB of RAM per established connection. You might start with that and see how that aligns to your application/use case. In testing I have done, adding additional vCPUs has very little impact on the performance(a small bump with the second CPU as the NIC interrupts can be split from the HAProxy processes). You can get pretty high throughput with a single vCPU. We expect once HAProxy 2.0 stabilizes and is available (the distros are not yet shipping it), we will look at enabling the threading support to vertically scale the amphora by adding vCPUs. Versions prior to 2.0 did not have good threading and the multi-process model breaks a bunch of features. If you really need more CPU now, you can always build a custom image with 2.0.x in it and use the "custom HAProxy template" configuration setting to add the threading settings. 
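To make the RAM guidance above a bit more concrete, here is a rough back-of-the-envelope sketch. The connection count and the flavor name are purely illustrative assumptions on my part, not a recommendation:

  # ~30,000 concurrent non-TLS connections * 32kB is roughly 960MB for
  # HAProxy alone, so leave headroom for the amphora agent, kernel, and
  # logging, and size the nova flavor used for amphorae at 2GB instead of 1GB:
  openstack flavor create --ram 2048 --vcpus 1 --disk 3 --private amphora-large

Actual consumption depends heavily on whether you terminate TLS and on how long-lived your connections are, so treat numbers like these as a starting point and measure against your real workload.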
Now with Octavia flavors, you can define flavors that select different nova flavors for the amphora at load balancer creation. For example, you can have a "bronze", "silver", "gold", each with different RAM allocations. We would also love to hear what you find with your deployment and applications. Michael On Wed, Sep 4, 2019 at 2:49 AM Pawel Konczalski wrote: > > Hello everyone / Octavia Team, > > what is your experience / recommendation for a Octavia flavor with is > used to deploy Amphora VM for small / mid size setups? (RAM / Cores / HDD) > > BR > > Pawel From davidmnoriega at gmail.com Thu Sep 5 21:58:42 2019 From: davidmnoriega at gmail.com (David M Noriega) Date: Thu, 5 Sep 2019 14:58:42 -0700 Subject: zuul and nodepool ansible roles Message-ID: How do I go about contributing to the zuul and nodepool roles? They do not have either a launchpad or storyboard page. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Sep 5 22:07:51 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 5 Sep 2019 22:07:51 +0000 Subject: zuul and nodepool ansible roles In-Reply-To: References: Message-ID: <20190905220751.arp4rj4c5kfek33r@yuggoth.org> On 2019-09-05 14:58:42 -0700 (-0700), David M Noriega wrote: > How do I go about contributing to the zuul and nodepool roles? > They do not have either a launchpad or storyboard page. To which zuul and nodepool roles are you referring? If you mean the ones which make up the Zuul project's standard library, you're looking for the https://opendev.org/zuul/zuul-jobs repository documented at https://zuul-ci.org/docs/zuul-jobs/ . Changes to content there can be proposed to the Gerrit service at review.opendev.org, and the Zuul community can be found in the #zuul channel on the Freenode IRC network or via the zuul-discuss at lists.zuul-ci.org mailing list. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Thu Sep 5 22:12:59 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 5 Sep 2019 22:12:59 +0000 Subject: zuul and nodepool ansible roles In-Reply-To: <20190905220751.arp4rj4c5kfek33r@yuggoth.org> References: <20190905220751.arp4rj4c5kfek33r@yuggoth.org> Message-ID: <20190905221259.qlhqbzeko4dk7a2g@yuggoth.org> On 2019-09-05 22:07:51 +0000 (+0000), Jeremy Stanley wrote: > On 2019-09-05 14:58:42 -0700 (-0700), David M Noriega wrote: > > How do I go about contributing to the zuul and nodepool roles? > > They do not have either a launchpad or storyboard page. > > To which zuul and nodepool roles are you referring? If you mean the > ones which make up the Zuul project's standard library, you're > looking for the https://opendev.org/zuul/zuul-jobs repository > documented at https://zuul-ci.org/docs/zuul-jobs/ . Changes to > content there can be proposed to the Gerrit service at > review.opendev.org, and the Zuul community can be found in the #zuul > channel on the Freenode IRC network or via the > zuul-discuss at lists.zuul-ci.org mailing list. I was just reminded in #zuul that https://zuul-ci.org/community.html is probably the best place to start. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From pabelanger at redhat.com Thu Sep 5 22:15:35 2019 From: pabelanger at redhat.com (Paul Belanger) Date: Thu, 5 Sep 2019 18:15:35 -0400 Subject: zuul and nodepool ansible roles In-Reply-To: References: Message-ID: <20190905221535.GA4782@localhost.localdomain> On Thu, Sep 05, 2019 at 02:58:42PM -0700, David M Noriega wrote: > How do I go about contributing to the zuul and nodepool roles? They do not > have either a launchpad or storyboard page. This is true, have not done the steps to set this up. I think we could do launchpad or storyboard if wanted. That send, I usually hang out in #openstack-windmill to answer some questions or watch for new patches to be created from time to time. For the moment, I would suggest IRC as even if bug trackers were enabled, I am not sure how often I'd be able to check them. -Paul From nate.johnston at redhat.com Thu Sep 5 22:31:37 2019 From: nate.johnston at redhat.com (Nate Johnston) Date: Thu, 5 Sep 2019 18:31:37 -0400 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> Message-ID: <20190905223137.i72s7n4tibkgypqf@bishop> On Thu, Sep 05, 2019 at 11:59:22AM +0200, Thierry Carrez wrote: > Chris Dent wrote: > > [...] > > We need to talk about the fact that there was no opportunity to vote > > in these "elections" (PTL or TC) because there were insufficient > > candidates. No matter the quality of new leaders (this looks like a > > good group), something is amiss. > > The reality is, with less hype around OpenStack, it's just harder to justify > the time you spend on "stewardship" positions. The employer does not value > having their employees hold those positions as much as they used to. That > affects things like finding volunteers to officiate elections, finding > candidates for the TC, and also finding PTLs for every project. > > As far as PTL/TC elections are concerned I'd suggest two things: > > - reduce the number of TC members from 13 to 9 (I actually proposed that 6 > months ago at the PTG but that was not as popular then). A group of 9 is a > good trade-off between the difficulty to get enough people to do project > stewardship and the need to get a diverse set of opinions on governance > decision. > > - allow "PTL" role to be multi-headed, so that it is less of a superhuman > and spreading the load becomes more natural. We would not elect/choose a > single person, but a ticket with one or more names on it. From a governance > perspective, we still need a clear contact point and a "bucket stops here" > voice. But in practice we could (1) contact all heads when we contact "the > PTL", and (2) consider that as long as there is no dissent between the > heads, it is "the PTL voice". To actually make it work in practice I'd > advise to keep the number of heads low (think 1-3). I think there was already an effort to allow the PTL to shed some of their duties, in the form of the Cross Project Liaisons [1] project. I thought that was a great way for more junior members of the community to get involved with stewardship and be recognized for that contribution, and perhaps be mentored up as they take a bit of load off the PTL. 
I think if we expand the roles to include more of the functions that PTLs feel the need to do themselves, then by doing so we (of necessity) document those parts of the job so that others can handle them. And perhaps projects can cooperate and pool resources - for example, the same person who is a liaison for Neutron to Oslo could probably be on the look out for issues of interest to Octavia as well, and so on. I think that this looks different for projects of different size; large projects can spread it out a bit, while for smaller ones more of a "triumvirate" approach would likely develop. Nate [1] https://wiki.openstack.org/wiki/CrossProjectLiaisons for those not familiar > > [...] > > We drastically need to change the expectations we place on ourselves > > in terms of velocity. > > In terms of results, train cycle activity (as represented by merged > commits/day) is globally down 9.6% compared to Stein. Only considering > "core" projects, that's down 3.8%. > > So maybe we still have the same expectations, but we are definitely reducing > our velocity... Would you say we need to better align our expectations with > our actual speed? Or that we should reduce our expectations further, to > drive velocity further down? > > -- > Thierry Carrez (ttx) > From openstack at fried.cc Thu Sep 5 22:32:38 2019 From: openstack at fried.cc (Eric Fried) Date: Thu, 5 Sep 2019 17:32:38 -0500 Subject: [winstackers][powervmstackers][tc] removing winstackers and PowerVMStackers from TC governance In-Reply-To: <466A5D87-5936-4F05-91D9-36ACD680FFA4@doughellmann.com> References: <0CCB5020-D524-4304-8682-A015AEDB7C50@doughellmann.com> <466A5D87-5936-4F05-91D9-36ACD680FFA4@doughellmann.com> Message-ID: <31bc5922-3480-2fb6-dade-f76dab1e9013@fried.cc> There are other factors at play here that arguably justify this action, but I'd like to posit that failure to put forward a PTL for teams of this nature should not by itself be grounds for de-governance-ification. Cf. the "no placement PTL" thread for discussion of leaderlessness being not only possible but potentially beneficial. efried From anlin.kong at gmail.com Thu Sep 5 22:55:49 2019 From: anlin.kong at gmail.com (Lingxian Kong) Date: Fri, 6 Sep 2019 10:55:49 +1200 Subject: Need help trigger aodh alarm - All the steps I went through by details. In-Reply-To: References: Message-ID: Hi Anmar, Please see my comments in-line below. - Best regards, Lingxian Kong Catalyst Cloud On Wed, Sep 4, 2019 at 2:51 PM Anmar Salih wrote: > Hi Lingxian, > > First of all, I would like to apologize because the email is pretty long. > I listed all the steps I went through just to make sure that I did > everything correctly. > No need to apologize, more information is always helpful to solve the problem. > 4- Creating the webhook for the function by: openstack webhook create > --function 07edc434-a4b8-424a-8d3a-af253aa31bf8 . Here is a screen capture > for the response. I tried to copy and paste > the webhook_url " > http://192.168.1.155:7070/v1/webhooks/c5608648-bd73-478f-b452-ad1eabf93328/invoke" into > my internet browser, so I got 404 not found. I am not sure if this is > normal response or I have something wrong here. > Like Gaetan said, the webhook is supposed to be invoked by http POST. 9- Checking aodh alarm history by aodh alarm-history show > ea16edb9-2000-471b-88e5-46f54208995e -f yaml . So I got this response > > > 10- Last step is to check the function execution in qinling and here is > the response . (empty bracket). I am not sure > what is the problem. 
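(One side note on the 404 you mentioned in step 4: a browser address bar issues a GET, while the webhook endpoint is meant to be invoked with a POST, so checking it from a shell is more representative. A minimal hand-driven check, using the webhook_url from your own output, would be something like:

  curl -X POST http://192.168.1.155:7070/v1/webhooks/c5608648-bd73-478f-b452-ad1eabf93328/invoke

The aodh alarm action performs a POST to that URL for you when the alarm actually fires, so this is only useful for verifying the Qinling side in isolation.)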
> Yeah, from the output of alarm history, the alarm is not triggered, as a result, there won't be execution created by the webhook. Seems like the aodh-listener didn't receive the message or the message was ignored. Could you paste the aodh-listener log but make sure: 1. `debug = True` in /etc/aodh/aodh.conf 2. Trigger the python script again > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Thu Sep 5 23:11:16 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 05 Sep 2019 16:11:16 -0700 Subject: Root Ara report removed from Zuul Jobs Message-ID: <67d5aea3-2d92-4378-9af1-a9dc2bcad0cc@www.fastmail.com> Hello Everyone, We have removed the top level Zuul job Ara reports from our Zuul jobs. This was done to reduce the total number of objects we are uploading to our swift/ceph object stores as some clouds have indicated the total object volume is a bit high. Analysis showed that Ara represented a significant chunk of that data. We did not remove that information though. Zuul's build dashboard is able to render a similar report for builds. You can find that by clicking on the "Console" tab of a build. For exampe, here is one for a nova tox job: http://zuul.openstack.org/build/8e581b24d38b4e5c8ff046be081c4525/console We hope this makes our log storage easier to support while still providing the information you need to debug your jobs. Note jobs that run a nested ara-report are not affected by this. I think TripleO, OSA, and others do this. Thank you, Clark From gmann at ghanshyammann.com Fri Sep 6 00:26:13 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 06 Sep 2019 09:26:13 +0900 Subject: [placement][ptl][tc] Call for Placement PTL position In-Reply-To: References: <16d0205c0b1.b18639584545.7154839133743907603@ghanshyammann.com> Message-ID: <16d03f6dcc3.b88b3ffe13390.5078407317661040921@ghanshyammann.com> ---- On Fri, 06 Sep 2019 01:20:39 +0900 Chris Dent wrote ---- > On Fri, 6 Sep 2019, Ghanshyam Mann wrote: > > > With Ussuri Cycle PTL election completed, we left with Placement project as leaderless[1]. > > In today TC meeting[2], we discussed the few possibilities and decided to reach out to the > > eligible candidates to serve the PTL position. > > Thanks for being concerned about this, but it would have been useful > if you included me (as the current PTL) and the rest of the > Placement team in the discussion or at least confirmed plans with me > before starting this seek-volunteers process. > > There are a few open questions we are still trying to resolve > before we should jump to any decisions: > > * We are currently waiting to see if Tetsuro is available (he's been > away for a few days). If he is, he'll be great, but we don't know > yet if he can or wants to. Thanks Chris. we discussed it in yesterday TC meeting and there is no hurry or leaving placement team away from the discussion. You as Train PTL and other placement members are the only ones to decide and help to select the right candidate. I am also waiting to hear from Tetsuro about his planning. > > * We've started, informally, discussing the option of pioneering the > option of leaderless projects within Placement (we pioneer many > other things there, may as well add that to the list) but without > more discussion from the whole team (which can't happen because we > don't have quorum of the actively involved people) and the TC it's > premature. 
Leaderless would essentially mean consensually > designating release liaisons and similar roles but no specific > PTL. I think this is easily possible in a small in number, > focused, and small feature-queue [1] group like Placement but > would much harder in one of the larger groups like Nova. This is an interesting idea and needs more discussions seems. I am not against of Leaderless project approach with right point of contacts for TC/release team etc but this is going to be the new process under current governance. Because there are other projects (winstackers and PowerVMStackers in U) are in the queue of being removed from governance because continuously lacking the leader since a couple of cycles. So if we go for Leaderless approach then, those projects should be removed based on general-in-active projects not because of no PTL. Anyways IMO, let's first check all possibility if anyone from placement team (or nova as it is an almost same team) can serve as PTL. If no then we discuss about your idea. -gmann > > * We have several reluctant people who _can_ do it, but don't want > to. Once we've explored the other ideas here and any others we can > come up with, we can dredge one of those people up as a stand-in > PTL, keeping the slot open. Because of [1] there's not much on the > agenda for U. > > Since the Placement team is not planning to have an active presence > at the PTG, nor planning to have much of a pre-PTG (as no one has > stepped up with any feature ideas) we have some days or even weeks > before it matters who the next PTL (if any) is, so if possible, > let's not rush this. > > [1] It's been a design goal of mine from the start that Placement > would quickly reach a position of stability and maturity that I > liked to call "being done". By the end of Train we are expecting to > be feature complete for any features that have been actively > discussed in the recent past [2]. The main tasks in U will be > responding to bug fixes and requests-for-explanations for the > features that already exist (because people asked for them) but are > not being used yet and getting the osc-placement client caught up. > > [2] The biggest thing that has been discussed as a "maybe we should > do" for which there are no immediate plans is "resource provider > sharding" or "one placement, many clouds". That's a thing we > imagined people might ask for, but haven't yet, so there's little > point doing it. > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent From Albert.Braden at synopsys.com Fri Sep 6 00:37:32 2019 From: Albert.Braden at synopsys.com (Albert Braden) Date: Fri, 6 Sep 2019 00:37:32 +0000 Subject: Nova causes MySQL timeouts In-Reply-To: References: Message-ID: After more googling it appears that max_pool_size is a maximum limit on the number of connections that can stay open, and max_overflow is a maximum limit on the number of connections that can be temporarily opened when the pool has been consumed. It looks like the defaults are 5 and 10 which would keep 5 connections open all the time and allow 10 temp. Do I need to set max_pool_size to 0 and max_overflow to the number of connections that I want to allow? Is that a reasonable and correct configuration? Intuitively that doesn't seem right, to have a pool size of 0, but if the "pool" is a group of connections that will remain open until they time out, then maybe 0 is correct? 
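For reference, the kind of [database] block I have in mind to experiment with looks something like this. The values are guesses on my part for illustration, not recommendations:

  [database]
  # Recycle idle connections well before the MySQL server's wait_timeout
  # (SHOW VARIABLES LIKE 'wait_timeout') so the server never has to abort them.
  connection_recycle_time = 300
  # Keep a small steady-state pool per worker and allow a burst of
  # temporary connections on top of it.
  max_pool_size = 5
  max_overflow = 10

If that reasoning about recycling before wait_timeout is wrong, I would be happy to be corrected.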
From: Albert Braden Sent: Wednesday, September 4, 2019 10:19 AM To: openstack-discuss at lists.openstack.org Cc: Gaëtan Trellu Subject: RE: Nova causes MySQL timeouts We’re not setting max_pool_size nor max_overflow option presently. I googled around and found this document: https://docs.openstack.org/keystone/stein/configuration/config-options.html Document says: [api_database] connection_recycle_time = 3600 (Integer) Timeout before idle SQL connections are reaped. max_overflow = None (Integer) If set, use this value for max_overflow with SQLAlchemy. max_pool_size = None (Integer) Maximum number of SQL connections to keep open in a pool. [database] connection_recycle_time = 3600 (Integer) Timeout before idle SQL connections are reaped. min_pool_size = 1 (Integer) Minimum number of SQL connections to keep open in a pool. max_overflow = 50 (Integer) If set, use this value for max_overflow with SQLAlchemy. max_pool_size = None (Integer) Maximum number of SQL connections to keep open in a pool. If min_pool_size is >0, would that cause at least 1 connection to remain open until it times out? What are the recommended values for these, to allow unused connections to close before they time out? Is “min_pool_size = 0” an acceptable setting? My settings are default: [api_database]: #connection_recycle_time = 3600 #max_overflow = #max_pool_size = [database]: #connection_recycle_time = 3600 #min_pool_size = 1 #max_overflow = 50 #max_pool_size = 5 It’s not obvious what max_overflow does. Where can I find a document that explains more about these settings? From: Gaëtan Trellu > Sent: Tuesday, September 3, 2019 1:37 PM To: Albert Braden > Cc: openstack-discuss at lists.openstack.org Subject: Re: Nova causes MySQL timeouts Hi Albert, It is a configuration issue, have a look to max_pool_size and max_overflow options under [database] section. Keep in mind than more workers you will have more connections will be opened on the database. Gaetan (goldyfruit) On Sep 3, 2019 4:31 PM, Albert Braden > wrote: It looks like nova is keeping mysql connections open until they time out. How are others responding to this issue? Do you just ignore the mysql errors, or is it possible to change configuration so that nova closes and reopens connections before they time out? Or is there a way to stop mysql from logging these aborted connections without hiding real issues? Aborted connection 10726 to db: 'nova' user: 'nova' host: 'asdf' (Got timeout reading communication packets) -------------- next part -------------- An HTML attachment was scrubbed... URL: From Nitin.Uikey at nttdata.com Fri Sep 6 02:40:13 2019 From: Nitin.Uikey at nttdata.com (Uikey, Nitin) Date: Fri, 6 Sep 2019 02:40:13 +0000 Subject: [dev][tacker] Steps to setup tacker for testing VNF packages In-Reply-To: References: , Message-ID: Hi All, DB upgrade steps was missing in my previous mail. Sharing all the steps again. Steps to set-up tacker for managing vnf packages:- 1. Api-paste.ini [composite:tacker] /vnfpkgm/v1: vnfpkgmapi_v1 [composite:vnfpkgmapi_v1] use = call:tacker.auth:pipeline_factory noauth = request_id catch_errors extensions vnfpkgmapp_v1 keystone = request_id catch_errors authtoken keystonecontext extensions vnfpkgmapp_v1 [app:vnfpkgmapp_v1] paste.app_factory = tacker.api.vnfpkgm.v1.router:VnfpkgmAPIRouter.factory You can also copy api-paste.ini available in patch : https://review.opendev.org/#/c/675593 2. Configuration options changes : tacker.conf a) Periodic task to delete the vnf package artifacts from nodes and glance store. 
default configuration in tacker/tacker/conf/conductor.py
vnf_package_delete_interval = 1800

b) Path to store extracted CSAR file on compute node
default configuration in tacker/conf/vnf_package.py
vnf_package_csar_path = /var/lib/tacker/vnfpackages/
vnf_package_csar_path should have Read and Write access (+rw)

c) Path to store CSAR file at glance store
default configuration in /devstack/lib/tacker
default_backend = file
filesystem_store_datadir = /var/lib/tacker/csar_files
filesystem_store_datadir should have Read and Write access (+rw)

3. Apply python-tackerclient patches
https://review.opendev.org/#/c/679956/
https://review.opendev.org/#/c/679957/
https://review.opendev.org/#/c/679958/

4. Apply tosca parser changes
https://review.opendev.org/#/c/675561/

5. Upgrade the tacker database to the 9d425296f2c3 version
tacker-db-manage --config-file /etc/tacker/tacker.conf upgrade 9d425296f2c3

6. Sample CSAR file to create VNF package
tacker/tacker/samples/vnf_packages/sample_vnf_pkg.zip

7. Commands to manage VNF packages

To create a VNF package - openstack vnfpack create --user-data key=value
The id generated by this command will be used in the other commands to manage the VNF package.

To upload the CSAR file
1. using direct path - openstack vnfpack upload --upload-method direct-file --path <path to CSAR file>
2. using web - openstack vnfpack upload --upload-method web-download --path <link to CSAR file>

To list all the VNF packages - openstack vnfpack list
To show a VNF package details - openstack vnfpack show <id>
To delete a VNF package - openstack vnfpack delete <id>

use `openstack vnfpack --help` command for more information

Regards,
Nitin Uikey

Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding.

From dangtrinhnt at gmail.com Fri Sep 6 04:07:36 2019
From: dangtrinhnt at gmail.com (Trinh Nguyen)
Date: Fri, 6 Sep 2019 13:07:36 +0900
Subject: [all][ptl][tc][docs] Develope a code-review practices document
Message-ID: 

Hi all,

I sometimes find it hard to handle situations in code review, things like solving conflicts while not upsetting developers, or suggesting a change to a patchset while still encouraging the committer, etc. I know there are already documents that guide us on how to do code review [2] and even projects develop their own procedures, but I find they're more about technical issues than human communication.

Currently, reading Google's code-review practices [1] gives me some inspiration to develop more human-centric code-review guidelines for OpenStack projects. IMO, it could be a great way to help project teams develop stronger relationships as well as encourage newcomers. When the document is finalized, I will then encourage PTLs to refer to that document in the project's docs.

Let me know what you think and I will put a patchset after one or two weeks.
[1] https://google.github.io/eng-practices/review/ [2] https://docs.openstack.org/project-team-guide/review-the-openstack-way.html [3] https://docs.openstack.org/doc-contrib-guide/docs-review.html [4] https://docs.openstack.org/nova/rocky/contributor/code-review.html [5] https://docs.openstack.org/neutron/pike/contributor/policies/code-reviews.html Bests, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From dikonoor at in.ibm.com Fri Sep 6 05:42:07 2019 From: dikonoor at in.ibm.com (Divya K Konoor) Date: Fri, 6 Sep 2019 11:12:07 +0530 Subject: [winstackers][powervmstackers][tc] removing winstackers and PowerVMStackers from TC governance In-Reply-To: <31bc5922-3480-2fb6-dade-f76dab1e9013@fried.cc> References: <0CCB5020-D524-4304-8682-A015AEDB7C50@doughellmann.com> <466A5D87-5936-4F05-91D9-36ACD680FFA4@doughellmann.com> <31bc5922-3480-2fb6-dade-f76dab1e9013@fried.cc> Message-ID: Missing the deadline for a PTL nomination cannot be the reason for removing governance. PowerVMStackers continue to be an active project and would want to be continued to be governed under OpenStack. For PTL, an eligible candidate can still be appointed . Regards, D i v y a K K o n o o r -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 14142563.gif Type: image/gif Size: 558 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: From luka.peschke at objectif-libre.com Fri Sep 6 08:28:20 2019 From: luka.peschke at objectif-libre.com (Luka Peschke) Date: Fri, 06 Sep 2019 10:28:20 +0200 Subject: [cloudkitty] Shift IRC meeting of september 6th Message-ID: <949db3c0.AM8AAEwUf8UAAAAAAAAAAAQR_QkAAAAAZtYAAAAAAAzbjABdchil@mailjet.com> Hello, Some CK cores are unavailable today, so we've decided to move today's meeting to next friday (the 13th) at 15h UTC / 17h CEST . Regards, Luka Peschke From thierry at openstack.org Fri Sep 6 08:48:02 2019 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 6 Sep 2019 10:48:02 +0200 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <20190905223137.i72s7n4tibkgypqf@bishop> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> <20190905223137.i72s7n4tibkgypqf@bishop> Message-ID: <0bbb4765-3e57-b7dc-11ef-50ed639ea5c0@openstack.org> Nate Johnston wrote: > On Thu, Sep 05, 2019 at 11:59:22AM +0200, Thierry Carrez wrote: >> - allow "PTL" role to be multi-headed, so that it is less of a superhuman >> and spreading the load becomes more natural. We would not elect/choose a >> single person, but a ticket with one or more names on it. From a governance >> perspective, we still need a clear contact point and a "bucket stops here" >> voice. But in practice we could (1) contact all heads when we contact "the >> PTL", and (2) consider that as long as there is no dissent between the >> heads, it is "the PTL voice". To actually make it work in practice I'd >> advise to keep the number of heads low (think 1-3). > > I think there was already an effort to allow the PTL to shed some of their > duties, in the form of the Cross Project Liaisons [1] project. 
I thought that > was a great way for more junior members of the community to get involved with > stewardship and be recognized for that contribution, and perhaps be mentored up > as they take a bit of load off the PTL. I think if we expand the roles to > include more of the functions that PTLs feel the need to do themselves, then by > doing so we (of necessity) document those parts of the job so that others can > handle them. And perhaps projects can cooperate and pool resources - for > example, the same person who is a liaison for Neutron to Oslo could probably be > on the look out for issues of interest to Octavia as well, and so on. Cross-project liaisons are a form of delegation. So yes, PTLs already can (and probably should) delegate most of their duties. And in a lot of teams it already works like that. But we have noticed that it can be harder to delegate tasks than share tasks. Basically, once someone is the PTL, it is tempting to just have them do all the PTL stuff (since they will do it by default if nobody steps up). That makes the job a bit intimidating, and it is sometimes hard to find candidates to fill it. If it's clear from day 0 that two or three people will share the tasks and be collectively responsible for those tasks to be covered, it might be less intimidating (easier to find 2 x 50% than 1 x 100% ?). -- Thierry Carrez (ttx) From thierry at openstack.org Fri Sep 6 09:05:16 2019 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 6 Sep 2019 11:05:16 +0200 Subject: [winstackers][powervmstackers][tc] removing winstackers and PowerVMStackers from TC governance In-Reply-To: References: <0CCB5020-D524-4304-8682-A015AEDB7C50@doughellmann.com> <466A5D87-5936-4F05-91D9-36ACD680FFA4@doughellmann.com> <31bc5922-3480-2fb6-dade-f76dab1e9013@fried.cc> Message-ID: Divya K Konoor wrote: > Missing the deadline for a PTL nomination cannot be the reason for > removing governance. I agree with that, but missing the deadline twice in a row is certainly a sign of some disconnect with the rest of the OpenStack community. Project teams require a minimal amount of reactivity and presence, so it is fair to question whether PowerVMStackers should continue as a project team in the future. > PowerVMStackers continue to be an active project > and would want to be continued to be governed under OpenStack. For PTL, > an eligible candidate can still be appointed . There is another option, to stay under OpenStack governance but without the constraints of a full project team: PowerVMStackers could be made an OpenStack SIG. I already proposed that 6 months ago (last time there was no PTL nominee for the team), on the grounds that interest in PowerVM was clearly a special interest, and a SIG might be a better way to regroup people interested in supporting PowerVM in OpenStack. The objection back then was that PowerVMStackers maintained a number of PowerVM-related code, plugins and drivers that should ideally be adopted by their consuming project teams (nova, neutron, ceilometer), and that making it a SIG would endanger that adoption process. I still think it makes sense to consider PowerVMStackers as a Special Interest Group. As long as the PowerVM-related code is not adopted by the consuming projects, it is arguably a special interest, and not a completely-integrated part of OpenStack components. The only difference in being a SIG (compared to being a project team) would be to reduce the amount of mandatory tasks (like designating a PTL every 6 months). 
You would still be able to own repositories, get room at OpenStack events, vote on TC election... It would seem to be the best solution in your case. -- Thierry Carrez (ttx) From marek.lycka at ultimum.io Fri Sep 6 09:33:40 2019 From: marek.lycka at ultimum.io (=?UTF-8?B?TWFyZWsgTHnEjWth?=) Date: Fri, 6 Sep 2019 11:33:40 +0200 Subject: [Horizon] Paging and Angular... In-Reply-To: References: Message-ID: Hi, > we need people familiar with Angular and Horizon's ways of using Angular (which seem to be very > non-standard) that would be willing to write and review code. Unfortunately the people who originally > introduced Angular in Horizon and designed how it is used are no longer interested in contributing, > and there don't seem to be any new people able to handle this. I've been working with Horizon's Angular for quite some time and don't mind keeping at it, but it's useless unless I can get my code merged, hence my original message. As far as attracting new developers goes, I think that removing some barriers to entry couldn't hurt - seeing commits simply lost to time being one of them. I can see it as being fairly demoralizing. > Personally, I think that a better long-time strategy would be to remove all > Angular-based views from Horizon, and focus on maintaining one language and one set of tools. Removing AngularJS wouldn't remove JavaScript from horizon. We'd still be left with a home-brewish framework (which is buggy as is). I don't think removing js completely is realistic either: we'd lose functionality and worsen user experience. I think that keeping Angular is the better alternative: 1) A lot of work has already been put into Angularization, solving many problems 2) Unlike legacy js, Angular code is covered by automated tests 3) Arguably, improvments are, on average, easier to add to Angular than pure js implementations Whatever reservations there may be about the current implementation can be identified and addressed, but all in all, I think removing it at this point would be counterproductive. M. čt 5. 9. 2019 v 14:28 odesílatel Radomir Dopieralski napsal: > Both of your questions have one answer: we need people familiar with > Angular and Horizon's ways of using Angular (which seem to be very > non-standard) that would be willing to write and review code. Unfortunately > the people who originally introduced Angular in Horizon and designed how it > is used are no longer interested in contributing, and there don't seem to > be any new people able to handle this. Personally, I think that a better > long-time strategy would be to remove all Angular-based views from Horizon, > and focus on maintaining one language and one set of tools. 
> > On Thu, Sep 5, 2019 at 1:52 PM Marek Lyčka wrote: > >> Hi all, >> >> I took apart the Horizon paging mechanism while working on [1] and have a >> few of findings: >> >> - Paging is unimplemented/turned off for many (if not most) panels, not >> just Routers and Networks >> - Currently, single page data loads could potentially bump up against API >> hard limits >> - Sorting is also broken in places where paging is enabled (Old >> images...), see [2] >> - The Networks table loads data via three API calls due to neutron API >> limitations, which makes the marker based mechanism unusable >> - There is at least one more minor bug which breaks pagination, there may >> be more >> >> While some of these things may be fixable in different hacky and/or >> inefficient ways, >> we already have Angular implementations which fix many of them and make >> improving >> and fixing the rest easier. >> >> Since Angular ports would help with other unrelated issues as well and >> allow us to >> start deprecating old code, I was wondering two things: >> >> 1) What would it take to increase the priority of Angularization in >> general? >> 2) Can the Code Review process be modified/improved to increase the >> chance for >> Angularization changes to be code reviewed and merged if they do >> happen? >> My previous attempts in this area have failed because of lack of code >> reviewers... >> >> Since full Angularization is still the goal for Horizon as far as I know, >> I'd rather >> spend time doing that than hacking solutions to different problems in >> legacy code >> which is slated deprecation. >> >> Best Regards, >> Marek >> >> [1] https://bugs.launchpad.net/horizon/+bug/1746184 >> [2] https://bugs.launchpad.net/horizon/+bug/1782732 >> >> -- >> Marek Lyčka >> Linux Developer >> >> Ultimum Technologies s.r.o. >> Na Poříčí 1047/26, 11000 Praha 1 >> Czech Republic >> >> marek.lycka at ultimum.io >> *https://ultimum.io * >> > -- Marek Lyčka Linux Developer Ultimum Technologies s.r.o. Na Poříčí 1047/26, 11000 Praha 1 Czech Republic marek.lycka at ultimum.io *https://ultimum.io * -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Fri Sep 6 09:36:38 2019 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 6 Sep 2019 11:36:38 +0200 Subject: [i18n][tc] The future of I18n Message-ID: <0ffa02d3-fef5-8fc3-1925-5c663b6c967d@openstack.org> Hi! The I18n project team had no PTL candidates for Ussuri, so the TC needs to decide what to do with it. It just happens that Ian kindly volunteered to be an election official, and therefore could not technically run for I18n PTL. So if Ian is still up for taking it, we could just go and appoint him. That said, I18n evolved a lot, to the point where it might fit the SIG profile better than the project team profile. As a reminder, project teams are responsible for producing OpenStack-the-software, and since they are all integral in the production of the software that we want to release on a time-based schedule, they come with a number of mandatory tasks (like designating a PTL every 6 months). SIGs (special interest groups) are OpenStack teams that work on a mission that is not directly producing a piece of the OpenStack release. SIG members are bound by their mission, rather than by a specific OpenStack release deliverable. There is no mandatory task, as it is OK if the group goes dormant for a while. 
The I18n team regroups translators, with an interest of making OpenStack (in general, not just the software) more accessible to non-English speakers. They currently try to translate the OpenStack user survey, the Horizon dashboard messages, and key documentation. It could still continue as a project team (since it still produces Horizon translations), but I'd argue that at this point it is not what defines them. The fact that they are translators is what defines them, which IMHO makes them fit the SIG profile better than the project team profile. They can totally continue proposing translation files for Horizon as a I18n SIG, so there would be no technical difference. Just less mandatory tasks for the team. Thoughts ? -- Thierry Carrez (ttx) From amotoki at gmail.com Fri Sep 6 10:59:39 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Fri, 6 Sep 2019 19:59:39 +0900 Subject: [keystone][horizon][zaqar][tempest][requirements] library updates breaking projects In-Reply-To: <20190905162516.mxdxg4dl3epwwwfi@mthode.org> References: <20190905162516.mxdxg4dl3epwwwfi@mthode.org> Message-ID: On Fri, Sep 6, 2019 at 1:26 AM Matthew Thode wrote: > > I emailed a while ago about problem updates and wanted to give an > update. I'm hoping we can get these fixed before the freeze which is on > Monday iirc. > > horizon > This is a newer issue which e0ne and amotoki know about but no existing > review to fix it. > please test against https://review.opendev.org/680457 > -semantic-version===2.8.1 > +semantic-version===2.6.0 I proposed a fix at https://review.opendev.org/#/c/680631/. It passes unit tests and a failure in the integration tests looks unrelated to the fix. -- Akihiro Motoki (amotoki) > > tempest STILL has failures > I thought the following commit would fix it, but nope > https://github.com/mtreinish/stestr/commit/136027c005fc437341bc23939a18a5f3314194f1 > -stestr===2.5.1 > +stestr===2.4.0 > > python-zaqarclient > waiting on https://review.opendev.org/679842 may be merging today > -jsonschema===3.0.2 > +jsonschema===2.6.0 > > keystone > a review is out there that seems to have tests passing > https://review.opendev.org/677511/ > -oauthlib===3.1.0 > +oauthlib===3.0.2 > > -- > Matthew Thode From cdent+os at anticdent.org Fri Sep 6 11:04:04 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 6 Sep 2019 12:04:04 +0100 (BST) Subject: [placement] update 19-35 Message-ID: HTML: https://anticdent.org/placement-update-19-35.html Let's have a placement update 19-35. Feature freeze is this week. We have a feature in progress (consumer types, see below) but it is not critical. # Most Important Three main things we should probably concern ourselves with in the immediate future: * We are currently without a PTL for Ussuri. There's some discussion about the options for dealing with this in an [email thread](http://lists.openstack.org/pipermail/openstack-discuss/2019-September/thread.html#9131). If you have ideas (or want to put yourself forward), please share. * We need to work on useful documentation for the features developed this cycle. * We need to create some [cycle highlights](http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009137.html). To help with that I've started [an etherpad](https://etherpad.openstack.org/p/placement-train-cycle-highlights). If I've forgotten anything, please make additions. # What's Changed * osc-placement 1.7.0 has been [released](https://pypi.org/project/osc-placement/). 
This adds support for [managing allocation ratios](https://review.opendev.org/#/q/topic:allocation-ratios+(status:open+OR+status:merged)) via aggregates, but adding a few different commands and args for inventory manipulation. * Work on consumer types exposed that placement needed to be first class in grenade to make sure database migrations are run. That [change has merged](https://review.opendev.org/679655). Until then placement was upgraded as part of nova. # Stories/Bugs (Numbers in () are the change since the last pupdate.) There are 24 (-1) stories in [the placement group](https://storyboard.openstack.org/#!/project_group/placement). 0 (0) are [untagged](https://storyboard.openstack.org/#!/worklist/580). 5 (0) are [bugs](https://storyboard.openstack.org/#!/worklist/574). 4 (0) are [cleanups](https://storyboard.openstack.org/#!/worklist/575). 11 (-1) are [rfes](https://storyboard.openstack.org/#!/worklist/594). 4 (0) are [docs](https://storyboard.openstack.org/#!/worklist/637). If you're interested in helping out with placement, those stories are good places to look. * Placement related nova [bugs not yet in progress](https://goo.gl/TgiPXb) on launchpad: 17 (0). * Placement related nova [in progress bugs](https://goo.gl/vzGGDQ) on launchpad: 6 (0). # osc-placement * Add support for multiple member_of. There's been some useful discussion about how to achieve this, and a consensus has emerged on how to get the best results. * `--amend` and `--aggregate` on resource provider inventory has merged and been release 1.7.0 (see above). # Main Themes ## Consumer Types Adding a type to consumers will allow them to be grouped for various purposes, including quota accounting. * I took this through to microversion and api-ref docs, so it is ready for wider review. If this doesn't make it in for Train, that's okay. The goal is to have it ready for Nova to start working with it when Nova is able. ## Cleanup Cleanup is an overarching theme related to improving documentation, performance and the maintainability of the code. The changes we are making this cycle are fairly complex to use and are fairly complex to write, so it is good that we're going to have plenty of time to clean and clarify all these things. Performance related explorations continue: * Refactor initialization of research context. This puts the code that might cause an exit earlier in the process so we can avoid useless work. One outcome of the performance work needs to be something like a _Deployment Considerations_ document to help people choose how to tweak their placement deployment to match their needs. The simple answer is use more web servers and more database servers, but that's often very wasteful. # Other Placement Miscellaneous changes can be found in [the usual place](https://review.opendev.org/#/q/project:openstack/placement+status:open). * Merge request log and request id middlewares is worth attention. It makes sure that _all_ log message from a single request use a global and local request id. There are three [os-traits changes](https://review.opendev.org/#/q/project:openstack/os-traits+status:open) being discussed. And zero [os-resource-classes changes](https://review.opendev.org/#/q/project:openstack/os-resource-classes+status:open). # Other Service Users This week (because of feature freeze) I will not be adding new finds to the list, just updating what was already on the list. 
* helm: add placement chart * libvirt: report pmem namespaces resources by provider tree * Nova: Remove PlacementAPIConnectFailure handling from AggregateAPI * Nova: WIP: Add a placement audit command * Nova: libvirt: Start reporting PCPU inventory to placement A part of * Nova: support move ops with qos ports * nova: Support filtering of hosts by forbidden aggregates * tempest: Add placement API methods for testing routed provider nets * openstack-helm: Build placement in OSH-images * Correct global_request_id sent to Placement * Nova: cross cell resize * Nova: Scheduler translate properties to traits * Nova: single pass instance info fetch in host manager * Nova: using provider config file for custom resource providers # End 🐎 -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From balazs.gibizer at est.tech Fri Sep 6 12:00:20 2019 From: balazs.gibizer at est.tech (=?utf-8?B?QmFsw6F6cyBHaWJpemVy?=) Date: Fri, 6 Sep 2019 12:00:20 +0000 Subject: [placement][ptl][tc] Call for Placement PTL position In-Reply-To: References: <16d0205c0b1.b18639584545.7154839133743907603@ghanshyammann.com> Message-ID: <1567771216.28660.0@smtp.office365.com> On Thu, Sep 5, 2019 at 6:20 PM, Chris Dent wrote: On Fri, 6 Sep 2019, Ghanshyam Mann wrote: With Ussuri Cycle PTL election completed, we left with Placement project as leaderless[1]. In today TC meeting[2], we discussed the few possibilities and decided to reach out to the eligible candidates to serve the PTL position. Thanks for being concerned about this, but it would have been useful if you included me (as the current PTL) and the rest of the Placement team in the discussion or at least confirmed plans with me before starting this seek-volunteers process. There are a few open questions we are still trying to resolve before we should jump to any decisions: * We are currently waiting to see if Tetsuro is available (he's been away for a few days). If he is, he'll be great, but we don't know yet if he can or wants to. * We've started, informally, discussing the option of pioneering the option of leaderless projects within Placement (we pioneer many other things there, may as well add that to the list) but without more discussion from the whole team (which can't happen because we don't have quorum of the actively involved people) and the TC it's premature. Leaderless would essentially mean consensually designating release liaisons and similar roles but no specific PTL. I think this is easily possible in a small in number, focused, and small feature-queue [1] group like Placement but would much harder in one of the larger groups like Nova. * We have several reluctant people who _can_ do it, but don't want to. Once we've explored the other ideas here and any others we can come up with, we can dredge one of those people up as a stand-in PTL, keeping the slot open. Because of [1] there's not much on the agenda for U. I guess I'm one of the reluctant people. I think technically I can do it but I don't want to commit to work when I don't see that I will have enough time to do it well. For me this is all about priorities and the amount of work I'm already commited to at the moment. Still I'm open to get tasks delegated to me, like doing the project update in Sanghai. 
Cheers, gibi Since the Placement team is not planning to have an active presence at the PTG, nor planning to have much of a pre-PTG (as no one has stepped up with any feature ideas) we have some days or even weeks before it matters who the next PTL (if any) is, so if possible, let's not rush this. [1] It's been a design goal of mine from the start that Placement would quickly reach a position of stability and maturity that I liked to call "being done". By the end of Train we are expecting to be feature complete for any features that have been actively discussed in the recent past [2]. The main tasks in U will be responding to bug fixes and requests-for-explanations for the features that already exist (because people asked for them) but are not being used yet and getting the osc-placement client caught up. [2] The biggest thing that has been discussed as a "maybe we should do" for which there are no immediate plans is "resource provider sharding" or "one placement, many clouds". That's a thing we imagined people might ask for, but haven't yet, so there's little point doing it. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent -------------- next part -------------- An HTML attachment was scrubbed... URL: From witold.bedyk at suse.com Fri Sep 6 12:00:22 2019 From: witold.bedyk at suse.com (Witek Bedyk) Date: Fri, 6 Sep 2019 14:00:22 +0200 Subject: [monasca] Review Priority flag Message-ID: <1ba4b730-2e6f-4b09-5fb1-1f20ef4b7970@suse.com> Hello Team, now that we have the possibility to label our code changes with Review-Priority I would like to start the discussion about formalizing its usage. Right now every core reviewer can set its value, but we haven't defined any rules on how to use it. I suggest a process of proposing the changes which should be prioritized in weekly team meeting or in the mailing list. Any core reviewer, preferably from a different company, could confirm such proposed change by setting RV +1. I hope it's simple enough. What do you think? Another topic is exposing the prioritized code changes to the reviewers. We can list them using the filter [1]. We could add the link to this filter to Contributor Guide [2] and Priorities page [3]. We should also go through the list every week in the meeting. Any other ideas? Thanks Witek [1] https://review.opendev.org/#/q/(projects:openstack/monasca+OR+project:openstack/python-monascaclient)+label:Review-Priority+is:open [2] https://docs.openstack.org/monasca-api/latest/contributor/index.html [3] http://specs.openstack.org/openstack/monasca-specs/ From openstack at sheep.art.pl Fri Sep 6 12:00:30 2019 From: openstack at sheep.art.pl (Radomir Dopieralski) Date: Fri, 6 Sep 2019 14:00:30 +0200 Subject: [Horizon] Paging and Angular... In-Reply-To: References: Message-ID: On Fri, Sep 6, 2019 at 11:33 AM Marek Lyčka wrote: > Hi, > > > we need people familiar with Angular and Horizon's ways of using Angular > (which seem to be very > > non-standard) that would be willing to write and review code. > Unfortunately the people who originally > > introduced Angular in Horizon and designed how it is used are no longer > interested in contributing, > > and there don't seem to be any new people able to handle this. > > I've been working with Horizon's Angular for quite some time and don't > mind keeping at it, but > it's useless unless I can get my code merged, hence my original message. 
> > As far as attracting new developers goes, I think that removing some > barriers to entry couldn't hurt - > seeing commits simply lost to time being one of them. I can see it as > being fairly demoralizing. > We can't review your patches, because we don't understand them. For the patches to be merged, we need more than one person, so that they can review each other's patches. > > Personally, I think that a better long-time strategy would be to remove > all > > Angular-based views from Horizon, and focus on maintaining one language > and one set of tools. > > Removing AngularJS wouldn't remove JavaScript from horizon. We'd still be > left with a home-brewish > framework (which is buggy as is). I don't think removing js completely is > realistic either: we'd lose > functionality and worsen user experience. I think that keeping Angular is > the better alternative: > > 1) A lot of work has already been put into Angularization, solving many > problems > 2) Unlike legacy js, Angular code is covered by automated tests > 3) Arguably, improvments are, on average, easier to add to Angular than > pure js implementations > > Whatever reservations there may be about the current implementation can be > identified and addressed, but > all in all, I think removing it at this point would be counterproductive. > JavaScript is fine. We all know how to write and how to review JavaScript code, and there doesn't have to be much of it — Horizon is not the kind of tool that has to bee all shiny and animated. It's a tool for getting work done. AngularJS is a problem, because you can't tell what the code does just by looking at the code, and so you can neither review nor fix it. There has been a lot of work put into mixing Horizon with Angular, but I disagree that it has solved problems, and in fact it has introduced a lot of regressions. Just to take a simple example, the translations are currently broken for en.AU and en.GB languages, and date display is not localized. And nobody cares. We had automated tests before Angular. There weren't many of them, because we also didn't have much JavaScript code. If I remember correctly, those tests were ripped out during the Angularization. Arguably, improvements are, on average, impossible to add to Angular, because the code makes no sense on its own. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Fri Sep 6 12:27:53 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 6 Sep 2019 13:27:53 +0100 (BST) Subject: [tc] Campaign Question: Treating the Problem, not just the symptoms- Burnout, No Polling, etc In-Reply-To: <99048F8B-EE87-4A3A-A689-8F05F8EBDBBE@doughellmann.com> References: <99048F8B-EE87-4A3A-A689-8F05F8EBDBBE@doughellmann.com> Message-ID: On Wed, 4 Sep 2019, Doug Hellmann wrote: > I would take this a step further, and remind everyone in > leadership positions that your job is not to do things *for* > anyone, but to enable others to do things *for themselves*. Open > source is based on collaboration, and ensuring there is a healthy > space for that collaboration is your responsibility. You are > neither a free workforce nor a charity. By all means, you should > help people to achieve their goals in a reasonable way by reducing > barriers, simplifying processes, and making tools reusable. But do > not for a minute believe that you have to do it all for them, even > if you think they have a great idea. Make sure you say “yes, you > should do that” more often than “yes, I will do that." 
This is very true, but I think it underestimates the many different forces that are involved in "doing work in OpenStack". These are, of course, very different from person to person, but I've got some observations (of course I do, everyone should). I suspect some of these are unique to my experience, but I suspect some of them are not. It would be useful (to me at least) to know where some of us have had similar experiences. Most people work on OpenStack because it is their job or is closely related to their job. But because it is "open source" and "a community" and "collaborative", doing what people ask for and helping others achieve what they need is but one small piece of the motivation and action calculus. Making "it" (various things: code, community, product, experiences of various kinds) "better" (again, very subjective and multi-dimensional) is very complicated. And it is further complicated by the roadblocks that can come up in the community. In the other thread that started this, Sean said: the reason that the investment that was being made was reducing was not driven by the lack of hype but by how slow adding some feature that really mattered [1] One aspect of burnout comes from the combination of weathering these roadblocks and having a kind of optimism that says "I can, somehow, change this or overcome this." Another is simply a dedication to quality, no matter the obstacles. This is tightly coupled with Sean's comments above. Improving the "developer experience" is rarely a priority and gets pushed on the back burner unless you dedicate the time to being core or PTL, which grants some license to "getting code merged". For some projects that is a _huge_ undertaking. My relatively good success at overcoming the obstacles, but limited success (that is, constrained to a small domain) at changing the root causes, is why I'm now advocating chilling out. This is risky because the latency between code and related work done now and any feedback is insanely high. The improvements we've made recently to placement won't be in common use for 6 months to 3 years, depending on how we measure "common". Detaching or chilling out now doesn't have an impact for some time. That feedback latency also means figuring out what "better" or "quality" mean for a project is a guessing game. Making cycles longer will make that worse. A year ago when we started extracting placement I tried to make real the idea that full time cores should rarely write feature code and primarily be involved in helping "people to achieve their goals in a reasonable way by reducing barriers, simplifying processes, and making tools reusable". This only sort of worked. There were issues: * There were feature goals, but few people to do the work. * Our (OpenStack's) long term standards for what is or is not a barrier, good process and tooling are historically so low that bringing them up to spec requires a vast amount of work. To me, the Placement changes made in Train were needed so that Placement could make a respectable claim to being "good". 75% of the changes (counting by commit) were made by 4 people. 43% by one. [2] The large amount of time required to be core, PTL or "get their code merged pretty easily" (in some projects) is a big portion of any job, and given the contraction of interest in the community (but not in the product) from plenty of companies, there is lurking fear that the luxury of making that commitment, of being a "unicorn 100% upstream", will go away at any time. This increases the need to do all those "make it better" things _now_. Which, obviously, is a trap, and people who feel like that would be better off if they chilled out, but I would guess that people who feel that way do so because making it better (whatever it is) is important in and of itself. Especially when commitment from enterprises is waning: they don't care, so I must, because I care. In other projects, there's simply no one there to become core, or there is a reluctance to get into leadership because it is perceived to be too time consuming (because for many people in leadership, the time consumption is very visible). Similarly, when there's a sense of waning interest, the guessing game described above for determining what matters is pressurized. "If I get this wrong, the risk of our irrelevance or even demise is increased, don't mess this up!". Also a trap. But both traps are compelling. I think we need to investigate changing our governance and leadership structures. We should have evolved away from them, but we haven't because power always strives to maintain itself even when it is no longer fit for purpose. TC, PTL, Core and even "projects" all need rigorous review and reconsideration to see if they are really supporting us ("us" in this case is "the people who make OpenStack") the way they should. If we are unable or unwilling to do that, then we need to compel "contributing" enterprises to contribute sufficient resources to prop up the old power structures. [3] [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009123.html [2] That is, assuming stackalytics is correct today; it often isn't. [3] Perversely, I think this option (companies paying up) is the fundamentally right one from an economic standpoint, but that is because I don't believe that OpenStack is (currently and through the peak of its history) open source. It is collaborative inter-enterprise development that allows large companies to have a market in which they make a load of money. That takes money and people. If OpenStack were simpler and more contained and tried less hard to satisfy everyone, it could operate as an underlay (much like Linux) to some other market, but for now it is the market. The pains we are having now may be the signs of a need for a shift to being an underlay (k8s and others are the new keen market now). If that's the case we can accelerate that shift by narrowing what we do. Trimming the fat. Making OpenStack much more turnkey with far fewer choices. But again, the current batch of engaged enterprises have not shown signs of wanting that narrowing. So they either need to change what they want or cough up the resources to support what they want in a healthy fashion. What we should do is strive to be healthy, whatever else happens. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From dtantsur at redhat.com Fri Sep 6 12:30:59 2019 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 6 Sep 2019 14:30:59 +0200 Subject: [ironic] opensuse-15 jobs are temporary non-voting on bifrost In-Reply-To: References: <979bbec8-1f94-458a-aab0-f4d6327078ab@redhat.com> Message-ID: <55cfa2ca-4f8b-1613-9321-80cf1eccae75@redhat.com> On 9/5/19 8:09 PM, Dirk Müller wrote: > Hi Dmitry, > > Am Mi., 4. Sept. 2019 um 17:25 Uhr schrieb Dmitry Tantsur : > >> JFYI we had to disable opensuse-15 jobs because they kept failing with >> repository issues. Help with debugging appreciated. > > The nodeset is incorrect, https://review.opendev.org/680450 should get > you help started.
Thank you! > > > Greetings, > Dirk > From cdent+os at anticdent.org Fri Sep 6 12:50:52 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 6 Sep 2019 13:50:52 +0100 (BST) Subject: [tc] Campaign Question: Treating the Problem, not just the symptoms- Burnout, No Polling, etc In-Reply-To: References: Message-ID: On Wed, 4 Sep 2019, Kendall Nelson wrote: > To kind of rephrase for everyone (Chris, correct me if I am wrong or not > getting all of it): What do you think we, as a community, can do about the > lack of candidates for roles like TC or PTL? How can we adjust, as a > community, to make our governance structures fit better? In what wasy can > we address and prevent burnout? That's a useful and sufficient summary. Thanks for extracting things out like this. Very good email hygiene. > - Longer release cycle. I know this has come up a dozen or more times (and > I'm a little sorry for bringing it up again), but I think OpenStack has > stabilized enough that 6 months is a little short and now may finally be > the time to lengthen things a bit. 9 months might be a better fit. With > longer release cycles comes more time to get work done as well which I've > heard has been a complaint of more part time contributors when this > discussion has come up in the past. As I said in my other message in this thread, in response to Doug, I think that this might be counterproductive in terms of easing burnout. It's probably good for providing more time to get some things done, but it aggravates the pressure and risks involved in trying to predict what matters. Since I've already said enough over on that message, I'll not add more here. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From doug at stackhpc.com Fri Sep 6 13:05:05 2019 From: doug at stackhpc.com (Doug Szumski) Date: Fri, 6 Sep 2019 14:05:05 +0100 Subject: [monasca] Review Priority flag In-Reply-To: <1ba4b730-2e6f-4b09-5fb1-1f20ef4b7970@suse.com> References: <1ba4b730-2e6f-4b09-5fb1-1f20ef4b7970@suse.com> Message-ID: On 06/09/2019 13:00, Witek Bedyk wrote: > Hello Team, > > now that we have the possibility to label our code changes with > Review-Priority I would like to start the discussion about formalizing > its usage. Right now every core reviewer can set its value, but we > haven't defined any rules on how to use it. > > I suggest a process of proposing the changes which should be > prioritized in weekly team meeting or in the mailing list. Any core > reviewer, preferably from a different company, could confirm such > proposed change by setting RV +1. > > I hope it's simple enough. What do you think? Sounds good to me. > > Another topic is exposing the prioritized code changes to the > reviewers. We can list them using the filter [1]. We could add the > link to this filter to Contributor Guide [2] and Priorities page [3]. > We should also go through the list every week in the meeting. Any > other ideas? I think that is a good plan. Perhaps we could have a more general Gerrit dashboard which also includes a Review Priority section. Something like this perhaps? 
http://www.tinyurl.com/monasca > > Thanks > Witek > > > [1] > https://review.opendev.org/#/q/(projects:openstack/monasca+OR+project:openstack/python-monascaclient)+label:Review-Priority+is:open > [2] https://docs.openstack.org/monasca-api/latest/contributor/index.html > [3] http://specs.openstack.org/openstack/monasca-specs/ > From fungi at yuggoth.org Fri Sep 6 13:10:54 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 6 Sep 2019 13:10:54 +0000 Subject: [tc] Campaign Question: Treating the Problem, not just the symptoms- Burnout, No Polling, etc In-Reply-To: References: <99048F8B-EE87-4A3A-A689-8F05F8EBDBBE@doughellmann.com> Message-ID: <20190906131053.rofnz7zeoudctoif@yuggoth.org> On 2019-09-06 13:27:53 +0100 (+0100), Chris Dent wrote: [...] > Most people work on OpenStack because it is their job or is closely > related to their job. But because it is "open source" and "a > community" and "collaborative" doing what people ask for and helping > others achieve what they need is but one small piece of the > motivation and action calculus. [...] I don't know that this captures my motivation, at least. I chose my job so that I could assist in the creation and maintenance of OpenStack and similar free software, not the other way around. Maybe I'm in a minority within the community, but I suspect there are more folks than just me who feel the same. > I don't believe that OpenStack is (currently and through the peak > of its history) open source. It is collaborative inter-enterprise > development that allows large companies to have a market in which > they make a load of money. [...] Yes, making these tasks easier and less expensive for "large companies" like CERN, SKA, MOC, and all manner of other research and educational organizations is what causes this work to be worthwhile for me. I like that what we do provides a positive contribution to the sum total knowledge of our species. I personally think this aspect can't be overstated. What we do matters beyond the desire and ability for some self-serving commercial enterprises to take and give nothing back. The nature of modern business is exploitation, but it's not as if the commons of free software is the only resource they're exploiting to their own gain. I'm all for the people of our planet coming together to fight injustice or abuse by corporate and political powers, but the problem extends far, far beyond our community and pretending we can solve such abuse and oppression within OpenStack without looking at the bigger picture is short-sighted and naive. I'm disappointed that you don't think the software you're making is open source. I think the software I'm making is open source, and if I didn't I wouldn't be here. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cdent+os at anticdent.org Fri Sep 6 13:15:06 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 6 Sep 2019 14:15:06 +0100 (BST) Subject: [tc] Campaign Question: Treating the Problem, not just the symptoms- Burnout, No Polling, etc In-Reply-To: <20190906131053.rofnz7zeoudctoif@yuggoth.org> References: <99048F8B-EE87-4A3A-A689-8F05F8EBDBBE@doughellmann.com> <20190906131053.rofnz7zeoudctoif@yuggoth.org> Message-ID: On Fri, 6 Sep 2019, Jeremy Stanley wrote: > I'm disappointed that you don't think the software you're making is > open source. 
I think the software I'm making is open source, and if > I didn't I wouldn't be here. I'm disappointed too, I hope I've made that obvious. As I said at the start: everyone has different experiences. You and I have different ones, that is _good_. The reason I have stayed in OpenStack is because I've wanted to make it more "open source". So I think we're working to similar ends, but starting from different points. Again: that is _good_. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From corey.bryant at canonical.com Fri Sep 6 13:30:20 2019 From: corey.bryant at canonical.com (Corey Bryant) Date: Fri, 6 Sep 2019 09:30:20 -0400 Subject: [goal][python3] Train unit tests weekly update (goal-1) Message-ID: This is the goal-1 weekly update for the "Update Python 3 test runtimes for Train" goal [1]. There is only 1 week remaining for completion of Train community goals [2]. == How can you help? == If your project has failing tests please take a look and help fix. Python 3.7 unit tests will be self-testing in Zuul. Failing patches: https://review.openstack.org/#/q/topic:python3-train +status:open+(+label:Verified-1+OR+label:Verified-2+) If your project has patches with successful tests please help get them merged. Open patches needing reviews: https://review.openstack.org/#/q/topic:python3 -train+is:open Patch automation scripts needing review: https://review.opendev.org/#/c/666934 == Ongoing Work == We're down to 8 projects with failing tests that need fixing, and 3 projects with successful tests that should be ready to merge. I've been working to contact PTLs for these projects to help finish them up. Thank you to all who have contributed their time and fixes to enable patches to land! == Completed Work == All patches have been submitted to all applicable projects for this goal. Merged patches: https://review.openstack.org/#/q/topic:python3-train +is:merged == What's the Goal? == To ensure (in the Train cycle) that all official OpenStack repositories with Python 3 unit tests are exclusively using the 'openstack-python3-train-jobs' Zuul template or one of its variants (e.g. 'openstack-python3-train-jobs-neutron') to run unit tests, and that tests are passing. This will ensure that all official projects are running py36 and py37 unit tests in Train. For complete details please see [1]. == Reference Material == [1] Goal description: https://governance.openstack.org/tc/goals/train/ python3-updates.html [2] Train release schedule: https://releases.openstack.org/train /schedule.html (see R-5 for "Train Community Goals Completed") Storyboard: https://storyboard.openstack.org/#!/story/2005924 Porting to Python 3.7: https://docs.python.org/3/whatsnew/3.7.html#porting-to-python-3-7 Python Update Process: https://opendev.org/openstack/governance/src/branch/master/resolutions/20181024-python-update-process.rst Train runtimes: https://opendev.org/openstack/governance/src/branch/master/reference/runtimes/ train.rst Thanks, Corey -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mnaser at vexxhost.com Fri Sep 6 13:34:41 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 6 Sep 2019 09:34:41 -0400 Subject: [placement][ptl][tc] Call for Placement PTL position In-Reply-To: <1567771216.28660.0@smtp.office365.com> References: <16d0205c0b1.b18639584545.7154839133743907603@ghanshyammann.com> <1567771216.28660.0@smtp.office365.com> Message-ID: On Fri, Sep 6, 2019 at 8:04 AM Balázs Gibizer wrote: > > > > On Thu, Sep 5, 2019 at 6:20 PM, Chris Dent wrote: > > On Fri, 6 Sep 2019, Ghanshyam Mann wrote: > > With Ussuri Cycle PTL election completed, we left with Placement project as leaderless[1]. In today TC meeting[2], we discussed the few possibilities and decided to reach out to the eligible candidates to serve the PTL position. > > Thanks for being concerned about this, but it would have been useful if you included me (as the current PTL) and the rest of the Placement team in the discussion or at least confirmed plans with me before starting this seek-volunteers process. There are a few open questions we are still trying to resolve before we should jump to any decisions: * We are currently waiting to see if Tetsuro is available (he's been away for a few days). If he is, he'll be great, but we don't know yet if he can or wants to. * We've started, informally, discussing the option of pioneering the option of leaderless projects within Placement (we pioneer many other things there, may as well add that to the list) but without more discussion from the whole team (which can't happen because we don't have quorum of the actively involved people) and the TC it's premature. Leaderless would essentially mean consensually designating release liaisons and similar roles but no specific PTL. I think this is easily possible in a small in number, focused, and small feature-queue [1] group like Placement but would much harder in one of the larger groups like Nova. * We have several reluctant people who _can_ do it, but don't want to. Once we've explored the other ideas here and any others we can come up with, we can dredge one of those people up as a stand-in PTL, keeping the slot open. Because of [1] there's not much on the agenda for U. > > > I guess I'm one of the reluctant people. I think technically I can do it but I don't want to commit to work when I don't see that I will have enough time to do it well. For me this is all about priorities and the amount of work I'm already commited to at the moment. Still I'm open to get tasks delegated to me, like doing the project update in Sanghai. If it's okay with you, would you like to share what are some of the priorities and work that you feel is placed on a PTL which makes you reluctant? PS, by no means I am trying to push for you to be PTL if you're not currently interested, but I want to hear some of the community thoughts about this (and feel free to reply privately) > Cheers, > gibi > > Since the Placement team is not planning to have an active presence at the PTG, nor planning to have much of a pre-PTG (as no one has stepped up with any feature ideas) we have some days or even weeks before it matters who the next PTL (if any) is, so if possible, let's not rush this. [1] It's been a design goal of mine from the start that Placement would quickly reach a position of stability and maturity that I liked to call "being done". By the end of Train we are expecting to be feature complete for any features that have been actively discussed in the recent past [2]. 
The main tasks in U will be responding to bug fixes and requests-for-explanations for the features that already exist (because people asked for them) but are not being used yet and getting the osc-placement client caught up. [2] The biggest thing that has been discussed as a "maybe we should do" for which there are no immediate plans is "resource provider sharding" or "one placement, many clouds". That's a thing we imagined people might ask for, but haven't yet, so there's little point doing it. > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From jon at csail.mit.edu Fri Sep 6 13:37:59 2019 From: jon at csail.mit.edu (Jonathan Proulx) Date: Fri, 6 Sep 2019 09:37:59 -0400 Subject: [i18n][tc] The future of I18n In-Reply-To: <0ffa02d3-fef5-8fc3-1925-5c663b6c967d@openstack.org> References: <0ffa02d3-fef5-8fc3-1925-5c663b6c967d@openstack.org> Message-ID: <20190906133759.obgszlvqexgam5n3@csail.mit.edu> I'd be lead by how the people working in the space want to organize, but... Seems like SIG would be a good fit as I18N is extremely cross project, presumably everything has text output even if it's just logging and not enduser focused. my 2¢ -Jon On Fri, Sep 06, 2019 at 11:36:38AM +0200, Thierry Carrez wrote: :Hi! : :The I18n project team had no PTL candidates for Ussuri, so the TC needs to :decide what to do with it. It just happens that Ian kindly volunteered to be :an election official, and therefore could not technically run for I18n PTL. :So if Ian is still up for taking it, we could just go and appoint him. : :That said, I18n evolved a lot, to the point where it might fit the SIG :profile better than the project team profile. : :As a reminder, project teams are responsible for producing :OpenStack-the-software, and since they are all integral in the production of :the software that we want to release on a time-based schedule, they come with :a number of mandatory tasks (like designating a PTL every 6 months). : :SIGs (special interest groups) are OpenStack teams that work on a mission :that is not directly producing a piece of the OpenStack release. SIG members :are bound by their mission, rather than by a specific OpenStack release :deliverable. There is no mandatory task, as it is OK if the group goes :dormant for a while. : :The I18n team regroups translators, with an interest of making OpenStack (in :general, not just the software) more accessible to non-English speakers. They :currently try to translate the OpenStack user survey, the Horizon dashboard :messages, and key documentation. : :It could still continue as a project team (since it still produces Horizon :translations), but I'd argue that at this point it is not what defines them. :The fact that they are translators is what defines them, which IMHO makes :them fit the SIG profile better than the project team profile. They can :totally continue proposing translation files for Horizon as a I18n SIG, so :there would be no technical difference. Just less mandatory tasks for the :team. : :Thoughts ? : :-- :Thierry Carrez (ttx) : From jfrancoa at redhat.com Fri Sep 6 13:38:17 2019 From: jfrancoa at redhat.com (Jose Luis Franco Arza) Date: Fri, 6 Sep 2019 15:38:17 +0200 Subject: [tripleo] Proposing Damien Ciabrini as core on TripleO/HA In-Reply-To: References: <20190830122850.GA5248@holtby> Message-ID: +1 with my eyes closed! 
I though he was already core. On Tue, Sep 3, 2019 at 3:55 PM Carter, Kevin wrote: > +1 > > -- > > Kevin Carter > IRC: Cloudnull > > > On Fri, Aug 30, 2019 at 7:33 AM Michele Baldessari > wrote: > >> Hi all, >> >> Damien (dciabrin on IRC) has always been very active in all HA things in >> TripleO and I think it is overdue for him to have core rights on this >> topic. So I'd like to propose to give him core permissions on any >> HA-related code in TripleO. >> >> Please vote here and in a week or two we can then act on this. >> >> Thanks, >> -- >> Michele Baldessari >> C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Fri Sep 6 14:34:25 2019 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 6 Sep 2019 09:34:25 -0500 Subject: [oslo] Nova causes MySQL timeouts In-Reply-To: References: Message-ID: <02fa1644-34a1-0fdf-9048-a668ae86de76@nemebean.com> Tagging with oslo as this sounds related to oslo.db. On 9/5/19 7:37 PM, Albert Braden wrote: > After more googling it appears that max_pool_size is a maximum limit on > the number of connections that can stay open, and max_overflow is a > maximum limit on the number of connections that can be temporarily > opened when the pool has been consumed. It looks like the defaults are 5 > and 10 which would keep 5 connections open all the time and allow 10 temp. > > Do I need to set max_pool_size to 0 and max_overflow to the number of > connections that I want to allow? Is that a reasonable and correct > configuration? Intuitively that doesn't seem right, to have a pool size > of 0, but if the "pool" is a group of connections that will remain open > until they time out, then maybe 0 is correct? I don't think so. According to [0] and [1], a pool_size of 0 means unlimited. You could probably set it to 1 to minimize the number of connections kept open, but then I expect you'll have overhead from having to re-open connections frequently. It sounds like you could use a NullPool to eliminate connection pooling entirely, but I don't think we support that in oslo.db. Based on the error message you're seeing, I would take a look at connection_recycle_time[2]. I seem to recall seeing a comment that the recycle time needs to be shorter than any of the timeouts in the path between the service and the db (so anything like haproxy or mysql itself). Shortening that, or lengthening intervening timeouts, might get rid of these disconnection messages. 0: https://docs.openstack.org/oslo.db/stein/reference/opts.html#database.max_pool_size 1: https://docs.sqlalchemy.org/en/13/core/pooling.html#sqlalchemy.pool.QueuePool.__init__ 2: https://docs.openstack.org/oslo.db/stein/reference/opts.html#database.connection_recycle_time > > *From:* Albert Braden > *Sent:* Wednesday, September 4, 2019 10:19 AM > *To:* openstack-discuss at lists.openstack.org > *Cc:* Gaëtan Trellu > *Subject:* RE: Nova causes MySQL timeouts > > We’re not setting max_pool_size nor max_overflow option presently. I > googled around and found this document: > > https://docs.openstack.org/keystone/stein/configuration/config-options.html > > > Document says: > > [api_database] > > connection_recycle_time = 3600               (Integer) Timeout before > idle SQL connections are reaped. > > max_overflow = None                                   (Integer) If set, > use this value for max_overflow with SQLAlchemy. 
> > max_pool_size = None                                  (Integer) Maximum > number of SQL connections to keep open in a pool. > > [database] > > connection_recycle_time = 3600               (Integer) Timeout before > idle SQL connections are reaped. > > min_pool_size = 1                                            (Integer) > Minimum number of SQL connections to keep open in a pool. > > max_overflow = 50                                          (Integer) If > set, use this value for max_overflow with SQLAlchemy. > > max_pool_size = None                                  (Integer) Maximum > number of SQL connections to keep open in a pool. > > If min_pool_size is >0, would that cause at least 1 connection to remain > open until it times out? What are the recommended values for these, to > allow unused connections to close before they time out? Is > “min_pool_size = 0” an acceptable setting? > > My settings are default: > > [api_database]: > > #connection_recycle_time = 3600 > > #max_overflow = > > #max_pool_size = > > [database]: > > #connection_recycle_time = 3600 > > #min_pool_size = 1 > > #max_overflow = 50 > > #max_pool_size = 5 > > It’s not obvious what max_overflow does. Where can I find a document > that explains more about these settings? > > *From:* Gaëtan Trellu > > *Sent:* Tuesday, September 3, 2019 1:37 PM > *To:* Albert Braden > > *Cc:* openstack-discuss at lists.openstack.org > > *Subject:* Re: Nova causes MySQL timeouts > > Hi Albert, > > It is a configuration issue, have a look to max_pool_size > and max_overflow options under [database] section. > > Keep in mind than more workers you will have more connections will be > opened on the database. > > Gaetan (goldyfruit) > > On Sep 3, 2019 4:31 PM, Albert Braden > wrote: > > It looks like nova is keeping mysql connections open until they time > out. How are others responding to this issue? Do you just ignore the > mysql errors, or is it possible to change configuration so that nova > closes and reopens connections before they time out? Or is there a > way to stop mysql from logging these aborted connections without > hiding real issues? > > Aborted connection 10726 to db: 'nova' user: 'nova' host: 'asdf' > (Got timeout reading communication packets) > From jungleboyj at gmail.com Fri Sep 6 14:37:39 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Fri, 6 Sep 2019 09:37:39 -0500 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> Message-ID: <38c6f889-2b82-1a59-f00d-699fb04df6f3@gmail.com> > > - reduce the number of TC members from 13 to 9 (I actually proposed > that 6 months ago at the PTG but that was not as popular then). A > group of 9 is a good trade-off between the difficulty to get enough > people to do project stewardship and the need to get a diverse set of > opinions on governance decision. > I am in support of this.  Seems appropriate to support the level of participation in OpenStack. > - allow "PTL" role to be multi-headed, so that it is less of a > superhuman and spreading the load becomes more natural. We would not > elect/choose a single person, but a ticket with one or more names on > it. From a governance perspective, we still need a clear contact point > and a "bucket stops here" voice. 
But in practice we could (1) contact > all heads when we contact "the PTL", and (2) consider that as long as > there is no dissent between the heads, it is "the PTL voice". To > actually make it work in practice I'd advise to keep the number of > heads low (think 1-3). > No concerns with this given that it has been something we have unofficially done in Cinder for years.  I couldn't have gotten things done the way I did without help from Sean McGinnis.  Now that the torch has been passed to Brian I plan to continue to support him there. >> [...] >> We drastically need to change the expectations we place on ourselves >> in terms of velocity. > > In terms of results, train cycle activity (as represented by merged > commits/day) is globally down 9.6% compared to Stein. Only considering > "core" projects, that's down 3.8%. > > So maybe we still have the same expectations, but we are definitely > reducing our velocity... Would you say we need to better align our > expectations with our actual speed? Or that we should reduce our > expectations further, to drive velocity further down? > In the case of Cinder our velocity is slowing due to reduced review activity.  That is soon going to be a big problem and we have had little luck at encouraging to do more reviews again.  I have also found that we have had to get better at saying 'No' to things.  This is in the interest of avoiding burnout.  There is a lot we want to do but if it isn't a priority for someone it simply isn't going to get done.  Prioritizing the work has become increasingly important. As has been touched upon in other discussions, I think we have a culture where it is difficult for them to say no to things.  It is great that people care about OpenStack and want to make things happen but it can't be at the cost of people  burning out.  To some extent we need to slow velocity.  If corporations don't step up to start helping out then we must be doing what needs to get done. From jungleboyj at gmail.com Fri Sep 6 14:38:28 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Fri, 6 Sep 2019 09:38:28 -0500 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> Message-ID: <4a9a49e2-911c-5e55-d7d3-4115859a000c@gmail.com> On 9/5/2019 5:04 AM, Chris Dent wrote: > On Thu, 5 Sep 2019, Thierry Carrez wrote: > >> So maybe we still have the same expectations, but we are definitely >> reducing our velocity... Would you say we need to better align our >> expectations with our actual speed? Or that we should reduce our >> expectations further, to drive velocity further down? > > We should slow down enough that the vendors and enterprises start to > suffer. If they never notice, then it's clear we're trying too hard > and can chill out. > I actually agree with this!  :-)  We need them to start helping us prioritize. 
From jungleboyj at gmail.com Fri Sep 6 14:42:34 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Fri, 6 Sep 2019 09:42:34 -0500 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <16d00fc100d.104db03dc225299.3598510759501367665@ghanshyammann.com> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> <16d00fc100d.104db03dc225299.3598510759501367665@ghanshyammann.com> Message-ID: On 9/5/2019 5:33 AM, Ghanshyam Mann wrote: > ---- On Thu, 05 Sep 2019 19:04:39 +0900 Chris Dent wrote ---- > > On Thu, 5 Sep 2019, Thierry Carrez wrote: > > > > > So maybe we still have the same expectations, but we are definitely reducing > > > our velocity... Would you say we need to better align our expectations with > > > our actual speed? Or that we should reduce our expectations further, to drive > > > velocity further down? > > > > We should slow down enough that the vendors and enterprises start to > > suffer. If they never notice, then it's clear we're trying too hard > > and can chill out. > > +1 on this but instead of slow down and make vendors suffer we need the proper > way to notify or make them understand about the future cutoff effect on OpenStack > as software. I know we have been trying every possible way but I am sure there are > much more managerial steps can be taken. I expect Board of Director to come forward > on this as an accountable entity. TC should raise this as high priority issue to them (in meetings, > joint leadership meeting etc). Agreed.  I think that partially falls into the community's hands itself.  I have spent years advocating for OpenStack in my company and have started having success.  The problem is that it is a slow process.  I am hoping that others are doing the same and we will start seeing a reverse in the trend.  Otherwise, I think we need help from the foundation leadership to reach out and start re-engaging companies. > > I am sure this has been brought up before, can we make OpenStack membership company > to have a minimum set of developers to maintain upstream. With the current situation, I think > it make sense to ask them to contribute manpower also along with membership fee. But again > this is more of BoD and foundation area. I had this thought, but it is quite likely that then I would not be able to contribute anymore.  :-(  So, I fear that could be a slippery slope for many people. > > I agree on ttx proposal to reduce the TC number to 9 or 7, I do not think this will make any > difference or slow down on any of the TC activity. 9 or 7 members are enough in TC. > > As long as we get PTL(even without an election) we are in a good position. This time only > 7 leaderless projects (6 actually with Cyborg PTL missing to propose nomination in election repo and only on ML) are > not so bad number. But yes this is a sign of taking action before it goes into more worst situation. 
> > -gmann > > > > > -- > > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > > freenode: cdent > > From openstack at nemebean.com Fri Sep 6 15:01:29 2019 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 6 Sep 2019 10:01:29 -0500 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <0bbb4765-3e57-b7dc-11ef-50ed639ea5c0@openstack.org> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> <20190905223137.i72s7n4tibkgypqf@bishop> <0bbb4765-3e57-b7dc-11ef-50ed639ea5c0@openstack.org> Message-ID: <918e56aa-d9c3-88e9-22fc-c7da12990f97@nemebean.com> On 9/6/19 3:48 AM, Thierry Carrez wrote: > Nate Johnston wrote: >> On Thu, Sep 05, 2019 at 11:59:22AM +0200, Thierry Carrez wrote: >>> - allow "PTL" role to be multi-headed, so that it is less of a >>> superhuman >>> and spreading the load becomes more natural. We would not elect/choose a >>> single person, but a ticket with one or more names on it. From a >>> governance >>> perspective, we still need a clear contact point and a "bucket stops >>> here" >>> voice. But in practice we could (1) contact all heads when we contact >>> "the >>> PTL", and (2) consider that as long as there is no dissent between the >>> heads, it is "the PTL voice". To actually make it work in practice I'd >>> advise to keep the number of heads low (think 1-3). >> >> I think there was already an effort to allow the PTL to shed some of >> their >> duties, in the form of the Cross Project Liaisons [1] project.  I >> thought that >> was a great way for more junior members of the community to get >> involved with >> stewardship and be recognized for that contribution, and perhaps be >> mentored up >> as they take a bit of load off the PTL.  I think if we expand the >> roles to >> include more of the functions that PTLs feel the need to do >> themselves, then by >> doing so we (of necessity) document those parts of the job so that >> others can >> handle them.  And perhaps projects can cooperate and pool resources - for >> example, the same person who is a liaison for Neutron to Oslo could >> probably be >> on the look out for issues of interest to Octavia as well, and so on. > > Cross-project liaisons are a form of delegation. So yes, PTLs already > can (and probably should) delegate most of their duties. And in a lot of > teams it already works like that. But we have noticed that it can be > harder to delegate tasks than share tasks. Basically, once someone is > the PTL, it is tempting to just have them do all the PTL stuff (since > they will do it by default if nobody steps up). > > That makes the job a bit intimidating, and it is sometimes hard to find > candidates to fill it. If it's clear from day 0 that two or three people > will share the tasks and be collectively responsible for those tasks to > be covered, it might be less intimidating (easier to find 2 x 50% than 1 > x 100% ?). > Just to play a bit of devil's advocate here, in many cases if a problem is everyone's problem then it becomes no one's problem because everyone assumes someone else will deal with it. This is why it usually works better to ask a specific person to volunteer for something than to put out a broad call for *someone* to volunteer. That said, maybe this ties into what Doug wrote earlier that if something doesn't get done maybe it wasn't that important in the first place. 
I'm not entirely sure I agree with that, but if it's going to be our philosophy going forward then this might be a non-issue. I'll also say that for me specifically, having the PTL title gives me a lever to use downstream. People don't generally question you spending time on a project you're leading. The same isn't necessarily true of being a core to whom PTL duties were delegated. Again, I'm not necessarily opposed to this, I just want to point out some potential drawbacks from my perspective. From rosmaita.fossdev at gmail.com Fri Sep 6 15:21:30 2019 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 6 Sep 2019 11:21:30 -0400 Subject: [cinder][ops] Shanghai Forum - Cinder Topic Planning Message-ID: <64b7e9be-0210-129f-fe1e-c4455e7c944d@gmail.com> The Cinder Community would like to get in on some Forum action in Shanghai, but to do that we need to have some topics to propose. You don't have to actively be working on Cinder to propose a topic, and you don't have to be present to win. The point of the Forum sessions is to get feedback from operators and users about the current state of the software, get some ideas about what should be in the next release, and have some strategic discussion about The Future. So whether you can attend or not, if you have some ideas you'd like us to discuss, feel free to propose a topic: https://etherpad.openstack.org/p/cinder-shanghai-forum-proposals The deadline for proposals to the Foundation is 20 September, so if you could get your idea down on the etherpad before the Cinder weekly meeting on Wednesday 18 September 16:00 UTC, that will give the Cinder team time to look them over. thanks! brian From mriedemos at gmail.com Fri Sep 6 15:46:15 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 6 Sep 2019 10:46:15 -0500 Subject: [nova] Deprecating the XenAPI driver Message-ID: After discussing this at the Train PTG and logging a quality warning in the driver 3 months ago [1] with no response, the nova team is now formally deprecating the XenAPI driver [2]. There has been no working third party CI for the driver for at least a release and no clear maintainers of the driver in nova anymore. If you're using the driver in production, please speak up now otherwise technically the driver could be removed as early as the Ussuri release. [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006744.html [2] https://review.opendev.org/#/c/680732/ -- Thanks, Matt From francois.scheurer at everyware.ch Fri Sep 6 15:59:29 2019 From: francois.scheurer at everyware.ch (Francois Scheurer) Date: Fri, 6 Sep 2019 17:59:29 +0200 Subject: [keystone] cannot use 'openstack trust list' without admin role Message-ID: <29841c08-d255-2ee4-346a-bcce04b7f4ad@everyware.ch> Dear Keystone Experts, I have an issue with the openstack client in stage (using Rocky), using a user 'fsc' without 'admin' role and with password auth. 'openstack trust create/show' works. 'openstack trust list' is denied. But keystone policy.json says:     "identity:create_trust": "user_id:%(trust.trustor_user_id)s",     "identity:list_trusts": "",     "identity:list_roles_for_trust": "",     "identity:get_role_for_trust": "",     "identity:delete_trust": "",     "identity:get_trust": "", So "openstack list trusts" is always allowed. In keystone log (I replaced the uid's by names in the ouput below) I see that 'identity:list_trusts()' was actually granted but just after that a_*admin_required()*_ is getting checked and fails... I wonder why... 
There is also a flag*is_admin_project=True* in the rbac creds for some reason... Any clue? Many thanks in advance! Cheers Francois #openstack --os-cloud stage-fsc trust create --project fscproject --role creator fsc fsc #=> fail because of the names and policy rules, but using uid's it works openstack --os-cloud stage-fsc trust create --project aeac4b07d8b144178c43c65f29fa9dac --role 085180eeaf354426b01908cca8e82792 3e9b1a4fe95048a3b98fb5abebd44f6c 3e9b1a4fe95048a3b98fb5abebd44f6c +--------------------+----------------------------------+ | Field              | Value                            | +--------------------+----------------------------------+ | deleted_at         | None                             | | expires_at         | None                             | | id                 | e74bcdf125e049c69c2e0ab1b182df5b | | impersonation      | False                            | | project_id         | fscproject | | redelegation_count | 0                                | | remaining_uses     | None                             | | roles              | creator                          | | trustee_user_id    | fsc | | trustor_user_id    | fsc | +--------------------+----------------------------------+ openstack --os-cloud stage-fsc trust show e74bcdf125e049c69c2e0ab1b182df5b +--------------------+----------------------------------+ | Field              | Value                            | +--------------------+----------------------------------+ | deleted_at         | None                             | | expires_at         | None                             | | id                 | e74bcdf125e049c69c2e0ab1b182df5b | | impersonation      | False                            | | project_id         | fscproject | | redelegation_count | 0                                | | remaining_uses     | None                             | | roles              | creator                          | | trustee_user_id    | fsc | | trustor_user_id    | fsc | +--------------------+----------------------------------+ #this fails: openstack --os-cloud stage-fsc trust list *You are not authorized to perform the requested action: admin_required. (HTTP 403)* -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From sean.mcginnis at gmx.com Fri Sep 6 16:14:56 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 6 Sep 2019 11:14:56 -0500 Subject: [release][heat][nova][openstackclient][openstacksdk][nova] Pending final library releases Message-ID: <20190906161456.GA14051@sm-workstation> Hey everyone, Yesterday was the deadline for any non-client library releases. In order to make sure we include any unreleased commits, and to make sure those commits are able to make it onto a stable/train branch, releases were proposed by the Release Management team for any libs that looked like they needed it. We've had most of them +1'd by the team's PTL or release liaison and have been able to process those requests. There are still a few with no responses from the teams though. https://review.opendev.org/#/q/topic:train-3+status:open If you are a PTL or release liaison for one of the teams tagged in the subject line, please take a look and either +1 if things are ready, or if there are any last minute critical fixes going in, update the patches with a more appropriate commit hash to tag from. 
For any release requests not ack'd by the teams, we will need to proceed with these by Monday morning to make sure updates make it out and any dependency issues are flushed out before the client lib and other upcoming freezes. I will also submit patches to create the stable/train branch for any of these libs that have not already done so. If there are any questions or concerns about any of this, please reach out here or in the #openstack-release channel and we'll do what we can to help out. Thanks! Sean From dale at bewley.net Fri Sep 6 16:44:16 2019 From: dale at bewley.net (Dale Bewley) Date: Fri, 6 Sep 2019 09:44:16 -0700 Subject: [Horizon] Paging and Angular... In-Reply-To: References: Message-ID: As an uninformed user I would just like to say Horizon is seen _as_ Openstack to new users and I appreciate ever effort to improve it. Without discounting past work, the Horizon experience leaves much to be desired and it colors the perspective on the entire platform. On Fri, Sep 6, 2019 at 05:01 Radomir Dopieralski wrote: > > > On Fri, Sep 6, 2019 at 11:33 AM Marek Lyčka > wrote: > >> Hi, >> >> > we need people familiar with Angular and Horizon's ways of using >> Angular (which seem to be very >> > non-standard) that would be willing to write and review code. >> Unfortunately the people who originally >> > introduced Angular in Horizon and designed how it is used are no longer >> interested in contributing, >> > and there don't seem to be any new people able to handle this. >> >> I've been working with Horizon's Angular for quite some time and don't >> mind keeping at it, but >> it's useless unless I can get my code merged, hence my original message. >> >> As far as attracting new developers goes, I think that removing some >> barriers to entry couldn't hurt - >> seeing commits simply lost to time being one of them. I can see it as >> being fairly demoralizing. >> > > We can't review your patches, because we don't understand them. For the > patches to be merged, we > need more than one person, so that they can review each other's patches. > > >> > Personally, I think that a better long-time strategy would be to remove >> all >> > Angular-based views from Horizon, and focus on maintaining one language >> and one set of tools. >> >> Removing AngularJS wouldn't remove JavaScript from horizon. We'd still be >> left with a home-brewish >> framework (which is buggy as is). I don't think removing js completely is >> realistic either: we'd lose >> functionality and worsen user experience. I think that keeping Angular is >> the better alternative: >> >> 1) A lot of work has already been put into Angularization, solving many >> problems >> 2) Unlike legacy js, Angular code is covered by automated tests >> 3) Arguably, improvments are, on average, easier to add to Angular than >> pure js implementations >> >> Whatever reservations there may be about the current implementation can >> be identified and addressed, but >> all in all, I think removing it at this point would be counterproductive. >> > > JavaScript is fine. We all know how to write and how to review JavaScript > code, and there doesn't > have to be much of it — Horizon is not the kind of tool that has to bee > all shiny and animated. It's a tool > for getting work done. AngularJS is a problem, because you can't tell what > the code does just by looking > at the code, and so you can neither review nor fix it. 
> > There has been a lot of work put into mixing Horizon with Angular, but I > disagree that it has solved problems, > and in fact it has introduced a lot of regressions. Just to take a simple > example, the translations are currently > broken for en.AU and en.GB languages, and date display is not localized. > And nobody cares. > > We had automated tests before Angular. There weren't many of them, because > we also didn't have much JavaScript code. > If I remember correctly, those tests were ripped out during the > Angularization. > > Arguably, improvements are, on average, impossible to add to Angular, > because the code makes no sense on its own. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Fri Sep 6 17:16:48 2019 From: smooney at redhat.com (Sean Mooney) Date: Fri, 06 Sep 2019 18:16:48 +0100 Subject: [tc][neutron] Supported Linux distributions and their kernel In-Reply-To: References: Message-ID: <2b3f791f1ac861d3a9cb8c73b438c51044fa6ed1.camel@redhat.com> On Thu, 2019-09-05 at 15:10 +0000, Adrian Chiris wrote: > Greetings, > I was wondering what is the guideline in regards to which kernels are supported by OpenStack in the various Linux > distributions. > > Looking at [1], Taking for example latest CentOS major (7): > Every "minor" version is released with a different kernel version, > the oldest being released in 2014 (CentOS 7.0, kernel 3.10.0-123) and the newest released in 2018 (CentOS 7.6, kernel > 3.10.0-957) for what its worth once centos8 is out which should be soonish i hope that will not be an issue so in Ussuri the bug can be fixed without regard for ceten 7 at least on master. > > While I understand that OpenStack projects are expected to support all CentOS 7.x releases. am i actully dont know if its resonable to expect all centos 7.x version to be supported. downstream we do not support OSP on all Rhel 7 version for all release. after a certen point to recive new zstream ream version of OSP you need to move to a later rhel release. if you continue to run the old x.y.z version on older rhel its supported but the latest .z is only tested/supported on the latest rhel 7.x expecting all openstack project to support the kernel form 7.0 is probably an unrealistic requirement. if so it would meen 10 years of support for that kernel or well untill we eol it. we dont test with old kernel in the gate as far as i know but i also dont know if we have a policy for this. > Does the same applies for the kernels they originally came out with? > > The reason I'm asking, is because I was working on doing some cleanup in neutron [2] for a workaround introduced > because of an old kernel bug, > It is unclear to me if it is safe to introduce this change. > > [1] https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions > [2] https://review.opendev.org/#/c/677095/ > > Thanks, > Adrian. 
> From smooney at redhat.com Fri Sep 6 17:29:18 2019 From: smooney at redhat.com (Sean Mooney) Date: Fri, 06 Sep 2019 18:29:18 +0100 Subject: [tc][neutron] Supported Linux distributions and their kernel In-Reply-To: <5e84afec-ca3b-4a9f-969a-69f4f748c893@www.fastmail.com> References: <5e84afec-ca3b-4a9f-969a-69f4f748c893@www.fastmail.com> Message-ID: On Thu, 2019-09-05 at 08:20 -0700, Clark Boylan wrote: > On Thu, Sep 5, 2019, at 8:10 AM, Adrian Chiris wrote: > > > > Greetings, > > > > I was wondering what is the guideline in regards to which kernels are > > supported by OpenStack in the various Linux distributions. > > > > > > Looking at [1], Taking for example latest CentOS major (7): > > > > Every “minor” version is released with a different kernel version, > > > > the oldest being released in 2014 (CentOS 7.0, kernel 3.10.0-123) and > > the newest released in 2018 (CentOS 7.6, kernel 3.10.0-957) > > > > > > While I understand that OpenStack projects are expected to support all > > CentOS 7.x releases. > > It is my understanding that CentOS (and RHEL?) only support the current/latest point release of their distro [3]. yes so each rhedhat openstack plathform (OSP) z stream (x.y.z) release is tested and packaged only for the latest point release of rhel. we support customer on older .z release if they are also on the version of rhel it was tested with but we do expect customer to upgrage to the new rhel minor version when they update there openstack to a newer .z relese. this is becasue we update qemu and other products as part of the minor release of rhel and we need to ensure that nova works with that qemu and the kvm it was tested with. > We only test against that current point release. I don't expect we can be expected to support a distro release which > the distro doesn't even support. ya i think that is sane. also if we are being totally honest old kernels have bug many of which are security bugs so anyone running the original kernel any os shipped with is deploying a vulnerable cloud. > > All that to say I would only worry about the most recent point release. we might want to update the doc to that effect. it currently say latest Centos Major https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions perhaps it should be lates centos point/minor release since that is what we actully test with. also centos 8 is apprently complete the RC work so hopfully we will see a release soon. https://wiki.centos.org/About/Building_8 i have 0 info on centos but for Ussuri i hope we will have move to centos 8 and python 3 only. > > > > > Does the same applies for the kernels they _originally_ came out with? > > > > > > The reason I’m asking, is because I was working on doing some cleanup > > in neutron [2] for a workaround introduced because of an old kernel bug, > > > > It is unclear to me if it is safe to introduce this change. > > > > > > [1] > > https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions > > > > [2] https://review.opendev.org/#/c/677095/ > > [3] https://wiki.centos.org/FAQ/General#head-dcca41e9a3d5ac4c6d900a991990fd11930867d6 > From ianyrchoi at gmail.com Fri Sep 6 17:37:23 2019 From: ianyrchoi at gmail.com (Ian Y. 
Choi) Date: Sat, 7 Sep 2019 02:37:23 +0900 Subject: [i18n][tc] The future of I18n In-Reply-To: <20190906133759.obgszlvqexgam5n3@csail.mit.edu> References: <0ffa02d3-fef5-8fc3-1925-5c663b6c967d@openstack.org> <20190906133759.obgszlvqexgam5n3@csail.mit.edu> Message-ID: <817c9cf8-ca12-146b-af49-3f4345402888@gmail.com> Hello, First of all, thanks a lot for raising into this thread. Please see inline: Jonathan Proulx wrote on 9/6/2019 10:37 PM: > I'd be lead by how the people working in the space want to organize, but... > > Seems like SIG would be a good fit as I18N is extremely cross project, > presumably everything has text output even if it's just logging and > not enduser focused. > > my 2¢ > -Jon > > On Fri, Sep 06, 2019 at 11:36:38AM +0200, Thierry Carrez wrote: > :Hi! > : > :The I18n project team had no PTL candidates for Ussuri, so the TC needs to > :decide what to do with it. It just happens that Ian kindly volunteered to be > :an election official, and therefore could not technically run for I18n PTL. > :So if Ian is still up for taking it, we could just go and appoint him. I love I18n, and I could not imagine OpenStack world without I18n - I would like to take I18n PTL role for Ussuari cycle if there is no objection. > : > :That said, I18n evolved a lot, to the point where it might fit the SIG > :profile better than the project team profile. > : > :As a reminder, project teams are responsible for producing > :OpenStack-the-software, and since they are all integral in the production of > :the software that we want to release on a time-based schedule, they come with > :a number of mandatory tasks (like designating a PTL every 6 months). > : > :SIGs (special interest groups) are OpenStack teams that work on a mission > :that is not directly producing a piece of the OpenStack release. SIG members > :are bound by their mission, rather than by a specific OpenStack release > :deliverable. There is no mandatory task, as it is OK if the group goes > :dormant for a while. > : > :The I18n team regroups translators, with an interest of making OpenStack (in > :general, not just the software) more accessible to non-English speakers. They > :currently try to translate the OpenStack user survey, the Horizon dashboard > :messages, and key documentation. > : > :It could still continue as a project team (since it still produces Horizon > :translations), but I'd argue that at this point it is not what defines them. > :The fact that they are translators is what defines them, which IMHO makes > :them fit the SIG profile better than the project team profile. They can > :totally continue proposing translation files for Horizon as a I18n SIG, so > :there would be no technical difference. Just less mandatory tasks for the > :team. > : > :Thoughts ? First of all, I would like to more clarify the scope of which artifacts I18n team deals with. Regarding translation contributions to upstream official projects, I18n team started with 1) user-facing strings (e.g., dashboards), 2) non-user-facing strings (e.g., log messages) and 3) openstack-manuals documentation. The second one is not active after no real support for maintaining to translate log messages, and the third one is now expanded to some of project documents which there are the demand of translation like openstack-helm, openstack-ansible, and horizon ([2] includes the list of Docs team repos, project documents for operators and part of SIG). 
Based on the background, I can say that I18n team currently involves in total 19 dashboard projects [3], and 6 official project document repositories. Although the number of translated words is not larger than previous cycles [4], the amount of parts related with upstream official projects seems not to be small. IMHO, since it seems that I18n team's release activities [5] are rather stable, from the perspective, I think staying I18n team as SIG makes sense, but please kindly consider the followings: - Translators who have contributed translations to official OpenStack projects are currendly regarded as ATC and APC of the I18n project.   It would be great if OpenStack TC and official project teams regard those translation contribution as ATC and APC of corresponding official projects, if I18n team stays as SIG. - Zanata (translation platform, instance: translate.openstack.org) open source is not maintained anymore. I18n team wanted to change translation platform to something other than Zanata [6] but   current I18n team members don't have enough technical bandwidth to do that (FYI: Fedora team just started to migrate from Zanata to Weblate [7] - not easy stuff and non-small budget were agreed to use by Council).   Regardless of I18n team's status as an official team or SIG, such migration to a new translation platform indeed needs the support from the current governance (TC, UC, Foundation, Board of Directors, ...). - Another my brief understanding on the difference between as an official team and as SIG from the perspective of Four Opens is that SIGs and working groups seems that they have some flexibility using non-opensource tools for communication.   For example, me, as PTL currently encourage all the translators to come to the tools official teams use such as IRC, mailing lists, and Launchpad (note: I18n team has not migrated from Launchpad to Storyboard) - I like to use them and   I strongly believe that using such tools can assure that the team is following Four Opens well. But sometimes I encounter some reality - local language teams prefer to use their preferred communication protocols.   I might need to think more how I18n team as SIG communicates well with members, but I think the team members might want to more find out how to better communicate with language teams (e.g., using Hangout, Slack, and so on from the feedback)   , and try to use better communication tools which might be comfortable to translators who have little background on development. Note that I have not discussed the details with team members - I am still open with my thoughts, would like to more listen to opinions from the team members, and originally wanted to expand the discussion with such perspective during upcoming PTG in Shanghai with more Chinese translators. And dear OpenStackers including I18n team members & translators: please kindly share your sincere thoughts. 
With many thanks, /Ian [1] http://lists.openstack.org/pipermail/openstack-dev/2017-March/114191.html [2] https://translate.openstack.org/version-group/view/doc-resources/projects [3] https://translate.openstack.org/version-group/view/Train-dashboard-translation/projects [4] http://lists.openstack.org/pipermail/openstack-discuss/2019-July/007989.html [5] https://docs.openstack.org/i18n/latest/release_management.html [6] https://blueprints.launchpad.net/openstack-i18n/+spec/renew-translation-platform [7] https://fedoraproject.org/wiki/L10N_Move_to_Weblate > : > :-- > :Thierry Carrez (ttx) > : From zbitter at redhat.com Fri Sep 6 17:44:47 2019 From: zbitter at redhat.com (Zane Bitter) Date: Fri, 6 Sep 2019 13:44:47 -0400 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <20190905113636.qwxa4fjxnju7tmip@barron.net> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> <16d00fc100d.104db03dc225299.3598510759501367665@ghanshyammann.com> <20190905113636.qwxa4fjxnju7tmip@barron.net> Message-ID: <7cdee1c1-3541-17cf-5a9b-05a6f872c134@redhat.com> On 5/09/19 7:36 AM, Tom Barron wrote: > On 05/09/19 19:33 +0900, Ghanshyam Mann wrote: >> ---- On Thu, 05 Sep 2019 19:04:39 +0900 Chris Dent >> wrote ---- >> > On Thu, 5 Sep 2019, Thierry Carrez wrote: >> > >> > > So maybe we still have the same expectations, but we are >> definitely reducing >> > > our velocity... Would you say we need to better align our >> expectations with >> > > our actual speed? Or that we should reduce our expectations >> further, to drive >> > > velocity further down? >> > >> > We should slow down enough that the vendors and enterprises start to >> > suffer. If they never notice, then it's clear we're trying too hard >> > and can chill out. >> >> +1 on this but instead of slow down and make vendors suffer we need >> the proper >> way to notify or make them understand about the future cutoff effect >> on OpenStack >> as software. I know we have been trying every possible way but I am >> sure there are >> much more managerial steps can be taken.  I expect Board of Director >> to come forward >> on this as an accountable entity. TC should raise this as high >> priority issue to them (in meetings, >> joint leadership meeting etc). >> >> I am sure this has been brought up before, can we make OpenStack >> membership company >> to have a minimum set of developers to maintain upstream. With the >> current situation, I think >> it make sense to ask them to contribute manpower also along with >> membership fee.  But again >> this is more of BoD and foundation area. > > +1 > > IIUC Gold Membership in the Foundation provides voting privileges at a > cost of $50-200K/year and Corporate Sponsorship provides these plus > various marketing benefits at a cost of $10-25K/year.  So far as I can > tell there is not a requirement of a commitment of contributors and > maintainers with the exception of the (currently closed) Platinum > Membership, which costs $500K/year and requires at least 2 FTE > equivalents contributing to OpenStack. Even this incredibly minimal requirement was famously not met for years by one platinum member, and a (different) platinum member was accepted without ever having contributed upstream in the past or apparently ever intending to in the future. What I'm saying is that if this a the mechanism we want to use to drive contributions, I can tell you now how it's gonna work out. 
The question we should be asking ourselves is why companies see value in being sponsors of the foundation but not in contributing upstream, and how we convince them of the value of the latter. One initiative the TC started on this front is this: https://governance.openstack.org/tc/reference/upstream-investment-opportunities/index.html (BTW we could use help in converting the outdated Help Most Wanted entries to this format. Volunteers welcome.) cheers, Zane. > In general I see requirements > for annual cash expenditure to the Foundation, as for membership in any > joint commercial enterprise, but little that ensures the availability of > skilled labor for ongoing maintenance of our projects. > > -- Tom Barron > >> >> I agree on ttx proposal to reduce the TC number to 9 or 7, I do not >> think this will make any >> difference or slow down on any of the TC activity. 9 or 7 members are >> enough in TC. >> >> As long as we get PTL(even without an election) we are in a good >> position. This time only >> 7 leaderless projects (6 actually with Cyborg PTL missing to propose >> nomination in election repo and only on ML) are >> not so bad number. But yes this is a sign of taking action before it >> goes into more worst situation. >> >> -gmann >> >> > >> > -- >> > Chris Dent                       ٩◔̯◔۶           https://anticdent.org/ >> > freenode: cdent >> >> > From ianyrchoi at gmail.com Fri Sep 6 17:55:23 2019 From: ianyrchoi at gmail.com (Ian Y. Choi) Date: Sat, 7 Sep 2019 02:55:23 +0900 Subject: [all][tc] PDF Community Goal Update In-Reply-To: References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> <9430fe6726ca53328abb588b21c1823055cdaca3.camel@redhat.com> <160D24A7-DE66-45DA-BBB8-AFD916D00004@doughellmann.com> <7a4f103390cb2b9e4ec107b94f2e1e0dd2c500f0.camel@redhat.com> <6C2701AC-6305-45C6-A62D-7FF0B43DD0F2@doughellmann.com> <878ebb98-3204-7ce3-8ca6-b516ae7921a2@gmail.com> Message-ID: <900c9ec1-fade-05ee-cdff-4f6e9edb00e8@gmail.com> Akihiro Motoki wrote on 9/4/2019 11:06 PM: > On Wed, Sep 4, 2019 at 12:43 AM Ian Y. Choi wrote: >> Akihiro Motoki wrote on 9/3/2019 11:12 PM: >>> On Tue, Sep 3, 2019 at 10:18 PM Doug Hellmann wrote: >>>> >>>>> On Sep 3, 2019, at 9:04 AM, Stephen Finucane wrote: >>>>> >>>>> On Tue, 2019-09-03 at 08:42 -0400, Doug Hellmann wrote: >>>>>>> On Sep 3, 2019, at 5:54 AM, Stephen Finucane wrote: >>>>>>> >>>>>>> On Mon, 2019-09-02 at 15:31 -0400, Doug Hellmann wrote: >>>>>>>>> On Sep 2, 2019, at 3:07 AM, Akihiro Motoki wrote: >>>>>>> [snip] >>>>>>> >>>>>>>>> When the goal is defined the docs team thought the doc gate job can >>>>>>>>> handle the PDF build >>>>>>>>> without extra tox env and zuul job configuration. However, during >>>>>>>>> implementing the zuul job support >>>>>>>>> it turns out at least a new tox env or an extra zuul job configuration >>>>>>>>> is required in each project >>>>>>>>> to make the docs job fail when PDF build failure is detected. As a >>>>>>>>> result, we changes the approach >>>>>>>>> and the new tox target is now required in each project repo. >>>>>>>> The whole point of structuring the goal the way we did was that we do >>>>>>>> not want to update every single repo this cycle so we could roll out >>>>>>>> PDF building transparently. We said we would allow the job to pass >>>>>>>> even if the PDF build failed, because this was phase 1 of making all >>>>>>>> of this work. >>>>>>>> >>>>>>>> The plan was to 1. extend the current job to make PDF building >>>>>>>> optional; 2. 
examine the results to see how many repos need >>>>>>>> significant work; 3. add a feature flag via a setting somewhere in >>>>>>>> the repo to control whether the job fails if PDFs cannot be built. >>>>>>>> That avoids a second doc job running in parallel, and still allows us >>>>>>>> to roll out the PDF build requirement over time when we have enough >>>>>>>> information to do so. >>>>>>> Unfortunately when we tried to implement this we found that virtually >>>>>>> every project we looked at required _some_ amount of tweaks just to >>>>>>> build, let alone look sensible. This was certainly true of the big >>>>>>> service projects (nova, neutron, cinder, ...) which all ran afoul of a >>>>>>> bug [1] in the Sphinx LaTeX builder. Given the issues with previous >>>>>>> approach, such as the inability to easily reproduce locally and the >>>>>>> general "hackiness" of the thing, along with the fact that we now had >>>>>>> to submit changes against projects anyway, a collective decision was >>>>>>> made [2] to drop that plan and persue the 'pdfdocs' tox target >>>>>>> approach. >>>>>> We wanted to avoid making a bunch of the same changes to projects just to >>>>>> add the PDF building instructions. If the *content* of a project’s documentation >>>>>> needs work, that’s different. We should make those changes. >>>>> I thought the only reason to hack the docs venv in a Zuul job was to >>>>> avoid having to mass patch projects to add tox configuration? As such, >>>>> if we're already having to mass patch projects because they don't build >>>>> otherwise, why wouldn't we add the tox configuration? Was there another >>>>> reason to pursue the zuul-only approach that I've forgotten about/never >>>>> knew? >>>> I expected to need to fix formatting (even up to the point of commenting things >>>> out, like we found with the giant config sample files). Those are content changes, >>>> and would be mostly unique across projects. >>>> >>>> I wanted to avoid a large number of roughly identical changes to add tox environments, >>>> zuul jobs, etc. because having a lot of patches like that across all the repos makes >>>> extra work for small gain, especially when we can get the same results with a small >>>> number of changes in one repository. >>>> >>>> The approach we discussed was to update the docs job to run some extra steps using >>>> scripts that lived in the openstackdocstheme repository. That shouldn’t require >>>> adding any extra software or otherwise modifying the tox environments. Did that approach >>>> not work out? >>> We explored ways only to update the docs job to run extra commands to >>> build PDF docs, >>> but there is one problem that the job cannot know whether PDF build is >>> ready or not. >>> If we ignore an error from PDF build, it works for repositories which >>> are not ready for PDF build, >>> but we cannot prevent PDF build failure again for repositories ready >>> for PDF build >>> As my project team hat of neutron team, we don't want to have PDF >>> build failure again >>> once the PDF build starts to work. >>> To avoid this, stephenfin, asettle, AJaeger and I agree that some flag >>> to determine if the PDF build >>> is ready or not is needed. As of now, "pdf-docs" tox env is used as the flag. >>> Another way we considered is a variable in openstack-tox-docs job, but >>> we cannot pass a variable >>> to zuul project template, so we didn't use this way. >>> If there is a more efficient way, I am happy to use it. 
>>> >>> Thanks, >>> Akihiro >>> >> Hello, >> >> >> Sorry for joining in this thread late, but to I first would like to try >> to figure out the current status regarding the current discussion on the >> thread: >> >> - openstackdocstheme has docstheme-build-pdf script [1] >> >> - build-pdf-docs Zuul job in openstack-zuul-jobs pre-installs all >> required packages [2] >> >> - Current guidance for project repos is that 1) is to just add to >> latex_documents settings [3] and add pdf-docs environment for trigger [4] >> >> - Project repos additionally need to change more for successful PDF >> builds like adding more options on conf.py [5] and changing more on rst >> files to explictly options like [6] . > Thanks Ian. > > Your understanding on the current situations is correct. Good summary, thanks. > >> >> Now my questions from comments are: >> >> a) How about checking an option in somewhere else like .zuul.yaml or >> using grep in docs env part, not doing grep to check the existance of >> "pdf-docs" tox env [3]? > I am not sure how your suggestion works more efficiently than the > current pdf-docs tox env approach. > We explored an option to introduce a flag variable to the > openstack-tox-docs job but we use > a zuul project-template which wraps openstack-tox-docs job and another job. > The current zuul project-template does not accept a variable and > projects who want to specify > a flag explicitly needs to copy the content of the project-template. > Considering this we gave up this route. > Regarding "using grep in docs env part", I haven't understood what you think, > but it looks similar to the current approach. > >> b) Can we call docstheme-build-pdf in openstackdocstheme [1] instead of >> direct Sphinx & make commands in "pdf-docs" environment [4]? > It can, but I am not sure whether we need to update the current > proposed patches. > The only advantage of using docstheme-build-pdf is that we don't need to change > project repositories when we update the command lines in future, but > it sounds a matter of taste. > >> c) Ultimately, would executing docstheme-build-pdf command in >> build-pdf-docs Zuul job with another kind of trigger like bullet a) be >> feasible and/or be implemented by the end of this cycle? > We can, but again it is a matter of taste to me > and most important thing is how we handle a flag to enable PDF build. > > Thanks, > Akihiro Thank you for sharing your opinion, and I agree that it can be the matter of taste. I wanted to emphasize that the changes to project repositories are rather so small, and have tried to explore which ways can more minimize the changes to project repositories (e.g., without any change on tox.ini in project repositories). By the way, is it possible to centralize such flags into a common repository such as a repo related with build-pdf-docs Zuul job like [1] and [2] (I took examples from I18n team)? I am asking since I also agree that it would be the best if the same changes to all repos' tox.ini and other files could be minimized. If it isn't possible, than I think there would be no alternatives. Note that my asking assumes that current PDF community goal well reflects what I previously discussed with Doug [3]. 
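For reference, the per-repository change being debated here is fairly small. Below is a rough sketch of what a pdf-docs tox environment typically looks like in the proposed patches; it assumes the repository already defines a docs testenv, and the exact dependencies and paths vary by project:

    [testenv:pdf-docs]
    envdir = {toxworkdir}/docs
    deps = {[testenv:docs]deps}
    whitelist_externals =
      make
    commands =
      sphinx-build -W -b latex doc/source doc/build/pdf
      make -C doc/build/pdf

The gate job then only needs to detect whether this environment exists, which is the flag behaviour discussed earlier in this thread.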
With many thanks, /Ian [1] https://review.opendev.org/#/c/525028/1/zuul.d/projects.yaml [2] https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/roles/prepare-zanata-client/files/common_translation_update.sh#L39 [3] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134609.html > >> >> >> With many thanks, >> >> >> /Ian >> >> >> [1] https://review.opendev.org/#/c/665163/ >> >> [2] >> https://review.opendev.org/#/c/664555/25/roles/prepare-build-pdf-docs/tasks/main.yaml at 3 >> >> [3] https://review.opendev.org/#/c/678393/4/doc/source/conf.py >> >> [4] https://review.opendev.org/#/c/678393/4/tox.ini >> >> [5] https://review.opendev.org/#/c/678747/1/doc/source/conf.py at 270 >> >> [6] https://review.opendev.org/#/c/678747/1/doc/source/index.rst at 13 >> From farida.elzanaty at mail.mcgill.ca Fri Sep 6 18:16:39 2019 From: farida.elzanaty at mail.mcgill.ca (Farida El Zanaty) Date: Fri, 6 Sep 2019 18:16:39 +0000 Subject: [all][research] Survey for Openstack developers =) Message-ID: Hi!I am Farida from McGill University. I am trying to learn more about code reviews in the Openstack community, as I have been studying Openstack projects for a while. Please help me understand your perspective on design discussions during code reviews by filling up this 10-minute survey: https://forms.gle/Hhn191f6cxF5hVgG8 Survey participants will also be entered into a raffle for a $50 Amazon gift card. A little bit of context: Under the supervision of Prof. Shane McIntosh, my research aims to study design discussions that occur between developers during code reviews. Last year, we published a study about the frequency and types of such discussions that occur in OpenStack Nova and Neutron (http://rebels.ece.mcgill.ca/papers/esem2018_elzanaty.pdf).We are reaching out to Openstack developers to better understand their perspectives on design discussions during code reviews. Survey: https://forms.gle/Hhn191f6cxF5hVgG8Thanks for your time, Farida El-Zanaty =) -------------- next part -------------- An HTML attachment was scrubbed... URL: From tpb at dyncloud.net Fri Sep 6 18:20:20 2019 From: tpb at dyncloud.net (Tom Barron) Date: Fri, 6 Sep 2019 14:20:20 -0400 Subject: [tc] Campaign Question: Treating the Problem, not just the symptoms- Burnout, No Polling, etc In-Reply-To: <20190906131053.rofnz7zeoudctoif@yuggoth.org> References: <99048F8B-EE87-4A3A-A689-8F05F8EBDBBE@doughellmann.com> <20190906131053.rofnz7zeoudctoif@yuggoth.org> Message-ID: <20190906182020.7gr2uckoqdj7ycwn@barron.net> On 06/09/19 13:10 +0000, Jeremy Stanley wrote: >On 2019-09-06 13:27:53 +0100 (+0100), Chris Dent wrote: >[...] >> Most people work on OpenStack because it is their job or is closely >> related to their job. But because it is "open source" and "a >> community" and "collaborative" doing what people ask for and helping >> others achieve what they need is but one small piece of the >> motivation and action calculus. >[...] > >I don't know that this captures my motivation, at least. I chose my >job so that I could assist in the creation and maintenance of >OpenStack and similar free software, not the other way around. Maybe >I'm in a minority within the community, but I suspect there are more >folks than just me who feel the same. > Me too, though I'm fortunate enough to have an employer who genuinely values open source work, including building and fostering open source communities. 
I've worked for others where open source work was always only an instrumental goal, not an end in itself -- indeed I think it was sometimes considered a necessary evil. -- Tom From miguel at mlavalle.com Fri Sep 6 18:24:44 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Fri, 6 Sep 2019 13:24:44 -0500 Subject: [infra][neutron] Requesting help to remove feature branches Message-ID: Dear Infra Team, We have decided to remove from the Neutron repo the following feature branches: feature/graphql feature/lbaasv2 feature/pecan feature/qos We don't need to preserve any state from these branches. In the case of the first one, no code was merged. The work in the other three branches is already merged into master. Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Fri Sep 6 18:35:29 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 06 Sep 2019 11:35:29 -0700 Subject: [infra][neutron] Requesting help to remove feature branches In-Reply-To: References: Message-ID: On Fri, Sep 6, 2019, at 11:24 AM, Miguel Lavalle wrote: > Dear Infra Team, > > We have decided to remove from the Neutron repo the following feature branches: > > feature/graphql > feature/lbaasv2 > feature/pecan > feature/qos > > We don't need to preserve any state from these branches. In the case of > the first one, no code was merged. The work in the other three branches > is already merged into master. I forgot to mention that we need to close all the open changes proposed to these branches before we can delete the branch in Gerrit. feature/graphql appears to have some open changes, but the others are fine. Can you abandon those changes then we can delete the branch. Thanks, Clark From zbitter at redhat.com Fri Sep 6 19:26:10 2019 From: zbitter at redhat.com (Zane Bitter) Date: Fri, 6 Sep 2019 15:26:10 -0400 Subject: [heat] Resource handling in Heat stacks In-Reply-To: <0f3f727581dc68f4f1ab26ed2ef47686811dbe07.camel@florath.net> References: <0f3f727581dc68f4f1ab26ed2ef47686811dbe07.camel@florath.net> Message-ID: On 4/09/19 3:51 AM, Andreas Florath wrote: > Many thanks! Works like a charm! > > Suggestion: document default value of 'delete_on_termination'. 😉 Patches accepted 😉 > Kind regards > > Andre > > > On Wed, 2019-09-04 at 12:04 +0530, Rabi Mishra wrote: >> On Wed, Sep 4, 2019 at 11:41 AM Andreas Florath > > wrote: >>> Hello! >>> >>> >>> Can please anybody tell me, if all resources which are created >>> within a Heat stack belong to the stack in the way that >>> all the resources are freed / deleted when the stack is deleted? >>> >>> IMHO all resources which are created during the initial creation or >>> update of a stack, even if they are ephemeral or only internal >>> created, must be deleted when the stack is deleted by OpenStack Heat >>> itself. Correct? >>> >>> My question might see obvious, but I did not find an explicit hint in >>> the documentation stating this. >>> >>> >>> The reason for my question: I have a Heat template which uses two >>> images to create a server (using block_device_mapping_v2). Every time >>> I run an 'openstack stack create' and 'openstack stack delete' cycle >>> one ephemeral volume is left over / gets not deleted. >>> >> I think it's due toe delete_on_termination[1] property of bdmv2 which >> is interpreted as 'False', if not specified. You can set it to 'True' >> to delete the volumes along with server. I've not checked if it's >> different from how nova api behaves though. 
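To make the fix concrete, here is a minimal sketch of the template fragment being discussed; the flavor, image and network names are placeholders rather than values from the original stack, and older Heat releases may use image_id instead of image:

    resources:
      my_server:
        type: OS::Nova::Server
        properties:
          flavor: m1.small          # placeholder
          networks:
            - network: private      # placeholder
          block_device_mapping_v2:
            - image: my-image       # placeholder
              volume_size: 10
              boot_index: 0
              delete_on_termination: true  # omitted is treated as false, so the volume outlives the stack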
>> >> [1] >> https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Nova::Server-prop-block_device_mapping_v2-*-delete_on_termination >> >>> For me this sounds like a problem in OpenStack (Heat). >>> (It looks that this is at least similar to >>> https://review.opendev.org/#/c/341008/ >>> which never made it into master.) >>> >>> >>> Kind regards >>> >>> Andre >>> >>> >>> >> >> From fungi at yuggoth.org Fri Sep 6 19:54:33 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 6 Sep 2019 19:54:33 +0000 Subject: [tc][neutron] Supported Linux distributions and their kernel In-Reply-To: References: <5e84afec-ca3b-4a9f-969a-69f4f748c893@www.fastmail.com> Message-ID: <20190906195433.iv5xtixqbsvdwd4h@yuggoth.org> On 2019-09-06 18:29:18 +0100 (+0100), Sean Mooney wrote: [...] > for Ussuri i hope we will have move to centos 8 and python 3 only. [...] In that case, you'll probably want to keep an eye on https://review.opendev.org/679798 as things unfold. Right now, though, it looks likely you'll get your wish. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From amy at demarco.com Fri Sep 6 20:17:53 2019 From: amy at demarco.com (Amy Marrich) Date: Fri, 6 Sep 2019 15:17:53 -0500 Subject: [Horizon] Help making custom theme - resend as still looking:) In-Reply-To: References: Message-ID: Just thought I'd resend this out to see if someone could help:) For the Grace Hopper Conference's Open Source Day we're doing a Horizon based workshop for OpenStack (running Devstack Pike). The end goal is to have the attendee teams create their own OpenStack theme supporting a humanitarian effort of their choice in a few hours. I've tried modifying the material theme thinking it would be the easiest route to go but that might not be the best way to go about this.:) I've been getting some assistance from e0ne in the Horizon channel and my logo now shows up on the login page, and I had already gotten the SITE_BRAND attributes and the theme itself to show up after changing the local_settings.py. If anyone has some tips or a tutorial somewhere it would be greatly appreciated and I will gladly put together a tutorial for the repo when done. Thanks! Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Fri Sep 6 20:44:26 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Fri, 6 Sep 2019 15:44:26 -0500 Subject: [infra][neutron] Requesting help to remove feature branches In-Reply-To: References: Message-ID: Hi Clark, Thanks for the quick respond. Done: https://review.opendev.org/#/q/project:openstack/neutron+branch:feature/graphql Regards On Fri, Sep 6, 2019 at 1:36 PM Clark Boylan wrote: > On Fri, Sep 6, 2019, at 11:24 AM, Miguel Lavalle wrote: > > Dear Infra Team, > > > > We have decided to remove from the Neutron repo the following feature > branches: > > > > feature/graphql > > feature/lbaasv2 > > feature/pecan > > feature/qos > > > > We don't need to preserve any state from these branches. In the case of > > the first one, no code was merged. The work in the other three branches > > is already merged into master. > > I forgot to mention that we need to close all the open changes proposed to > these branches before we can delete the branch in Gerrit. feature/graphql > appears to have some open changes, but the others are fine. 
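As an aside, one way to sanity-check that nothing unmerged would be lost before deleting those branches (branch names taken from the list quoted above; adjust the remote name if it is not origin):

    git fetch origin
    # commits reachable from the feature branch but not from master;
    # empty output suggests nothing is lost by deleting the branch
    git log --oneline origin/master..origin/feature/qos
    # git cherry additionally marks commits whose changes already landed
    # in master under a different SHA with a leading "-"
    git cherry -v origin/master origin/feature/qos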
> > Can you abandon those changes then we can delete the branch. > > Thanks, > Clark > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Sep 6 22:57:51 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 6 Sep 2019 22:57:51 +0000 Subject: [infra][neutron] Requesting help to remove feature branches In-Reply-To: References: Message-ID: <20190906225750.tgbnwz6wu5gdfezo@yuggoth.org> On 2019-09-06 13:24:44 -0500 (-0500), Miguel Lavalle wrote: > We have decided to remove from the Neutron repo the following feature > branches: > > feature/graphql > feature/lbaasv2 > feature/pecan > feature/qos > > We don't need to preserve any state from these branches. In the case of the > first one, no code was merged. The work in the other three branches is > already merged into master. Sanity-checking feature/lbaasv2, `git merge-base` between it and master suggest cc400e2 is the closest common ancestor. There are 4 potentially substantive commits on feature/lbaasv2 past that point which do not seem to appear in the master branch history: 7147389 Implement Jinja templates for haproxy config cfa4a86 Tests for extension, db and plugin for LBaaS V2 02c01a3 Plugin/DB additions for version 2 of LBaaS API 4ed8862 New extension for version 2 of LBaaS API Do you happen to know whether these need to be preserved (or what happened with them)? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cboylan at sapwetik.org Fri Sep 6 23:17:54 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 06 Sep 2019 16:17:54 -0700 Subject: [infra][neutron] Requesting help to remove feature branches In-Reply-To: References: Message-ID: On Fri, Sep 6, 2019, at 1:44 PM, Miguel Lavalle wrote: > Hi Clark, > > Thanks for the quick respond. Done: > https://review.opendev.org/#/q/project:openstack/neutron+branch:feature/graphql > And now the branches are gone. For historical papertrails here are the branches and their heads: feature/graphql ab371ffcc69ab93d1046932297f7029bf7f184e5 feature/lbaasv2 0eed081ad9ef516f0207f179643781aad5b85b8e feature/pecan f747c35b1c1b8371de399c8239699cb89455c6e6 feature/qos 28a4c0aa69924e28f2e302acb9a8313fb310d5bf Clark From melwittt at gmail.com Fri Sep 6 23:59:48 2019 From: melwittt at gmail.com (melanie witt) Date: Fri, 6 Sep 2019 16:59:48 -0700 Subject: [nova][telemetry] does Telemetry still use the Nova server usage audit log API? Message-ID: <2c376a85-1dc0-03cc-bdb4-ba8b9f4edb70@gmail.com> Howdy all, TL;DR: I have a question, does the Telemetry service (or any other service) still make use of the server usage audit log API in Nova [1]? Recently I was investigating customer issues where the nova.task_log database table grows infinitely and is never cleaned up [2]. I asked about it today in #openstack-nova [3] and Matt Riedemann explained that the API is toggled via config option [4] and that the Telemetry service is/was the consumer of the API. I found through code inspection that there are no methods for deleting nova.task_log records and am trying to determine what is the best way forward for handling cleanup. Matt mentioned the possibility of deprecating the server usage audit log API altogether, which we might be able to do if no one is using it anymore. 
So, I was thinking: * If Telemetry is no longer using the server usage audit log API, we deprecate it in Nova and notify deployment tools to stop setting [DEFAULT]/instance_usage_audit = true to prevent further creation of nova.task_log records and recommend manual cleanup by users or * If Telemetry is still using the server usage audit log API, we create a new 'nova-manage db purge_task_log --before ' (or similar) command that will hard delete nova.task_log records before a specified date or all if --before is not specified Can anyone shed any light on whether Telemetry, or any other service, still uses the server usage audit log API in Nova? Would we be able to deprecate it? If we can't, what do you think of the nova-manage command idea? I would appreciate hearing your thoughts about it. Cheers, -melanie [1] https://docs.openstack.org/api-ref/compute/#server-usage-audit-log-os-instance-usage-audit-log [2] https://bugzilla.redhat.com/show_bug.cgi?id=1726256 [3] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2019-09-06.log.html#t2019-09-06T14:10:38 [4] https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.instance_usage_audit From corvus at inaugust.com Fri Sep 6 16:19:58 2019 From: corvus at inaugust.com (James E. Blair) Date: Fri, 06 Sep 2019 09:19:58 -0700 Subject: [all][tc] PDF Community Goal Update In-Reply-To: (Akihiro Motoki's message of "Tue, 3 Sep 2019 23:12:30 +0900") References: <4ea9cf7e-1669-3f29-59a7-bc2b788628e9@suse.com> <9430fe6726ca53328abb588b21c1823055cdaca3.camel@redhat.com> <160D24A7-DE66-45DA-BBB8-AFD916D00004@doughellmann.com> <7a4f103390cb2b9e4ec107b94f2e1e0dd2c500f0.camel@redhat.com> <6C2701AC-6305-45C6-A62D-7FF0B43DD0F2@doughellmann.com> Message-ID: <874l1p9z2p.fsf@meyer.lemoncheese.net> Akihiro Motoki writes: > To avoid this, stephenfin, asettle, AJaeger and I agree that some flag > to determine if the PDF build > is ready or not is needed. As of now, "pdf-docs" tox env is used as the flag. > Another way we considered is a variable in openstack-tox-docs job, but > we cannot pass a variable > to zuul project template, so we didn't use this way. You can't pass a variable to a project-template, but you can set a variable on a project: https://zuul-ci.org/docs/zuul/user/config.html#attr-project.vars -Jim From anmar.salih1 at gmail.com Sat Sep 7 02:51:23 2019 From: anmar.salih1 at gmail.com (Anmar Salih) Date: Fri, 6 Sep 2019 22:51:23 -0400 Subject: Need help trigger aodh alarm - All the steps I went through by details. In-Reply-To: References: Message-ID: Dear Lingxian, I cloud't find aodh log file. Also I did 'ps -ef | grep aodh' and here is the response. Best regards. On Thu, Sep 5, 2019 at 6:56 PM Lingxian Kong wrote: > Hi Anmar, > > Please see my comments in-line below. > > - > Best regards, > Lingxian Kong > Catalyst Cloud > > > On Wed, Sep 4, 2019 at 2:51 PM Anmar Salih wrote: > >> Hi Lingxian, >> >> First of all, I would like to apologize because the email is pretty long. >> I listed all the steps I went through just to make sure that I did >> everything correctly. >> > > No need to apologize, more information is always helpful to solve the > problem. > > >> 4- Creating the webhook for the function by: openstack webhook create >> --function 07edc434-a4b8-424a-8d3a-af253aa31bf8 . Here is a screen >> capture for the response. I tried to copy >> and paste the webhook_url " >> http://192.168.1.155:7070/v1/webhooks/c5608648-bd73-478f-b452-ad1eabf93328/invoke" into >> my internet browser, so I got 404 not found. 
I am not sure if this is >> normal response or I have something wrong here. >> > > Like Gaetan said, the webhook is supposed to be invoked by http POST. > > 9- Checking aodh alarm history by aodh alarm-history show >> ea16edb9-2000-471b-88e5-46f54208995e -f yaml . So I got this response >> >> >> 10- Last step is to check the function execution in qinling and here is >> the response . (empty bracket). I am not sure >> what is the problem. >> > > Yeah, from the output of alarm history, the alarm is not triggered, as a > result, there won't be execution created by the webhook. > > Seems like the aodh-listener didn't receive the message or the message was > ignored. Could you paste the aodh-listener log but make sure: > > 1. `debug = True` in /etc/aodh/aodh.conf > 2. Trigger the python script again > >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From anmar.salih1 at gmail.com Sat Sep 7 02:52:08 2019 From: anmar.salih1 at gmail.com (Anmar Salih) Date: Fri, 6 Sep 2019 22:52:08 -0400 Subject: Need help trigger aodh alarm - All the steps I went through by details. In-Reply-To: <56d312af-2b52-49e4-afbc-446162cb08c8@email.android.com> References: <56d312af-2b52-49e4-afbc-446162cb08c8@email.android.com> Message-ID: Dear Gaetan. Thank you for responding to my question. I will check it out. Best regards. Anmar Salih On Wed, Sep 4, 2019 at 9:27 AM Gaëtan Trellu wrote: > Hi Anmar, > > About your 404 when try to use the webhook, I guess this is because you > are not doing a POST but a GET. > > Try to use curl or postman with POST method to validate your webhook. > > Gaetan (goldyfruit) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From colleen at gazlene.net Sat Sep 7 04:57:15 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 06 Sep 2019 21:57:15 -0700 Subject: [keystone] Pre-feature-freeze update Message-ID: <5bac8e07-f63a-4bf9-82c1-fa0470a14b0e@www.fastmail.com> I won't be writing a team report since I'm still figuring out which way is up after a week in the desert, but with feature freeze next week I wanted to give a status update on all the in-flight work that is due next week: * System Scope and Default Roles All documented scope[1] and role[2] migrations are in progress. Some are closer to done than others. Since enforce_scope cannot be set to true in keystone.conf until all of them are completed, and since leaving deprecation warnings in the logs for more than two cycles is a very undesirable operator experience, it's essential we complete these by next week. * Application Credential Access Rules This implementation[3] for keystone has been completed for months but the last few patches in the stack are lacking reviews. Client support has been proposed but with the final client release happening next week we will likely not land it until next cycle. * Resource Options and Immutable Resources Resource options[4] and immutable resources[5] are intertwined and the finishing touches are still being applied. Hope to have this completed early next week. * Federated Attributes for Users Support for federated attributes for users[6] is passing CI but needs reviews, it's unclear to me how much has changed since those patches were originally proposed two years ago so it's unfortunate that we're only left with a week to look at them. * Expiring Group Membership There is only a partial implementation proposed for expiring group membership[7] and neither patch is passing CI. 
This seems to have effectively missed the feature proposal freeze deadline which was a few weeks ago and will not likely make it in this cycle. * CI After skimming the meeting logs I saw the unit test timeout problem was discussed and a temporary workaround was proposed[8]. This sounded like a great idea but it seems that no one implemented it, so I did[9]. Unfortunately this will conflict with all the system-scope/default-roles patches in flight. With how many changes need to go in and how slow it will be with all of them needing to be rechecked and continually making the problem even worse, I propose we go ahead and merge the workaround ASAP and update all the in-flight changes to move the protection tests to the new location. It also appears that the non-voting federation CI broke recently, this will hopefully be fixed by updating the opensuse nodeset[10]. [1] https://bugs.launchpad.net/keystone/+bugs?field.tag=system-scope [2] https://bugs.launchpad.net/keystone/+bugs?field.tag=default-roles [3] https://review.opendev.org/#/q/topic:bp/whitelist-extension-for-app-creds [4] https://review.opendev.org/678322 [5] https://review.opendev.org/#/q/topic:immutable-resources [6] https://review.opendev.org/#/q/topic:bp/support-federated-attr [7] https://review.opendev.org/#/q/topic:bug/1809116 [8] http://eavesdrop.openstack.org/meetings/keystone/2019/keystone.2019-08-27-16.01.log.html#l-84 [9]https://review.opendev.org/680788 [10] https://review.opendev.org/680799 Colleen From skaplons at redhat.com Sat Sep 7 08:08:51 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Sat, 7 Sep 2019 10:08:51 +0200 Subject: [neutron] CI issues In-Reply-To: <2BBD3139-A073-42D1-8A2A-A4847F9CBA4D@redhat.com> References: <2BBD3139-A073-42D1-8A2A-A4847F9CBA4D@redhat.com> Message-ID: Hi, Patch https://review.opendev.org/#/c/680001/ is merged now. It addresses both issues which we have with neutron-functional tests currently. So Neutron's gate should be in better condition now :) > On 4 Sep 2019, at 16:37, Slawek Kaplonski wrote: > > Hi neutrinos, > > We are currently having some issues in our gate. Please see [1], [2] and [3] for details. > If Your Neutron patch failed on neutron-functional, neutron-functional-python27 or networking-ovn-tempest-dsvm-ovs-release jobs, please don’t recheck before all those issues will be solved. Recheck will not help and You will only use infra resources. > > [1] https://bugs.launchpad.net/neutron/+bug/1842659 > [2] https://bugs.launchpad.net/neutron/+bug/1842482 > [3] https://bugs.launchpad.net/bugs/1842657 > > — > Slawek Kaplonski > Senior software engineer > Red Hat > — Slawek Kaplonski Senior software engineer Red Hat From antonio.ojea.garcia at gmail.com Sat Sep 7 09:49:57 2019 From: antonio.ojea.garcia at gmail.com (Antonio Ojea) Date: Sat, 7 Sep 2019 11:49:57 +0200 Subject: [tc] Campaign Question: Treating the Problem, not just the symptoms- Burnout, No Polling, etc In-Reply-To: <20190906131053.rofnz7zeoudctoif@yuggoth.org> References: <99048F8B-EE87-4A3A-A689-8F05F8EBDBBE@doughellmann.com> <20190906131053.rofnz7zeoudctoif@yuggoth.org> Message-ID: On Fri, 6 Sep 2019 at 15:14, Jeremy Stanley wrote: > > On 2019-09-06 13:27:53 +0100 (+0100), Chris Dent wrote: > [...] > > Most people work on OpenStack because it is their job or is closely > > related to their job. But because it is "open source" and "a > > community" and "collaborative" doing what people ask for and helping > > others achieve what they need is but one small piece of the > > motivation and action calculus. 
> [...] > > I don't know that this captures my motivation, at least. I chose my > job so that I could assist in the creation and maintenance of > OpenStack and similar free software, not the other way around. Maybe > I'm in a minority within the community, but I suspect there are more > folks than just me who feel the same. > I think that the reality is that not everybody can "chose" his job. Maybe the foundation can start to employ people to take care of the projects with the money received from the sponsors, I'm sure that a lot of folks will step in, not having to take time from his family life and able to dedicate their full time to the project. From fungi at yuggoth.org Sat Sep 7 12:51:36 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 7 Sep 2019 12:51:36 +0000 Subject: [tc] Campaign Question: Treating the Problem, not just the symptoms- Burnout, No Polling, etc In-Reply-To: References: <99048F8B-EE87-4A3A-A689-8F05F8EBDBBE@doughellmann.com> <20190906131053.rofnz7zeoudctoif@yuggoth.org> Message-ID: <20190907125136.4ame3so6xowu42ck@yuggoth.org> On 2019-09-07 11:49:57 +0200 (+0200), Antonio Ojea wrote: [...] > I think that the reality is that not everybody can "chose" his > job. That's a fair point. I've had the luxury of turning down much higher-paying jobs to accept one at a non-profit organization aligned with my ideals. I definitely understand that not everyone can afford to do that. On the other hand, I wonder how many folks who work on OpenStack because their employer tells them they have to, not because they're inspired by the project's goals, are compelled (through the sense of community Chris mentioned in his post) to spend extra unpaid time helping with commons tasks and assisting others... to the point that they're burned out on these activities and decide to go work on something else instead. I don't doubt that there are at least some, but perhaps no more than those who took their jobs because they wanted to help the cause. I do feel for the part-time/volunteer contributors in our community, particularly since I've spent much of my life as a part-time/volunteer contributor in a number of other free/libre open-source communities myself. I continue trying to find ways to make such "casual" contribution easier, and to see it eventually play a much more influential role in the future of OpenStack. On the other hand, OpenStack is *very* large (the third-most-active open-source project of all time, depending on how you measure that), and whether we like it or not, full-time contributors are responsible for the bulk of what we've built so far. That reality creates processes and bureaucratic structure to streamline efficiency for high-volume contribution, with a trade-off of making "casual" contribution more challenging. > Maybe the foundation can start to employ people to take care of > the projects with the money received from the sponsors, I'm sure > that a lot of folks will step in, not having to take time from his > family life and able to dedicate their full time to the project. The OSF *does* employ people to help take care of projects with the money received from corporate memberships. If you think the proportion of its funds spent on staff to handle project commons tasks which otherwise go untended is insufficient, please find time to discuss it with your elected Individual Member representatives on the board of directors and convince them to argue for a different balance in the OSF budget. 
The total budget of the OSF could, however, be compared to that of one small/medium-sized department at a typical member company, so it lacks the capacity to do much on its own and the staff dedicated to this are already spread quite thin as a result. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mriedemos at gmail.com Sat Sep 7 13:09:13 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 7 Sep 2019 08:09:13 -0500 Subject: [nova][telemetry] does Telemetry still use the Nova server usage audit log API? In-Reply-To: <2c376a85-1dc0-03cc-bdb4-ba8b9f4edb70@gmail.com> References: <2c376a85-1dc0-03cc-bdb4-ba8b9f4edb70@gmail.com> Message-ID: On 9/6/2019 6:59 PM, melanie witt wrote: > > * If Telemetry is no longer using the server usage audit log API, we > deprecate it in Nova and notify deployment tools to stop setting > [DEFAULT]/instance_usage_audit = true to prevent further creation of > nova.task_log records and recommend manual cleanup by users Deprecating the API would just be a signal to not develop new tools based on it since it's effectively unmaintained but that doesn't mean we can remove it since there could be non-Telemtry tools in the wild using it that we'd never hear about. You might not be suggesting an eventual path to removal of the API, I'm just bringing that part up since I'm sure people are thinking it. I'm also assuming that API isn't multi-cell aware, meaning it won't traverse cells pulling records like listing servers or migration resources. As for the config option to run the periodic task that creates these records, that's disabled by default so deployment tools shouldn't be enabling it by default - but maybe some do if they are configured to deploy ceilometer. > > or > > * If Telemetry is still using the server usage audit log API, we create > a new 'nova-manage db purge_task_log --before ' (or similar) > command that will hard delete nova.task_log records before a specified > date or all if --before is not specified If you can't remove the API then this is probably something that needs to happen regardless, though we likely won't know if anyone uses it. I'd consider it pretty low priority given how extremely latent this is and would expect anyone that's been running with this enabled in production has developed DB purge scripts for this table long ago. -- Thanks, Matt From mriedemos at gmail.com Sat Sep 7 13:18:33 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 7 Sep 2019 08:18:33 -0500 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <918e56aa-d9c3-88e9-22fc-c7da12990f97@nemebean.com> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> <20190905223137.i72s7n4tibkgypqf@bishop> <0bbb4765-3e57-b7dc-11ef-50ed639ea5c0@openstack.org> <918e56aa-d9c3-88e9-22fc-c7da12990f97@nemebean.com> Message-ID: On 9/6/2019 10:01 AM, Ben Nemec wrote: > I'll also say that for me specifically, having the PTL title gives me a > lever to use downstream. People don't generally question you spending > time on a project you're leading. The same isn't necessarily true of > being a core to whom PTL duties were delegated. Yuuuup. 
My last stint as nova PTL while at IBM was so I could keep working upstream on OpenStack despite my internal management and rest of my team having moved on to other things. And then eventually moving to another company to continue working on OpenStack. -- Thanks, Matt From mriedemos at gmail.com Sat Sep 7 13:23:26 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 7 Sep 2019 08:23:26 -0500 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <38c6f889-2b82-1a59-f00d-699fb04df6f3@gmail.com> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> <38c6f889-2b82-1a59-f00d-699fb04df6f3@gmail.com> Message-ID: On 9/6/2019 9:37 AM, Jay Bryant wrote: > As has been touched upon in other discussions, I think we have a culture > where it is difficult for them to say no to things. Welcome to the club. Nova has been harangued for years for saying no to so many things and maybe now people are starting to see why. It's not because saying no is fun. -- Thanks, Matt From Prabhjit.Singh22 at T-Mobile.com Fri Sep 6 20:42:29 2019 From: Prabhjit.Singh22 at T-Mobile.com (Singh, Prabhjit) Date: Fri, 6 Sep 2019 20:42:29 +0000 Subject: [Octavia]-Seeking some high points on using Octavia Message-ID: Hi Michael, I have been trying to get Octavia LbaaS up and running and get performance tested. It has taken me some time to get quite a few things working. While I continue to invest time in using Octavia and stay excited on some of the upcoming features. I have been asked the following questions by my leadership to which I do not have any direct answers. 1. What is the adoption of Octavia, are major organizations looking to adopt and invest in it. Can you provide some numbers 2. Roadmap wise is the Open community committed to investing in Octavia and why 3. Per your suggestion I tried to look up who are the primary companies using Octavia and haven't found a clear indication, any insight would be great. 4. Would features from haproxy 2.0 be included in Octavia 5. There are some open solutions from haproxy, Envoy, consul. How would Octavia compare. 6. Lastly, do you have enough encouragement to keep the project going, I guess I am looking for some motivation for continuing to choose to use Octavia when there are several turnkey solutions ( though offered at a price ). Currently I have been working with Redhat to answer the following questions, these are not for the community, hopefully Redhat will be able to pursue with your team. 1. How to offload logs to an external log/metrics collector 2. How to turn off logs during performance testing, I honestly do not want to do this because the performance tester is really generating live traffic which mimics a real time scenario. 3. How to set cron for rotating logs, I would think that this should be automatic. Would I need to do this everytime? 4. Do you have any way to increase performance of the amphora, my take is haproxy can handle several thousands of concurrent connections but in our case seems like we hit a threshold at 3500 - 4500 connections and then it starts to either send resets or the connections stay open for a long time. Thanks & Regards Prabhjit -----Original Message----- From: Singh, Prabhjit Sent: Tuesday, July 23, 2019 9:45 AM To: Michael Johnson Cc: openstack-discuss at lists.openstack.org Subject: RE: [Octavia]-Seeking performance numbers on Octavia Thanks so much for the valuable insights Michael! 
Appreciate it and keep up the good work, as I ramp up with more dev know how hopefully I would start making contributions and can maybe convince my team to start as well. Thanks & Regards Prabhjit Singh -----Original Message----- From: Michael Johnson Sent: Monday, July 22, 2019 5:48 PM To: Singh, Prabhjit Cc: openstack-discuss at lists.openstack.org Subject: Re: [Octavia]-Seeking performance numbers on Octavia [External] Hi Prabhjit, Comments in-line below. Michael On Sun, Jul 21, 2019 at 5:24 PM Singh, Prabhjit wrote: > > Hi Michael, > > Thanks for taking the time out to send me your inputs and valuable suggestions. I do remember meeting you at the Denver Summit and hearing to a couple of your sessions. > If you wouldn't mind, I do have a few more questions and your answers would help me understand that should I continue to invest in having Octavia as one of our available LBs. > > 1. Based on your response and the amount of time you are investing in > supporting Octavia, what are some of the use cases, like for e.g. if load balancing web traffic how many transactions/connections minimum can be expected. I do understand you mentioned that it's hard to performance test Octavia but some real time situations from your testing and how customers have adopted Octavia would help me level set some expectations. This is really cloud and application specific. I would recommend you fire up an Octavia install and use your preferred tool to measure it. Some good tools are tsung, weighttp, and iperf3. > 2. We are thinking of Octavia as one of the offerings, that offers a self-serve type model. Do you know of any customers who have been able to use Octavia as one of their primary load balancers and any encouraging feedback you have gotten on Octavia. There are examples of organizations using Octavia available if you google Octavia. > 3. You suggested increasing the Ram size, I could go about making a whole new Flavor. Yes, to increase the allocated RAM for a load balancer, you would create an additional nova flavor with the specifications you would like. You can then either set this as the default nova flavor for amphora (amp_flavor_id is the setting) or you can create an Octavia flavor that specifies the nova compute flavor to use (See https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.openstack.org%2Foctavia%2Flatest%2Fadmin%2Fflavors.html&data=02%7C01%7CPrabhjit.Singh22%40t-mobile.com%7Cfb41388d6020453d92c908d70eee4a72%7Cbe0f980bdd994b19bd7bbc71a09b026c%7C0%7C0%7C636994288931593870&sdata=FDlAK3%2FKh0DNo%2BMSJQ8kJ8lSnn01TJXASS6AHd1kRoA%3D&reserved=0 for more information on Octavia flavors). > 4. I also noticed on the haproxy.conf the maxconns is set to 2000, should I increase this, does this affect the connection per server, which you said 64000 conns per server, so if I have 10 servers can I expect somewhere close to 640000 sessions? I think you are looking at the haproxy.conf file provided by your operating system package. Octavia does not use this file, it creates it's own HAProxy configuration files as needed under /var/lib/octavia inside the amphora. The default, if the user does not specify one at listener creation, is 1,000,000. > 5. Based on some of the limitations and the dev work in progress, I think the most important feature that would make Octavia a real solid offering would be the Active-Active and Autoscaling feature. 
I brought this up with you in our brief conversation at the summit, and you did mention that its not a top priority at this time and you are looking for some help. I have noticed a lot of documentation has been updated on this feature, do you think with the available document and progress I could spin up a distributor and manage sessions between Amphora or it's not complete yet. Active/Active is still on our roadmap, but unfortunately the people that were working on it had to stop for personal reasons. There may be some folks picking up this work again soon. At this point the Active/Active patches up for review are non-functional and still a work in progress. > 6. We have a Triple O setup, do you think I can make the above tweaks with the Triple O setup. I think you are able to make various adjustments to Octavia with Triple O, but I do not have specifics on that. > Thanks & Regards > > Prabhjit Singh > Systems Design and Strategy - Magentabox > | O: (973) 397-4819 | M: (973) 563-4445 > > > > -----Original Message----- > From: Michael Johnson > Sent: Friday, July 19, 2019 6:00 PM > To: Singh, Prabhjit > Cc: openstack-discuss at lists.openstack.org > Subject: Re: [Octavia]-Seeking performance numbers on Octavia > > [External] > > > Hi Prabhjit, > > As you have mentioned, it is very challenging to get accurate performance results in cloud environments. There are a large number(very large in fact) of factors that can impact the overall performance of OpenStack and Octavia. > > In our OpenDev testing environment, we only have software emulation virtual machines available (Qemu running with the TCG engine) which performs extremely poorly. This means that the testing environment does not reflect how the software is used in real world deployments. > An example of this is simply booting a VM can take up to ten minutes on Qemu with TCG when it takes about twenty seconds on a real OpenStack deployment. > With this resource limitation, we cannot effectively run performance benchmarking test jobs on the OpenDev environment. > > Because of this, we don't publish performance numbers as they will not reflect what you can achieve in your environment. > > Let me try to speak to your bullet points: > 1. The Octavia team has never (to my knowledge) claimed the Amphora driver is "carrier grade". We do consider the Amphora driver to be "operator grade", which speaks to a cloud operator's perspective versus the previous offering that did not support high availability, have appropriate maintenance tooling, upgrade paths, performance, etc. > To me, "carrier grade" has an additional level of requirements including performance, latency, scale, and availability SLAs. This is not what the Octavia Amphora driver is currently ready for. That said, third party provider drivers for Octavia may be able to provide a "carrier grade" level of load balancing for OpenStack. > 2. As for performance tuning, much of this is either automatically handled by Octavia or are dependent on the application you are load balancing and your cloud deployment. For example we have many configuration settings to tune how many retries we attempt when interacting with other services. In performing and stable clouds, these can be tuned down, in others the defaults may be appropriate. If you would like faster failover, at the expense of slightly more network traffic, you can tune the health monitoring and keepalived_vrrp settings. We do not currently have a performance tuning guide for Octavia but would support someone authoring one. > 3. 
We do not currently have a guide for this. I will say with the version of HAProxy currently being shipped with the distributions, going beyond 1 vCPU per amphora does not gain you much. With the release of HAProxy 2.0 this has changed and we expect to be adding support for vertically scaling the Amphora in future releases. Disk space is only necessary if you are storing the flow logs locally, which I would not recommend for a performance load balancer (See the notes in the log offloading guide: > https://docs.openstack.org/octavia/latest/admin/log-offloading.html). > Finally, the RAM usage is a factor of the number of concurrent connections and of whether you are enabling TLS on the load balancer. For typical load balancing loads, the default is typically fine. However, if you have high connection counts and/or TLS offloading, you may want to experiment with increasing the available RAM. > 4. The source IP issue is a known issue > (https://storyboard.openstack.org/#!/story/1629066). We have not prioritized addressing this as we have not had anyone come forward that they needed this in their deployment. If this is an issue impacting your use case, please comment on the story to that effect and provide a use case. This will help the team prioritize this work. > Also, patches are welcome! If you are interested in working on this issue, I can help you with information about how this could be added. > It should also be noted that it is a limitation of 64,000 connections per backend server, not per load balancer. > 5. The team uses the #openstack-lbaas IRC channel on freenode and is happy to answer questions, etc. > > To date, we have had limited resources (people and equipment) available to do performance evaluation and tuning. There are definitely kernel and HAProxy tuning settings we have evaluated and added to the Amphora driver, but I know there is more work that can be done. If you are interested in helping us with this work, please let us know. > > Michael > > P.S.
Here are just a few considerations that can/will impact the performance of an Octavia Amphora load balancer: > > Hardware used for the compute nodes > Network Interface Cards (NICs) used in the compute nodes Number of > network ports enabled on the compute hosts Network switch > configurations (Jumbo frames, and so on) Cloud network topology > (leaf‐spine, fat‐tree, and so on) The OpenStack Neutron networking > configuration (ML2 and ML3 drivers) Tenant networking configuration > (VXLAN, VLANS, GRE, and so on) Colocation of applications and Octavia > amphorae Over subscription of the compute and networking resources > Protocols being load balanced Configuration settings used when > creating the load balancer (connection limits, and so on) Version of > OpenStack services (nova, neutron, and so on) Version of OpenStack > Octavia Flavor of the OpenStack Octavia load balancer OS and > hypervisor versions used Deployed security mitigations (Spectre, > Meltdown, and so on) Customer application performance Health of the > customer application > > On Fri, Jul 19, 2019 at 8:52 AM Singh, Prabhjit wrote: > > > > Hi > > > > > > > > I have been trying to test Octavia with some traffic generators and > > my tests are inconclusive. Appreciate your inputs on the following > > > > > > > > It would be really nice to have some performance numbers that you guys have been able to achieve for this to be termed as carrier grade. > > Would also appreciate if you could share any inputs on performance > > tuning Octavia Any recommended flavor sizes for spinning up Amphorae, the default size of 1 core, 2 Gb disk and 1 Gig RAM does not seem enough. > > Also I noticed when the Amphorae are spun up, at one time only one > > master is talking to the backend servers and has one IP that its > > using, it has to run out of ports after 64000 TCP concurrent > > sessions, id there a way to add more IPs or is this the limitation > > If I needed some help with Octavia and some guidance around > > performance tuning can someone from the community help > > > > > > > > Thanks & Regards > > > > > > > > Prabhjit Singh > > > > > > > > > > > > From tim.bell at cern.ch Sat Sep 7 15:16:13 2019 From: tim.bell at cern.ch (Tim Bell) Date: Sat, 7 Sep 2019 17:16:13 +0200 Subject: [nova][telemetry] does Telemetry still use the Nova server usage audit log API? In-Reply-To: References: <2c376a85-1dc0-03cc-bdb4-ba8b9f4edb70@gmail.com> Message-ID: On 9/7/19 3:09 PM, Matt Riedemann wrote: > On 9/6/2019 6:59 PM, melanie witt wrote: >> >> * If Telemetry is no longer using the server usage audit log API, we >> deprecate it in Nova and notify deployment tools to stop setting >> [DEFAULT]/instance_usage_audit = true to prevent further creation of >> nova.task_log records and recommend manual cleanup by users > > Deprecating the API would just be a signal to not develop new tools > based on it since it's effectively unmaintained but that doesn't mean > we can remove it since there could be non-Telemtry tools in the wild > using it that we'd never hear about. You might not be suggesting an > eventual path to removal of the API, I'm just bringing that part up > since I'm sure people are thinking it. > Tools like cASO (https://github.com/IFCA/caso) use this API. 
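For anyone who has not used that API before, here is a rough sketch of the kind of query an accounting tool makes against it (the compute endpoint URL and token handling below are placeholders I am assuming, not something taken from cASO itself):

    # The periodic task only writes records when the computes run with
    # [DEFAULT]/instance_usage_audit = true in nova.conf.
    COMPUTE_URL=http://controller:8774/v2.1        # assumed endpoint
    TOKEN=$(openstack token issue -f value -c id)
    # Server usage audit log API, one entry per audit period.
    curl -s -H "X-Auth-Token: $TOKEN" \
        "$COMPUTE_URL/os-instance_usage_audit_log" | python -m json.tool

cASO wraps this sort of query and turns the per-period results into accounting records.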
This is used by many of the EGI Federated Cloud sites to do accounting per VM (https://egi-federated-cloud-integration.readthedocs.io/en/latest/openstack.html) > I'm also assuming that API isn't multi-cell aware, meaning it won't > traverse cells pulling records like listing servers or migration > resources. Given scaling issues with the current Telemetry implementation, I suspect alternative approaches have had to be developed in any case. CERN uses libvirt data extraction. > > As for the config option to run the periodic task that creates these > records, that's disabled by default so deployment tools shouldn't be > enabling it by default - but maybe some do if they are configured to > deploy ceilometer. > >> >> or >> >> * If Telemetry is still using the server usage audit log API, we >> create a new 'nova-manage db purge_task_log --before ' (or >> similar) command that will hard delete nova.task_log records before a >> specified date or all if --before is not specified > > If you can't remove the API then this is probably something that needs > to happen regardless, though we likely won't know if anyone uses it. > I'd consider it pretty low priority given how extremely latent this is > and would expect anyone that's been running with this enabled in > production has developed DB purge scripts for this table long ago. > From johnsomor at gmail.com Sat Sep 7 20:21:59 2019 From: johnsomor at gmail.com (Michael Johnson) Date: Sat, 7 Sep 2019 13:21:59 -0700 Subject: [Octavia]-Seeking some high points on using Octavia In-Reply-To: References: Message-ID: Hi Prabhjit, Answers to the questions I can answer below. I hope you continue to work with your support contact to resolve the issues you are experiencing. Here I can only speak with my OpenStack community hat on. Michael On Fri, Sep 6, 2019 at 1:42 PM Singh, Prabhjit wrote: > > Hi Michael, > > I have been trying to get Octavia LbaaS up and running and get performance tested. It has taken me some time to get quite a few things working. > > While I continue to invest time in using Octavia and stay excited on some of the upcoming features. I have been asked the following questions by my leadership to which I do not have any direct answers. > > 1. What is the adoption of Octavia, are major organizations looking to adopt and invest in it. Can you provide some numbers I don't have much I can share here. You can look at the OpenStack user survey information: https://www.openstack.org/analytics though some of that is still fragmented as Octavia was part of neutron in some older releases. In the 2016 and 2017 survey, "Software load balancing" was the #1 neutron feature "actively used, interested in, or planned for use." Page 53: https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/6e4619c416ff4bd19e1c087f27a43eea/www-assets-prod/survey/April-2016-User-Survey-Report.pdf Page 60: https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/6e4619c416ff4bd19e1c087f27a43eea/www-assets-prod/survey/April2017SurveyReport.pdf You may also find interest in which companies have contributed to the project by looking at Stackalytics: https://www.stackalytics.com/?module=octavia-group > 2. Roadmap wise is the Open community committed to investing in Octavia and why We do maintain a roadmap for longer-term goals: https://wiki.openstack.org/wiki/Octavia/Roadmap Beyond that, as OpenStack is an open community of many contributors I cannot speculate commitment. > 3. 
Per your suggestion I tried to look up who are the primary companies using Octavia and haven't found a clear indication, any insight would be great. That is really all I can share. > 4. Would features from haproxy 2.0 be included in Octavia Yes, it is on the roadmap. We have been waiting for 2.0.x to stabilize. The release timing of HAProxy 2.0 means that most of the major Linux distributions are not yet shipping it. This makes it a bit tricky for the OpenStack team as our testing standard is tied to these releases: https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions There is a chance that the OpenStack team will start adding features that need HAProxy 2.0.x in the Ussuri release cycle. > 5. There are some open solutions from haproxy, Envoy, consul. How would Octavia compare. There are many, many different load balancing options available. As you know Octavia supports provider drivers, so that alternate technologies can be plugged in. For the reference amphora driver (the one we use for OpenStack testing), HAProxy was selected for its stability and wide support. > 6. Lastly, do you have enough encouragement to keep the project going, I guess I am looking for some motivation for continuing to choose to use Octavia when there are several turnkey solutions ( though offered at a price ). Well, personally I plan to keep working on Octavia. I am not the project team lead for the Train or Ussuri releases, but I am still an active core member. I am a "right tool for the right job" kind of person, so it really is up to you and your needs to balance the decision of which load balancing option to select. > Currently I have been working with Redhat to answer the following questions, these are not for the community, hopefully Redhat will be able to pursue with your team. With my OpenStack hat on and not speaking for Red Hat: > 1. How to offload logs to an external log/metrics collector This was a new feature for the Train release: https://docs.openstack.org/octavia/latest/admin/log-offloading.html > 2. How to turn off logs during performance testing, I honestly do not want to do this because the performance tester is really generating live traffic which mimics a real time scenario. https://docs.openstack.org/octavia/latest/configuration/configref.html#haproxy_amphora.connection_logging > 3. How to set cron for rotating logs, I would think that this should be automatic. Would I need to do this everytime? Logs are already being rotated inside the amphora. > 4. Do you have any way to increase performance of the amphora, my take is haproxy can handle several thousands of concurrent connections but in our case seems like we hit a threshold at 3500 - 4500 connections and then it starts to either send resets or the connections stay open for a long time. Yes, I have had amphora do many more connections per second than that. There is some issue in your environment that is limiting it. > Thanks & Regards > > Prabhjit > > > > > -----Original Message----- > From: Singh, Prabhjit > Sent: Tuesday, July 23, 2019 9:45 AM > To: Michael Johnson > Cc: openstack-discuss at lists.openstack.org > Subject: RE: [Octavia]-Seeking performance numbers on Octavia > > Thanks so much for the valuable insights Michael! Appreciate it and keep up the good work, as I ramp up with more dev know how hopefully I would start making contributions and can maybe convince my team to start as well. 
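[Back on the logging questions a few lines up: a minimal sketch of how one could flip connection logging off for a load-test run. The config path, the use of crudini, and the service names are assumptions about a typical deployment, not something prescribed by the Octavia docs:]

    # Disable per-request logging in the amphorae
    # (the haproxy_amphora.connection_logging option referenced above).
    sudo crudini --set /etc/octavia/octavia.conf haproxy_amphora connection_logging False
    # Restart the controller services so newly built or updated listeners
    # render their haproxy config without request logging.
    # (Service names vary per deployment; these are an assumption.)
    sudo systemctl restart octavia-api octavia-worker

New or updated listeners pick the change up when their haproxy configuration is rendered, so it is worth updating or re-creating the test listener after making the change.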
> > Thanks & Regards > > Prabhjit Singh > > > > -----Original Message----- > From: Michael Johnson > Sent: Monday, July 22, 2019 5:48 PM > To: Singh, Prabhjit > Cc: openstack-discuss at lists.openstack.org > Subject: Re: [Octavia]-Seeking performance numbers on Octavia > > [External] > > > Hi Prabhjit, > > Comments in-line below. > > Michael > > On Sun, Jul 21, 2019 at 5:24 PM Singh, Prabhjit wrote: > > > > Hi Michael, > > > > Thanks for taking the time out to send me your inputs and valuable suggestions. I do remember meeting you at the Denver Summit and hearing to a couple of your sessions. > > If you wouldn't mind, I do have a few more questions and your answers would help me understand that should I continue to invest in having Octavia as one of our available LBs. > > > > 1. Based on your response and the amount of time you are investing in > > supporting Octavia, what are some of the use cases, like for e.g. if load balancing web traffic how many transactions/connections minimum can be expected. I do understand you mentioned that it's hard to performance test Octavia but some real time situations from your testing and how customers have adopted Octavia would help me level set some expectations. > > This is really cloud and application specific. I would recommend you fire up an Octavia install and use your preferred tool to measure it. > Some good tools are tsung, weighttp, and iperf3. > > > 2. We are thinking of Octavia as one of the offerings, that offers a self-serve type model. Do you know of any customers who have been able to use Octavia as one of their primary load balancers and any encouraging feedback you have gotten on Octavia. > > There are examples of organizations using Octavia available if you google Octavia. > > > 3. You suggested increasing the Ram size, I could go about making a whole new Flavor. > > Yes, to increase the allocated RAM for a load balancer, you would create an additional nova flavor with the specifications you would like. You can then either set this as the default nova flavor for amphora (amp_flavor_id is the setting) or you can create an Octavia flavor that specifies the nova compute flavor to use (See > https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.openstack.org%2Foctavia%2Flatest%2Fadmin%2Fflavors.html&data=02%7C01%7CPrabhjit.Singh22%40t-mobile.com%7Cfb41388d6020453d92c908d70eee4a72%7Cbe0f980bdd994b19bd7bbc71a09b026c%7C0%7C0%7C636994288931593870&sdata=FDlAK3%2FKh0DNo%2BMSJQ8kJ8lSnn01TJXASS6AHd1kRoA%3D&reserved=0 for more information on Octavia flavors). > > > 4. I also noticed on the haproxy.conf the maxconns is set to 2000, should I increase this, does this affect the connection per server, which you said 64000 conns per server, so if I have 10 servers can I expect somewhere close to 640000 sessions? > > I think you are looking at the haproxy.conf file provided by your operating system package. Octavia does not use this file, it creates it's own HAProxy configuration files as needed under /var/lib/octavia inside the amphora. The default, if the user does not specify one at listener creation, is 1,000,000. > > > 5. Based on some of the limitations and the dev work in progress, I think the most important feature that would make Octavia a real solid offering would be the Active-Active and Autoscaling feature. I brought this up with you in our brief conversation at the summit, and you did mention that its not a top priority at this time and you are looking for some help. 
I have noticed a lot of documentation has been updated on this feature, do you think with the available document and progress I could spin up a distributor and manage sessions between Amphora or it's not complete yet. > > Active/Active is still on our roadmap, but unfortunately the people that were working on it had to stop for personal reasons. There may be some folks picking up this work again soon. At this point the Active/Active patches up for review are non-functional and still a work in progress. > > > 6. We have a Triple O setup, do you think I can make the above tweaks with the Triple O setup. > > I think you are able to make various adjustments to Octavia with Triple O, but I do not have specifics on that. > > > Thanks & Regards > > > > Prabhjit Singh > > Systems Design and Strategy - Magentabox > > | O: (973) 397-4819 | M: (973) 563-4445 > > > > > > > > -----Original Message----- > > From: Michael Johnson > > Sent: Friday, July 19, 2019 6:00 PM > > To: Singh, Prabhjit > > Cc: openstack-discuss at lists.openstack.org > > Subject: Re: [Octavia]-Seeking performance numbers on Octavia > > > > [External] > > > > > > Hi Prabhjit, > > > > As you have mentioned, it is very challenging to get accurate performance results in cloud environments. There are a large number(very large in fact) of factors that can impact the overall performance of OpenStack and Octavia. > > > > In our OpenDev testing environment, we only have software emulation virtual machines available (Qemu running with the TCG engine) which performs extremely poorly. This means that the testing environment does not reflect how the software is used in real world deployments. > > An example of this is simply booting a VM can take up to ten minutes on Qemu with TCG when it takes about twenty seconds on a real OpenStack deployment. > > With this resource limitation, we cannot effectively run performance benchmarking test jobs on the OpenDev environment. > > > > Because of this, we don't publish performance numbers as they will not reflect what you can achieve in your environment. > > > > Let me try to speak to your bullet points: > > 1. The Octavia team has never (to my knowledge) claimed the Amphora driver is "carrier grade". We do consider the Amphora driver to be "operator grade", which speaks to a cloud operator's perspective versus the previous offering that did not support high availability, have appropriate maintenance tooling, upgrade paths, performance, etc. > > To me, "carrier grade" has an additional level of requirements including performance, latency, scale, and availability SLAs. This is not what the Octavia Amphora driver is currently ready for. That said, third party provider drivers for Octavia may be able to provide a "carrier grade" level of load balancing for OpenStack. > > 2. As for performance tuning, much of this is either automatically handled by Octavia or are dependent on the application you are load balancing and your cloud deployment. For example we have many configuration settings to tune how many retries we attempt when interacting with other services. In performing and stable clouds, these can be tuned down, in others the defaults may be appropriate. If you would like faster failover, at the expense of slightly more network traffic, you can tune the health monitoring and keepalived_vrrp settings. We do not currently have a performance tuning guide for Octavia but would support someone authoring one. > > 3. We do not currently have a guide for this. 
I will say with the version of HAproxy currently being shipped with the distributions, going beyond the 1vCPU per amphora does not gain you much. With the release of HAProxy 2.0 this has changed and we expect to be adding support for vertically scaling the Amphora in future releases. Disk space is only necessary if you are storing the flow logs locally, which I would not recommend for a performance load balancer (See the notes in the log offloading guide: > > https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.openstack.org%2Foctavia%2Flatest%2Fadmin%2Flog-offloading.html&data=02%7C01%7CPrabhjit.Singh22%40t-mobile.com%7Cfb41388d6020453d92c908d70eee4a72%7Cbe0f980bdd994b19bd7bbc71a09b026c%7C0%7C0%7C636994288931593870&sdata=qyX1BM6wR6v804WCYB2HY6IRmDfeQS1zi38FS34kB1U%3D&reserved=0). > > Finally, the RAM usage is a factor of the number of concurrent connections and if you are enabling TLS on the load balancer. For typical load balancing loads, the default is typically fine. However, if you have high connection counts and/or TLS offloading, you may want to experiment with increasing the available RAM. > > 4. The source IP issue is a known issue > > (https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fstoryboard.openstack.org%2F%23!%2Fstory%2F1629066&data=02%7C01%7CPrabhjit.Singh22%40t-mobile.com%7Cfb41388d6020453d92c908d70eee4a72%7Cbe0f980bdd994b19bd7bbc71a09b026c%7C0%7C0%7C636994288931593870&sdata=GkTPXRmOfjpMYXDYZ9t5xH1aEq0E%2BWDZRhK8ux%2FnrUQ%3D&reserved=0). We have not prioritized addressing this as we have not had anyone come forward that they needed this in their deployment. If this is an issue impacting your use case, please comment on the story to that effect and provide a use case. This will help the team prioritize this work. > > Also, patches are welcome! If you are interested in working on this issue, I can help you with information about how this could be added. > > It should also be noted that it is a limitation of 64,000 connections per-backend server, not per load balancer. > > 5. The team uses the #openstack-lbaas IRC channel on freenode and is happy to answer questions, etc. > > > > To date, we have had limited resources (people and equipment) available to do performance evaluation and tuning. There are definitely kernel and HAProxy tuning settings we have evaluated and added to the Amphora driver, but I know there is more work that can be done. If you are interested in help us with this work, please let us know. > > > > Michael > > > > P.S. 
Here are just a few considerations that can/will impact the performance of an Octavia Amphora load balancer: > > > > Hardware used for the compute nodes > > Network Interface Cards (NICs) used in the compute nodes Number of > > network ports enabled on the compute hosts Network switch > > configurations (Jumbo frames, and so on) Cloud network topology > > (leaf‐spine, fat‐tree, and so on) The OpenStack Neutron networking > > configuration (ML2 and ML3 drivers) Tenant networking configuration > > (VXLAN, VLANS, GRE, and so on) Colocation of applications and Octavia > > amphorae Over subscription of the compute and networking resources > > Protocols being load balanced Configuration settings used when > > creating the load balancer (connection limits, and so on) Version of > > OpenStack services (nova, neutron, and so on) Version of OpenStack > > Octavia Flavor of the OpenStack Octavia load balancer OS and > > hypervisor versions used Deployed security mitigations (Spectre, > > Meltdown, and so on) Customer application performance Health of the > > customer application > > > > On Fri, Jul 19, 2019 at 8:52 AM Singh, Prabhjit wrote: > > > > > > Hi > > > > > > > > > > > > I have been trying to test Octavia with some traffic generators and > > > my tests are inconclusive. Appreciate your inputs on the following > > > > > > > > > > > > It would be really nice to have some performance numbers that you guys have been able to achieve for this to be termed as carrier grade. > > > Would also appreciate if you could share any inputs on performance > > > tuning Octavia Any recommended flavor sizes for spinning up Amphorae, the default size of 1 core, 2 Gb disk and 1 Gig RAM does not seem enough. > > > Also I noticed when the Amphorae are spun up, at one time only one > > > master is talking to the backend servers and has one IP that its > > > using, it has to run out of ports after 64000 TCP concurrent > > > sessions, id there a way to add more IPs or is this the limitation > > > If I needed some help with Octavia and some guidance around > > > performance tuning can someone from the community help > > > > > > > > > > > > Thanks & Regards > > > > > > > > > > > > Prabhjit Singh > > > > > > > > > > > > > > > > > > From hongbin034 at gmail.com Sat Sep 7 21:22:25 2019 From: hongbin034 at gmail.com (Hongbin Lu) Date: Sat, 7 Sep 2019 17:22:25 -0400 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <7cdee1c1-3541-17cf-5a9b-05a6f872c134@redhat.com> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> <16d00fc100d.104db03dc225299.3598510759501367665@ghanshyammann.com> <20190905113636.qwxa4fjxnju7tmip@barron.net> <7cdee1c1-3541-17cf-5a9b-05a6f872c134@redhat.com> Message-ID: On Fri, Sep 6, 2019 at 1:49 PM Zane Bitter wrote: > On 5/09/19 7:36 AM, Tom Barron wrote: > > On 05/09/19 19:33 +0900, Ghanshyam Mann wrote: > >> ---- On Thu, 05 Sep 2019 19:04:39 +0900 Chris Dent > >> wrote ---- > >> > On Thu, 5 Sep 2019, Thierry Carrez wrote: > >> > > >> > > So maybe we still have the same expectations, but we are > >> definitely reducing > >> > > our velocity... Would you say we need to better align our > >> expectations with > >> > > our actual speed? Or that we should reduce our expectations > >> further, to drive > >> > > velocity further down? > >> > > >> > We should slow down enough that the vendors and enterprises start to > >> > suffer. 
If they never notice, then it's clear we're trying too hard > >> > and can chill out. > >> > >> +1 on this but instead of slow down and make vendors suffer we need > >> the proper > >> way to notify or make them understand about the future cutoff effect > >> on OpenStack > >> as software. I know we have been trying every possible way but I am > >> sure there are > >> much more managerial steps can be taken. I expect Board of Director > >> to come forward > >> on this as an accountable entity. TC should raise this as high > >> priority issue to them (in meetings, > >> joint leadership meeting etc). > >> > >> I am sure this has been brought up before, can we make OpenStack > >> membership company > >> to have a minimum set of developers to maintain upstream. With the > >> current situation, I think > >> it make sense to ask them to contribute manpower also along with > >> membership fee. But again > >> this is more of BoD and foundation area. > > > > +1 > > > > IIUC Gold Membership in the Foundation provides voting privileges at a > > cost of $50-200K/year and Corporate Sponsorship provides these plus > > various marketing benefits at a cost of $10-25K/year. So far as I can > > tell there is not a requirement of a commitment of contributors and > > maintainers with the exception of the (currently closed) Platinum > > Membership, which costs $500K/year and requires at least 2 FTE > > equivalents contributing to OpenStack. > > Even this incredibly minimal requirement was famously not met for years > by one platinum member, and a (different) platinum member was accepted > without ever having contributed upstream in the past or apparently ever > intending to in the future. > > What I'm saying is that if this a the mechanism we want to use to drive > contributions, I can tell you now how it's gonna work out. > > The question we should be asking ourselves is why companies see value in > being sponsors of the foundation but not in contributing upstream, and > how we convince them of the value of the latter. > One of the reason could be the vendors have their own implementation of the OpenStack APIs instead of using the upstream implementation. Those vendors probably don't have much motivation on contributing upstream because they are not using the upstream code (except the APIs). A follow-up question is why those vendors chose to re-implement OpenStack instead of using the upstream one. This would be an interesting question to ask. > > One initiative the TC started on this front is this: > > > https://governance.openstack.org/tc/reference/upstream-investment-opportunities/index.html > > (BTW we could use help in converting the outdated Help Most Wanted > entries to this format. Volunteers welcome.) > > cheers, > Zane. > > > In general I see requirements > > for annual cash expenditure to the Foundation, as for membership in any > > joint commercial enterprise, but little that ensures the availability of > > skilled labor for ongoing maintenance of our projects. > > > > -- Tom Barron > > > >> > >> I agree on ttx proposal to reduce the TC number to 9 or 7, I do not > >> think this will make any > >> difference or slow down on any of the TC activity. 9 or 7 members are > >> enough in TC. > >> > >> As long as we get PTL(even without an election) we are in a good > >> position. This time only > >> 7 leaderless projects (6 actually with Cyborg PTL missing to propose > >> nomination in election repo and only on ML) are > >> not so bad number. 
But yes this is a sign of taking action before it > >> goes into more worst situation. > >> > >> -gmann > >> > >> > > >> > -- > >> > Chris Dent ٩◔̯◔۶ > https://anticdent.org/ > >> > freenode: cdent > >> > >> > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Sat Sep 7 22:21:36 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Sat, 7 Sep 2019 17:21:36 -0500 Subject: [infra][neutron] Requesting help to remove feature branches In-Reply-To: <20190906225750.tgbnwz6wu5gdfezo@yuggoth.org> References: <20190906225750.tgbnwz6wu5gdfezo@yuggoth.org> Message-ID: Hi, So we all stay on the same page, the four branches were removed by the infra team. Thanks!. This is the conversation we had in regards to the feature/lbaasv2 branch, where we agreed tht it was not necessary to save any state: http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2019-09-06.log.html#t2019-09-06T23:07:35 Cheers On Fri, Sep 6, 2019 at 5:58 PM Jeremy Stanley wrote: > On 2019-09-06 13:24:44 -0500 (-0500), Miguel Lavalle wrote: > > We have decided to remove from the Neutron repo the following feature > > branches: > > > > feature/graphql > > feature/lbaasv2 > > feature/pecan > > feature/qos > > > > We don't need to preserve any state from these branches. In the case of > the > > first one, no code was merged. The work in the other three branches is > > already merged into master. > > Sanity-checking feature/lbaasv2, `git merge-base` between it and > master suggest cc400e2 is the closest common ancestor. There are 4 > potentially substantive commits on feature/lbaasv2 past that point > which do not seem to appear in the master branch history: > > 7147389 Implement Jinja templates for haproxy config > cfa4a86 Tests for extension, db and plugin for LBaaS V2 > 02c01a3 Plugin/DB additions for version 2 of LBaaS API > 4ed8862 New extension for version 2 of LBaaS API > > Do you happen to know whether these need to be preserved (or what > happened with them)? > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anlin.kong at gmail.com Sat Sep 7 22:28:16 2019 From: anlin.kong at gmail.com (Lingxian Kong) Date: Sun, 8 Sep 2019 10:28:16 +1200 Subject: Need help trigger aodh alarm - All the steps I went through by details. In-Reply-To: References: Message-ID: OpenStack services in DevStack are managed by systemd, so you can check aodh-listener log by running `sudo journalctl -u devstack at aodh-listener.service | less` - Best regards, Lingxian Kong Catalyst Cloud On Sat, Sep 7, 2019 at 2:51 PM Anmar Salih wrote: > > Dear Lingxian, > > I cloud't find aodh log file. > > Also I did 'ps -ef | grep aodh' and here is > the response. > > Best regards. > > > On Thu, Sep 5, 2019 at 6:56 PM Lingxian Kong wrote: > >> Hi Anmar, >> >> Please see my comments in-line below. >> >> - >> Best regards, >> Lingxian Kong >> Catalyst Cloud >> >> >> On Wed, Sep 4, 2019 at 2:51 PM Anmar Salih >> wrote: >> >>> Hi Lingxian, >>> >>> First of all, I would like to apologize because the email is pretty >>> long. I listed all the steps I went through just to make sure that I did >>> everything correctly. >>> >> >> No need to apologize, more information is always helpful to solve the >> problem. >> >> >>> 4- Creating the webhook for the function by: openstack webhook create >>> --function 07edc434-a4b8-424a-8d3a-af253aa31bf8 . Here is a screen >>> capture for the response. 
I tried to copy >>> and paste the webhook_url " >>> http://192.168.1.155:7070/v1/webhooks/c5608648-bd73-478f-b452-ad1eabf93328/invoke" into >>> my internet browser, so I got 404 not found. I am not sure if this is >>> normal response or I have something wrong here. >>> >> >> Like Gaetan said, the webhook is supposed to be invoked by http POST. >> >> 9- Checking aodh alarm history by aodh alarm-history show >>> ea16edb9-2000-471b-88e5-46f54208995e -f yaml . So I got this response >>> >>> >>> 10- Last step is to check the function execution in qinling and here is >>> the response . (empty bracket). I am not >>> sure what is the problem. >>> >> >> Yeah, from the output of alarm history, the alarm is not triggered, as a >> result, there won't be execution created by the webhook. >> >> Seems like the aodh-listener didn't receive the message or the message >> was ignored. Could you paste the aodh-listener log but make sure: >> >> 1. `debug = True` in /etc/aodh/aodh.conf >> 2. Trigger the python script again >> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From anmar.salih1 at gmail.com Sat Sep 7 23:29:43 2019 From: anmar.salih1 at gmail.com (Anmar Salih) Date: Sat, 7 Sep 2019 19:29:43 -0400 Subject: Need help trigger aodh alarm - All the steps I went through by details. In-Reply-To: References: Message-ID: Dear Lingxian, I executed 'sudo journalctl -u devstack at aodh-listener.service | less' and got this response . Thank you. On Sat, Sep 7, 2019 at 6:28 PM Lingxian Kong wrote: > OpenStack services in DevStack are managed by systemd, so you can check > aodh-listener log by running `sudo journalctl -u > devstack at aodh-listener.service | less` > > - > Best regards, > Lingxian Kong > Catalyst Cloud > > > On Sat, Sep 7, 2019 at 2:51 PM Anmar Salih wrote: > >> >> Dear Lingxian, >> >> I cloud't find aodh log file. >> >> Also I did 'ps -ef | grep aodh' and here >> is the response. >> >> Best regards. >> >> >> On Thu, Sep 5, 2019 at 6:56 PM Lingxian Kong >> wrote: >> >>> Hi Anmar, >>> >>> Please see my comments in-line below. >>> >>> - >>> Best regards, >>> Lingxian Kong >>> Catalyst Cloud >>> >>> >>> On Wed, Sep 4, 2019 at 2:51 PM Anmar Salih >>> wrote: >>> >>>> Hi Lingxian, >>>> >>>> First of all, I would like to apologize because the email is pretty >>>> long. I listed all the steps I went through just to make sure that I did >>>> everything correctly. >>>> >>> >>> No need to apologize, more information is always helpful to solve the >>> problem. >>> >>> >>>> 4- Creating the webhook for the function by: openstack webhook create >>>> --function 07edc434-a4b8-424a-8d3a-af253aa31bf8 . Here is a screen >>>> capture for the response. I tried to copy >>>> and paste the webhook_url " >>>> http://192.168.1.155:7070/v1/webhooks/c5608648-bd73-478f-b452-ad1eabf93328/invoke" into >>>> my internet browser, so I got 404 not found. I am not sure if this is >>>> normal response or I have something wrong here. >>>> >>> >>> Like Gaetan said, the webhook is supposed to be invoked by http POST. >>> >>> 9- Checking aodh alarm history by aodh alarm-history show >>>> ea16edb9-2000-471b-88e5-46f54208995e -f yaml . So I got this response >>>> >>>> >>>> 10- Last step is to check the function execution in qinling and here is >>>> the response . (empty bracket). I am not >>>> sure what is the problem. >>>> >>> >>> Yeah, from the output of alarm history, the alarm is not triggered, as a >>> result, there won't be execution created by the webhook. 
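(On the 404 from the browser mentioned above: the browser sends a GET, while the webhook endpoint is meant to be invoked with POST, which is why the browser shows 404. A quick manual check, reusing the webhook URL from this thread, could be:)

    # Invoke the webhook with POST instead of GET.
    curl -i -X POST \
        http://192.168.1.155:7070/v1/webhooks/c5608648-bd73-478f-b452-ad1eabf93328/invoke
    # Then look for a new function execution; the command below comes from
    # the qinling OSC plugin and is assumed to be installed.
    openstack function execution list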
>>> >>> Seems like the aodh-listener didn't receive the message or the message >>> was ignored. Could you paste the aodh-listener log but make sure: >>> >>> 1. `debug = True` in /etc/aodh/aodh.conf >>> 2. Trigger the python script again >>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sun Sep 8 11:11:31 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 08 Sep 2019 20:11:31 +0900 Subject: [placement][ptl][tc] Call for Placement PTL position In-Reply-To: References: <16d0205c0b1.b18639584545.7154839133743907603@ghanshyammann.com> <1567771216.28660.0@smtp.office365.com> Message-ID: <16d10925fe1.1021dc3419292.6668714441615096551@ghanshyammann.com> ---- On Fri, 06 Sep 2019 22:34:41 +0900 Mohammed Naser wrote ---- > On Fri, Sep 6, 2019 at 8:04 AM Balázs Gibizer wrote: > > > > > > > > On Thu, Sep 5, 2019 at 6:20 PM, Chris Dent wrote: > > > > On Fri, 6 Sep 2019, Ghanshyam Mann wrote: > > > > With Ussuri Cycle PTL election completed, we left with Placement project as leaderless[1]. In today TC meeting[2], we discussed the few possibilities and decided to reach out to the eligible candidates to serve the PTL position. > > > > Thanks for being concerned about this, but it would have been useful if you included me (as the current PTL) and the rest of the Placement team in the discussion or at least confirmed plans with me before starting this seek-volunteers process. There are a few open questions we are still trying to resolve before we should jump to any decisions: * We are currently waiting to see if Tetsuro is available (he's been away for a few days). If he is, he'll be great, but we don't know yet if he can or wants to. * We've started, informally, discussing the option of pioneering the option of leaderless projects within Placement (we pioneer many other things there, may as well add that to the list) but without more discussion from the whole team (which can't happen because we don't have quorum of the actively involved people) and the TC it's premature. Leaderless would essentially mean consensually designating release liaisons and similar roles but no specific PTL. I think this is easily possible in a small in number, focused, and small feature-queue [1] group like Placement but would much harder in one of the larger groups like Nova. * We have several reluctant people who _can_ do it, but don't want to. Once we've explored the other ideas here and any others we can come up with, we can dredge one of those people up as a stand-in PTL, keeping the slot open. Because of [1] there's not much on the agenda for U. > > > > > > I guess I'm one of the reluctant people. I think technically I can do it but I don't want to commit to work when I don't see that I will have enough time to do it well. For me this is all about priorities and the amount of work I'm already commited to at the moment. Still I'm open to get tasks delegated to me, like doing the project update in Sanghai. > > If it's okay with you, would you like to share what are some of the > priorities and work that you feel is placed on a PTL which makes you > reluctant? > > PS, by no means I am trying to push for you to be PTL if you're not > currently interested, but I want to hear some of the community > thoughts about this (and feel free to reply privately) This is really important point. 
I can agree about PTL responsibility for big and very high traffic of work (review + feature request + discussions etc) are more time consuming but for other projects it should not be so bad. My personal experience as QA PTL (where you have lot of responsibility during release time, stable branches for devstack and other QA tools, stable testing job etc) is really good and does not consume my mush time (when I separated my PTL time and QA core developer time). Listing the items, responsibility which making PTL job very hard will be great way to improve it. -gmann > > > Cheers, > > gibi > > > > Since the Placement team is not planning to have an active presence at the PTG, nor planning to have much of a pre-PTG (as no one has stepped up with any feature ideas) we have some days or even weeks before it matters who the next PTL (if any) is, so if possible, let's not rush this. [1] It's been a design goal of mine from the start that Placement would quickly reach a position of stability and maturity that I liked to call "being done". By the end of Train we are expecting to be feature complete for any features that have been actively discussed in the recent past [2]. The main tasks in U will be responding to bug fixes and requests-for-explanations for the features that already exist (because people asked for them) but are not being used yet and getting the osc-placement client caught up. [2] The biggest thing that has been discussed as a "maybe we should do" for which there are no immediate plans is "resource provider sharding" or "one placement, many clouds". That's a thing we imagined people might ask for, but haven't yet, so there's little point doing it. > > -- > > Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent > > > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com > > From adrianc at mellanox.com Sun Sep 8 12:37:15 2019 From: adrianc at mellanox.com (Adrian Chiris) Date: Sun, 8 Sep 2019 12:37:15 +0000 Subject: [tc][neutron] Supported Linux distributions and their kernel In-Reply-To: References: <5e84afec-ca3b-4a9f-969a-69f4f748c893@www.fastmail.com> Message-ID: Thanks for the inputs, Supporting the last point release makes sense to me; however, current policy is a bit vague on that. Updating the doc would certainly help (if indeed the intention is to support the latest minor release). For my particular issue, it seems that CentOS major will likely be bumped for U release as stated by Sean. So, worst case is pushing to master after Train release. Thanks, Adrian. > -----Original Message----- > From: Sean Mooney > Sent: Friday, September 6, 2019 8:29 PM > To: Clark Boylan ; openstack- > discuss at lists.openstack.org > Subject: Re: [tc][neutron] Supported Linux distributions and their kernel > > On Thu, 2019-09-05 at 08:20 -0700, Clark Boylan wrote: > > On Thu, Sep 5, 2019, at 8:10 AM, Adrian Chiris wrote: > > > > > > Greetings, > > > > > > I was wondering what is the guideline in regards to which kernels > > > are supported by OpenStack in the various Linux distributions. 
> > > > > > > > > Looking at [1], Taking for example latest CentOS major (7): > > > > > > Every “minor” version is released with a different kernel version, > > > > > > the oldest being released in 2014 (CentOS 7.0, kernel 3.10.0-123) > > > and the newest released in 2018 (CentOS 7.6, kernel 3.10.0-957) > > > > > > > > > While I understand that OpenStack projects are expected to support > > > all CentOS 7.x releases. > > > > It is my understanding that CentOS (and RHEL?) only support the > current/latest point release of their distro [3]. > yes so each rhedhat openstack plathform (OSP) z stream (x.y.z) release is > tested and packaged only for the latest point release of rhel. we support > customer on older .z release if they are also on the version of rhel it was > tested with but we do expect customer to upgrage to the new rhel minor > version when they update there openstack to a newer .z relese. > this is becasue we update qemu and other products as part of the minor > release of rhel and we need to ensure that nova works with that qemu and > the kvm it was tested with. > > > We only test against that current point release. I don't expect we > > can be expected to support a distro release which the distro doesn't even > support. > ya i think that is sane. also if we are being totally honest old kernels have bug > many of which are security bugs so anyone running the original kernel any os > shipped with is deploying a vulnerable cloud. > > > > All that to say I would only worry about the most recent point release. > we might want to update the doc to that effect. > it currently say latest Centos Major > https://eur03.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgover > nance.openstack.org%2Ftc%2Freference%2Fproject-testing- > interface.html%23linux- > distributions&data=02%7C01%7Cadrianc%40mellanox.com%7C88d2a34c > 865d4c43a8d708d732f02cd3%7Ca652971c7d2e4d9ba6a4d149256f461b%7C0% > 7C0%7C637033879438364542&sdata=m7NmJgCGZ00hiseoZo5uqTc0xKyE > ro29acCKKaUsQhU%3D&reserved=0 > perhaps it should be lates centos point/minor release since that is what we > actully test with. > also centos 8 is apprently complete the RC work so hopfully we will see a > release soon. > https://eur03.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwiki.c > entos.org%2FAbout%2FBuilding_8&data=02%7C01%7Cadrianc%40mella > nox.com%7C88d2a34c865d4c43a8d708d732f02cd3%7Ca652971c7d2e4d9ba6a > 4d149256f461b%7C0%7C0%7C637033879438364542&sdata=Qzpuz408idk > D0v21Z0a1xdlfqnSbhGzjz7ygTFmLXc8%3D&reserved=0 > i have 0 info on centos but for Ussuri i hope we will have move to centos 8 > and python 3 only. > > > > > > > > Does the same applies for the kernels they _originally_ came out with? > > > > > > > > > The reason I’m asking, is because I was working on doing some > > > cleanup in neutron [2] for a workaround introduced because of an old > > > kernel bug, > > > > > > It is unclear to me if it is safe to introduce this change. > > > > > > > > > [1] > > > > https://eur03.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgo > > > vernance.openstack.org%2Ftc%2Freference%2Fproject-testing- > interface. 
> > > html%23linux- > distributions&data=02%7C01%7Cadrianc%40mellanox.com > > > > %7C88d2a34c865d4c43a8d708d732f02cd3%7Ca652971c7d2e4d9ba6a4d149256 > f46 > > > > 1b%7C0%7C0%7C637033879438364542&sdata=m7NmJgCGZ00hiseoZo5u > qTc0xK > > > yEro29acCKKaUsQhU%3D&reserved=0 > > > > > > [2] > > > https://eur03.safelinks.protection.outlook.com/?url=https%3A%2F%2Fre > > > > view.opendev.org%2F%23%2Fc%2F677095%2F&data=02%7C01%7Cadri > anc%40 > > > > mellanox.com%7C88d2a34c865d4c43a8d708d732f02cd3%7Ca652971c7d2e4d9 > ba6 > > > > a4d149256f461b%7C0%7C0%7C637033879438364542&sdata=ShNrkEaJQ > XBgin > > > rzET4YKXf06%2Bd6GL8CuOX5mByuGCA%3D&reserved=0 > > > > [3] > > https://eur03.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwiki > > .centos.org%2FFAQ%2FGeneral%23head- > dcca41e9a3d5ac4c6d900a991990fd11930867d6&data=02%7C01%7Cadria > nc%40mellanox.com%7C88d2a34c865d4c43a8d708d732f02cd3%7Ca652971c7 > d2e4d9ba6a4d149256f461b%7C0%7C0%7C637033879438364542&sdata= > du%2BagCLSO%2FQoPIq%2FKVYY8bmE4uM9op2b%2BgFL6QfSlcc%3D&r > eserved=0 > > > From tpb at dyncloud.net Sun Sep 8 16:33:52 2019 From: tpb at dyncloud.net (Tom Barron) Date: Sun, 8 Sep 2019 12:33:52 -0400 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <7cdee1c1-3541-17cf-5a9b-05a6f872c134@redhat.com> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> <16d00fc100d.104db03dc225299.3598510759501367665@ghanshyammann.com> <20190905113636.qwxa4fjxnju7tmip@barron.net> <7cdee1c1-3541-17cf-5a9b-05a6f872c134@redhat.com> Message-ID: <20190908163352.2autwoapaid6vim5@barron.net> On 06/09/19 13:44 -0400, Zane Bitter wrote: >On 5/09/19 7:36 AM, Tom Barron wrote: >>IIUC Gold Membership in the Foundation provides voting privileges at >>a cost of $50-200K/year and Corporate Sponsorship provides these >>plus various marketing benefits at a cost of $10-25K/year.  So far >>as I can tell there is not a requirement of a commitment of >>contributors and maintainers with the exception of the (currently >>closed) Platinum Membership, which costs $500K/year and requires at >>least 2 FTE equivalents contributing to OpenStack. > >Even this incredibly minimal requirement was famously not met for >years by one platinum member, and a (different) platinum member was >accepted without ever having contributed upstream in the past or >apparently ever intending to in the future. > >What I'm saying is that if this a the mechanism we want to use to >drive contributions, I can tell you now how it's gonna work out. I expect that you are right but if anyone has references to past communications between TC and Foundation about participation requirements or expectations for Members and Sponsors I'd appreciate pointers to these. (By analogy, it's helpful to know who has made commitments to the Paris Agreement [1], who has not, and actual track records even if one is not convinced that the agreement is going to work out.) [1] https://en.wikipedia.org/wiki/Paris_Agreement > >The question we should be asking ourselves is why companies see value >in being sponsors of the foundation but not in contributing upstream, >and how we convince them of the value of the latter. Participating companies are complex organizations whose decision makers have a mix of motives and goals, but functionally I think the classic tragedy of the commons model fits pretty well. 
It may be worth $50-500/K per year to foster the perception that one is a supporter or contributor to OpenStack, and to get the various marketing advantages that come along, even if one doesn't actively contribute to or maintain the software or community beyond that. > >One initiative the TC started on this front is this: > >https://governance.openstack.org/tc/reference/upstream-investment-opportunities/index.html > >(BTW we could use help in converting the outdated Help Most Wanted >entries to this format. Volunteers welcome.) Reframing "Help Wanted" as "Investment Opportunities" is IMO a great idea. There were seven entries for 2018 and there is one for 2019. Did the other six get done or does the help solicited amount to submitting governance reviews like the one you did for Glance [2] for the remaining 2018 items? [2] https://review.opendev.org/#/c/668054/ From mthode at mthode.org Sun Sep 8 18:21:57 2019 From: mthode at mthode.org (Matthew Thode) Date: Sun, 8 Sep 2019 13:21:57 -0500 Subject: [tc][neutron] Supported Linux distributions and their kernel In-Reply-To: References: Message-ID: <20190908182157.2bf7gbdxifzj4zew@mthode.org> On 19-09-05 15:10:17, Adrian Chiris wrote: > Greetings, > I was wondering what is the guideline in regards to which kernels are supported by OpenStack in the various Linux distributions. > > Looking at [1], Taking for example latest CentOS major (7): > Every "minor" version is released with a different kernel version, > the oldest being released in 2014 (CentOS 7.0, kernel 3.10.0-123) and the newest released in 2018 (CentOS 7.6, kernel 3.10.0-957) > > While I understand that OpenStack projects are expected to support all CentOS 7.x releases. > Does the same applies for the kernels they originally came out with? > > The reason I'm asking, is because I was working on doing some cleanup in neutron [2] for a workaround introduced because of an old kernel bug, > It is unclear to me if it is safe to introduce this change. > > [1] https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions > [2] https://review.opendev.org/#/c/677095/ > > Thanks, > Adrian. > For kernel support the way we (gentoo) do it (downstream) is to have checks to make sure the running kernel has the needed modules enabled (either statically or as a module). See the linked ebuild for our syntax (it basically checks /proc/config.gz though). https://github.com/gentoo/gentoo/blob/master/net-misc/openvswitch/openvswitch-2.11.1-r1.ebuild#L39-L54 -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Sun Sep 8 22:00:30 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 8 Sep 2019 22:00:30 +0000 Subject: [all][elections][ptl] Combined Project Team Lead and Technical Committee Election Conclusion and Results In-Reply-To: <20190908163352.2autwoapaid6vim5@barron.net> References: <20190904024941.qaapsjuddklree26@yuggoth.org> <01bb0934-44df-331f-e654-5232a59ffb13@openstack.org> <16d00fc100d.104db03dc225299.3598510759501367665@ghanshyammann.com> <20190905113636.qwxa4fjxnju7tmip@barron.net> <7cdee1c1-3541-17cf-5a9b-05a6f872c134@redhat.com> <20190908163352.2autwoapaid6vim5@barron.net> Message-ID: <20190908220029.wx7jaot6rnutmok2@yuggoth.org> On 2019-09-08 12:33:52 -0400 (-0400), Tom Barron wrote: [...] > There were seven entries for 2018 and there is one for 2019. 
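(Circling back to the kernel-config check Matthew describes above: outside of an ebuild, roughly the same test can be run directly against /proc/config.gz. This sketch assumes the kernel was built with CONFIG_IKCONFIG_PROC so that file exists, and CONFIG_OPENVSWITCH is only an example option:)

    # Check whether the running kernel has Open vSwitch support,
    # either built in (=y) or as a module (=m).
    if zgrep -qE '^CONFIG_OPENVSWITCH=(y|m)' /proc/config.gz; then
        echo "running kernel has openvswitch support"
    else
        echo "running kernel lacks CONFIG_OPENVSWITCH" >&2
        exit 1
    fi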
Did > the other six get done Not that, unfortunately, as far as I know. > or does the help solicited amount to submitting governance reviews > like the one you did for Glance [2] for the remaining 2018 items? > > [2] https://review.opendev.org/#/c/668054/ Yes, I believe that's what's still needed. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From Arkady.Kanevsky at dell.com Mon Sep 9 01:52:28 2019 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Mon, 9 Sep 2019 01:52:28 +0000 Subject: Thank you Stackers for five amazing years! In-Reply-To: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> References: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> Message-ID: <9bf6497d2e65429bbe83220a28a3c146@AUSX13MPS308.AMER.DELL.COM> Chris, Thank you so much for all the great help. Thanks, Arkady -----Original Message----- From: Chris Hoge Sent: Wednesday, September 4, 2019 11:24 AM To: OpenStack Discuss Subject: Thank you Stackers for five amazing years! [EXTERNAL EMAIL] Hi everyone, After more than nine years working in cloud computing and on OpenStack, I've decided that it is time for a change and will be moving on from the OpenStack Foundation. For the last five years I've had the honor of helping to support this vibrant community, and I'm going to deeply miss being a part of it. OpenStack has been a central part of my life for so long that it's hard to imagine a work life without it. I'm proud to have helped in some small way to create a lasting project and community that has, and will continue to, transform how infrastructure is managed. September 12 will officially be my last day with the OpenStack Foundation. As I make the move away from my responsibilities, I'll be working with community members to help ensure continuity of my efforts. Thank you to everyone for building such an incredible community filled with talented, smart, funny, and kind people. You've built something special here, and we're all better for it. I'll still be involved with open source. If you ever want to get in touch, be it with questions about work I've been involved with or to talk about some exciting new tech or to just catch up over a tasty meal, I'm just a message away in all the usual places. Sincerely, Chris chris at hogepodge.com Twitter/IRC/everywhere else: @hogepodge From andre at florath.net Mon Sep 9 05:40:43 2019 From: andre at florath.net (Andreas Florath) Date: Mon, 09 Sep 2019 07:40:43 +0200 Subject: [heat] Resource handling in Heat stacks In-Reply-To: References: <0f3f727581dc68f4f1ab26ed2ef47686811dbe07.camel@florath.net> Message-ID: <942fbd4b9e95cfa7049b61b2530265a2efa17a4a.camel@florath.net> On Fri, 2019-09-06 at 15:26 -0400, Zane Bitter wrote: > On 4/09/19 3:51 AM, Andreas Florath wrote: > > Many thanks! Works like a charm! > > > > Suggestion: document default value of 'delete_on_termination'. 😉 > > Patches accepted 😉 https://review.opendev.org/#/c/680912/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykarel at redhat.com Mon Sep 9 07:23:26 2019 From: ykarel at redhat.com (Yatin Karel) Date: Mon, 9 Sep 2019 12:53:26 +0530 Subject: [infra] Re: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? 
In-Reply-To: <20190903192248.b2mqozqobsxqgj7e@yuggoth.org> References: <20190814192440.GA3048@sm-workstation> <20190903190337.GA14785@sm-workstation> <20190903192248.b2mqozqobsxqgj7e@yuggoth.org> Message-ID: On Wed, Sep 4, 2019 at 12:57 AM Jeremy Stanley wrote: > > On 2019-09-03 14:03:37 -0500 (-0500), Sean McGinnis wrote: > [...] > > The release automation can only create branches, not remove them. > > That is something the infra team would need to do. > > > > I can't recall how this was handled in the past. Maybe someone > > from infra can shed some light on how EOL'ing stable branches > > should be handled for the no longer needed stable/* branches. > > We've done it different ways. Sometimes it's been someone from the > OpenDev/Infra sysadmins who volunteers to just delete the list of > branches requested, but more recently for large batches related to > EOL work we've temporarily elevated permissions for a member of the > Stable Branch (now Extended Maintenance SIG?) or Release teams. > -- Thanks Jeremy, Sean for all the information. Can someone from Release or Infra Team can do the needful of removing stable/ocata and stable/pike branch for TripleO projects being EOLed for pike/ocata in https://review.opendev.org/#/c/677478/ and https://review.opendev.org/#/c/678154/. > Jeremy Stanley Thanks and Regards Yatin Karel From renat.akhmerov at gmail.com Mon Sep 9 07:53:45 2019 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Mon, 9 Sep 2019 14:53:45 +0700 Subject: [mistral] Publish field in workflow tasks In-Reply-To: References: Message-ID: <91b6f2db-9b82-4c6c-8dde-e4f6519cc08d@Spark> Ali, I’m for the option 2.a because it’s not so difficult to implement but it’ll be the best effort to handle a situation more gracefully if someone puts “publish” in both places (old syntax and advanced syntax). Over time we’ll deprecate the old “publish” completely though. Thanks Renat Akhmerov @Nokia On 28 Aug 2019, 15:37 +0700, Ali Abdelal , wrote: > Hello, > > Currently, there are two "publish" fields, one in the task(regular "publish")-the scope is branch and not global, > and another under "on-success", “on-error” or “on-complete”. > > In the current behavior, regular "publish" is ignored if there is "publish" under "on-success", “on-error” or “on-complete” [1]. > > For example:- > (a) > version: '2.0' > wf1: >     tasks: >       t1: >         publish: >           res_x1: 1 >         on-success: >           publish: >             branch: >               res_x2: 2 > > (b) > version: '2.0' > wf2: >     tasks: >       t1: >         publish: >           res_x1: 1 > > "res_x1" won't be published in (a), but it will in (b). > > > We can either:- > > 1) Invalidate such syntax. > 2) Merge the two publishes together and if there are duplicate keys, there are two options:- >    a) What takes priority is what's in publish under "on-success" or “on-error” or “on-complete. >    b) Not allow having a duplicate. > > > What is your opinion? > And please tell us if you have other suggestions. > > [1] https://bugs.launchpad.net/mistral/+bug/1791449 -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack-dev at storpool.com Mon Sep 9 08:22:58 2019 From: openstack-dev at storpool.com (Peter Penchev) Date: Mon, 9 Sep 2019 11:22:58 +0300 Subject: [devstack][qa][python3] "also install the Python 2 dev library" - still needed? 
Message-ID: Hi, When devstack's `setup_dev_lib` function is invoked and USE_PYTHON3 has been specified, this function tries to also install the development library for Python 2.x, I guess just in case some package has not declared proper Python 3 support or something. It then proceeds to install the Python 3 version of the library and all its dependencies. Unfortunately there is a problem with that, and specifically with script files installed in the system's executable files directory, e.g. /usr/local/bin. The problem appears when some Python library has already been installed for Python 3 (and has installed its script files), but is now installed for Python 2 (overwriting the script files) and is then not forcefully reinstalled for Python 3, since it is already present. Thus, the script files are last modified by the Python 2 library installation and they have a hashbang line saying `python2.x` - so if something then tries to execute them, they will run and use modules and libraries for Python 2 only. We experienced this problem when running the cinderlib tests from Cinder's `playbooks/cinderlib-run.yaml` file - it finds a unit2 executable (installed by the unittest2 library) and runs it, hoping that unit2 will be able to discover and collect the cinderlib tests and load the cinderlib modules. However, since unittest2 has last been installed as a Python 2 library, unit2 runs with Python 2 and fails to locate the cinderlib modules. (Yes, we know that there are other ways to run the cinderlib tests; this message is about the problem exposed by this way of running them) The obvious solution would be to instruct the Python 2 pip to not install script (or other shared) files at all; unfortunately, https://github.com/pypa/pip/issues/3980 ("Option to exclude scripts on install"), detailing a very similar use case ("need it installed for Python 2, but want to use it with Python 3") has been open for almost exactly three years now with no progress. I wonder if I could try to help, but even if this issue is resolved, there will be some time before OpenStack can actually depend on a recent enough version of pip. A horrible workaround would be to find the binary directory before installing the Python 2 library (using something like `pip3.7 show somepackage` and then running some heuristics on the "Location" field), tar'ing it up and then restoring it... but I don't know if I even want to think about this. Another possible way forward would be to consider whether we still want the Python 2 libraries installed - is OpenStack's Python 3 transition reached a far enough stage to assume that any projects that still require Python 2 *and* fail to declare their Python 2 dependencies properly are buggy? To be honest, this seems the most reasonable path for me - drop the "also install the Python 2 libs" code and see what happens. I could try to make this change in a couple of test runs in our third-party Cinder CI system and see if something breaks. Here is a breakdown of what happens, with links to the log of the StorPool third-party CI system for Cinder: https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_43_55_691087 `stack.sh` invokes `pip_install` for `os-testr` https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_43_56_030839 `pip_install` sees that we want a Python 3 installation and invokes `pip3.7` to install os-testr. 
https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_43_59_869198 `pip3.7` wants to install `unittest2` https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_44_15_851337 `pip3.7` has installed `unittest2` - now `/usr/local/bin/unit2` has a hashbang line saying `python3.7` Now this is where it gets, uhm, interesting: https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_45_59_708737 `setup_dev_lib` is invoked for `os-brick` https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_45_59_723318 `setup_dev_lib`, seeing that we really want a Python 3 installation, decides to install `os-brick` for Python 2 just in case. https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_46_00_661346 `pip2.7` is invoked to install `os-brick` and its dependencies. https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_46_25_209365 `pip2.7` decides it wants to install `unittest2`, too. https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_47_20_924559 `pip2.7` has installed `unittest2`, and now `/usr/local/bin/unit2` has a hasbang line saying `python2.7` https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_47_21_591114 `setup_dev_lib` turns the Python 3 flag back on. https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_47_22_659564 `pip3.7` is invoked to install `os-brick` https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_47_36_759583 `pip3.7` decides (correctly) that it has already installed `unittest2`, so (only partially correctly) it does not need to install it again. Thus `/usr/local/bin/unit2` is left with a hashbang line saying `python2.7`. Thanks for reading this far, I guess :) G'luck, Peter -------------- next part -------------- An HTML attachment was scrubbed... URL: From dougal at redhat.com Mon Sep 9 08:32:26 2019 From: dougal at redhat.com (Dougal Matthews) Date: Mon, 9 Sep 2019 09:32:26 +0100 Subject: Invite Oleg Ovcharuk to join the Mistral Core Team In-Reply-To: References: Message-ID: +1, seems like a good addition to the team! On Thu, 5 Sep 2019 at 05:35, Renat Akhmerov wrote: > Andras, > > You just went one step ahead of me! I was going to promote Oleg in the end > of this week :) I’m glad that we coincided at this. Thanks! I’m for it with > my both hands! > > > Renat Akhmerov > @Nokia > On 4 Sep 2019, 17:33 +0700, András Kövi , wrote: > > I would like to invite Oleg Ovcharuk to join the > Mistral Core Team. Oleg has been a very active and enthusiastic contributor > to the project. He has definitely earned his way into our community. > > Thank you, > Andras > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From merlin.blom at bertelsmann.de Mon Sep 9 09:32:22 2019 From: merlin.blom at bertelsmann.de (Blom, Merlin, NMU-OI) Date: Mon, 9 Sep 2019 09:32:22 +0000 Subject: AW: [metrics] [telemetry] [stein] cpu_util In-Reply-To: <9058e09f-a5ce-4db9-5077-1217ece1695a@gmail.com> References: <9058e09f-a5ce-4db9-5077-1217ece1695a@gmail.com> Message-ID: >From Witek Bedyk on Re: [aodh] [heat] Stein: How to create alarms based on rate metrics like CPU utilization? Fr 16.08.2019 17:11 ' Hi all, You can also collect `cpu.utilization_perc` metric with Monasca and trigger Heat auto-scaling as we demonstrated in the hands-on workshop at the last Summit in Denver. Here the Heat template we've used [1]. You can find the workshop material here [2]. Cheers Witek [1] https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_sjamgade_monasca-2Dautoscaling_blob_master_final_autoscaling.yaml&d=DwICaQ&c=vo2ie5TPcLdcgWuLVH4y8lsbGPqIayH3XbK3gK82Oco&r=hTUN4-Trlb-8Fh11dR6m5VD1uYA15z7v9WL8kYigkr8&m=KDzBi0a41i4kfZG7LrvMjx6tKJCAZHM71I9snAHtDbU&s=wZLSXjvqYiPmMVbz8fgezCE1iwxZcQXRe3zZZW1JBFo&e= [2] https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_sjamgade_monasca-2Dautoscaling&d=DwICaQ&c=vo2ie5TPcLdcgWuLVH4y8lsbGPqIayH3XbK3gK82Oco&r=hTUN4-Trlb-8Fh11dR6m5VD1uYA15z7v9WL8kYigkr8&m=KDzBi0a41i4kfZG7LrvMjx6tKJCAZHM71I9snAHtDbU&s=M1D9BENrKX7HD43HfcYFuB8vdP9fKgAuGOTXtRq5aZI&e= ' Cheers Merlin -----Ursprüngliche Nachricht----- Von: Budai Laszlo Gesendet: Freitag, 16. August 2019 18:10 An: OpenStack Discuss Betreff: [metrics] [telemetry] [stein] cpu_util Hello all, the release release announce of ceilometer rocky is deprecating the cpu_util and *.rate metrics "* cpu_util and *.rate meters are deprecated and will be removed in future release in favor of the Gnocchi rate calculation equivalent." so we don't have them in Stein. Can you direct me to some document that describes how to achieve these with Gnocchi rate calculation? Thank you, Laszlo -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5195 bytes Desc: not available URL: From cdent+os at anticdent.org Mon Sep 9 09:44:39 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 9 Sep 2019 10:44:39 +0100 (BST) Subject: [placement] "now" worklist Message-ID: As we near the end of a cycle it can be a bit unclear what tasks are relevant or a priority for the placement project. I've made a worklist in storyboard https://storyboard.openstack.org/#!/worklist/754 called "placement now". It gathers stories from the placement group (placement, osc-placement, os-resource-classes, os-traits) that I've tagged with 'pnow' to mean "these are the things we should be concerned with in the near future". This helps to take off the radar anything from the following groups: * Features that will not be considered this cycle. * Anything related to osc-placement (which has already seen its likely last release for this cycle) This leaves placement (the service) bug fixes, and docs. Not yet there is an item for "documenting the new nested provider features", mostly because the story for that has not solidified. Anything that is currently on that list we should finish before the end of the cycle. I hope that having a focused list can help drive that. Thanks. 
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From tobias.urdin at binero.se Mon Sep 9 10:06:30 2019 From: tobias.urdin at binero.se (Tobias Urdin) Date: Mon, 9 Sep 2019 12:06:30 +0200 Subject: [nova][telemetry] does Telemetry still use the Nova server usage audit log API? In-Reply-To: References: <2c376a85-1dc0-03cc-bdb4-ba8b9f4edb70@gmail.com> Message-ID: I don't think ceilometer uses the compute.instance.exists event by default somewhere or atleast I cannot find a reference to it. What I do know however is that we have a billing system that polls the os-simple-tenant-usage API so if that is unaffected by the possible deprecation of instance_usage_audit then I don't think we use it. Best regards Tobias On 9/7/19 5:20 PM, Tim Bell wrote: > On 9/7/19 3:09 PM, Matt Riedemann wrote: >> On 9/6/2019 6:59 PM, melanie witt wrote: >>> * If Telemetry is no longer using the server usage audit log API, we >>> deprecate it in Nova and notify deployment tools to stop setting >>> [DEFAULT]/instance_usage_audit = true to prevent further creation of >>> nova.task_log records and recommend manual cleanup by users >> Deprecating the API would just be a signal to not develop new tools >> based on it since it's effectively unmaintained but that doesn't mean >> we can remove it since there could be non-Telemtry tools in the wild >> using it that we'd never hear about. You might not be suggesting an >> eventual path to removal of the API, I'm just bringing that part up >> since I'm sure people are thinking it. >> > Tools like cASO (https://github.com/IFCA/caso) use this API. This is > used by many of the EGI Federated Cloud sites to do accounting per VM > (https://egi-federated-cloud-integration.readthedocs.io/en/latest/openstack.html) > > >> I'm also assuming that API isn't multi-cell aware, meaning it won't >> traverse cells pulling records like listing servers or migration >> resources. > Given scaling issues with the current Telemetry implementation, I > suspect alternative approaches have had to be developed in any case. > CERN uses libvirt data extraction. >> As for the config option to run the periodic task that creates these >> records, that's disabled by default so deployment tools shouldn't be >> enabling it by default - but maybe some do if they are configured to >> deploy ceilometer. >> >>> or >>> >>> * If Telemetry is still using the server usage audit log API, we >>> create a new 'nova-manage db purge_task_log --before ' (or >>> similar) command that will hard delete nova.task_log records before a >>> specified date or all if --before is not specified >> If you can't remove the API then this is probably something that needs >> to happen regardless, though we likely won't know if anyone uses it. >> I'd consider it pretty low priority given how extremely latent this is >> and would expect anyone that's been running with this enabled in >> production has developed DB purge scripts for this table long ago. >> > From tobias.urdin at binero.se Mon Sep 9 10:15:00 2019 From: tobias.urdin at binero.se (Tobias Urdin) Date: Mon, 9 Sep 2019 12:15:00 +0200 Subject: AW: [metrics] [telemetry] [stein] cpu_util In-Reply-To: References: <9058e09f-a5ce-4db9-5077-1217ece1695a@gmail.com> Message-ID: The cpu_util is a pain-point for us as well, we will unfortunately need to add that metric back to keep backward compatibility to our customers. 
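For reference, the arithmetic behind the Gnocchi rate-based replacement is straightforward: the cumulative `cpu` metric reports CPU time in nanoseconds, and utilisation is just the rate of change of that value scaled by the wall-clock interval and the number of vCPUs. Below is a minimal Python sketch of that calculation; the helper name, the sample values and the vCPU count are made up for illustration -- this is not Ceilometer or Gnocchi code, just the formula the old cpu_util meter applied.

    from datetime import datetime

    def cpu_util_percent(sample1, sample2, vcpus=1):
        """Approximate the old cpu_util meter from two cumulative cpu samples.

        Each sample is (timestamp, cpu_time_ns), i.e. the cumulative CPU time
        in nanoseconds as reported by the `cpu` metric.
        """
        (t1, cpu1), (t2, cpu2) = sample1, sample2
        elapsed_ns = (t2 - t1).total_seconds() * 1e9
        if elapsed_ns <= 0:
            raise ValueError("samples must be in chronological order")
        # Scale by vCPU count so that 100 means "all vCPUs fully busy".
        return (cpu2 - cpu1) / (elapsed_ns * vcpus) * 100.0

    # Two made-up samples 30 seconds apart: 12 s of CPU time used on 2 vCPUs.
    s1 = (datetime(2019, 9, 9, 10, 0, 0), 4_000_000_000_000)
    s2 = (datetime(2019, 9, 9, 10, 0, 30), 4_012_000_000_000)
    print(cpu_util_percent(s1, s2, vcpus=2))   # -> 20.0

With a Gnocchi archive policy that includes a rate aggregation on the cpu metric (such as rate:mean, where available), the delta part of this is computed server-side; only the scaling by the vCPU count and the factor of 100 remain.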
Best regards Tobias On 9/9/19 11:37 AM, Blom, Merlin, NMU-OI wrote: > From Witek Bedyk on Re: [aodh] [heat] Stein: How to create alarms based on rate metrics like CPU utilization? > Fr 16.08.2019 17:11 > ' > Hi all, > > You can also collect `cpu.utilization_perc` metric with Monasca and trigger Heat auto-scaling as we demonstrated in the hands-on workshop at the last Summit in Denver. > > Here the Heat template we've used [1]. > You can find the workshop material here [2]. > > Cheers > Witek > > [1] > https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_sjamgade_monasca-2Dautoscaling_blob_master_final_autoscaling.yaml&d=DwICaQ&c=vo2ie5TPcLdcgWuLVH4y8lsbGPqIayH3XbK3gK82Oco&r=hTUN4-Trlb-8Fh11dR6m5VD1uYA15z7v9WL8kYigkr8&m=KDzBi0a41i4kfZG7LrvMjx6tKJCAZHM71I9snAHtDbU&s=wZLSXjvqYiPmMVbz8fgezCE1iwxZcQXRe3zZZW1JBFo&e= > [2] https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_sjamgade_monasca-2Dautoscaling&d=DwICaQ&c=vo2ie5TPcLdcgWuLVH4y8lsbGPqIayH3XbK3gK82Oco&r=hTUN4-Trlb-8Fh11dR6m5VD1uYA15z7v9WL8kYigkr8&m=KDzBi0a41i4kfZG7LrvMjx6tKJCAZHM71I9snAHtDbU&s=M1D9BENrKX7HD43HfcYFuB8vdP9fKgAuGOTXtRq5aZI&e= > ' > > Cheers > Merlin > > -----Ursprüngliche Nachricht----- > Von: Budai Laszlo > Gesendet: Freitag, 16. August 2019 18:10 > An: OpenStack Discuss > Betreff: [metrics] [telemetry] [stein] cpu_util > > Hello all, > > the release release announce of ceilometer rocky is deprecating the cpu_util and *.rate metrics > "* cpu_util and *.rate meters are deprecated and will be removed in > future release in favor of the Gnocchi rate calculation equivalent." > > so we don't have them in Stein. Can you direct me to some document that describes how to achieve these with Gnocchi rate calculation? > > Thank you, > Laszlo > From thierry at openstack.org Mon Sep 9 10:30:56 2019 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 9 Sep 2019 12:30:56 +0200 Subject: [i18n][tc] The future of I18n In-Reply-To: <817c9cf8-ca12-146b-af49-3f4345402888@gmail.com> References: <0ffa02d3-fef5-8fc3-1925-5c663b6c967d@openstack.org> <20190906133759.obgszlvqexgam5n3@csail.mit.edu> <817c9cf8-ca12-146b-af49-3f4345402888@gmail.com> Message-ID: <462cad35-832e-c5b0-8971-a97f386f78e0@openstack.org> Ian Y. Choi wrote: >> On Fri, Sep 06, 2019 at 11:36:38AM +0200, Thierry Carrez wrote: >> :The I18n project team had no PTL candidates for Ussuri, so the TC needs to >> :decide what to do with it. It just happens that Ian kindly volunteered to be >> :an election official, and therefore could not technically run for I18n PTL. >> :So if Ian is still up for taking it, we could just go and appoint him. > > I love I18n, and I could not imagine OpenStack world without I18n - I > would like to take I18n PTL role for Ussuari cycle if there is no > objection. Great! I posted a review to suggest that the TC appoints you at: https://review.opendev.org/680968 >> :That said, I18n evolved a lot, to the point where it might fit the SIG >> :profile better than the project team profile. >> [...] > > IMHO, since it seems that I18n team's release activities [5] are rather > stable, from the perspective, I think staying I18n team as SIG makes > sense, but please kindly consider the followings: > > - Translators who have contributed translations to official OpenStack > projects are currendly regarded as ATC and APC of the I18n project. >   It would be great if OpenStack TC and official project teams regard > those translation contribution as ATC and APC of corresponding official > projects, if I18n team stays as SIG. 
Note that SIG members are considered ATCs (just like project team members) and can vote in the TC election... so there would be no difference really (except I18n SIG members would no longer have to formally vote for a PTL). > [...] > - Another my brief understanding on the difference between as an > official team and as SIG from the perspective of Four Opens is that SIGs > and working groups seems that they have some flexibility using > non-opensource tools for communication. >   For example, me, as PTL currently encourage all the translators to > come to the tools official teams use such as IRC, mailing lists, and > Launchpad (note: I18n team has not migrated from Launchpad to > Storyboard) - I like to use them and >   I strongly believe that using such tools can assure that the team is > following Four Opens well. But sometimes I encounter some reality - > local language teams prefer to use their preferred communication protocols. >   I might need to think more how I18n team as SIG communicates well > with members, but I think the team members might want to more find out > how to better communicate with language teams (e.g., using Hangout, > Slack, and so on from the feedback) >   , and try to use better communication tools which might be > comfortable to translators who have little background on development. Yes, it's true that SIGs have more freedom in how they operate, and so the diversity of communication tools used by the translators might be another reason the I18n team fits the SIG profile at this point better than the Project Team profile. > Note that I have not discussed the details with team members - I am > still open with my thoughts, would like to more listen to opinions from > the team members, and originally wanted to expand the discussion with > such perspective during upcoming PTG > in Shanghai with more Chinese translators. > And dear OpenStackers including I18n team members & translators: please > kindly share your sincere thoughts. Certainly, the idea is not to rush anything -- the team will continue to operate as a project team for the time being. But if the team agrees, transitioning to a SIG is pretty cheap, and I feel like the SIG format fits the group better at this point (and gives extra flexibility)... so it is one thing to consider :) -- Thierry Carrez (ttx) From smooney at redhat.com Mon Sep 9 11:54:58 2019 From: smooney at redhat.com (Sean Mooney) Date: Mon, 09 Sep 2019 12:54:58 +0100 Subject: [devstack][qa][python3] "also install the Python 2 dev library" - still needed? In-Reply-To: References: Message-ID: <4402fa3186eff76382fa0b9171c4096db1d94d94.camel@redhat.com> On Mon, 2019-09-09 at 11:22 +0300, Peter Penchev wrote: > Hi, > > When devstack's `setup_dev_lib` function is invoked and USE_PYTHON3 has > been specified, this function tries to also install the development library > for Python 2.x, I guess just in case some package has not declared proper > Python 3 support or something. It then proceeds to install the Python 3 > version of the library and all its dependencies. > > Unfortunately there is a problem with that, and specifically with script > files installed in the system's executable files directory, e.g. > /usr/local/bin. The problem appears when some Python library has already > been installed for Python 3 (and has installed its script files), but is > now installed for Python 2 (overwriting the script files) and is then not > forcefully reinstalled for Python 3, since it is already present. 
Thus, the
> script files are last modified by the Python 2 library installation and
> they have a hashbang line saying `python2.x` - so if something then tries
> to execute them, they will run and use modules and libraries for Python 2
> only.
Yes, this is a long-standing issue. We discovered it a year ago but it was
never fixed.

In Ussuri I guess one of the first changes to devstack to make it Python 3
only will be to change that behaviour; I am not sure if we will be able to
change it before then. Whenever you use LIBS_FROM_GIT in your local.conf on
a Python 3 install, it will install those libraries twice, with both Python 2
and Python 3. I hope more distros elect to symlink /usr/bin/python to
python3; some distros have chosen to do that on Python 3-only systems and I
believe that is the correct approach.

When I encountered this it always resulted in the script header being
#!/usr/bin/python with no version suffix. I guess on a system where that
points to Python 3, the Python 2.7 install might write python2.7 there
instead?
>
> We experienced this problem when running the cinderlib tests from Cinder's
> `playbooks/cinderlib-run.yaml` file - it finds a unit2 executable
> (installed by the unittest2 library) and runs it, hoping that unit2 will be
> able to discover and collect the cinderlib tests and load the cinderlib
> modules. However, since unittest2 has last been installed as a Python 2
> library, unit2 runs with Python 2 and fails to locate the cinderlib
> modules. (Yes, we know that there are other ways to run the cinderlib
> tests; this message is about the problem exposed by this way of running
> them)
>
> The obvious solution would be to instruct the Python 2 pip to not install
> script (or other shared) files at all; unfortunately,
> https://github.com/pypa/pip/issues/3980 ("Option to exclude scripts on
> install"), detailing a very similar use case ("need it installed for Python
> 2, but want to use it with Python 3") has been open for almost exactly
> three years now with no progress. I wonder if I could try to help, but even
> if this issue is resolved, there will be some time before OpenStack can
> actually depend on a recent enough version of pip.
Well, the obvious solution is to stop doing this entirely. It was added as a
hack to ensure that if you use LIBS_FROM_GIT in your local.conf, those
libraries would always be installed from the git checkout that you specified
in your local.conf. For Train we are technically requiring all projects to
run under Python 3, so we could remove
> > Another possible way forward would be to consider whether we still want the > Python 2 libraries installed - is OpenStack's Python 3 transition reached a > far enough stage to assume that any projects that still require Python 2 > *and* fail to declare their Python 2 dependencies properly are buggy? To be > honest, this seems the most reasonable path for me - drop the "also install > the Python 2 libs" code and see what happens. I could try to make this > change in a couple of test runs in our third-party Cinder CI system and see > if something breaks. > > Here is a breakdown of what happens, with links to the log of the StorPool > third-party CI system for Cinder: > > https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_43_55_691087 > `stack.sh` invokes `pip_install` for `os-testr` > > https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_43_56_030839 > `pip_install` sees that we want a Python 3 installation and invokes > `pip3.7` to install os-testr. > > https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_43_59_869198 > `pip3.7` wants to install `unittest2` > > https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_44_15_851337 > `pip3.7` has installed `unittest2` - now `/usr/local/bin/unit2` has a > hashbang line saying `python3.7` > > Now this is where it gets, uhm, interesting: > > https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_45_59_708737 > `setup_dev_lib` is invoked for `os-brick` > > https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_45_59_723318 > `setup_dev_lib`, seeing that we really want a Python 3 installation, > decides to install `os-brick` for Python 2 just in case. > > https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_46_00_661346 > `pip2.7` is invoked to install `os-brick` and its dependencies. > > https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_46_25_209365 > `pip2.7` decides it wants to install `unittest2`, too. > > https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_47_20_924559 > `pip2.7` has installed `unittest2`, and now `/usr/local/bin/unit2` has a > hasbang line saying `python2.7` > > https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_47_21_591114 > `setup_dev_lib` turns the Python 3 flag back on. > > https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_47_22_659564 > `pip3.7` is invoked to install `os-brick` > > https://spfactory.storpool.com/logs/80/639180/35/check/cinder-storpool-tempest/82fb46b/job-output.txt.gz#_2019-09-09_05_47_36_759583 > `pip3.7` decides (correctly) that it has already installed `unittest2`, so > (only partially correctly) it does not need to install it again. > > Thus `/usr/local/bin/unit2` is left with a hashbang line saying `python2.7`. 
> > Thanks for reading this far, I guess :) > > G'luck, > Peter From mnaser at vexxhost.com Mon Sep 9 12:05:15 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 9 Sep 2019 08:05:15 -0400 Subject: [winstackers][powervmstackers][tc] removing winstackers and PowerVMStackers from TC governance In-Reply-To: References: <0CCB5020-D524-4304-8682-A015AEDB7C50@doughellmann.com> <466A5D87-5936-4F05-91D9-36ACD680FFA4@doughellmann.com> <31bc5922-3480-2fb6-dade-f76dab1e9013@fried.cc> Message-ID: On Fri, Sep 6, 2019 at 5:10 AM Thierry Carrez wrote: > > Divya K Konoor wrote: > > Missing the deadline for a PTL nomination cannot be the reason for > > removing governance. > > I agree with that, but missing the deadline twice in a row is certainly > a sign of some disconnect with the rest of the OpenStack community. > Project teams require a minimal amount of reactivity and presence, so it > is fair to question whether PowerVMStackers should continue as a project > team in the future. > > > PowerVMStackers continue to be an active project > > and would want to be continued to be governed under OpenStack. For PTL, > > an eligible candidate can still be appointed . > > There is another option, to stay under OpenStack governance but without > the constraints of a full project team: PowerVMStackers could be made an > OpenStack SIG. > > I already proposed that 6 months ago (last time there was no PTL nominee > for the team), on the grounds that interest in PowerVM was clearly a > special interest, and a SIG might be a better way to regroup people > interested in supporting PowerVM in OpenStack. > > The objection back then was that PowerVMStackers maintained a number of > PowerVM-related code, plugins and drivers that should ideally be adopted > by their consuming project teams (nova, neutron, ceilometer), and that > making it a SIG would endanger that adoption process. > > I still think it makes sense to consider PowerVMStackers as a Special > Interest Group. As long as the PowerVM-related code is not adopted by > the consuming projects, it is arguably a special interest, and not a > completely-integrated part of OpenStack components. > > The only difference in being a SIG (compared to being a project team) > would be to reduce the amount of mandatory tasks (like designating a PTL > every 6 months). You would still be able to own repositories, get room > at OpenStack events, vote on TC election... > > It would seem to be the best solution in your case. I echo all of this and I think at this point, it's better for the deliverables to be within a SIG. > -- > Thierry Carrez (ttx) > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From mnaser at vexxhost.com Mon Sep 9 12:08:01 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 9 Sep 2019 08:08:01 -0400 Subject: [all][ptl][tc][docs] Develope a code-review practices document In-Reply-To: References: Message-ID: On Fri, Sep 6, 2019 at 12:11 AM Trinh Nguyen wrote: > > Hi all, > > I find it's hard sometimes to handle situations in code-review, something likes solving conflicts while not upsetting developers, or suggesting a change to a patchset while still encouraging the committer, etc. I know there are already documents that guide us on how to do a code-review [2] and even projects develope their own procedures but I find they're more about technical issues rather than human communication. 
Currently reading Google's code-review practices [1] give me some inspiration to develop more human-centric code-review guidelines for OpenStack projects. IMO, it could be a great way to help project teams develop stronger relationship as well as encouraging newcomers. When the document is finalized, I then encourage PTLs to refer to that document in the project's docs. > > Let me know what you think and I will put a patchset after one or two weeks. I am very supportive of this and I agree with you on this. I'd be happy to see and go over what you are looking to propose! > [1] https://google.github.io/eng-practices/review/ > [2] https://docs.openstack.org/project-team-guide/review-the-openstack-way.html > [3] https://docs.openstack.org/doc-contrib-guide/docs-review.html > [4] https://docs.openstack.org/nova/rocky/contributor/code-review.html > [5] https://docs.openstack.org/neutron/pike/contributor/policies/code-reviews.html > > > Bests, > > -- > Trinh Nguyen > www.edlab.xyz > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From mnaser at vexxhost.com Mon Sep 9 12:09:23 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 9 Sep 2019 08:09:23 -0400 Subject: [ansible-sig] weekly meetings In-Reply-To: References: <7922685b-b7dd-3599-1fec-01c3cb4ce9bc@googlemail.com> Message-ID: Hi all, Sorry about the lack of details :) It will be held in #openstack-ansible-sig on Freenode. Thanks, Mohammed On Wed, Sep 4, 2019 at 9:24 PM Carter, Kevin wrote: > > Thanks Mohammed, I've added it to my calendar and look forward to getting started. > > -- > > Kevin Carter > IRC: Cloudnull > > > On Wed, Sep 4, 2019 at 8:17 PM Wesley Peng wrote: >> >> Hi >> >> on 2019/9/5 0:20, Mohammed Naser wrote: >> > For those interested in getting involved, the ansible-sig meetings >> > will be held weekly on Fridays at 2:00 pm UTC starting next week (13 >> > September 2019). >> > >> > Looking forward to discussing details and ideas with all of you! >> >> Is it a onsite meeting? where is the location? > > > This is a good question, I assume the meeting will be on IRC, on freenode, but what channel will we be using? #openstack-ansible-sig ? > >> >> >> thanks. >> -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From mriedemos at gmail.com Mon Sep 9 13:17:46 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 9 Sep 2019 08:17:46 -0500 Subject: [nova][telemetry] does Telemetry still use the Nova server usage audit log API? In-Reply-To: References: <2c376a85-1dc0-03cc-bdb4-ba8b9f4edb70@gmail.com> Message-ID: On 9/9/2019 5:06 AM, Tobias Urdin wrote: > What I do know however is that we have a billing system that polls the > os-simple-tenant-usage API so > if that is unaffected by the possible deprecation of > instance_usage_audit then I don't think we use it. Different APIs [1][2] so it's not a problem. 
[1] https://docs.openstack.org/api-ref/compute/#usage-reports-os-simple-tenant-usage [2] https://docs.openstack.org/api-ref/compute/#server-usage-audit-log-os-instance-usage-audit-log -- Thanks, Matt From balazs.gibizer at est.tech Mon Sep 9 13:23:19 2019 From: balazs.gibizer at est.tech (=?iso-8859-1?Q?Bal=E1zs_Gibizer?=) Date: Mon, 9 Sep 2019 13:23:19 +0000 Subject: [placement][ptl][tc] Call for Placement PTL position In-Reply-To: References: <16d0205c0b1.b18639584545.7154839133743907603@ghanshyammann.com> <1567771216.28660.0@smtp.office365.com> Message-ID: <1568035395.12646.1@smtp.office365.com> On Fri, Sep 6, 2019 at 3:34 PM, Mohammed Naser wrote: On Fri, Sep 6, 2019 at 8:04 AM Balázs Gibizer > wrote: I guess I'm one of the reluctant people. I think technically I can do it but I don't want to commit to work when I don't see that I will have enough time to do it well. For me this is all about priorities and the amount of work I'm already commited to at the moment. Still I'm open to get tasks delegated to me, like doing the project update in Sanghai. If it's okay with you, would you like to share what are some of the priorities and work that you feel is placed on a PTL which makes you reluctant? PS, by no means I am trying to push for you to be PTL if you're not currently interested, but I want to hear some of the community thoughts about this (and feel free to reply privately) I preceive the PTL role as a person who oversees the project and follows the status of the ongoing features and high severity bugs. A person who organizes Forum and PTG discussions and ensures that the results are documented. A person who tries to improve the overal collaboration in the given project. And I guess there are things on the PTL's plate that I'm not even aware of. This needs time and it needs commitment to have that time available during the whole cycle. I'm in a situation where I constantly feel the lack of time to do my current commitments (e.g. be a good Nova core, be a good Placement core, finish the feature I promised both to the community and internally to my employer.) I think it won't be fair from me to commit to the PTL role when I already see I would not have time to do it properly. On the personal side I guess I also affraid of not having enough skill to delegeta the above PTL related tasks to others. Based on the above my constructive suggestion is to try out that the Placement core team together try to fulfill the PTL's role. I know that for the TC it creates some extra pain as there would be no single point of contact for the Placement project. Cheers, gibi -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Sep 9 14:27:19 2019 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 9 Sep 2019 16:27:19 +0200 Subject: [release][cyborg] os-acc status Message-ID: Hi Cyborgs, One of your deliverables is the os-acc library. It has seen no change over this development cycle and therefore was not released at all in train. We have several options for this library now: 1- It's still very much alive and desired and just has exceptionally not seen much activity during this cycle. We should just cut a stable/train branch from the last release available (0.2.0) and continue in ussuri. 2- It's a valuable library, it just changes extremely rarely. We should make it independent from the release cycle and have it release at its own rhythm. 3- Development has stopped on this, and the library is not useful right now. 
We should retire this deliverable so that we do not build wrong expectations for our users. Please let us know which option fits the current status of os-acc. -- Thierry Carrez (ttx) From fungi at yuggoth.org Mon Sep 9 14:32:58 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 9 Sep 2019 14:32:58 +0000 Subject: [devstack][qa][python3] "also install the Python 2 dev library" - still needed? In-Reply-To: <4402fa3186eff76382fa0b9171c4096db1d94d94.camel@redhat.com> References: <4402fa3186eff76382fa0b9171c4096db1d94d94.camel@redhat.com> Message-ID: <20190909143258.4f32wsvamj666y2m@yuggoth.org> On 2019-09-09 12:54:58 +0100 (+0100), Sean Mooney wrote: [...] > i hope more distros elect to symlink /usr/bin/python to python 3 > some distros have chosen to do that on systems that are python > only and i believe that is the correct approch. I personally hope they don't, and at least my preferred Linux distro is not planning to do that any time in the foreseeable future (if ever). I see python and python3 as distinct programming languages with their own interpreters, and so any distro which by default pretends that its python3 interpreter is a python interpreter (by claiming the unversioned "python" executable name in the system context search path) is simply broken. > when i encountered this it was always resuliting on the script > header being #!/usr/bin/python with no version suffix i gues on a > system where that points to python 3 the python 2.7 install might > write python2.7 there instead? Yes, the correct solution is to update those to #!/usr/bin/python3 because at least some distros are going to cease providing a /usr/bin/python executable at all when they drop their 2.7 packages. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cdent+os at anticdent.org Mon Sep 9 14:33:37 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 9 Sep 2019 15:33:37 +0100 (BST) Subject: [tc] Campaign Question: Treating the Problem, not just the symptoms- Burnout, No Polling, etc In-Reply-To: <20190909132120.aqbv3plus2hp7q6j@pacific.linksys.moosehall> References: <99048F8B-EE87-4A3A-A689-8F05F8EBDBBE@doughellmann.com> <20190906131053.rofnz7zeoudctoif@yuggoth.org> <20190909132120.aqbv3plus2hp7q6j@pacific.linksys.moosehall> Message-ID: On Mon, 9 Sep 2019, Adam Spiers wrote: > Chris Dent wrote: >> On Fri, 6 Sep 2019, Jeremy Stanley wrote: >> >>> I'm disappointed that you don't think the software you're making is >>> open source. I think the software I'm making is open source, and if >>> I didn't I wouldn't be here. > > I wouldn't either. I'd be very worried to live in a world where there > was no serious open source rival to AWS, Azure, GCE etc. (One possible tl;dr of the below is: For at least some people working on OpenStack the more direct and immediate cause and result of their work (whatever their intent) is the enablement of corporate profit (through sales and support) not individual humans using the software.) >From some standpoints I would guess that OpenStack looks and behaves like open source: people work on it collaboratively and the code is available for anyone to change. And I would agree that from that standpoint it is open source, and the four opens are good both in letter and in spirit. I would also agree that the academic and other non-profit use of a OpenStack that Jeremy is very compelling and motivating. 
But the context of much of this thread has been about the experience of the developers making OpenStack: how they come to be in this situation, how they manage their work, who they work with, what drives decisions, etc. (What follows is a ramble. I apologize for not being able to write less. But you did ask, so here it goes.)

In the context of daily developer experience, things are less clear. They would be more clear, and would feel more like open source, if I was more frequently collaborating in the creation of code with people who were using OpenStack. But I don't. Most frequently I'm collaborating with people who, instead of using OpenStack, are helping to make something for other people (with whom they have infrequent collaborative contact) to use OpenStack.

For some people this is not the case. For example, many of the people who have been deeply involved with OpenStack infra use OpenStack all the time and also work hard to improve the code of OpenStack. But on a daily basis that isn't my experience. Nor does it feel like the experience of most of the people I tend to collaborate with. Yes, sometimes I will collaborate with someone from CERN to create a feature, but this is rare. Usually I collaborate with people from Intel, VMware, Red Hat, and a variety of Telco vendors, doing a thing to help an existing customer or hoped-for notional customer, both of whom are abstractions at a distance, not humans. This isn't a bad thing. Organizations collaborating in any way is great. But it doesn't _feel_ like "open source" to me. And that feeling is an important factor (I think) in analyzing the motivations people experience when working on OpenStack and the choices they make with regard to how they act in the environment.

As someone who has done what could be called open source since long before the term was invented, the common failure of corporate patrons to give maintainability and quality (of product and, critically, the experience of creating it) sufficient attention is a source of a great deal of resentment and internal conflict. I am far too conscious of the necessity to compensate for that failure if I want to feel a sense of well-being with what I'm helping to create (both in terms of product and the environment it is being created in). That is: I care enough to try to do what I think is right.

In this thread, and the one that started it, we've put forward "maybe we should just chill" as a bit of an antidote to burnout and overcommitment. While I rationally think that's the right idea, emotionally it is very hard to do, and the source of that difficulty is this: OpenStack has constituted itself over the years as the domain of contributing corporations, with many paid contributors for whom working on OpenStack is their job. At the same time we have also been very vocal about being not just open source, but a source of good wisdom (the four opens) on how to do open source well. The latter creates a community I want to believe in. A source of pride. The former creates a conflict of interest, a frequent inability to do the actually right thing for the long-term health of the community. A source of shame.

Continued pleas to get the corporates to do "open source" well -- that is, with correct attention to:

* developer experience
* maintainability
* architectural integrity
* deeper/closer ties to user engagement and their satisfaction

and thus something akin to "actually open source" -- have fallen on what, if actions speak louder than words, are deaf ears. This creates a conundrum.
I've tried a variety of ways out of it. One I'm experimenting with now is realizing that OpenStack really isn't, now, proper open source. And if it is not, then I don't have to care because they don't. > Again I'd be very interested to learn more about your take on what we > can do better. There are two directions to go: Maintain the mode of corporate-contribution-driven development. If this is to be healthy then the corps doing that contribution need to invest far more heavily in general, but especially in the items I've listed above at "correct attention". This would grant the community sufficient resources to evolve out of its aging models for development and governance. You have to have some free space to have the head space to get to new spaces. Start breaking down the corporate-contribution-driven development. Encourage professional openstack devs (like me) to age out of the system and discourage new ones coming in. Encourage feature development from and via users. Feature velocity might drop drastically but they might be features individuals actually use within a few weeks of their release rather than a few years. Some of this latter is already happening. Especially in what some people call the non-core projects; things associated with deployment for example. But in projects like nova we're heavily driven by trying to create a feature base which is predicted to drive sales, either directly or indirectly. And, though opinions and experiences differ, my opinion and experience is that driving sales as a direct factor is anathema to "open source". Indirect? Sure, whatever, if that floats your boat. The proper direct factor is humans. There's a lot more to this than I've stated here, but I hope that gives at least something in answer to the question. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From smooney at redhat.com Mon Sep 9 14:41:54 2019 From: smooney at redhat.com (Sean Mooney) Date: Mon, 09 Sep 2019 15:41:54 +0100 Subject: [release][cyborg] os-acc status In-Reply-To: References: Message-ID: <5ec9441fa8fed052bd958cf005a08ab18b88f91c.camel@redhat.com> On Mon, 2019-09-09 at 16:27 +0200, Thierry Carrez wrote: > Hi Cyborgs, > > One of your deliverables is the os-acc library. It has seen no change > over this development cycle and therefore was not released at all in train. > > We have several options for this library now: > > 1- It's still very much alive and desired and just has exceptionally not > seen much activity during this cycle. We should just cut a stable/train > branch from the last release available (0.2.0) and continue in ussuri. > > 2- It's a valuable library, it just changes extremely rarely. We should > make it independent from the release cycle and have it release at its > own rhythm. > > 3- Development has stopped on this, and the library is not useful right > now. We should retire this deliverable so that we do not build wrong > expectations for our users. i think ^ is the case. i dont activly work on cyborg but i belive os-acc is no longer planned to be used or developed. they can correct me if that is wrong but i think it can be removed as a deliverable. > > Please let us know which option fits the current status of os-acc. > From openstack-dev at storpool.com Mon Sep 9 14:47:13 2019 From: openstack-dev at storpool.com (Peter Penchev) Date: Mon, 9 Sep 2019 17:47:13 +0300 Subject: [devstack][qa][python3] "also install the Python 2 dev library" - still needed? 
In-Reply-To: References: <4402fa3186eff76382fa0b9171c4096db1d94d94.camel@redhat.com> Message-ID: On Mon, Sep 9, 2019 at 5:40 PM Peter Penchev wrote: > On Mon, Sep 9, 2019 at 2:55 PM Sean Mooney wrote: > >> On Mon, 2019-09-09 at 11:22 +0300, Peter Penchev wrote: >> > Hi, >> > >> > When devstack's `setup_dev_lib` function is invoked and USE_PYTHON3 has >> > been specified, this function tries to also install the development >> library >> > for Python 2.x, I guess just in case some package has not declared >> proper >> > Python 3 support or something. It then proceeds to install the Python 3 >> > version of the library and all its dependencies. >> > >> > Unfortunately there is a problem with that, and specifically with script >> > files installed in the system's executable files directory, e.g. >> > /usr/local/bin. The problem appears when some Python library has already >> > been installed for Python 3 (and has installed its script files), but is >> > now installed for Python 2 (overwriting the script files) and is then >> not >> > forcefully reinstalled for Python 3, since it is already present. Thus, >> the >> > script files are last modified by the Python 2 library installation and >> > they have a hashbang line saying `python2.x` - so if something then >> tries >> > to execute them, they will run and use modules and libraries for Python >> 2 >> > only. >> yes this is a long standing issue. we discovered it a year ago but it was >> never fix. >> >> in Ussrui i guess one of the first changes to devstack to make it python >> 3 only >> will be to chagne that behavior. im not sure if we will be able to change >> it before then. >> >> whenever you us libs_from_git in your local.conf on a python 3 install it >> will install >> them twice both with python 2 and python 3. i hope more distros elect to >> symlink /usr/bin/python to python 3 >> some distros have chosen to do that on systems that are python only and i >> believe that is the correct >> approch. >> >> when i encountered this it was always resuliting on the script header >> being #!/usr/bin/python with no >> version suffix i >> gues on a system where that points to python 3 the python 2.7 install >> might write python2.7 >> there instead? >> > > It depends on what version of pip is invoked; I think that the way > devstack invokes it nowadays it will always provide a version on the > shebang line. > > >> >> > >> > We experienced this problem when running the cinderlib tests from >> Cinder's >> > `playbooks/cinderlib-run.yaml` file - it finds a unit2 executable >> > (installed by the unittest2 library) and runs it, hoping that unit2 >> will be >> > able to discover and collect the cinderlib tests and load the cinderlib >> > modules. However, since unittest2 has last been installed as a Python 2 >> > library, unit2 runs with Python 2 and fails to locate the cinderlib >> > modules. (Yes, we know that there are other ways to run the cinderlib >> > tests; this message is about the problem exposed by this way of running >> > them) >> > >> > The obvious solution would be to instruct the Python 2 pip to not >> install >> > script (or other shared) files at all; unfortunately, >> > https://github.com/pypa/pip/issues/3980 ("Option to exclude scripts on >> > install"), detailing a very similar use case ("need it installed for >> Python >> > 2, but want to use it with Python 3") has been open for almost exactly >> > three years now with no progress. 
I wonder if I could try to help, but >> even >> > if this issue is resolved, there will be some time before OpenStack can >> > actually depend on a recent enough version of pip. >> well the obvious solution is to stop doing this entirly. >> it was added as a hack to ensure if you use LIB_FROM_GIT in you >> local.conf that those >> libs would always be install from the git checkout that you specified in >> you local.conf >> for train we are technically requireing all project to run under python 3 >> so we could remove >> the fallback mechanium of in stalling under python 2. it was there incase >> a service installed >> under python 2 to ensure it used the same version of the lib and did not >> use a version form >> pypi instead. i wanted to stop doing this last year but we could not >> becase not all project >> could run under python 3. but now that they should be able to we dont >> need this hack anymore. >> we should change it to respec the python version you have selected. that >> will speed >> up stacking speed as we wont have to install everything twice and fix the >> issue you have encountered. >> > > Yeah, thanks for confirming my thoughts that this might be the right > solution. I've proposed https://review.opendev.org/681029/ (and set > workflow -1) to wait for the Ussuri cycle. > > >> > >> > A horrible workaround would be to find the binary directory before >> > installing the Python 2 library (using something like `pip3.7 show >> > somepackage` and then running some heuristics on the "Location" field), >> > tar'ing it up and then restoring it... but I don't know if I even want >> to >> > think about this. >> > >> > Another possible way forward would be to consider whether we still want >> the >> > Python 2 libraries installed - is OpenStack's Python 3 transition >> reached a >> > far enough stage to assume that any projects that still require Python 2 >> > *and* fail to declare their Python 2 dependencies properly are buggy? >> To be >> > honest, this seems the most reasonable path for me - drop the "also >> install >> > the Python 2 libs" code and see what happens. I could try to make this >> > change in a couple of test runs in our third-party Cinder CI system and >> see >> > if something breaks. >> > > G'luck, > Peter > > Argh, I sent this from the wrong account, did I not... G'luck, Peter -------------- next part -------------- An HTML attachment was scrubbed... URL: From farida.elzanaty at mail.mcgill.ca Mon Sep 9 15:02:01 2019 From: farida.elzanaty at mail.mcgill.ca (Farida El Zanaty) Date: Mon, 9 Sep 2019 15:02:01 +0000 Subject: [nova][neutron][all][openstack-devs] studying and analysing Openstack developers Message-ID: Hi! I am Farida El-Zanaty from McGill University. Under the supervision of Prof. Shane McIntosh, my research aims to study design discussions that occur between developers during code reviews. Last year, we published a study about the frequency and types of such discussions that occur in OpenStack Nova and Neutron (http://rebels.ece.mcgill.ca/papers/esem2018_elzanaty.pdf). We are reaching out to OpenStack developers to better understand their perspectives on design discussions during code reviews. Those who are interested can start by participating in our 10-minute survey about their experiences as both the code reviewer and author. Survey participants will be entered into a raffle for a $50 Amazon gift card. 
Survey: https://forms.gle/Hhn191f6cxF5hVgG8 Thanks for your time, Farida El-Zanaty -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Mon Sep 9 15:33:47 2019 From: melwittt at gmail.com (melanie witt) Date: Mon, 9 Sep 2019 08:33:47 -0700 Subject: [nova][telemetry] does Telemetry still use the Nova server usage audit log API? In-Reply-To: References: <2c376a85-1dc0-03cc-bdb4-ba8b9f4edb70@gmail.com> Message-ID: <91d2a3b7-cbbe-4e89-50a9-a3f12cc92e43@gmail.com> On 9/7/19 6:09 AM, Matt Riedemann wrote: > On 9/6/2019 6:59 PM, melanie witt wrote: >> >> * If Telemetry is no longer using the server usage audit log API, we >> deprecate it in Nova and notify deployment tools to stop setting >> [DEFAULT]/instance_usage_audit = true to prevent further creation of >> nova.task_log records and recommend manual cleanup by users > > Deprecating the API would just be a signal to not develop new tools > based on it since it's effectively unmaintained but that doesn't mean we > can remove it since there could be non-Telemtry tools in the wild using > it that we'd never hear about. You might not be suggesting an eventual > path to removal of the API, I'm just bringing that part up since I'm > sure people are thinking it. > > I'm also assuming that API isn't multi-cell aware, meaning it won't > traverse cells pulling records like listing servers or migration resources. > > As for the config option to run the periodic task that creates these > records, that's disabled by default so deployment tools shouldn't be > enabling it by default - but maybe some do if they are configured to > deploy ceilometer. Indeed, tripleo enables the periodic task when deploying Telemetry, which is how we have customers hitting the unbounded nova.task_log table growth problem. >> >> or >> >> * If Telemetry is still using the server usage audit log API, we >> create a new 'nova-manage db purge_task_log --before ' (or >> similar) command that will hard delete nova.task_log records before a >> specified date or all if --before is not specified > > If you can't remove the API then this is probably something that needs > to happen regardless, though we likely won't know if anyone uses it. I'd > consider it pretty low priority given how extremely latent this is and > would expect anyone that's been running with this enabled in production > has developed DB purge scripts for this table long ago. Yeah, based on Tim Bell's reply later in this thread, we can't remove the API (tools in the wild using it). So, I'll propose a new nova-manage command because we don't appear to have a standard way of cleaning up nova.task_log records for customers either, yet. -melanie From francois.scheurer at everyware.ch Mon Sep 9 15:36:34 2019 From: francois.scheurer at everyware.ch (Francois Scheurer) Date: Mon, 9 Sep 2019 17:36:34 +0200 Subject: [keystone] cannot use 'openstack trust list' without admin role In-Reply-To: <29841c08-d255-2ee4-346a-bcce04b7f4ad@everyware.ch> References: <29841c08-d255-2ee4-346a-bcce04b7f4ad@everyware.ch> Message-ID: Hello I think this old link is explaining the reason behind this "inconsistency" with the policy.json rules: https://bugs.launchpad.net/keystone/+bug/1373599 So to summarize, the RBAC is allowing identity:list_trusts for a non admin user (cf. policy.json) but then hard coded policies deny the request if non admin. Quote: The policies in policy.json can make these operations more restricted, but not less restricted than the hard-coded restrictions. 
We can't simply remove these settings from policy.json, as that would cause the "default" rule to be used which makes trusts unusable in the case of the default "default" rule of "admin_required". Cheers Francois On 9/9/19 1:57 PM, Francois Scheurer wrote: > > Hi All > > > I found an answer here > > https://bugs.launchpad.net/keystone/+bug/1373599 > > On 9/6/19 5:59 PM, Francois Scheurer wrote: > Dear Keystone Experts, > I have an issue with the openstack client in stage (using Rocky), using a user 'fsc' without 'admin' role and with password auth. > 'openstack trust create/show' works. > 'openstack trust list' is denied. > But keystone policy.json says: >     "identity:create_trust": "user_id:%(trust.trustor_user_id)s", >     "identity:list_trusts": "", >     "identity:list_roles_for_trust": "", >     "identity:get_role_for_trust": "", >     "identity:delete_trust": "", >     "identity:get_trust": "", > So "openstack list trusts" is always allowed. > In keystone log (I replaced the uid's by names in the ouput below) I see that 'identity:list_trusts()' was actually granted > but just after that a_*admin_required()*_ is getting checked and fails... I wonder why... > There is also a flag*is_admin_project=True* in the rbac creds for some reason... > > Any clue? Many thanks in advance! > > > Cheers > Francois > > > #openstack --os-cloud stage-fsc trust create --project fscproject --role creator fsc fsc > #=> fail because of the names and policy rules, but using uid's it works > openstack --os-cloud stage-fsc trust create --project aeac4b07d8b144178c43c65f29fa9dac --role 085180eeaf354426b01908cca8e82792 3e9b1a4fe95048a3b98fb5abebd44f6c 3e9b1a4fe95048a3b98fb5abebd44f6c > +--------------------+----------------------------------+ > | Field              | Value                            | > +--------------------+----------------------------------+ > | deleted_at         | None                             | > | expires_at         | None                             | > | id                 | e74bcdf125e049c69c2e0ab1b182df5b | > | impersonation      | False                            | > | project_id         | fscproject | > | redelegation_count | 0                                | > | remaining_uses     | None                             | > | roles              | creator                          | > | trustee_user_id    | fsc | > | trustor_user_id    | fsc | > +--------------------+----------------------------------+ > > openstack --os-cloud stage-fsc trust show e74bcdf125e049c69c2e0ab1b182df5b > +--------------------+----------------------------------+ > | Field              | Value                            | > +--------------------+----------------------------------+ > | deleted_at         | None                             | > | expires_at         | None                             | > | id                 | e74bcdf125e049c69c2e0ab1b182df5b | > | impersonation      | False                            | > | project_id         | fscproject | > | redelegation_count | 0                                | > | remaining_uses     | None                             | > | roles              | creator                          | > | trustee_user_id    | fsc | > | trustor_user_id    | fsc | > +--------------------+----------------------------------+ > > #this fails: > openstack --os-cloud stage-fsc trust list > *You are not authorized to perform the requested action: > admin_required. 
(HTTP 403)* > > > > > > > > -- EveryWare AG François Scheurer Senior Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: francois.scheurer at everyware.ch web: http://www.everyware.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From yongle.li at gmail.com Mon Sep 9 15:56:26 2019 From: yongle.li at gmail.com (Fred Li) Date: Mon, 9 Sep 2019 23:56:26 +0800 Subject: Thank you Stackers for five amazing years! In-Reply-To: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> References: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> Message-ID: Chris, Thank you for your help and fruitful work in interoperability working group, as well as many other work in OpenStack community. We will miss you and see you later somewhere in the world. On Thu, Sep 5, 2019 at 12:27 AM Chris Hoge wrote: > Hi everyone, > > After more than nine years working in cloud computing and on OpenStack, > I've > decided that it is time for a change and will be moving on from the > OpenStack > Foundation. For the last five years I've had the honor of helping to > support > this vibrant community, and I'm going to deeply miss being a part of it. > OpenStack has been a central part of my life for so long that it's hard to > imagine a work life without it. I'm proud to have helped in some small way > to > create a lasting project and community that has, and will continue to, > transform how infrastructure is managed. > > September 12 will officially be my last day with the OpenStack Foundation. > As I > make the move away from my responsibilities, I'll be working with community > members to help ensure continuity of my efforts. > > Thank you to everyone for building such an incredible community filled with > talented, smart, funny, and kind people. You've built something special > here, > and we're all better for it. I'll still be involved with open source. If > you > ever want to get in touch, be it with questions about work I've been > involved > with or to talk about some exciting new tech or to just catch up over a > tasty > meal, I'm just a message away in all the usual places. > > Sincerely, > Chris > > chris at hogepodge.com > Twitter/IRC/everywhere else: @hogepodge > -- Regards Fred Li (李永乐) -------------- next part -------------- An HTML attachment was scrubbed... URL: From sundar.nadathur at intel.com Mon Sep 9 15:57:37 2019 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Mon, 9 Sep 2019 15:57:37 +0000 Subject: [release][cyborg] os-acc status In-Reply-To: <5ec9441fa8fed052bd958cf005a08ab18b88f91c.camel@redhat.com> References: <5ec9441fa8fed052bd958cf005a08ab18b88f91c.camel@redhat.com> Message-ID: <1CC272501B5BC543A05DB90AA509DED52760B6EC@fmsmsx122.amr.corp.intel.com> Hi Thierry and all, Os-acc is not relevant and will be discontinued. This was communicated in [1]. A patch has been filed for the same [2]. I will start the work after Train-3 milestone. That was also mentioned in [3]. 
[1] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008473.html [2] https://review.opendev.org/#/c/676331/ [3] https://review.opendev.org/#/c/680091/ Regards, Sundar > -----Original Message----- > From: Sean Mooney > Sent: Monday, September 9, 2019 7:42 AM > To: Thierry Carrez ; openstack- > discuss at lists.openstack.org > Subject: Re: [release][cyborg] os-acc status > > On Mon, 2019-09-09 at 16:27 +0200, Thierry Carrez wrote: > > Hi Cyborgs, > > > > One of your deliverables is the os-acc library. It has seen no change > > over this development cycle and therefore was not released at all in train. > > > > We have several options for this library now: > > > > 1- It's still very much alive and desired and just has exceptionally > > not seen much activity during this cycle. We should just cut a > > stable/train branch from the last release available (0.2.0) and continue in > ussuri. > > > > 2- It's a valuable library, it just changes extremely rarely. We > > should make it independent from the release cycle and have it release > > at its own rhythm. > > > > 3- Development has stopped on this, and the library is not useful > > right now. We should retire this deliverable so that we do not build > > wrong expectations for our users. > i think ^ is the case. > i dont activly work on cyborg but i belive os-acc is no longer planned to be > used or developed. they can correct me if that is wrong but i think it can be > removed as a deliverable. > > > > Please let us know which option fits the current status of os-acc. > > > From colleen at gazlene.net Mon Sep 9 15:57:53 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Mon, 09 Sep 2019 08:57:53 -0700 Subject: [keystone] Pre-feature-freeze update In-Reply-To: <5bac8e07-f63a-4bf9-82c1-fa0470a14b0e@www.fastmail.com> References: <5bac8e07-f63a-4bf9-82c1-fa0470a14b0e@www.fastmail.com> Message-ID: <5729a3c4-ecf1-40ef-9d12-3d640e8661bc@www.fastmail.com> On Fri, Sep 6, 2019, at 21:57, Colleen Murphy wrote: [snipped] > > * CI > > After skimming the meeting logs I saw the unit test timeout problem was > discussed and a temporary workaround was proposed[8]. This sounded like > a great idea but it seems that no one implemented it, so I did[9]. > Unfortunately this will conflict with all the > system-scope/default-roles patches in flight. With how many changes > need to go in and how slow it will be with all of them needing to be > rechecked and continually making the problem even worse, I propose we > go ahead and merge the workaround ASAP and update all the in-flight > changes to move the protection tests to the new location. > Alternatively, we can raise the timeouts temporarily as proposed here[11], then merge all the policy changes, then merge the protection test split. [snipped] > [8] > http://eavesdrop.openstack.org/meetings/keystone/2019/keystone.2019-08-27-16.01.log.html#l-84 > [9]https://review.opendev.org/680788 [11] https://review.opendev.org/680798 > > Colleen > > From francois.scheurer at everyware.ch Mon Sep 9 16:23:08 2019 From: francois.scheurer at everyware.ch (Francois Scheurer) Date: Mon, 9 Sep 2019 18:23:08 +0200 Subject: [mistral] cron triggers execution fails on identity:validate_token with non-admin users Message-ID: <241f5d5e-8b21-9081-c1d1-66e908047335@everyware.ch> Dear All We are using Mistral 7.0.1.1 with  Openstack Rocky. 
(with federated users) We can create and execute a workflow via horizon, but cron triggers always fail with this error:     {         "result":             "The action raised an exception [ action_ex_id=ef878c48-d0ad-4564-9b7e-a06f07a70ded,                     action_cls='',                     attributes='{u'client_method_name': u'servers.find'}',                     params='{                         u'action_region': u'ch-zh1',                         u'name': u'42724489-1912-44d1-9a59-6c7a4bebebfa'                     }'                 ]                 \n NovaAction.servers.find failed: You are not authorized to perform the requested action: identity:validate_token. (HTTP 403) (Request-ID: req-ec1aea36-c198-4307-bf01-58aca74fad33)             "     } Adding the role *admin* or *service* to the user logged in horizon is "fixing" the issue, I mean that the cron trigger then works as expected, but it would be obviously a bad idea to do this for all normal users ;-) So my question: is it a config problem on our side ? is it a known bug? or is it a feature in the sense that cron triggers are for normal users? After digging in the keystone debug logs (see at the end below), I found that RBAC check identity:validate_token an deny the authorization. But according to the policy.json (in keystone and in horizon), rule:owner should be enough to grant it...:             "identity:validate_token": "rule:service_admin_or_owner",                 "service_admin_or_owner": "rule:service_or_admin or rule:owner",                     "service_or_admin": "rule:admin_required or rule:service_role",                         "service_role": "role:service",                     "owner": "user_id:%(user_id)s or user_id:%(target.token.user_id)s", Thank you in advance for your help. 
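For reference, one way to see whether the policy rule or a hard-coded check is rejecting the call is to exercise identity:validate_token directly as the non-admin user. A rough sketch using only standard keystone v3 calls; it assumes OS_AUTH_URL already points at the v3 endpoint, and note that in the cron-trigger case the token Mistral validates is trust-scoped, so the result can differ from a plain user token:

    # get a token as the non-admin user, then ask keystone to validate it
    TOKEN=$(openstack token issue -f value -c id)
    curl -s -o /dev/null -w "%{http_code}\n" \
        -H "X-Auth-Token: $TOKEN" \
        -H "X-Subject-Token: $TOKEN" \
        "$OS_AUTH_URL/auth/tokens"
    # 200 means identity:validate_token passed for a self-owned token;
    # 403 reproduces the failure shown in the cron trigger result above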
Best Regards Francois Scheurer Keystone logs:         2019-09-05 09:38:00.902 29 DEBUG keystone.policy.backends.rules [req-1a276b9d-8276-4ec3-b516-f51f86cd1df6 fsc fscproject - testdom testdom]             enforce identity:validate_token:             {                'service_project_id':None,                'service_user_id':None,                'service_user_domain_id':None,                'service_project_domain_id':None,                'trustor_id':None,                'user_domain_id':u'testdom',                'domain_id':None,                'trust_id':u'mytrustid',                'project_domain_id':u'testdom',                'service_roles':[],                'group_ids':[],                'user_id':u'fsc',                'roles':[                   u'_member_',                   u'creator',                   u'reader',                   u'heat_stack_owner',                   u'member',                   u'load-balancer_member'],                'system_scope':None,                'trustee_id':None,                'domain_name':None,                'is_admin_project':True,                'token':,                'project_id':u'fscproject'             } enforce /var/lib/kolla/venv/local/lib/python2.7/site-packages/keystone/policy/backends/rules.py:33         2019-09-05 09:38:00.920 29 WARNING keystone.common.wsgi [req-1a276b9d-8276-4ec3-b516-f51f86cd1df6 fsc fscproject - testdom testdom]             You are not authorized to perform the requested action: identity:validate_token.: *ForbiddenAction: You are not authorized to perform the requested action: identity:validate_token.* -- EveryWare AG François Scheurer Senior Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: francois.scheurer at everyware.ch web: http://www.everyware.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From chris at openstack.org Mon Sep 9 16:38:09 2019 From: chris at openstack.org (Chris Hoge) Date: Mon, 9 Sep 2019 09:38:09 -0700 Subject: [oslo][nova] Nova causes MySQL timeouts In-Reply-To: <02fa1644-34a1-0fdf-9048-a668ae86de76@nemebean.com> References: <02fa1644-34a1-0fdf-9048-a668ae86de76@nemebean.com> Message-ID: In my personal experience, running Nova on a four core machine without limiting the number of database connections will easily exhaust the available connections to MySQL/MariaDB. Keep in mind that the limit applies to every instance of a service, so if Nova starts 'm' services replicated for 'n' cores with 'd' possible connections you'll be up to ‘m x n x d' connections. It gets big fast. The default setting of '0' (that is, unlimited) does not make for a good first-run experience, IMO. This issue comes up every few years or so, and the consensus previously is that 200-2000 connections is recommended based on your needs. Your database has to be configured to handle the load and looking at the configuration value across all your services and setting them consistently and appropriately is important. http://lists.openstack.org/pipermail/openstack-dev/2015-April/061808.html > On Sep 6, 2019, at 7:34 AM, Ben Nemec wrote: > > Tagging with oslo as this sounds related to oslo.db. 
> > On 9/5/19 7:37 PM, Albert Braden wrote: >> After more googling it appears that max_pool_size is a maximum limit on the number of connections that can stay open, and max_overflow is a maximum limit on the number of connections that can be temporarily opened when the pool has been consumed. It looks like the defaults are 5 and 10 which would keep 5 connections open all the time and allow 10 temp. >> Do I need to set max_pool_size to 0 and max_overflow to the number of connections that I want to allow? Is that a reasonable and correct configuration? Intuitively that doesn't seem right, to have a pool size of 0, but if the "pool" is a group of connections that will remain open until they time out, then maybe 0 is correct? > > I don't think so. According to [0] and [1], a pool_size of 0 means unlimited. You could probably set it to 1 to minimize the number of connections kept open, but then I expect you'll have overhead from having to re-open connections frequently. > > It sounds like you could use a NullPool to eliminate connection pooling entirely, but I don't think we support that in oslo.db. Based on the error message you're seeing, I would take a look at connection_recycle_time[2]. I seem to recall seeing a comment that the recycle time needs to be shorter than any of the timeouts in the path between the service and the db (so anything like haproxy or mysql itself). Shortening that, or lengthening intervening timeouts, might get rid of these disconnection messages. > > 0: https://docs.openstack.org/oslo.db/stein/reference/opts.html#database.max_pool_size > 1: https://docs.sqlalchemy.org/en/13/core/pooling.html#sqlalchemy.pool.QueuePool.__init__ > 2: https://docs.openstack.org/oslo.db/stein/reference/opts.html#database.connection_recycle_time > >> *From:* Albert Braden >> *Sent:* Wednesday, September 4, 2019 10:19 AM >> *To:* openstack-discuss at lists.openstack.org >> *Cc:* Gaëtan Trellu >> *Subject:* RE: Nova causes MySQL timeouts >> We’re not setting max_pool_size nor max_overflow option presently. I googled around and found this document: >> https://docs.openstack.org/keystone/stein/configuration/config-options.html >> Document says: >> [api_database] >> connection_recycle_time = 3600 (Integer) Timeout before idle SQL connections are reaped. >> max_overflow = None (Integer) If set, use this value for max_overflow with SQLAlchemy. >> max_pool_size = None (Integer) Maximum number of SQL connections to keep open in a pool. >> [database] >> connection_recycle_time = 3600 (Integer) Timeout before idle SQL connections are reaped. >> min_pool_size = 1 (Integer) Minimum number of SQL connections to keep open in a pool. >> max_overflow = 50 (Integer) If set, use this value for max_overflow with SQLAlchemy. >> max_pool_size = None (Integer) Maximum number of SQL connections to keep open in a pool. >> If min_pool_size is >0, would that cause at least 1 connection to remain open until it times out? What are the recommended values for these, to allow unused connections to close before they time out? Is “min_pool_size = 0” an acceptable setting? >> My settings are default: >> [api_database]: >> #connection_recycle_time = 3600 >> #max_overflow = >> #max_pool_size = >> [database]: >> #connection_recycle_time = 3600 >> #min_pool_size = 1 >> #max_overflow = 50 >> #max_pool_size = 5 >> It’s not obvious what max_overflow does. Where can I find a document that explains more about these settings? 
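For readers trying to map the discussion to a config file: these options live in the [database] section of nova.conf (and the other services' config files). A minimal sketch with purely illustrative values that need tuning per deployment; the option names are the oslo.db ones linked above:

    [database]
    # connections each worker keeps open permanently
    max_pool_size = 1
    # extra connections a worker may open temporarily beyond the pool
    max_overflow = 50
    # recycle idle connections before MySQL's wait_timeout (or an intervening
    # haproxy timeout) closes them server-side and logs an aborted connection
    connection_recycle_time = 280

With m services each running n workers, the worst case is roughly m * n * (max_pool_size + max_overflow) open connections, which is the multiplication described earlier in this thread.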
>> *From:* Gaëtan Trellu > >> *Sent:* Tuesday, September 3, 2019 1:37 PM >> *To:* Albert Braden > >> *Cc:* openstack-discuss at lists.openstack.org >> *Subject:* Re: Nova causes MySQL timeouts >> Hi Albert, >> It is a configuration issue, have a look to max_pool_size and max_overflow options under [database] section. >> Keep in mind than more workers you will have more connections will be opened on the database. >> Gaetan (goldyfruit) >> On Sep 3, 2019 4:31 PM, Albert Braden > wrote: >> It looks like nova is keeping mysql connections open until they time >> out. How are others responding to this issue? Do you just ignore the >> mysql errors, or is it possible to change configuration so that nova >> closes and reopens connections before they time out? Or is there a >> way to stop mysql from logging these aborted connections without >> hiding real issues? >> Aborted connection 10726 to db: 'nova' user: 'nova' host: 'asdf' >> (Got timeout reading communication packets) > From cboylan at sapwetik.org Mon Sep 9 16:41:02 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 09 Sep 2019 09:41:02 -0700 Subject: =?UTF-8?Q?Re:_[devstack][qa][python3]_"also_install_the_Python_2_dev_lib?= =?UTF-8?Q?rary"_-_still_needed=3F?= In-Reply-To: References: Message-ID: On Mon, Sep 9, 2019, at 1:22 AM, Peter Penchev wrote: > Hi, > > When devstack's `setup_dev_lib` function is invoked and USE_PYTHON3 has > been specified, this function tries to also install the development > library for Python 2.x, I guess just in case some package has not > declared proper Python 3 support or something. It then proceeds to > install the Python 3 version of the library and all its dependencies. > > Unfortunately there is a problem with that, and specifically with > script files installed in the system's executable files directory, e.g. > /usr/local/bin. The problem appears when some Python library has > already been installed for Python 3 (and has installed its script > files), but is now installed for Python 2 (overwriting the script > files) and is then not forcefully reinstalled for Python 3, since it is > already present. Thus, the script files are last modified by the Python > 2 library installation and they have a hashbang line saying `python2.x` > - so if something then tries to execute them, they will run and use > modules and libraries for Python 2 only. > > We experienced this problem when running the cinderlib tests from > Cinder's `playbooks/cinderlib-run.yaml` file - it finds a unit2 > executable (installed by the unittest2 library) and runs it, hoping > that unit2 will be able to discover and collect the cinderlib tests and > load the cinderlib modules. However, since unittest2 has last been > installed as a Python 2 library, unit2 runs with Python 2 and fails to > locate the cinderlib modules. (Yes, we know that there are other ways > to run the cinderlib tests; this message is about the problem exposed > by this way of running them) One option here is to explicitly run the file under the python version you want. I do this with `pbr freeze` frequently to ensure I'm looking at the correct version of software for the correct version of python. For example: python3 /usr/local/bin/pbr freeze | grep $packagename python2 /usr/local/bin/pbr freeze | grep $packagename Then as long as you have installed the utility (in my case pbr) under both python versions it should just work assuming they don't write different files for different versions of python at install time. 
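To make the failure mode concrete, what matters is which interpreter the console script's hashbang points at after the last install. A small sketch, assuming the script landed in /usr/local/bin as described above:

    # show which interpreter the script will use when executed directly
    head -n 1 /usr/local/bin/unit2    # e.g. "#!/usr/bin/python2.7" if the py2 install ran last
    # bypass the hashbang by naming the interpreter explicitly, similar to the
    # workaround mentioned later in this thread for the Cinder CI job
    python3 /usr/local/bin/unit2 --help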
> > The obvious solution would be to instruct the Python 2 pip to not > install script (or other shared) files at all; unfortunately, > https://github.com/pypa/pip/issues/3980 ("Option to exclude scripts on > install"), detailing a very similar use case ("need it installed for > Python 2, but want to use it with Python 3") has been open for almost > exactly three years now with no progress. I wonder if I could try to > help, but even if this issue is resolved, there will be some time > before OpenStack can actually depend on a recent enough version of pip. Note OpenStack tests with, and as a result possibly requires, the latest version of pip. Fixing this in pip shouldn't be a problem as long as they make a release not long after. > > A horrible workaround would be to find the binary directory before > installing the Python 2 library (using something like `pip3.7 show > somepackage` and then running some heuristics on the "Location" field), > tar'ing it up and then restoring it... but I don't know if I even want > to think about this. > > Another possible way forward would be to consider whether we still want > the Python 2 libraries installed - is OpenStack's Python 3 transition > reached a far enough stage to assume that any projects that still > require Python 2 *and* fail to declare their Python 2 dependencies > properly are buggy? To be honest, this seems the most reasonable path > for me - drop the "also install the Python 2 libs" code and see what > happens. I could try to make this change in a couple of test runs in > our third-party Cinder CI system and see if something breaks. > snip Hope this helps, Clark From openstack at nemebean.com Mon Sep 9 16:49:53 2019 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 9 Sep 2019 11:49:53 -0500 Subject: [oslo][nova] Nova causes MySQL timeouts In-Reply-To: References: <02fa1644-34a1-0fdf-9048-a668ae86de76@nemebean.com> Message-ID: On 9/9/19 11:38 AM, Chris Hoge wrote: > In my personal experience, running Nova on a four core machine without > limiting the number of database connections will easily exhaust the > available connections to MySQL/MariaDB. Keep in mind that the limit > applies to every instance of a service, so if Nova starts 'm' services > replicated for 'n' cores with 'd' possible connections you'll be up to > ‘m x n x d' connections. It gets big fast. > > The default setting of '0' (that is, unlimited) does not make for a good > first-run experience, IMO. We don't default to 0. We default to 5: https://docs.openstack.org/oslo.db/stein/reference/opts.html#database.max_pool_size > > This issue comes up every few years or so, and the consensus previously > is that 200-2000 connections is recommended based on your needs. Your > database has to be configured to handle the load and looking at the > configuration value across all your services and setting them > consistently and appropriately is important. > > http://lists.openstack.org/pipermail/openstack-dev/2015-April/061808.html Thanks, I did not recall that discussion. If I'm reading it correctly, Jay is suggesting that for MySQL we should just disable connection pooling. As I noted earlier, I don't think we expose the ability to do that in oslo.db (patches welcome!), but setting max_pool_size to 1 would get you pretty close. Maybe we should add that to the help text for the option in oslo.db? > >> On Sep 6, 2019, at 7:34 AM, Ben Nemec wrote: >> >> Tagging with oslo as this sounds related to oslo.db. 
>> >> On 9/5/19 7:37 PM, Albert Braden wrote: >>> After more googling it appears that max_pool_size is a maximum limit on the number of connections that can stay open, and max_overflow is a maximum limit on the number of connections that can be temporarily opened when the pool has been consumed. It looks like the defaults are 5 and 10 which would keep 5 connections open all the time and allow 10 temp. >>> Do I need to set max_pool_size to 0 and max_overflow to the number of connections that I want to allow? Is that a reasonable and correct configuration? Intuitively that doesn't seem right, to have a pool size of 0, but if the "pool" is a group of connections that will remain open until they time out, then maybe 0 is correct? >> >> I don't think so. According to [0] and [1], a pool_size of 0 means unlimited. You could probably set it to 1 to minimize the number of connections kept open, but then I expect you'll have overhead from having to re-open connections frequently. >> >> It sounds like you could use a NullPool to eliminate connection pooling entirely, but I don't think we support that in oslo.db. Based on the error message you're seeing, I would take a look at connection_recycle_time[2]. I seem to recall seeing a comment that the recycle time needs to be shorter than any of the timeouts in the path between the service and the db (so anything like haproxy or mysql itself). Shortening that, or lengthening intervening timeouts, might get rid of these disconnection messages. >> >> 0: https://docs.openstack.org/oslo.db/stein/reference/opts.html#database.max_pool_size >> 1: https://docs.sqlalchemy.org/en/13/core/pooling.html#sqlalchemy.pool.QueuePool.__init__ >> 2: https://docs.openstack.org/oslo.db/stein/reference/opts.html#database.connection_recycle_time >> >>> *From:* Albert Braden >>> *Sent:* Wednesday, September 4, 2019 10:19 AM >>> *To:* openstack-discuss at lists.openstack.org >>> *Cc:* Gaëtan Trellu >>> *Subject:* RE: Nova causes MySQL timeouts >>> We’re not setting max_pool_size nor max_overflow option presently. I googled around and found this document: >>> https://docs.openstack.org/keystone/stein/configuration/config-options.html >>> Document says: >>> [api_database] >>> connection_recycle_time = 3600 (Integer) Timeout before idle SQL connections are reaped. >>> max_overflow = None (Integer) If set, use this value for max_overflow with SQLAlchemy. >>> max_pool_size = None (Integer) Maximum number of SQL connections to keep open in a pool. >>> [database] >>> connection_recycle_time = 3600 (Integer) Timeout before idle SQL connections are reaped. >>> min_pool_size = 1 (Integer) Minimum number of SQL connections to keep open in a pool. >>> max_overflow = 50 (Integer) If set, use this value for max_overflow with SQLAlchemy. >>> max_pool_size = None (Integer) Maximum number of SQL connections to keep open in a pool. >>> If min_pool_size is >0, would that cause at least 1 connection to remain open until it times out? What are the recommended values for these, to allow unused connections to close before they time out? Is “min_pool_size = 0” an acceptable setting? >>> My settings are default: >>> [api_database]: >>> #connection_recycle_time = 3600 >>> #max_overflow = >>> #max_pool_size = >>> [database]: >>> #connection_recycle_time = 3600 >>> #min_pool_size = 1 >>> #max_overflow = 50 >>> #max_pool_size = 5 >>> It’s not obvious what max_overflow does. Where can I find a document that explains more about these settings? 
>>> *From:* Gaëtan Trellu > >>> *Sent:* Tuesday, September 3, 2019 1:37 PM >>> *To:* Albert Braden > >>> *Cc:* openstack-discuss at lists.openstack.org >>> *Subject:* Re: Nova causes MySQL timeouts >>> Hi Albert, >>> It is a configuration issue, have a look to max_pool_size and max_overflow options under [database] section. >>> Keep in mind than more workers you will have more connections will be opened on the database. >>> Gaetan (goldyfruit) >>> On Sep 3, 2019 4:31 PM, Albert Braden > wrote: >>> It looks like nova is keeping mysql connections open until they time >>> out. How are others responding to this issue? Do you just ignore the >>> mysql errors, or is it possible to change configuration so that nova >>> closes and reopens connections before they time out? Or is there a >>> way to stop mysql from logging these aborted connections without >>> hiding real issues? >>> Aborted connection 10726 to db: 'nova' user: 'nova' host: 'asdf' >>> (Got timeout reading communication packets) >> > > From openstack at nemebean.com Mon Sep 9 16:51:34 2019 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 9 Sep 2019 11:51:34 -0500 Subject: [keystone] Pre-feature-freeze update In-Reply-To: <5729a3c4-ecf1-40ef-9d12-3d640e8661bc@www.fastmail.com> References: <5bac8e07-f63a-4bf9-82c1-fa0470a14b0e@www.fastmail.com> <5729a3c4-ecf1-40ef-9d12-3d640e8661bc@www.fastmail.com> Message-ID: <6b51299f-5b0d-f8fe-e1a2-cff029903aa5@nemebean.com> On 9/9/19 10:57 AM, Colleen Murphy wrote: > On Fri, Sep 6, 2019, at 21:57, Colleen Murphy wrote: > > [snipped] > >> >> * CI >> >> After skimming the meeting logs I saw the unit test timeout problem was >> discussed and a temporary workaround was proposed[8]. This sounded like >> a great idea but it seems that no one implemented it, so I did[9]. >> Unfortunately this will conflict with all the >> system-scope/default-roles patches in flight. With how many changes >> need to go in and how slow it will be with all of them needing to be >> rechecked and continually making the problem even worse, I propose we >> go ahead and merge the workaround ASAP and update all the in-flight >> changes to move the protection tests to the new location. >> > > Alternatively, we can raise the timeouts temporarily as proposed here[11], then merge all the policy changes, then merge the protection test split. Seems prudent (one rebase vs. many rebases), assuming the "merge all the policy changes" step can be done in a reasonable amount of time. > > [snipped] > >> [8] >> http://eavesdrop.openstack.org/meetings/keystone/2019/keystone.2019-08-27-16.01.log.html#l-84 >> [9]https://review.opendev.org/680788 > > [11] https://review.opendev.org/680798 > >> >> Colleen >> >> > From tpb at dyncloud.net Mon Sep 9 18:05:20 2019 From: tpb at dyncloud.net (Tom Barron) Date: Mon, 9 Sep 2019 14:05:20 -0400 Subject: [manila][ops] Shanghai Forum - Manila Topic Planning Message-ID: <20190909180520.fu5f6gi4edarvl65@barron.net> As mentioned several times in the Manila Community Meeting, we have posted an etherpad to brainstorm and gauge interest in topics for the Forum at the upcoming OpenInfra Summit in Shanghai. The point of the Forum sesssions is to get feedback from operators and users on things that need fixing, improvements and enhancements, and more generally about the strategic direction for Manila. So please take a look and update this etherpad with topic ideas and indicate your interest in topics already present if you have an interest in Manila. 
It doesn't matter whether you contribute to the project or not, or whether you will yourself be attending the Forum: https://etherpad.openstack.org/p/manila-shanghai-forum-brainstorming We will review this etherpad in our community meeting at 1500 UTC on 19 September in #openstack-meeting-alt on Freenode, one day before the Forum proposal deadline. Please feel free to join that meeting to discuss, and in any case please add to the brainstorming deadline before then. Cheers, -- Tom Barron From premdeep.xion at gmail.com Mon Sep 9 18:05:37 2019 From: premdeep.xion at gmail.com (Premdeep S) Date: Mon, 9 Sep 2019 23:35:37 +0530 Subject: [nova] Offline Installation of Openstack Message-ID: Hi Team, Requesting your help on below. We have a requirement to setup Openstack in an isolated infra. We will not be provided with Internet. How can we set it up? 1. Can we have a local repository (Rocky, Universal, etc)created? If so how do we manage it? 2. We have noticed lot of package dependencies while setting up Openstack Infra, so will creating a local repository help in an implementation when we do not have an internet. What is the success rate? Thanks Prem -------------- next part -------------- An HTML attachment was scrubbed... URL: From premdeep.xion at gmail.com Mon Sep 9 18:07:04 2019 From: premdeep.xion at gmail.com (Premdeep S) Date: Mon, 9 Sep 2019 23:37:04 +0530 Subject: [nova] Offline Installation of Openstack In-Reply-To: References: Message-ID: Additionally we would like to set up in ubuntu 18.04, Rocky version On Mon, Sep 9, 2019 at 11:35 PM Premdeep S wrote: > Hi Team, > > Requesting your help on below. > > We have a requirement to setup Openstack in an isolated infra. We will not > be provided with Internet. How can we set it up? > > 1. Can we have a local repository (Rocky, Universal, etc)created? If so > how do we manage it? > 2. We have noticed lot of package dependencies while setting up Openstack > Infra, so will creating a local repository help in an implementation when > we do not have an internet. What is the success rate? > > Thanks > Prem > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Sep 9 18:16:58 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 9 Sep 2019 18:16:58 +0000 Subject: [nova] Offline Installation of Openstack In-Reply-To: References: Message-ID: <20190909181657.aa3rti6ulflg7rbf@yuggoth.org> On 2019-09-09 23:35:37 +0530 (+0530), Premdeep S wrote: [...] > We have a requirement to setup Openstack in an isolated infra. We > will not be provided with Internet. How can we set it up? > > 1. Can we have a local repository (Rocky, Universal, etc)created? > If so how do we manage it? > > 2. We have noticed lot of package dependencies while setting up > Openstack Infra, so will creating a local repository help in an > implementation when we do not have an internet. What is the > success rate? I know Debian provides complete installation image sets for CD, DVD and Blu-ray you can use offline, and these incorporate all packages in their archive (including OpenStack): https://www.debian.org/releases/buster/debian-installer/ In your follow-up E-mail you mentioned Ubuntu specifically... I don't know whether they maintain similar installation media images, but if they do that may be a good solution. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From premdeep.xion at gmail.com Mon Sep 9 18:18:55 2019 From: premdeep.xion at gmail.com (Premdeep S) Date: Mon, 9 Sep 2019 23:48:55 +0530 Subject: [ceph][nova][DR] Openstack DR Setup Message-ID: Hi Team, We are looking to build a DR infrastructure. Our existing DC setup consists of multiple node Controller, Compute and Ceph nodes as the storage backend. We are using ubuntu 18.04 and Rocky version. Can someone please share any document or guide us on how we can build a DR infra for the existing DC? 1. Do we need to have the storage shared across (Ceph)? 2. What are the dependencies? 3. Is there a guide for the same Thanks Prem -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Mon Sep 9 19:00:12 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 9 Sep 2019 21:00:12 +0200 Subject: [edge] Edge whitepaper and tutorial authors needed Message-ID: <20B34733-8B9A-421C-BCCB-2BEF1D87BB27@gmail.com> Hi, I’m reaching out to point you to two mail threads on the edge-computing mailing list. The edge working group is looking into writing up a second whitepaper with a few detailed use cases and information about the reference architecture work the group has been doing. If you are interested in this work please __reach out to me or check out this mail thread__: http://lists.openstack.org/pipermail/edge-computing/2019-September/000632.html The other work item is writing up edge tutorials about frameworks to a German magazine. __This is a short deadline activity, please reach out to me if you are interested in participating.__ For further information please see this mail thread: http://lists.openstack.org/pipermail/edge-computing/2019-September/000633.html Please let me know if you have questions to any of the above. Thanks, Ildikó From nate.johnston at redhat.com Mon Sep 9 19:18:52 2019 From: nate.johnston at redhat.com (Nate Johnston) Date: Mon, 9 Sep 2019 15:18:52 -0400 Subject: Bug Deputy September 2 - September 9 Message-ID: <20190909191758.5rhxosto6mumldiy@bishop> Neutrinos, Here is the bug deputy report for this past week. It was an exciting week of CI problems; thanks to everyone who pitched in to get us past those thorny issues. I would call out specifically the two bugs rated as High that have no assignee yet; they are both gate related - one for Neutron, and the other affecting the Patrole project. There is also one bug left Untriaged because I was unable to validate the bug; I would appreciate a triaging look if you can. Thanks! 
Nate ---- Critical - https://bugs.launchpad.net/bugs/1842482 "test_get_devices_info_veth_different_namespaces" fails because veth1_1 interface has a link device in the same namespace Status: Fix Committed (slaweq) https://review.opendev.org/680001 - https://bugs.launchpad.net/bugs/1842517 neutron-sanity-check command fails if netdev datapath is used Status: In Progress (deepak.tiwari) No change registered in LP bug - https://bugs.launchpad.net/bugs/1842657 Job networking-ovn-tempest-dsvm-ovs-release is failing 100% times Status: Fix Committed (maciej.josefczyk) https://review.opendev.org/661065 - https://bugs.launchpad.net/bugs/1842659 Funtional tests of start and restart services failing 100% times Status: Fix Committed (slaweq) https://review.opendev.org/680001 High - https://bugs.launchpad.net/bugs/1843285 Trunk scenario test test_subport_connectivity failing with iptables_hybrid fw driver Status: Unassigned - https://bugs.launchpad.net/bugs/1842666 Bulk port creation with supplied security group also adds default security group Status: In Progress (njohnston) https://review.opendev.org/679852 - https://bugs.launchpad.net/bugs/1843025 FWaaS v2 fails to add ICMPv6 rules via horizon Status: In Progress (haleyb) https://review.opendev.org/680753 - https://bugs.launchpad.net/bugs/1843282 Rally CI not working since jsonschema version bump Status: Fix Committed (ralonsoh) https://review.opendev.org/681001 - https://bugs.launchpad.net/bugs/1843290 Remove network flavor profile fails Status: Unassigned Note: Currently breaking the gate for the Patrole project Medium - https://bugs.launchpad.net/bugs/1842327 Report in logs when FIP associate and disassociate Status: In progress (ralonsoh) https://review.opendev.org/680976 Low - https://bugs.launchpad.net/bugs/1842934 multicast scenario test failing when guest image don't have python3 installed Status: In Progress (slaweq) https://review.opendev.org/680428 - https://bugs.launchpad.net/bugs/1842937 Some ports assigned to routers don't have the correspondent routerport register Status: In Progress (ralonsoh) No change registered in LP bug - https://bugs.launchpad.net/bugs/1843269 Nova notifier called even if set to False Status: In Progress (haleyb) https://review.opendev.org/681016 RFE - https://bugs.launchpad.net/bugs/1843218 allow to create record on default zone from tenants Untriaged - https://bugs.launchpad.net/bugs/1843211 network-ip-availabilities' result is not correct when the subnet has no allocation-pool Status: Unassigned From colleen at gazlene.net Mon Sep 9 19:19:19 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Mon, 09 Sep 2019 12:19:19 -0700 Subject: [keystone] cannot use 'openstack trust list' without admin role In-Reply-To: References: <29841c08-d255-2ee4-346a-bcce04b7f4ad@everyware.ch> Message-ID: <720f0763-efaa-4b1b-b3bf-0befec246c7c@www.fastmail.com> Hi François, On Mon, Sep 9, 2019, at 08:36, Francois Scheurer wrote: > Hello > > > > I think this old link is explaining the reason behind this > "inconsistency" with the policy.json rules: > > https://bugs.launchpad.net/keystone/+bug/1373599 > > So to summarize, the RBAC is allowing identity:list_trusts for a non > admin user (cf. policy.json) but then hard coded policies deny the > request if non admin. > > Quote: > > The policies in policy.json can make these operations more restricted, > but not less restricted than the hard-coded restrictions. 
We can't > simply remove these settings from policy.json, as that would cause the > "default" rule to be used which makes trusts unusable in the case of > the default "default" rule of "admin_required". I wish I had known about this bug, as I would have reopened and closed it. You're correct that the trusts API was doing some unusal RBAC hardcoding, which we have just addressed by moving that logic into policy and then updating the policy defaults to be more sensible: https://review.opendev.org/#/q/topic:trust-policies That series is making its way through CI now and so will be available in the Train release. Unfortunately I don't think we can backport any of it because it introduces new functionality in the policies. Colleen > > > > Cheers > > Francois > > > > On 9/9/19 1:57 PM, Francois Scheurer wrote: > > Hi All > > > > > > I found an answer here > > > https://bugs.launchpad.net/keystone/+bug/1373599 > > > > > On 9/6/19 5:59 PM, Francois Scheurer wrote: > > Dear Keystone Experts, I have an issue with the openstack client in stage (using Rocky), using a user 'fsc' without 'admin' role and with password auth. 'openstack trust create/show' works. 'openstack trust list' is denied. But keystone policy.json says: > >     "identity:create_trust": "user_id:%(trust.trustor_user_id)s", >     "identity:list_trusts": "", >     "identity:list_roles_for_trust": "", >     "identity:get_role_for_trust": "", >     "identity:delete_trust": "", >     "identity:get_trust": "", > > So "openstack list trusts" is always allowed. In keystone log (I > replaced the uid's by names in the ouput below) I see that > 'identity:list_trusts()' was actually granted > but just after that a _*admin_required()*_ is getting checked and > fails... I wonder why... > > There is also a flag* is_admin_project=True* in the rbac creds for some reason... > > Any clue? Many thanks in advance! 
> > > Cheers > Francois > > > > #openstack --os-cloud stage-fsc trust create --project fscproject > --role creator fsc fsc > #=> fail because of the names and policy rules, but using uid's it works > openstack --os-cloud stage-fsc trust create --project > aeac4b07d8b144178c43c65f29fa9dac --role > 085180eeaf354426b01908cca8e82792 3e9b1a4fe95048a3b98fb5abebd44f6c > 3e9b1a4fe95048a3b98fb5abebd44f6c > +--------------------+----------------------------------+ > | Field              | Value                            | > +--------------------+----------------------------------+ > | deleted_at         | None                             | > | expires_at         | None                             | > | id                 | e74bcdf125e049c69c2e0ab1b182df5b | > | impersonation      | False                            | > | project_id         | fscproject | > | redelegation_count | 0                                | > | remaining_uses     | None                             | > | roles              | creator                          | > | trustee_user_id    | fsc | > | trustor_user_id    | fsc | > +--------------------+----------------------------------+ > > openstack --os-cloud stage-fsc trust show e74bcdf125e049c69c2e0ab1b182df5b > +--------------------+----------------------------------+ > | Field              | Value                            | > +--------------------+----------------------------------+ > | deleted_at         | None                             | > | expires_at         | None                             | > | id                 | e74bcdf125e049c69c2e0ab1b182df5b | > | impersonation      | False                            | > | project_id         | fscproject | > | redelegation_count | 0                                | > | remaining_uses     | None                             | > | roles              | creator                          | > | trustee_user_id    | fsc | > | trustor_user_id    | fsc | > +--------------------+----------------------------------+ > > #this fails: > openstack --os-cloud stage-fsc trust list > > *You are not authorized to perform the requested action: admin_required. (HTTP 403)* > > > > > > > >  -- > > > EveryWare AG > François Scheurer > Senior Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: francois.scheurer at everyware.ch > web: http://www.everyware.ch > Attachments: > * smime.p7s From nate.johnston at redhat.com Mon Sep 9 19:53:48 2019 From: nate.johnston at redhat.com (Nate Johnston) Date: Mon, 9 Sep 2019 15:53:48 -0400 Subject: [neutron] Bug Deputy September 2 - September 9 In-Reply-To: <20190909191758.5rhxosto6mumldiy@bishop> References: <20190909191758.5rhxosto6mumldiy@bishop> Message-ID: <20190909193358.3jalu6t7pbwxwwib@bishop> Apologies, omitted the "[neutron]" subject tag. On Mon, Sep 09, 2019 at 03:18:52PM -0400, Nate Johnston wrote: > Neutrinos, > > Here is the bug deputy report for this past week. It was an exciting week of > CI problems; thanks to everyone who pitched in to get us past those thorny > issues. I would call out specifically the two bugs rated as High that have no > assignee yet; they are both gate related - one for Neutron, and the other > affecting the Patrole project. There is also one bug left Untriaged because I > was unable to validate the bug; I would appreciate a triaging look if you can. > > Thanks! 
> > Nate > > ---- > > Critical > > - https://bugs.launchpad.net/bugs/1842482 > "test_get_devices_info_veth_different_namespaces" fails because veth1_1 interface has a link device in the same namespace > Status: Fix Committed (slaweq) https://review.opendev.org/680001 > > - https://bugs.launchpad.net/bugs/1842517 > neutron-sanity-check command fails if netdev datapath is used > Status: In Progress (deepak.tiwari) No change registered in LP bug > > - https://bugs.launchpad.net/bugs/1842657 > Job networking-ovn-tempest-dsvm-ovs-release is failing 100% times > Status: Fix Committed (maciej.josefczyk) https://review.opendev.org/661065 > > - https://bugs.launchpad.net/bugs/1842659 > Funtional tests of start and restart services failing 100% times > Status: Fix Committed (slaweq) https://review.opendev.org/680001 > > High > > - https://bugs.launchpad.net/bugs/1843285 > Trunk scenario test test_subport_connectivity failing with iptables_hybrid fw driver > Status: Unassigned > > - https://bugs.launchpad.net/bugs/1842666 > Bulk port creation with supplied security group also adds default security group > Status: In Progress (njohnston) https://review.opendev.org/679852 > > - https://bugs.launchpad.net/bugs/1843025 > FWaaS v2 fails to add ICMPv6 rules via horizon > Status: In Progress (haleyb) https://review.opendev.org/680753 > > - https://bugs.launchpad.net/bugs/1843282 > Rally CI not working since jsonschema version bump > Status: Fix Committed (ralonsoh) https://review.opendev.org/681001 > > - https://bugs.launchpad.net/bugs/1843290 > Remove network flavor profile fails > Status: Unassigned > Note: Currently breaking the gate for the Patrole project > > Medium > > - https://bugs.launchpad.net/bugs/1842327 > Report in logs when FIP associate and disassociate > Status: In progress (ralonsoh) https://review.opendev.org/680976 > > Low > > - https://bugs.launchpad.net/bugs/1842934 > multicast scenario test failing when guest image don't have python3 installed > Status: In Progress (slaweq) https://review.opendev.org/680428 > > - https://bugs.launchpad.net/bugs/1842937 > Some ports assigned to routers don't have the correspondent routerport register > Status: In Progress (ralonsoh) No change registered in LP bug > > - https://bugs.launchpad.net/bugs/1843269 > Nova notifier called even if set to False > Status: In Progress (haleyb) https://review.opendev.org/681016 > > RFE > > - https://bugs.launchpad.net/bugs/1843218 > allow to create record on default zone from tenants > > Untriaged > > - https://bugs.launchpad.net/bugs/1843211 > network-ip-availabilities' result is not correct when the subnet has no allocation-pool > Status: Unassigned From openstack-dev at storpool.com Mon Sep 9 22:23:04 2019 From: openstack-dev at storpool.com (Peter Penchev) Date: Tue, 10 Sep 2019 01:23:04 +0300 Subject: [devstack][qa][python3] "also install the Python 2 dev library" - still needed? In-Reply-To: References: Message-ID: On Mon, Sep 9, 2019 at 7:42 PM Clark Boylan wrote: > On Mon, Sep 9, 2019, at 1:22 AM, Peter Penchev wrote: > > Hi, > > > > When devstack's `setup_dev_lib` function is invoked and USE_PYTHON3 has > > been specified, this function tries to also install the development > > library for Python 2.x, I guess just in case some package has not > > declared proper Python 3 support or something. It then proceeds to > > install the Python 3 version of the library and all its dependencies. 
> > > > Unfortunately there is a problem with that, and specifically with > > script files installed in the system's executable files directory, e.g. > > /usr/local/bin. The problem appears when some Python library has > > already been installed for Python 3 (and has installed its script > > files), but is now installed for Python 2 (overwriting the script > > files) and is then not forcefully reinstalled for Python 3, since it is > > already present. Thus, the script files are last modified by the Python > > 2 library installation and they have a hashbang line saying `python2.x` > > - so if something then tries to execute them, they will run and use > > modules and libraries for Python 2 only. > > > > We experienced this problem when running the cinderlib tests from > > Cinder's `playbooks/cinderlib-run.yaml` file - it finds a unit2 > > executable (installed by the unittest2 library) and runs it, hoping > > that unit2 will be able to discover and collect the cinderlib tests and > > load the cinderlib modules. However, since unittest2 has last been > > installed as a Python 2 library, unit2 runs with Python 2 and fails to > > locate the cinderlib modules. (Yes, we know that there are other ways > > to run the cinderlib tests; this message is about the problem exposed > > by this way of running them) > > One option here is to explicitly run the file under the python version you > want. I do this with `pbr freeze` frequently to ensure I'm looking at the > correct version of software for the correct version of python. For example: > > python3 /usr/local/bin/pbr freeze | grep $packagename > python2 /usr/local/bin/pbr freeze | grep $packagename > > Then as long as you have installed the utility (in my case pbr) under both > python versions it should just work assuming they don't write different > files for different versions of python at install time. > This is what we ended up doing (sorry, I might have mentioned that in the original message; it was a solved problem for our CI) - we modified the Ansible job to explicitly run "python3.7 unit2". So, yeah, my message was more to point out the general problem than to ask for help for our specific case, but still, yeah, thanks, that's exactly what we did. > > > The obvious solution would be to instruct the Python 2 pip to not > > install script (or other shared) files at all; unfortunately, > > https://github.com/pypa/pip/issues/3980 ("Option to exclude scripts on > > install"), detailing a very similar use case ("need it installed for > > Python 2, but want to use it with Python 3") has been open for almost > > exactly three years now with no progress. I wonder if I could try to > > help, but even if this issue is resolved, there will be some time > > before OpenStack can actually depend on a recent enough version of pip. > > Note OpenStack tests with, and as a result possibly requires, the latest > version of pip. Fixing this in pip shouldn't be a problem as long as they > make a release not long after. > Right, I did briefly wonder whether this was true while writing my mail, I should have taken the time to check and see that devstack actually installs its own version of pip and removes any versions installed by OS packages. Hm, I just might try my hand at that in the coming days or weeks, but I can't really make any promises. 
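To illustrate the direction being discussed for Ussuri, a hypothetical sketch only, not the actual devstack code (USE_PYTHON3 and LIB_FROM_GIT are real devstack settings mentioned in this thread; $lib_dir is a placeholder for the LIB_FROM_GIT checkout):

    # hypothetical guard: install the dev library only for the selected interpreter
    if [[ "$USE_PYTHON3" == "True" ]]; then
        python3 -m pip install -e "$lib_dir"
    else
        python2 -m pip install -e "$lib_dir"
    fi

The actual change for this is being proposed in https://review.opendev.org/681029/ mentioned earlier in the thread.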
> > > > A horrible workaround would be to find the binary directory before > > installing the Python 2 library (using something like `pip3.7 show > > somepackage` and then running some heuristics on the "Location" field), > > tar'ing it up and then restoring it... but I don't know if I even want > > to think about this. > > > > Another possible way forward would be to consider whether we still want > > the Python 2 libraries installed - is OpenStack's Python 3 transition > > reached a far enough stage to assume that any projects that still > > require Python 2 *and* fail to declare their Python 2 dependencies > > properly are buggy? To be honest, this seems the most reasonable path > > for me - drop the "also install the Python 2 libs" code and see what > > happens. I could try to make this change in a couple of test runs in > > our third-party Cinder CI system and see if something breaks. > > > > snip > > Hope this helps, > Sure, thanks! Still, would you agree that for Ussuri this ought to be solved by ripping out the "also install a Python 2 version" part? G'luck, Peter -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Sep 9 22:29:37 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 9 Sep 2019 22:29:37 +0000 Subject: [devstack][qa][python3] "also install the Python 2 dev library" - still needed? In-Reply-To: References: Message-ID: <20190909222936.nodrldkoc6ksmb2u@yuggoth.org> On 2019-09-10 01:23:04 +0300 (+0300), Peter Penchev wrote: [...] > would you agree that for Ussuri this ought to be solved by ripping > out the "also install a Python 2 version" part? At the very least, we ought to hide that functionality behind a config option so it's disabled by default. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mriedemos at gmail.com Mon Sep 9 22:49:55 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 9 Sep 2019 17:49:55 -0500 Subject: [oslo][nova] Nova causes MySQL timeouts In-Reply-To: References: <02fa1644-34a1-0fdf-9048-a668ae86de76@nemebean.com> Message-ID: <45a75ea5-d3c7-d0db-673d-69bba219e805@gmail.com> On 9/9/2019 11:49 AM, Ben Nemec wrote: > Maybe we should add that to the help text for the option in oslo.db? I was going to reply to Chris's email with something like this - sounds like the config option help could use some more details around how to calculate the value that's appropriate, what to look out for when it's miscalculated, things to try, etc. Lots of the DB tuning options suffer from the same kind of lack of info. I know I know patches welcome, I'm not helping by piling on, but I'm also not deep in this area. -- Thanks, Matt From cboylan at sapwetik.org Tue Sep 10 00:08:32 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 09 Sep 2019 17:08:32 -0700 Subject: [ironic] Ironic tempest jobs hitting retry_limit failures Message-ID: Hello Ironic, We've noticed that your tempest jobs have been hitting retry_limit failure recently. What this means is we attempted to run the job 3 times but each time the job failed due to "network" problems and Zuul eventually gave up. On further investigation I found that this is happening because the ironic tempest jobs are filling the root disk on rackspace nodes (which have a smaller root / + ephemeral drive mounted at /opt) with libvirt qcow2 images. 
This seems to cause ansible to fail to operate because it needs to write to /tmp and it thinks there is a "network" error. I've thrown my investigation into a bug for you [0]. It would be great if you could take a look at this as we are effectively spinning our wheels for about 9 hours every time this happens. I did hold the node I used to investigate. If you'd like to dig in yourselves just ask the infra team for access to nodepool node ubuntu-bionic-rax-ord-0011007873. Finally, to help debug these issues in the future I've started adding a cleanup-run playbook [1] which should give us network and disk info (can be expanded if necessary too) for every job when it is done running. Even if the disk is full. [0] https://storyboard.openstack.org/#!/story/2006520 [1] https://review.opendev.org/#/c/681100/ Clark From renat.akhmerov at gmail.com Tue Sep 10 04:59:08 2019 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Tue, 10 Sep 2019 11:59:08 +0700 Subject: Invite Oleg Ovcharuk to join the Mistral Core Team In-Reply-To: References: Message-ID: <4df13713-5db7-407b-b902-a52ca1f5cddd@Spark> Oleg, congrats! Welcome to the core team ) Thanks Renat Akhmerov @Nokia On 9 Sep 2019, 15:33 +0700, Dougal Matthews , wrote: > +1, seems like a good addition to the team! > > > On Thu, 5 Sep 2019 at 05:35, Renat Akhmerov wrote: > > > Andras, > > > > > > You just went one step ahead of me! I was going to promote Oleg in the end of this week :) I’m glad that we coincided at this. Thanks! I’m for it with my both hands! > > > > > > > > > Renat Akhmerov > > > @Nokia > > > On 4 Sep 2019, 17:33 +0700, András Kövi , wrote: > > > > I would like to invite Oleg Ovcharuk to join the Mistral Core Team. Oleg has been a very active and enthusiastic contributor to the project. He has definitely earned his way into our community. > > > > > > > > Thank you, > > > > Andras -------------- next part -------------- An HTML attachment was scrubbed... URL: From francois.scheurer at everyware.ch Tue Sep 10 08:27:14 2019 From: francois.scheurer at everyware.ch (=?iso-8859-1?Q?Scheurer_Fran=E7ois?=) Date: Tue, 10 Sep 2019 08:27:14 +0000 Subject: [keystone] cannot use 'openstack trust list' without admin role In-Reply-To: <720f0763-efaa-4b1b-b3bf-0befec246c7c@www.fastmail.com> References: <29841c08-d255-2ee4-346a-bcce04b7f4ad@everyware.ch> , <720f0763-efaa-4b1b-b3bf-0befec246c7c@www.fastmail.com> Message-ID: <1568104034539.98290@everyware.ch> Hi Colleen Thank you for your message. They also mentioned this in the patch proposal: https://review.opendev.org/#/c/123862/4/doc/source/configuration.rst : " I initially had the same reaction, but it arguably is desired to have hard-coded restrictions in some cases. The hard-coded restrictions prevent one from making a mistake in the policy file that opens up access to something that should never be authorized." So one should also take this into account. Best Regards Francois ________________________________________ From: Colleen Murphy Sent: Monday, September 9, 2019 9:19 PM To: openstack-discuss at lists.openstack.org Subject: Re: [keystone] cannot use 'openstack trust list' without admin role Hi François, On Mon, Sep 9, 2019, at 08:36, Francois Scheurer wrote: > Hello > > > > I think this old link is explaining the reason behind this > "inconsistency" with the policy.json rules: > > https://bugs.launchpad.net/keystone/+bug/1373599 > > So to summarize, the RBAC is allowing identity:list_trusts for a non > admin user (cf. 
policy.json) but then hard coded policies deny the > request if non admin. > > Quote: > > The policies in policy.json can make these operations more restricted, > but not less restricted than the hard-coded restrictions. We can't > simply remove these settings from policy.json, as that would cause the > "default" rule to be used which makes trusts unusable in the case of > the default "default" rule of "admin_required". I wish I had known about this bug, as I would have reopened and closed it. You're correct that the trusts API was doing some unusal RBAC hardcoding, which we have just addressed by moving that logic into policy and then updating the policy defaults to be more sensible: https://review.opendev.org/#/q/topic:trust-policies That series is making its way through CI now and so will be available in the Train release. Unfortunately I don't think we can backport any of it because it introduces new functionality in the policies. Colleen > > > > Cheers > > Francois > > > > On 9/9/19 1:57 PM, Francois Scheurer wrote: > > Hi All > > > > > > I found an answer here > > > https://bugs.launchpad.net/keystone/+bug/1373599 > > > > > On 9/6/19 5:59 PM, Francois Scheurer wrote: > > Dear Keystone Experts, I have an issue with the openstack client in stage (using Rocky), using a user 'fsc' without 'admin' role and with password auth. 'openstack trust create/show' works. 'openstack trust list' is denied. But keystone policy.json says: > > "identity:create_trust": "user_id:%(trust.trustor_user_id)s", > "identity:list_trusts": "", > "identity:list_roles_for_trust": "", > "identity:get_role_for_trust": "", > "identity:delete_trust": "", > "identity:get_trust": "", > > So "openstack list trusts" is always allowed. In keystone log (I > replaced the uid's by names in the ouput below) I see that > 'identity:list_trusts()' was actually granted > but just after that a _*admin_required()*_ is getting checked and > fails... I wonder why... > > There is also a flag* is_admin_project=True* in the rbac creds for some reason... > > Any clue? Many thanks in advance! 
> > > Cheers > Francois > > > > #openstack --os-cloud stage-fsc trust create --project fscproject > --role creator fsc fsc > #=> fail because of the names and policy rules, but using uid's it works > openstack --os-cloud stage-fsc trust create --project > aeac4b07d8b144178c43c65f29fa9dac --role > 085180eeaf354426b01908cca8e82792 3e9b1a4fe95048a3b98fb5abebd44f6c > 3e9b1a4fe95048a3b98fb5abebd44f6c > +--------------------+----------------------------------+ > | Field | Value | > +--------------------+----------------------------------+ > | deleted_at | None | > | expires_at | None | > | id | e74bcdf125e049c69c2e0ab1b182df5b | > | impersonation | False | > | project_id | fscproject | > | redelegation_count | 0 | > | remaining_uses | None | > | roles | creator | > | trustee_user_id | fsc | > | trustor_user_id | fsc | > +--------------------+----------------------------------+ > > openstack --os-cloud stage-fsc trust show e74bcdf125e049c69c2e0ab1b182df5b > +--------------------+----------------------------------+ > | Field | Value | > +--------------------+----------------------------------+ > | deleted_at | None | > | expires_at | None | > | id | e74bcdf125e049c69c2e0ab1b182df5b | > | impersonation | False | > | project_id | fscproject | > | redelegation_count | 0 | > | remaining_uses | None | > | roles | creator | > | trustee_user_id | fsc | > | trustor_user_id | fsc | > +--------------------+----------------------------------+ > > #this fails: > openstack --os-cloud stage-fsc trust list > > *You are not authorized to perform the requested action: admin_required. (HTTP 403)* > > > > > > > > -- > > > EveryWare AG > François Scheurer > Senior Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: francois.scheurer at everyware.ch > web: http://www.everyware.ch > Attachments: > * smime.p7s -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From chkumar246 at gmail.com Tue Sep 10 08:27:50 2019 From: chkumar246 at gmail.com (Chandan kumar) Date: Tue, 10 Sep 2019 13:57:50 +0530 Subject: Thank you Stackers for five amazing years! In-Reply-To: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> References: <6392B695-A400-4F59-9F12-AB5DC83EEB42@openstack.org> Message-ID: On Wed, Sep 4, 2019 at 9:57 PM Chris Hoge wrote: > > Hi everyone, > > After more than nine years working in cloud computing and on OpenStack, I've > decided that it is time for a change and will be moving on from the OpenStack > Foundation. For the last five years I've had the honor of helping to support > this vibrant community, and I'm going to deeply miss being a part of it. > OpenStack has been a central part of my life for so long that it's hard to > imagine a work life without it. I'm proud to have helped in some small way to > create a lasting project and community that has, and will continue to, > transform how infrastructure is managed. > > September 12 will officially be my last day with the OpenStack Foundation. As I > make the move away from my responsibilities, I'll be working with community > members to help ensure continuity of my efforts. > > Thank you to everyone for building such an incredible community filled with > talented, smart, funny, and kind people. You've built something special here, > and we're all better for it. I'll still be involved with open source. 
If you
> ever want to get in touch, be it with questions about work I've been involved
> with or to talk about some exciting new tech or to just catch up over a tasty
> meal, I'm just a message away in all the usual places.
>

Thank you for all the amazing work you have done in OpenStack.
Sad to see you leaving. All the best for your future adventures. :-)

Thanks,

Chandan Kumar

From ionut at fleio.com  Tue Sep 10 10:38:50 2019
From: ionut at fleio.com (Ionut Biru)
Date: Tue, 10 Sep 2019 13:38:50 +0300
Subject: [neutron][vmware][vsphere] integration
Message-ID:

Hello guys,

I'm trying to integrate OpenStack Stein with an already running VMware
vSphere cluster.

All the documentation that I found explains how to do it with distributed
switches or port groups, but currently in my setup VMware is using
standard networking.

OpenStack Stein was deployed using OSA, neutron was configured using OVS,
and I configured the integrated_bridge to br-int.

I first tried to deploy using linux-bridge, but when I tried to deploy an
instance, neutron returned that only the OVS or DVS method is supported.

Now, with OVS, when I'm deploying an instance with a network, nova returns
an error message:

2019-09-10 10:15:19.010 22443 ERROR nova.compute.manager [instance:
d8d1cbb8-5c1c-4b98-9739-bea0668cfaa5] VimFaultException: An error occurred
during host configuration.
2019-09-10 10:15:19.010 22443 ERROR nova.compute.manager [instance:
d8d1cbb8-5c1c-4b98-9739-bea0668cfaa5] Faults: ['PlatformConfigFault']

How do you integrate neutron with VMware vSphere when using standard
networking? Is there a driver that I need to use?

-- 
Ionut Biru - https://fleio.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From grant at civo.com  Tue Sep 10 10:40:09 2019
From: grant at civo.com (Grant Morley)
Date: Tue, 10 Sep 2019 11:40:09 +0100
Subject: OSA upgrading Xenial Queens to Bionic Rocky
Message-ID:

Hi all,

I was wondering if there was a guide for upgrading OpenStack Ansible
from Ubuntu 16.04 Queens to Ubuntu 18.04 Rocky? I remember a long time
ago there was an etherpad set up for upgrading from 14.04 -> 16.04 but I
can't seem to find anything similar for going to 18.04.

Annoyingly as we don't have lots of hardware, we are going to have to
upgrade in place.

If there are any guides that would be much appreciated.

Many thanks.

-- 

Grant Morley
Cloud Lead, Civo Ltd
www.civo.com | Signup for an account!
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jonathan.rosser at rd.bbc.co.uk  Tue Sep 10 11:17:00 2019
From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser)
Date: Tue, 10 Sep 2019 12:17:00 +0100
Subject: [openstack-ansible] OSA upgrading Xenial Queens to Bionic Rocky
In-Reply-To:
References:
Message-ID:

Hi Grant,

You need to upgrade Queens to Rocky first on your 16.04 hosts. Rocky is
the OSA transitional release which supports both 16.04 and 18.04.

At that point you can choose to do an in-place operating system upgrade,
or reinstall the hosts from fresh one by one. Either way you should not
need any additional hardware as long as you have multiple controller
nodes already.

Drop into #openstack-ansible IRC and we can help you out.

Regards,
Jonathan.

On 10/09/2019 11:40, Grant Morley wrote:
> Hi all,
>
> I was wondering if there was a guide for upgrading OpenStack Ansible
> from Ubuntu 16.04 Queens to Ubuntu 18.04 Rocky? I remember a long time
> ago there was an etherpad set up for upgrading from 14.04 -> 16.04 but I
> can't seem to find anything similar for going to 18.04.
> > Annoyingly as we don't have lots of hardware, we are going to have to > upgrade in place. > > If there are any guides that would be much appreciated. > > Many thanks. > From tnakamura.openstack at gmail.com Tue Sep 10 11:23:59 2019 From: tnakamura.openstack at gmail.com (Tetsuro Nakamura) Date: Tue, 10 Sep 2019 20:23:59 +0900 Subject: [placement][ptl][tc] Call for Placement PTL position In-Reply-To: <16d0205c0b1.b18639584545.7154839133743907603@ghanshyammann.com> References: <16d0205c0b1.b18639584545.7154839133743907603@ghanshyammann.com> Message-ID: Sorry for the late response. I was on a business trip in Southeast Asia, and needed time to get internal permission, but finally I’d like to announce my candidacy for the PTL role of Placement for the Ussuri cycle. I’ve been involved with Placement since Queens cycle. I helped to develop new features, to keep refactoring for better performance, and to help other projects to use it (like Blazar to meet NFV requirements using placement). In the U cycle, having new features on server side, I’d like to focus on client side: * Improve usability of osc-placement and catch up the latest microversion * Commonize client code that helps other projects to use placement easily and intuitively That would help us to get more projects to use it and to get more use cases, such as reservations for ironic standalone nodes. Thanks! 2019年9月6日(金) 0:26 Ghanshyam Mann : > Hello Everyone, > > With Ussuri Cycle PTL election completed, we left with Placement project > as leaderless[1]. > In today TC meeting[2], we discussed the few possibilities and decided to > reach out to the > eligible candidates to serve the PTL position. > > We would like to know if anyone from Placement core team, Nova core team > or PTL (as placement > main consumer) of any other interested/related developer is interested to > take the PTL position? > > [1] https://governance.openstack.org/election/results/ussuri/ptl.html > [2] > http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-09-05-14.00.log.html#l-250 > > -TC (gmann) > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Sep 10 11:37:48 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 10 Sep 2019 20:37:48 +0900 Subject: [placement][ptl][tc] Call for Placement PTL position In-Reply-To: References: <16d0205c0b1.b18639584545.7154839133743907603@ghanshyammann.com> Message-ID: <16d1af7275c.1049b688929860.3445344073479624095@ghanshyammann.com> ---- On Tue, 10 Sep 2019 20:23:59 +0900 Tetsuro Nakamura wrote ---- > Sorry for the late response. > I was on a business trip in Southeast Asia, and needed time to get internal permission, > but finally I’d like to announce my candidacy for the PTL role of Placement for the Ussuri cycle. > I’ve been involved with Placement since Queens cycle. > I helped to develop new features, to keep refactoring for better performance, > and to help other projects to use it (like Blazar to meet NFV requirements using placement). > In the U cycle, having new features on server side, I’d like to focus on client side: > * Improve usability of osc-placement and catch up the latest microversion > * Commonize client code that helps other projects to use placement easily and intuitively > That would help us to get more projects to use it and to get more use cases, > such as reservations for ironic standalone nodes. > Thanks! Thanks Tetsuro. 
I have proposed the governance patch for that- https://review.opendev.org/#/c/681226/ -gmann > > 2019年9月6日(金) 0:26 Ghanshyam Mann : > Hello Everyone, > > With Ussuri Cycle PTL election completed, we left with Placement project as leaderless[1]. > In today TC meeting[2], we discussed the few possibilities and decided to reach out to the > eligible candidates to serve the PTL position. > > We would like to know if anyone from Placement core team, Nova core team or PTL (as placement > main consumer) of any other interested/related developer is interested to take the PTL position? > > [1] https://governance.openstack.org/election/results/ussuri/ptl.html > [2] http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-09-05-14.00.log.html#l-250 > > -TC (gmann) > > > From thierry at openstack.org Tue Sep 10 12:27:37 2019 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 10 Sep 2019 14:27:37 +0200 Subject: [release][cyborg] os-acc status In-Reply-To: <1CC272501B5BC543A05DB90AA509DED52760B6EC@fmsmsx122.amr.corp.intel.com> References: <5ec9441fa8fed052bd958cf005a08ab18b88f91c.camel@redhat.com> <1CC272501B5BC543A05DB90AA509DED52760B6EC@fmsmsx122.amr.corp.intel.com> Message-ID: <57c5857e-ec1c-4685-03a9-ee890b3394eb@openstack.org> Nadathur, Sundar wrote: > Hi Thierry and all, > Os-acc is not relevant and will be discontinued. This was communicated in [1]. A patch has been filed for the same [2]. > > I will start the work after Train-3 milestone. That was also mentioned in [3]. > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008473.html > [2] https://review.opendev.org/#/c/676331/ > [3] https://review.opendev.org/#/c/680091/ Ah! I did not remember that when I spotted the absence of changes on that repository. Sorry for the false alarm! Regards, -- Thierry From sundar.nadathur at intel.com Tue Sep 10 13:05:37 2019 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Tue, 10 Sep 2019 13:05:37 +0000 Subject: [release][cyborg] os-acc status In-Reply-To: <57c5857e-ec1c-4685-03a9-ee890b3394eb@openstack.org> References: <5ec9441fa8fed052bd958cf005a08ab18b88f91c.camel@redhat.com> <1CC272501B5BC543A05DB90AA509DED52760B6EC@fmsmsx122.amr.corp.intel.com> <57c5857e-ec1c-4685-03a9-ee890b3394eb@openstack.org> Message-ID: <1CC272501B5BC543A05DB90AA509DED52760BEC3@fmsmsx122.amr.corp.intel.com> NP, Thierry. Thanks for keeping tabs. Regards, Sundar > -----Original Message----- > From: Thierry Carrez > Sent: Tuesday, September 10, 2019 5:28 AM > To: openstack-discuss at lists.openstack.org > Subject: Re: [release][cyborg] os-acc status > > Nadathur, Sundar wrote: > > Hi Thierry and all, > > Os-acc is not relevant and will be discontinued. This was communicated in > [1]. A patch has been filed for the same [2]. > > > > I will start the work after Train-3 milestone. That was also mentioned in [3]. > > > > [1] > > http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008 > > 473.html [2] https://review.opendev.org/#/c/676331/ > > [3] https://review.opendev.org/#/c/680091/ > > Ah! I did not remember that when I spotted the absence of changes on that > repository. Sorry for the false alarm! 
> > Regards, > > -- > Thierry From camille.rodriguez at canonical.com Tue Sep 10 13:07:19 2019 From: camille.rodriguez at canonical.com (Camille Rodriguez) Date: Tue, 10 Sep 2019 09:07:19 -0400 Subject: [Horizon] Help making custom theme - resend as still looking:) In-Reply-To: References: Message-ID: Hi Amy, I have done something similar with the charm-openstack-dashboard and Juju tools from Canonical previously. I also have some experience developing a Django website. I would be happy to help by testing your tutorial and provide feedback if you would like. I am also attending the GHC in October. Kind regards, Camille Rodriguez On Fri, Sep 6, 2019 at 4:23 PM Amy Marrich wrote: > > Just thought I'd resend this out to see if someone could help:) > > For the Grace Hopper Conference's Open Source Day we're doing a Horizon > based workshop for OpenStack (running Devstack Pike). The end goal is to > have the attendee teams create their own OpenStack theme supporting a > humanitarian effort of their choice in a few hours. I've tried modifying > the material theme thinking it would be the easiest route to go but that > might not be the best way to go about this.:) > > I've been getting some assistance from e0ne in the Horizon channel and my > logo now shows up on the login page, and I had already gotten the > SITE_BRAND attributes and the theme itself to show up after changing the > local_settings.py. > > If anyone has some tips or a tutorial somewhere it would be greatly > appreciated and I will gladly put together a tutorial for the repo when > done. > > Thanks! > > Amy (spotz) > -- Camille Rodriguez, Field Software Engineer Canonical -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey.bryant at canonical.com Tue Sep 10 13:42:12 2019 From: corey.bryant at canonical.com (Corey Bryant) Date: Tue, 10 Sep 2019 09:42:12 -0400 Subject: [charms] Retiring charm-neutron-api-genericswitch Message-ID: Hi All, I'm going to retire charm-neutron-api-genericswitch today as it is currently not maintained. I've already discussed and received approval to do so from the original author and the current charms PTL, so this serves as a more broad announcement. Thanks, Corey -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Tue Sep 10 14:07:00 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 10 Sep 2019 10:07:00 -0400 Subject: [tc] weekly update Message-ID: Hi everyone, Here’s the update for what happened in the OpenStack TC this week. You can get more information by checking for changes in openstack/governance repository. # New Projects - os_murano (under openstack-ansible) # General Changes - We made a few improvements while reviewing the separation of goal definition from goal selection: https://review.opendev.org/#/c/677938/ Thanks! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From vgvoleg at gmail.com Tue Sep 10 14:08:56 2019 From: vgvoleg at gmail.com (Oleg Ovcharuk) Date: Tue, 10 Sep 2019 17:08:56 +0300 Subject: Invite Oleg Ovcharuk to join the Mistral Core Team In-Reply-To: <4df13713-5db7-407b-b902-a52ca1f5cddd@Spark> References: <4df13713-5db7-407b-b902-a52ca1f5cddd@Spark> Message-ID: Wow! Good news! Thank for your trust guys! Hope I will be useful :) > 10 сент. 2019 г., в 7:59, Renat Akhmerov написал(а): > > Oleg, congrats! 
Welcome to the core team ) > > > Thanks > > Renat Akhmerov > @Nokia > On 9 Sep 2019, 15:33 +0700, Dougal Matthews , wrote: >> +1, seems like a good addition to the team! >> >> On Thu, 5 Sep 2019 at 05:35, Renat Akhmerov > wrote: >> Andras, >> >> You just went one step ahead of me! I was going to promote Oleg in the end of this week :) I’m glad that we coincided at this. Thanks! I’m for it with my both hands! >> >> >> Renat Akhmerov >> @Nokia >> On 4 Sep 2019, 17:33 +0700, András Kövi >, wrote: >>> I would like to invite Oleg Ovcharuk > to join the Mistral Core Team. Oleg has been a very active and enthusiastic contributor to the project. He has definitely earned his way into our community. >>> >>> Thank you, >>> Andras -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Tue Sep 10 14:14:13 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 10 Sep 2019 10:14:13 -0400 Subject: [tc] monthly meeting summary Message-ID: Hi everyone, The TC held it’s monthly meeting on the 5th of September 2019 and this email provides a summary of that meeting. We contacted Alan to mention that TC will have some presence at the Shanghai leadership meeting alongside other OSF projects on November 3rd, so the day before the summit. Rico is currently working on an etherpad to update SIG guidelines to simplify the process for new SIGs. Once the draft version is done, they will ask SIG chairs to join in the editing part. We still have to contact interested parties for a new ‘large scale’ SIG so we will follow up again on that action item in the next meeting. Graham is currently in the process of testing the code in order to make the proposal bot for propose project-template patches for specific releases. We’re working on adding some forum sessions ideas for the TC and we’ve got volunteers in the forum selection committee. Thierry finished making goal selection a two-step process and it's been merged. There are a few projects that lacked a PTL elected, we’ve discussed the following points for each: - Cyborg: Sundar self-nominated but only on the mailing list, therefore we will appoint them. - Designate: The developers who have expressed interest didn’t have commits, so Graham will sync with both of them to see how if can make it work. - OpenstackSDK: Monty might have missed the notice since they were traveling so Thierry will reach out to him to see if they want to take it again or has suggestions. - I18n: Ian Y. Choi expressed interest but couldn’t run because they were an election official. - {PowerVM,Win}stackers: The code and review activity was quiet and they missed election twice in a row so we proposed to remove them from project teams list and if they want to continue they can as a SIG. - Placement: we are trying to find someone to volunteer by reaching out to Placement and Nova team. We started discussing a change in the process for release names and the rest of that discussion carried over into office hours. I hope that I covered most of what we discussed, for the full meeting logs, you can find them here: http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-09-05-14.00.log.html Thanks! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. 
http://vexxhost.com From mthode at mthode.org Tue Sep 10 15:23:08 2019 From: mthode at mthode.org (Matthew Thode) Date: Tue, 10 Sep 2019 10:23:08 -0500 Subject: [nova][keystone][neutron][kuryr][requirements] breaking tests with new library versions In-Reply-To: <16cbe2acb1d.ce031552275757.8109746026654681476@ghanshyammann.com> References: <20190818161611.6ira6oezdat4alke@mthode.org> <20190819145437.GA29162@zeong> <16cb3b24926.f85727e0206810.691322353108028475@ghanshyammann.com> <20190821194144.GA1844@zeong> <16cbe2acb1d.ce031552275757.8109746026654681476@ghanshyammann.com> Message-ID: <20190910152308.oe75ltvwtdlnsynm@mthode.org> On 19-08-23 20:09:31, Ghanshyam Mann wrote: > ---- On Thu, 22 Aug 2019 04:41:44 +0900 Matthew Treinish wrote ---- > > On Wed, Aug 21, 2019 at 07:21:41PM +0900, Ghanshyam Mann wrote: > > > ---- On Mon, 19 Aug 2019 23:54:37 +0900 Matthew Treinish wrote ---- > > > > On Sun, Aug 18, 2019 at 11:16:11AM -0500, Matthew Thode wrote: > > > > > NOVA: > > > > > lxml===4.4.1 nova tests fail https://bugs.launchpad.net/nova/+bug/1838666 > > > > > websockify===0.9.0 tempest test failing > > > > > > > > > > KEYSTONE: > > > > > oauthlib===3.1.0 keystone https://bugs.launchpad.net/keystone/+bug/1839393 > > > > > > > > > > NEUTRON: > > > > > tenacity===5.1.1 https://2c976b5e9e9a7bed9985-82d79a041e998664bd1d0bc4b6e78332.ssl.cf2.rackcdn.com/677052/5/check/cross-neutron-py27/a0a3c75/testr_results.html.gz > > > > > this could be caused by pytest===5.1.0 as well > > > > > > > > > > KURYR: > > > > > kubernetes===10.0.1 openshift PINS this, only kuryr-tempest-plugin deps on it > > > > > https://review.opendev.org/665352 > > > > > > > > > > MISC: > > > > > tornado===5.1.1 salt is cauing this, no eta on fix (same as the last year) > > > > > stestr===2.5.0 needs merged https://github.com/mtreinish/stestr/pull/265 > > > > > > > > This actually doesn't fix the underlying issue blocking it here. PR 265 is for > > > > fixing a compatibility issue with python 3.4, which we don't officially support > > > > in stestr but was a simple fix. The blocker is actually not an stestr issue, > > > > it's a testtools bug: > > > > > > > > https://github.com/testing-cabal/testtools/issues/272 > > > > > > > > Where this is coming into play here is that stestr 2.5.0 switched to using an > > > > internal test runner built off of stdlib unittest instead of testtools/subunit > > > > for python 3. This was done to fix a huge number of compatibility issues people > > > > had reported when trying to run stdlib unittest suites using stestr on > > > > python >= 3.5 (which were caused by unittest2 and testools). The complication > > > > for openstack (more specificially tempest) is that it's built off of testtools > > > > not stdlib unittest. So when tempest raises 'self.skipException' as part of > > > > it's class level skip checks testtools raises 'unittest2.case.SkipTest' instead > > > > of 'unittest.case.SkipTest'. stdlib unittest does not understand what that is > > > > and treats it as an unhandled exception which is a test failure, instead of the > > > > intended skip result. [1] This is actually a general bug and will come up whenever > > > > anyone tries to use stdlib unittest to run tempest. We need to come up with a > > > > fix for this problem in testtools [2] or just workaround it in tempest. 
> > > > > > > > [1] skip decorators typically aren't effected by this because they set an > > > > attribute that gets checked before the test method is executed instead of > > > > relying on an exception, which is why this is mostly only an issue for tempest > > > > because it does a lot of run time skips via exceptions. > > > > > > > > [2] testtools is mostly unmaintained at this point, I was recently granted > > > > merge access but haven't had much free time to actively maintain it > > > > > > Thanks matt for details. As you know, for Tempest where we need to support py2.7 > > > (including unitest2 use) for stable branches, we are going to use the specific stetsr > > > version/branch( > > is good option to me. I think your PR to remove the unittest2 use form testtools > > > make sense to me [1]. A workaround in Tempest can be last option for us. > > > > https://github.com/testing-cabal/testtools/pull/277 isn't a short term > > solution, unittest2 is still needed for python < 3.5 in testtools and > > testtools has not deprecated support for python 2.7 or 3.4 yet. I probably > > can rework that PR so that it's conditional and always uses stdlib unittest > > for python >= 3.5 but then testtools ends up maintaining two separate paths > > depending on python version. I'd like to continue thinking about that is as a > > long term solution because I don't know when I'll have the time to keep pushing > > that PR forward. > > Thanks for more details. I understand that might take time. I am in OpenInfra event and after that on vacation till > 29th Aug. I will be able to check the workaround on testtools or tempest side after that > only. I will check with Matthew about when is the plan to move the stestr to 2.5.0. > > -gmann > > > > > > > > > Till we fix it and to avoid gate break, can we cap stestr in g-r - stestr<2.5.0 ? I know that is > > > not the options you like. > > > > > > [1] https://github.com/mtreinish/testtools/commit/38fc9a9e302f68d471d7b097c7327b4ff7348790 > > > > > > -gmann > > > > > > > > > > > -Matt Treinish > > > > > > > > > jsonschema===3.0.2 see https://review.opendev.org/649789 > > > > > > > > > > I'm trying to get this in place as we are getting closer to the > > > > > requirements freeze (sept 9th-13th). Any help clearing up these bugs > > > > > would be appreciated. > > > > > > > > > > -- > > > > > Matthew Thode > > > > > > > > > > > > > > > > > > > > > > Any progress on this, at the moment only stestr-2.5.1 is being held back. https://review.opendev.org/680914 -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From gmann at ghanshyammann.com Tue Sep 10 15:42:03 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 11 Sep 2019 00:42:03 +0900 Subject: [nova][keystone][neutron][kuryr][requirements] breaking tests with new library versions In-Reply-To: <20190910152308.oe75ltvwtdlnsynm@mthode.org> References: <20190818161611.6ira6oezdat4alke@mthode.org> <20190819145437.GA29162@zeong> <16cb3b24926.f85727e0206810.691322353108028475@ghanshyammann.com> <20190821194144.GA1844@zeong> <16cbe2acb1d.ce031552275757.8109746026654681476@ghanshyammann.com> <20190910152308.oe75ltvwtdlnsynm@mthode.org> Message-ID: <16d1bd6c83c.b5b4707242927.3734104778674628098@ghanshyammann.com> ---- On Wed, 11 Sep 2019 00:23:08 +0900 Matthew Thode wrote ---- > On 19-08-23 20:09:31, Ghanshyam Mann wrote: > > ---- On Thu, 22 Aug 2019 04:41:44 +0900 Matthew Treinish wrote ---- > > > On Wed, Aug 21, 2019 at 07:21:41PM +0900, Ghanshyam Mann wrote: > > > > ---- On Mon, 19 Aug 2019 23:54:37 +0900 Matthew Treinish wrote ---- > > > > > On Sun, Aug 18, 2019 at 11:16:11AM -0500, Matthew Thode wrote: > > > > > > NOVA: > > > > > > lxml===4.4.1 nova tests fail https://bugs.launchpad.net/nova/+bug/1838666 > > > > > > websockify===0.9.0 tempest test failing > > > > > > > > > > > > KEYSTONE: > > > > > > oauthlib===3.1.0 keystone https://bugs.launchpad.net/keystone/+bug/1839393 > > > > > > > > > > > > NEUTRON: > > > > > > tenacity===5.1.1 https://2c976b5e9e9a7bed9985-82d79a041e998664bd1d0bc4b6e78332.ssl.cf2.rackcdn.com/677052/5/check/cross-neutron-py27/a0a3c75/testr_results.html.gz > > > > > > this could be caused by pytest===5.1.0 as well > > > > > > > > > > > > KURYR: > > > > > > kubernetes===10.0.1 openshift PINS this, only kuryr-tempest-plugin deps on it > > > > > > https://review.opendev.org/665352 > > > > > > > > > > > > MISC: > > > > > > tornado===5.1.1 salt is cauing this, no eta on fix (same as the last year) > > > > > > stestr===2.5.0 needs merged https://github.com/mtreinish/stestr/pull/265 > > > > > > > > > > This actually doesn't fix the underlying issue blocking it here. PR 265 is for > > > > > fixing a compatibility issue with python 3.4, which we don't officially support > > > > > in stestr but was a simple fix. The blocker is actually not an stestr issue, > > > > > it's a testtools bug: > > > > > > > > > > https://github.com/testing-cabal/testtools/issues/272 > > > > > > > > > > Where this is coming into play here is that stestr 2.5.0 switched to using an > > > > > internal test runner built off of stdlib unittest instead of testtools/subunit > > > > > for python 3. This was done to fix a huge number of compatibility issues people > > > > > had reported when trying to run stdlib unittest suites using stestr on > > > > > python >= 3.5 (which were caused by unittest2 and testools). The complication > > > > > for openstack (more specificially tempest) is that it's built off of testtools > > > > > not stdlib unittest. So when tempest raises 'self.skipException' as part of > > > > > it's class level skip checks testtools raises 'unittest2.case.SkipTest' instead > > > > > of 'unittest.case.SkipTest'. stdlib unittest does not understand what that is > > > > > and treats it as an unhandled exception which is a test failure, instead of the > > > > > intended skip result. [1] This is actually a general bug and will come up whenever > > > > > anyone tries to use stdlib unittest to run tempest. 
We need to come up with a > > > > > fix for this problem in testtools [2] or just workaround it in tempest. > > > > > > > > > > [1] skip decorators typically aren't effected by this because they set an > > > > > attribute that gets checked before the test method is executed instead of > > > > > relying on an exception, which is why this is mostly only an issue for tempest > > > > > because it does a lot of run time skips via exceptions. > > > > > > > > > > [2] testtools is mostly unmaintained at this point, I was recently granted > > > > > merge access but haven't had much free time to actively maintain it > > > > > > > > Thanks matt for details. As you know, for Tempest where we need to support py2.7 > > > > (including unitest2 use) for stable branches, we are going to use the specific stetsr > > > > version/branch( > > > is good option to me. I think your PR to remove the unittest2 use form testtools > > > > make sense to me [1]. A workaround in Tempest can be last option for us. > > > > > > https://github.com/testing-cabal/testtools/pull/277 isn't a short term > > > solution, unittest2 is still needed for python < 3.5 in testtools and > > > testtools has not deprecated support for python 2.7 or 3.4 yet. I probably > > > can rework that PR so that it's conditional and always uses stdlib unittest > > > for python >= 3.5 but then testtools ends up maintaining two separate paths > > > depending on python version. I'd like to continue thinking about that is as a > > > long term solution because I don't know when I'll have the time to keep pushing > > > that PR forward. > > > > Thanks for more details. I understand that might take time. I am in OpenInfra event and after that on vacation till > > 29th Aug. I will be able to check the workaround on testtools or tempest side after that > > only. I will check with Matthew about when is the plan to move the stestr to 2.5.0. > > > > -gmann > > > > > > > > > > > > > Till we fix it and to avoid gate break, can we cap stestr in g-r - stestr<2.5.0 ? I know that is > > > > not the options you like. > > > > > > > > [1] https://github.com/mtreinish/testtools/commit/38fc9a9e302f68d471d7b097c7327b4ff7348790 > > > > > > > > -gmann > > > > > > > > > > > > > > -Matt Treinish > > > > > > > > > > > jsonschema===3.0.2 see https://review.opendev.org/649789 > > > > > > > > > > > > I'm trying to get this in place as we are getting closer to the > > > > > > requirements freeze (sept 9th-13th). Any help clearing up these bugs > > > > > > would be appreciated. > > > > > > > > > > > > -- > > > > > > Matthew Thode > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Any progress on this, at the moment only stestr-2.5.1 is being held > back. > > https://review.opendev.org/680914 There is no progress on this yet. As unittest2 cannot be dropped from testtools, we need to get some workaround in Tempest. I need more time to try the failure and fix. 
-gmann > > -- > Matthew Thode > From gmann at ghanshyammann.com Wed Sep 11 03:25:04 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 11 Sep 2019 12:25:04 +0900 Subject: [nova][keystone][neutron][kuryr][requirements] breaking tests with new library versions In-Reply-To: <16d1bd6c83c.b5b4707242927.3734104778674628098@ghanshyammann.com> References: <20190818161611.6ira6oezdat4alke@mthode.org> <20190819145437.GA29162@zeong> <16cb3b24926.f85727e0206810.691322353108028475@ghanshyammann.com> <20190821194144.GA1844@zeong> <16cbe2acb1d.ce031552275757.8109746026654681476@ghanshyammann.com> <20190910152308.oe75ltvwtdlnsynm@mthode.org> <16d1bd6c83c.b5b4707242927.3734104778674628098@ghanshyammann.com> Message-ID: <16d1e5a672e.eff625a852394.455530881521903034@ghanshyammann.com> ---- On Wed, 11 Sep 2019 00:42:03 +0900 Ghanshyam Mann wrote ---- > ---- On Wed, 11 Sep 2019 00:23:08 +0900 Matthew Thode wrote ---- > > On 19-08-23 20:09:31, Ghanshyam Mann wrote: > > > ---- On Thu, 22 Aug 2019 04:41:44 +0900 Matthew Treinish wrote ---- > > > > On Wed, Aug 21, 2019 at 07:21:41PM +0900, Ghanshyam Mann wrote: > > > > > ---- On Mon, 19 Aug 2019 23:54:37 +0900 Matthew Treinish wrote ---- > > > > > > On Sun, Aug 18, 2019 at 11:16:11AM -0500, Matthew Thode wrote: > > > > > > > NOVA: > > > > > > > lxml===4.4.1 nova tests fail https://bugs.launchpad.net/nova/+bug/1838666 > > > > > > > websockify===0.9.0 tempest test failing > > > > > > > > > > > > > > KEYSTONE: > > > > > > > oauthlib===3.1.0 keystone https://bugs.launchpad.net/keystone/+bug/1839393 > > > > > > > > > > > > > > NEUTRON: > > > > > > > tenacity===5.1.1 https://2c976b5e9e9a7bed9985-82d79a041e998664bd1d0bc4b6e78332.ssl.cf2.rackcdn.com/677052/5/check/cross-neutron-py27/a0a3c75/testr_results.html.gz > > > > > > > this could be caused by pytest===5.1.0 as well > > > > > > > > > > > > > > KURYR: > > > > > > > kubernetes===10.0.1 openshift PINS this, only kuryr-tempest-plugin deps on it > > > > > > > https://review.opendev.org/665352 > > > > > > > > > > > > > > MISC: > > > > > > > tornado===5.1.1 salt is cauing this, no eta on fix (same as the last year) > > > > > > > stestr===2.5.0 needs merged https://github.com/mtreinish/stestr/pull/265 > > > > > > > > > > > > This actually doesn't fix the underlying issue blocking it here. PR 265 is for > > > > > > fixing a compatibility issue with python 3.4, which we don't officially support > > > > > > in stestr but was a simple fix. The blocker is actually not an stestr issue, > > > > > > it's a testtools bug: > > > > > > > > > > > > https://github.com/testing-cabal/testtools/issues/272 > > > > > > > > > > > > Where this is coming into play here is that stestr 2.5.0 switched to using an > > > > > > internal test runner built off of stdlib unittest instead of testtools/subunit > > > > > > for python 3. This was done to fix a huge number of compatibility issues people > > > > > > had reported when trying to run stdlib unittest suites using stestr on > > > > > > python >= 3.5 (which were caused by unittest2 and testools). The complication > > > > > > for openstack (more specificially tempest) is that it's built off of testtools > > > > > > not stdlib unittest. So when tempest raises 'self.skipException' as part of > > > > > > it's class level skip checks testtools raises 'unittest2.case.SkipTest' instead > > > > > > of 'unittest.case.SkipTest'. 
stdlib unittest does not understand what that is > > > > > > and treats it as an unhandled exception which is a test failure, instead of the > > > > > > intended skip result. [1] This is actually a general bug and will come up whenever > > > > > > anyone tries to use stdlib unittest to run tempest. We need to come up with a > > > > > > fix for this problem in testtools [2] or just workaround it in tempest. > > > > > > > > > > > > [1] skip decorators typically aren't effected by this because they set an > > > > > > attribute that gets checked before the test method is executed instead of > > > > > > relying on an exception, which is why this is mostly only an issue for tempest > > > > > > because it does a lot of run time skips via exceptions. > > > > > > > > > > > > [2] testtools is mostly unmaintained at this point, I was recently granted > > > > > > merge access but haven't had much free time to actively maintain it > > > > > > > > > > Thanks matt for details. As you know, for Tempest where we need to support py2.7 > > > > > (including unitest2 use) for stable branches, we are going to use the specific stetsr > > > > > version/branch( > > > > is good option to me. I think your PR to remove the unittest2 use form testtools > > > > > make sense to me [1]. A workaround in Tempest can be last option for us. > > > > > > > > https://github.com/testing-cabal/testtools/pull/277 isn't a short term > > > > solution, unittest2 is still needed for python < 3.5 in testtools and > > > > testtools has not deprecated support for python 2.7 or 3.4 yet. I probably > > > > can rework that PR so that it's conditional and always uses stdlib unittest > > > > for python >= 3.5 but then testtools ends up maintaining two separate paths > > > > depending on python version. I'd like to continue thinking about that is as a > > > > long term solution because I don't know when I'll have the time to keep pushing > > > > that PR forward. > > > > > > Thanks for more details. I understand that might take time. I am in OpenInfra event and after that on vacation till > > > 29th Aug. I will be able to check the workaround on testtools or tempest side after that > > > only. I will check with Matthew about when is the plan to move the stestr to 2.5.0. > > > > > > -gmann > > > > > > > > > > > > > > > > > Till we fix it and to avoid gate break, can we cap stestr in g-r - stestr<2.5.0 ? I know that is > > > > > not the options you like. > > > > > > > > > > [1] https://github.com/mtreinish/testtools/commit/38fc9a9e302f68d471d7b097c7327b4ff7348790 > > > > > > > > > > -gmann > > > > > > > > > > > > > > > > > -Matt Treinish > > > > > > > > > > > > > jsonschema===3.0.2 see https://review.opendev.org/649789 > > > > > > > > > > > > > > I'm trying to get this in place as we are getting closer to the > > > > > > > requirements freeze (sept 9th-13th). Any help clearing up these bugs > > > > > > > would be appreciated. > > > > > > > > > > > > > > -- > > > > > > > Matthew Thode > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Any progress on this, at the moment only stestr-2.5.1 is being held > > back. > > > > https://review.opendev.org/680914 > > There is no progress on this yet. As unittest2 cannot be dropped from testtools, > we need to get some workaround in Tempest. I need more time to try the failure and fix. 
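To recap the failure mode for anyone joining the thread late, the mismatch
is easy to reproduce outside of Tempest. This is an illustrative sketch
only (it assumes unittest2 is installed and is not the actual Tempest
code):

    import unittest
    import unittest2

    class SkipDemo(unittest.TestCase):
        def test_stdlib_skip(self):
            # The stdlib runner recognises its own SkipTest and records
            # this test as skipped.
            raise unittest.SkipTest("recorded as a skip")

        def test_unittest2_skip(self):
            # testtools resolves skipException to unittest2's SkipTest when
            # unittest2 is importable; the stdlib runner does not recognise
            # it and records an error instead of a skip.
            raise unittest2.case.SkipTest("recorded as an error")

    if __name__ == "__main__":
        unittest.main()

Run under the stdlib unittest runner (which is what stestr 2.5.0 uses on
python3), the first test is reported as a skip and the second as an error,
which is exactly what breaks Tempest's run-time skip checks.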
Trying to workaround it in Tempest - https://review.opendev.org/#/c/681340/ but it seems it needs to handle too many cases in Tempest (with py and stestr versions ) Let's see if we can properly do it in Tempest. -gmann > > > -gmann > > > > > -- > > Matthew Thode > > > From li.canwei2 at zte.com.cn Wed Sep 11 06:01:07 2019 From: li.canwei2 at zte.com.cn (li.canwei2 at zte.com.cn) Date: Wed, 11 Sep 2019 14:01:07 +0800 (CST) Subject: =?UTF-8?B?W1dhdGNoZXJdIHRlYW0gbWVldGluZyBhdCAwODowMCBVVEMgdG9kYXk=?= Message-ID: <201909111401077845519@zte.com.cn> Hi team, Watcher team will have a meeting at 08:00 UTC today in the #openstack-meeting-alt channel. The agenda is available on https://wiki.openstack.org/wiki/Watcher_Meeting_Agenda feel free to add any additional items. Thanks! Canwei Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From jordan.ansell at catalyst.net.nz Wed Sep 11 07:14:28 2019 From: jordan.ansell at catalyst.net.nz (Jordan Ansell) Date: Wed, 11 Sep 2019 19:14:28 +1200 Subject: [nova][glance][entropy][database] update glance metadata for nova instance In-Reply-To: References: <668e2201-3dd3-17e0-ae6a-736f0b314996@catalyst.net.nz> <00606cba-2f08-df2c-4342-fc997ec87342@gmail.com> Message-ID: On 27/08/19 11:00 AM, Jordan Ansell wrote: > On 27/08/19 1:47 AM, Brian Rosmaita wrote: >> On 8/26/19 4:24 AM, Sean Mooney wrote: >>> On Mon, 2019-08-26 at 18:18 +1200, Jordan Ansell wrote: >>>> Hi Openstack Discuss, >>>> >>>> I have an issue with nova not synchronizing changes between a glance >>>> image and it's local image meta information in nova. >>>> >>>> I have updated a glance image with the property "hw_rng_model=virtio", >>>> and that successfully passes that to new instances created using the >>>> updated image. However existing instances do not receive this new property. >>>> >>>> I have located the image metadata within the nova database, in the >>>> **instance_system_metadata** table, and can see it's not updated for the >>>> existing instances, and only adding the relevant rows for instances that >>>> are created when that property is present. The key being >>>> "image_hw_rng_model" and "virtio" being the value. >>>> >>>> Is there a way to tell nova to update the table for existing instances, >>>> and synchronizing the two databases? Or is this the kind of thing that >>>> would need to be done *shudder* manually...? >>> this is idealy not something you would do at all. >>> nova create a local copy of the image metadata the instace was booted with >>> intionally to not pick up chagne you make to the image metadata after you boot >>> the instance. in some case those change could invalidate the host the image is on so >>> it in general in not considerd safe to just sync them >>> >>> for the random number generator it should be ok but if you were to add a trait requirement >>> of alter the numa topology then it could invalidate the host as a candiate for that instance. >>> so if you want to do this then you need to update it manually as nova is working as >>> intended by not syncing the data. >>>> If so, are there any >>>> experts out there who can point me to some documentation on doing this >>>> correctly before I go butcher a couple of dummy nova database? >>> there is no docs for doing this as it is not a supported feature. >>> you are circumventing a safty feature we have in nova to prevent change to running instances >>> after they are first booted by change to the flavor extra spec or image metadata. 
>>>> Regards,
>>>> Jordan
>>>>
>>>>
>> I agree with everything Sean says here. I just want to remind you that
>> if you use the nova image-create action on an instance, the image
>> properties put on the new image are pulled from the nova database. So
>> if you do decide to update the DB manually (not that I am recommending
>> that!), don't forget that any already existing snapshot images will have
>> the "wrong" value for the property. (You can update them via the Images
>> API.)
>>
> Thanks Sean and Brian..!
>
> I hadn't considered the snapshots.. that's a really good point! And
> thank you for the warnings, I can see why this isn't something that's
> synchronized automatically :S
>
> Regards,
> Jordan
>
Hi all,

I wanted to share a follow-up to this with two points:

* We've found another way to "give" an existing instance entropy using the
API, following an update to flavor and image metadata.
* The documentation on entropy rates **everywhere** seems to be incorrect
and could do with some updating.

Instead of updating the nova database and re-scheduling an instance, one
can create a snapshot, add the "hw_rng_model=virtio" property to the
snapshot, then launch the instance from that image using a flavor with the
entropy properties. And boom! We have a copy of an existing instance with
the addition of entropy :). Not perfect, but potentially better than an
unsupported and risky operation.

With regard to the flavor documentation, it's written in the libvirt
documentation [1] that the unit of the period attribute is *milliseconds*
not seconds. However, all documentation I came across for the
"hw_rng:rate_period" of a flavor says this is in *seconds*. I've submitted
bugs on the docs.openstack.org site, however if you are in charge of some
other documentation please update your info :)

There's a big difference between 100 bytes every millisecond and 100 bytes
every 1000 milliseconds..!

Regards,
Jordan

[1] https://libvirt.org/formatdomain.html#elementsRng
[2] https://bugs.launchpad.net/nova/+bug/1843541
[3] https://bugs.launchpad.net/nova/+bug/1843542

From a.settle at outlook.com  Wed Sep 11 08:39:15 2019
From: a.settle at outlook.com (Alexandra Settle)
Date: Wed, 11 Sep 2019 08:39:15 +0000
Subject: [all] [tc] [ptls] PDF Goal Update
Message-ID:

Hi all,

According to the Train schedule, this week is the week of "Train
Community Goals completed" [1].

In the last few weeks, we've been working hard on the goal to enable PDF
support in the project docs. We have successfully completed...

1. Creating a workable solution [2]
2. Communicating this solution via ML [3]

As far as the success of the goal goes, that has to be measured by the
individual teams. But it looks like we're all doing really well at
implementing the new changes [4]. Thanks to everyone who has jumped in
from across the board to make this a success!

I wanted to touch base with the teams and gather a status update from the
PTLs or project liaisons on where they are at, what questions they may
have, and how we (the docs team and TC) can help. Over the next week I
will reach out to each team and gather a status update of sorts.
Thanks, Alex -- Alexandra Settle IRC: asettle [1] https://releases.openstack.org/train/schedule.html#t-goals-complete [2] https://etherpad.openstack.org/p/train-pdf-support-goal [3] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/ 008503.html (amongst others) [4] https://review.opendev.org/#/q/topic:build-pdf-docs From n.sameshima at w.ntt.com Wed Sep 11 10:25:53 2019 From: n.sameshima at w.ntt.com (Naohiro Sameshima) Date: Wed, 11 Sep 2019 19:25:53 +0900 Subject: [dev] [glance] proposal for S3 store driver re-support as galnce_store backend Message-ID: Hi all, I know that glance_store had supported S3 backend until version OpenStack Mitaka, and it has already been removed due to lack of maintainers [1][2]. I started refactoring the S3 driver to work with version OpenStack Stein and recently completed it. (e.g. Add Multi Store Support, Using the latest AWS SDK) So, it would be great if glance_store could support the S3 driver again. However, I'm not familiar with the procedure for that. Would it be possible to discuss this? Thanks, Naohiro [1] https://docs.openstack.org/releasenotes/glance/newton.html [2] https://opendev.org/openstack/glance_store/src/branch/master/releasenotes/notes/remove-s3-driver-f432afa1f53ecdf8.yaml From stig.openstack at telfer.org Wed Sep 11 10:52:33 2019 From: stig.openstack at telfer.org (Stig Telfer) Date: Wed, 11 Sep 2019 11:52:33 +0100 Subject: [scientific-sig] No IRC meeting today Message-ID: <5F26878F-ED60-4F81-AA41-FEA3262A5F01@telfer.org> Hi all - Apologies, there will not be a Scientific SIG IRC meeting today, due to chair availability. Cheers, Stig From liam.young at canonical.com Wed Sep 11 10:56:45 2019 From: liam.young at canonical.com (Liam Young) Date: Wed, 11 Sep 2019 11:56:45 +0100 Subject: [masakari] Message-ID: Hi, I have a patch up for masakari and another for masakari-monitors: https://review.opendev.org/#/c/647756/ https://review.opendev.org/#/c/675734/ If any of the masakari devs have cycles I'd really love to get them landed. Thanks Liam -------------- next part -------------- An HTML attachment was scrubbed... URL: From tbechtold at suse.com Wed Sep 11 12:48:42 2019 From: tbechtold at suse.com (Thomas Bechtold) Date: Wed, 11 Sep 2019 14:48:42 +0200 Subject: [rpm-packaging] Proposing new core member Message-ID: <6b176899-15c3-b0c6-2c0b-8cbab05e844c@suse.com> Hi, I would like to nominate Ralf Haferkamp for rpm-packaging core. Ralf has be active in doing very valuable reviews since some time so I feel he would be a great addition to the team. Please give your +1/-1 in the next days. Cheers, Tom From tobias.rydberg at citynetwork.eu Wed Sep 11 13:59:50 2019 From: tobias.rydberg at citynetwork.eu (Tobias Rydberg) Date: Wed, 11 Sep 2019 15:59:50 +0200 Subject: [sigs][publiccloud][publiccloud-wg][publiccloud-sig] Bi-weekly meeting for the Public Cloud SIG tomorrow Message-ID: <30460365-552a-26ff-8d81-149243267a99@citynetwork.eu> Hi all, It is time for a new meeting for the Public Cloud SIG! Would love to see as many of you there as possible! Topics for the meeting includes Shanghai Forum topics and moving forward on the billing initiative. Time and place: Tomorrow, 12th September at 1400 UTC in #openstack-publiccloud! Agenda can be found at https://etherpad.openstack.org/p/publiccloud-sig Feel free to add topics to the agenda! 
Cheers, Tobias -- Tobias Rydberg Senior Developer Twitter & IRC: tobberydberg www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED From lpetrut at cloudbasesolutions.com Wed Sep 11 14:08:37 2019 From: lpetrut at cloudbasesolutions.com (Lucian Petrut) Date: Wed, 11 Sep 2019 14:08:37 +0000 Subject: [winstackers][powervmstackers][tc] removing winstackers and PowerVMStackers from TC governance Message-ID: <64050966FCE0B948BCE2B28DB6E0B7D557AA4A56@CBSEX1.cloudbase.local> Hi, I had a chat with my team and we think it would be best if we could keep Winstackers as a separate team. This is mostly because of the associated projects, which are essential for the Windows – Openstack integration effort. Other teams may not be interested in adopting those projects, which would be required if we chose the SIG route. Despite missing this election, I can assure you that we’re quite active in this endeavor. I’m willing to take the PTL role, offloading this task from Claudiu, whose time was quite limited recently. Regards, Lucian Petrut Cloudbase Solutions ________________________________________ From: Mohammed Naser [mnaser at vexxhost.com] Sent: Monday, September 09, 2019 3:05 PM To: Thierry Carrez Cc: OpenStack Discuss Subject: Re: [winstackers][powervmstackers][tc] removing winstackers and PowerVMStackers from TC governance On Fri, Sep 6, 2019 at 5:10 AM Thierry Carrez wrote: > > Divya K Konoor wrote: > > Missing the deadline for a PTL nomination cannot be the reason for > > removing governance. > > I agree with that, but missing the deadline twice in a row is certainly > a sign of some disconnect with the rest of the OpenStack community. > Project teams require a minimal amount of reactivity and presence, so it > is fair to question whether PowerVMStackers should continue as a project > team in the future. > > > PowerVMStackers continue to be an active project > > and would want to be continued to be governed under OpenStack. For PTL, > > an eligible candidate can still be appointed . > > There is another option, to stay under OpenStack governance but without > the constraints of a full project team: PowerVMStackers could be made an > OpenStack SIG. > > I already proposed that 6 months ago (last time there was no PTL nominee > for the team), on the grounds that interest in PowerVM was clearly a > special interest, and a SIG might be a better way to regroup people > interested in supporting PowerVM in OpenStack. > > The objection back then was that PowerVMStackers maintained a number of > PowerVM-related code, plugins and drivers that should ideally be adopted > by their consuming project teams (nova, neutron, ceilometer), and that > making it a SIG would endanger that adoption process. > > I still think it makes sense to consider PowerVMStackers as a Special > Interest Group. As long as the PowerVM-related code is not adopted by > the consuming projects, it is arguably a special interest, and not a > completely-integrated part of OpenStack components. > > The only difference in being a SIG (compared to being a project team) > would be to reduce the amount of mandatory tasks (like designating a PTL > every 6 months). You would still be able to own repositories, get room > at OpenStack events, vote on TC election... > > It would seem to be the best solution in your case. I echo all of this and I think at this point, it's better for the deliverables to be within a SIG. 
> -- > Thierry Carrez (ttx) > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From pabelanger at redhat.com Wed Sep 11 14:50:03 2019 From: pabelanger at redhat.com (Paul Belanger) Date: Wed, 11 Sep 2019 10:50:03 -0400 Subject: How to boot 2 VMs in openstack on same subnet? Message-ID: <20190911145003.GB31877@localhost.localdomain> Greetings, We use a few openstack public clouds for testing in the ansible project, specifically using nodepool. We have a use case, where we need to boot 2 VMs on the same public subnet for testing reasons. However, the majority of the clouds we are using, do not have a single subnet for their entire public IP range. Up until now, we boot the 2 VMs, then hope they land on the same subnet, but this isn't really efficient. Basically looking to see if there is a better way to handle this either via openstacksdk or some other configuration we need cloud side. Also note, we'd like to do this with public provider network (which we don't have control over) and avoid using private network for now. Paul From marek.lycka at ultimum.io Wed Sep 11 15:21:27 2019 From: marek.lycka at ultimum.io (=?UTF-8?B?TWFyZWsgTHnEjWth?=) Date: Wed, 11 Sep 2019 17:21:27 +0200 Subject: [Horizon] Paging and Angular... In-Reply-To: References: Message-ID: Hi all, > We can't review your patches, because we don't understand them. For the patches to be merged, we > need more than one person, so that they can review each other's patches. Well, yes. That's what I'm trying to address. Even if another person appeared to review javascript code, it wouldn't change anything unless he had +2 and +W rights though. And even then, it wouldn't be enough, because two +2 are currently expected for the CR process to go ahead. > JavaScript is fine. We all know how to write and how to review JavaScript code, and there doesn't > have to be much of it — Horizon is not the kind of tool that has to bee all shiny and animated. It's a tool > for getting work done. This isn't about being shiny and animated though. This is about basic functionality, usability and performance. I did some stress testing with large datasets [1], and the non-angularized versions of basic functionality like sorting, paging and filtering in table panels are either non-existent, not working at all or basically unusable (for a multitude of reasons). Removing them would force reimplementations in pure JQuery and I strongly suspect that those implementations would be much messier and cost a considerable amount of time and effort. >AngularJS is a problem, because you can't tell what the code does just by looking >at the code, and so you can neither review nor fix it. This is clearly a matter of opinion. I find Angular code easier to deal with than JQuery spaghetti. > There has been a lot of work put into mixing Horizon with Angular, but I disagree that it has solved problems, > and in fact it has introduced a lot of regressions. I'm not saying the NG implementations are perfect, but they mostly work where it counts and can be improved where they do not. > Just to take a simple example, the translations are currently broken for en.AU and en.GB languages, > and date display is not localized. And nobody cares. 
It's difficult for me to judge which features are broken in NG and how much interest there is in having them fixed, but they can be fixed once reported. What I can say for sure is that I keep hitting this issue because of actual feature requests from actual users. See [2] for an example. I'm not sure implementing that in pure JQuery would be nearly as simple as it was in Angular. > We had automated tests before Angular. There weren't many of them, because we also didn't have much > JavaScript code. If I remember correctly, those tests were ripped out during the Angularization. Fair enough. > Arguably, improvements are, on average, impossible to add to Angular I disagree. Yes, pure JQuery is probably easier when dealing with very simple things, but once feature complexity increases beyond the basics, you'll very quickly find the features offered by the framework relevant - things like MVC decoupling, browser-side templating, reusable components, functionality injection etc. Again, see [2] for an example. On a side note, some horizon plugins (such as octavia-dashboard) use Angular extensively. Removing it would at the very least break them. Whatever the community decision is though, I feel like it needs to be made so that related issues can be addressed with a reasonable expectation of being reviewed and merged. [1] Networks, Roles and Images in the low thousands [2] https://review.opendev.org/#/c/618173/ pá 6. 9. 2019 v 18:44 odesílatel Dale Bewley napsal: > As an uninformed user I would just like to say Horizon is seen _as_ > Openstack to new users and I appreciate ever effort to improve it. > > Without discounting past work, the Horizon experience leaves much to be > desired and it colors the perspective on the entire platform. > > On Fri, Sep 6, 2019 at 05:01 Radomir Dopieralski > wrote: > >> >> >> On Fri, Sep 6, 2019 at 11:33 AM Marek Lyčka >> wrote: >> >>> Hi, >>> >>> > we need people familiar with Angular and Horizon's ways of using >>> Angular (which seem to be very >>> > non-standard) that would be willing to write and review code. >>> Unfortunately the people who originally >>> > introduced Angular in Horizon and designed how it is used are no >>> longer interested in contributing, >>> > and there don't seem to be any new people able to handle this. >>> >>> I've been working with Horizon's Angular for quite some time and don't >>> mind keeping at it, but >>> it's useless unless I can get my code merged, hence my original message. >>> >>> As far as attracting new developers goes, I think that removing some >>> barriers to entry couldn't hurt - >>> seeing commits simply lost to time being one of them. I can see it as >>> being fairly demoralizing. >>> >> >> We can't review your patches, because we don't understand them. For the >> patches to be merged, we >> need more than one person, so that they can review each other's patches. >> >> >>> > Personally, I think that a better long-time strategy would be to >>> remove all >>> > Angular-based views from Horizon, and focus on maintaining one >>> language and one set of tools. >>> >>> Removing AngularJS wouldn't remove JavaScript from horizon. We'd still >>> be left with a home-brewish >>> framework (which is buggy as is). I don't think removing js completely >>> is realistic either: we'd lose >>> functionality and worsen user experience. 
I think that keeping Angular >>> is the better alternative: >>> >>> 1) A lot of work has already been put into Angularization, solving many >>> problems >>> 2) Unlike legacy js, Angular code is covered by automated tests >>> 3) Arguably, improvments are, on average, easier to add to Angular than >>> pure js implementations >>> >>> Whatever reservations there may be about the current implementation can >>> be identified and addressed, but >>> all in all, I think removing it at this point would be counterproductive. >>> >> >> JavaScript is fine. We all know how to write and how to review JavaScript >> code, and there doesn't >> have to be much of it — Horizon is not the kind of tool that has to bee >> all shiny and animated. It's a tool >> for getting work done. AngularJS is a problem, because you can't tell >> what the code does just by looking >> at the code, and so you can neither review nor fix it. >> >> There has been a lot of work put into mixing Horizon with Angular, but I >> disagree that it has solved problems, >> and in fact it has introduced a lot of regressions. Just to take a simple >> example, the translations are currently >> broken for en.AU and en.GB languages, and date display is not localized. >> And nobody cares. >> >> We had automated tests before Angular. There weren't many of them, >> because we also didn't have much JavaScript code. >> If I remember correctly, those tests were ripped out during the >> Angularization. >> >> Arguably, improvements are, on average, impossible to add to Angular, >> because the code makes no sense on its own. >> >> >> -- Marek Lyčka Linux Developer Ultimum Technologies s.r.o. Na Poříčí 1047/26, 11000 Praha 1 Czech Republic marek.lycka at ultimum.io *https://ultimum.io * -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeremyfreudberg at gmail.com Wed Sep 11 15:28:07 2019 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Wed, 11 Sep 2019 11:28:07 -0400 Subject: [sahara] Cancelling Sahara meeting September 12 Message-ID: Hi all, There will be no Sahara meeting 2019-09-12, the reason being that Luigi is not around and there is not much to discuss anyway. Holler if you need anything. Thanks, Jeremy From colleen at gazlene.net Wed Sep 11 16:06:13 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Wed, 11 Sep 2019 09:06:13 -0700 Subject: [rpm-packaging] Proposing new core member In-Reply-To: <6b176899-15c3-b0c6-2c0b-8cbab05e844c@suse.com> References: <6b176899-15c3-b0c6-2c0b-8cbab05e844c@suse.com> Message-ID: On Wed, Sep 11, 2019, at 05:48, Thomas Bechtold wrote: > Hi, > > I would like to nominate Ralf Haferkamp for rpm-packaging core. > Ralf has be active in doing very valuable reviews since some time so I > feel he would be a great addition to the team. > > Please give your +1/-1 in the next days. > > Cheers, > > Tom > > +1 will be great to have Ralf on board. Colleen From nicolas.bock at suse.com Wed Sep 11 16:18:04 2019 From: nicolas.bock at suse.com (Nicolas Bock) Date: Wed, 11 Sep 2019 10:18:04 -0600 Subject: [rpm-packaging] Proposing new core member In-Reply-To: <62caa0b06e184db0a92abf094aa43220@DM5PR1801MB2012.namprd18.prod.outlook.com> References: <62caa0b06e184db0a92abf094aa43220@DM5PR1801MB2012.namprd18.prod.outlook.com> Message-ID: <99f3435e-c2a5-dd3c-1d52-cda44ed178c6@suse.com> On 9/11/19 6:48 AM, Thomas Bechtold wrote: > Hi, > > I would like to nominate Ralf Haferkamp for rpm-packaging core. 
> Ralf has be active in doing very valuable reviews since some time so I > feel he would be a great addition to the team. > > Please give your +1/-1 in the next days. +1 > Cheers, > > Tom > > From gr at ham.ie Wed Sep 11 16:40:34 2019 From: gr at ham.ie (Graham Hayes) Date: Wed, 11 Sep 2019 17:40:34 +0100 Subject: [tc] TC Chair Nominations - closing soon Message-ID: Hello all new and returning TC members! Welcome (back) to the TC. We now have to do some of the standard post election paperwork / processes. One of the first things we need to do is elect a chair for this cycle! We currently have 2 nominations, and nominations will remain open until 23:59 UTC tomorrow 12-Sept-2019. At that point, we will start a CIVS election for the chair, and select them. To nominate yourself, just add a review to the governance repo like so: [1][2]. If you are interested in the chair, please do consider running - It is open to everyone, new and less new on the TC, and the job has been documented by previous chairs and TC members [3] If you have any questions - reply to this mail, ask in the #openstack-tc IRC room, reply to me and I will see who I can put you in contact with, who may know, or ping mnaser, who is the current chair. I propose the following timeline: Nominations Close: 2019-09-12 @ 23:59 UTC. Election created: Morning (EU timezone) of 13 Sept Election finish: Evening (EU timezone) of 18 Sept or when all TC members have voted. Thanks all, and please reach out with any questions! - Graham 1 - https://review.opendev.org/#/c/681285/2/reference/members.yaml 2 - https://review.opendev.org/#/c/680414/2/reference/members.yaml 3 - https://opendev.org/openstack/governance/src/branch/master/CHAIR.rst -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From smooney at redhat.com Wed Sep 11 16:52:31 2019 From: smooney at redhat.com (Sean Mooney) Date: Wed, 11 Sep 2019 17:52:31 +0100 Subject: How to boot 2 VMs in openstack on same subnet? In-Reply-To: <20190911145003.GB31877@localhost.localdomain> References: <20190911145003.GB31877@localhost.localdomain> Message-ID: <7b6f522e14f9aaa914d31f9ac7d2d3f2de555500.camel@redhat.com> On Wed, 2019-09-11 at 10:50 -0400, Paul Belanger wrote: > Greetings, > > We use a few openstack public clouds for testing in the ansible project, > specifically using nodepool. We have a use case, where we need to boot 2 > VMs on the same public subnet for testing reasons. However, the majority > of the clouds we are using, do not have a single subnet for their entire > public IP range. Up until now, we boot the 2 VMs, then hope they land > on the same subnet, but this isn't really efficient. You can just specify the subnet as part of the boot request, so if you know the subnet ahead of time it's pretty trivial to do this. I'm not sure if nodepool can do that, but it should not be hard to since nova supports it. At the nodepool level you can specify the network at the pool or label level: https://zuul-ci.org/docs/nodepool/configuration.html#attr-providers.[openstack].pools.networks https://zuul-ci.org/docs/nodepool/configuration.html#attr-providers.[openstack].pools.labels.networks That could be extended to the subnet in theory. > > Basically looking to see if there is a better way to handle this either > via openstacksdk or some other configuration we need cloud side. 
> > Also note, we'd like to do this with public provider network (which we > don't have control over) and avoid using private network for now. > > Paul > From smooney at redhat.com Wed Sep 11 16:56:15 2019 From: smooney at redhat.com (Sean Mooney) Date: Wed, 11 Sep 2019 17:56:15 +0100 Subject: Re: How to boot 2 VMs in openstack on same subnet? In-Reply-To: <7b6f522e14f9aaa914d31f9ac7d2d3f2de555500.camel@redhat.com> References: <20190911145003.GB31877@localhost.localdomain> <7b6f522e14f9aaa914d31f9ac7d2d3f2de555500.camel@redhat.com> Message-ID: On Wed, 2019-09-11 at 17:52 +0100, Sean Mooney wrote: > On Wed, 2019-09-11 at 10:50 -0400, Paul Belanger wrote: > > Greetings, > > > > We use a few openstack public clouds for testing in the ansible project, > > specifically using nodepool. We have a use case, where we need to boot 2 > > VMs on the same public subnet for testing reasons. However, the majority > > of the clouds we are using, do not have a single subnet for their entire > > public IP range. Up until now, we boot the 2 VMs, then hope they land > > on the same subnet, but this isn't really efficient. > > You can just specify the subnet as part of the boot request, > so if you know the subnet ahead of time it's pretty trivial to do this. > I'm not sure if nodepool can do that, but it should not be hard to > since nova supports it. > > At the nodepool level you can specify the network at the pool or label level: > https://zuul-ci.org/docs/nodepool/configuration.html#attr-providers.[openstack].pools.networks > https://zuul-ci.org/docs/nodepool/configuration.html#attr-providers.[openstack].pools.labels.networks > That could be extended to the subnet in theory. Actually I am wrong: we can only specify the network. We can select a subnet if we pass fixed IPs on that network, but we can't pass the subnet UUID. > > > > > Basically looking to see if there is a better way to handle this either > > via openstacksdk or some other configuration we need cloud side. > > > > Also note, we'd like to do this with public provider network (which we > > don't have control over) and avoid using private network for now. > > > > Paul > > > > From mriedemos at gmail.com Wed Sep 11 17:26:56 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 11 Sep 2019 12:26:56 -0500 Subject: Re: How to boot 2 VMs in openstack on same subnet? In-Reply-To: References: <20190911145003.GB31877@localhost.localdomain> <7b6f522e14f9aaa914d31f9ac7d2d3f2de555500.camel@redhat.com> Message-ID: On 9/11/2019 11:56 AM, Sean Mooney wrote: > we can only specify the network Or ports, so pre-create two ports on the same subnet and provide them to nova when creating the server. -- Thanks, Matt From ekcs.openstack at gmail.com Wed Sep 11 17:52:26 2019 From: ekcs.openstack at gmail.com (Eric K) Date: Wed, 11 Sep 2019 10:52:26 -0700 Subject: [self-healing][autohealing][PTG][Forum] brainstorming etherpads for Shanghai Message-ID: Hello healers, The brainstorming etherpads for Self-healing Forum and PTG sessions are up: https://etherpad.openstack.org/p/SHA-self-healing-SIG Please add your topics there. Looking forward to productive discussions in Shanghai! From fungi at yuggoth.org Wed Sep 11 18:57:16 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 11 Sep 2019 18:57:16 +0000 Subject: [infra] Re: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode?
In-Reply-To: References: <20190814192440.GA3048@sm-workstation> <20190903190337.GA14785@sm-workstation> <20190903192248.b2mqozqobsxqgj7e@yuggoth.org> Message-ID: <20190911185716.xhw2j2yn2pcjltpy@yuggoth.org> On 2019-09-09 12:53:26 +0530 (+0530), Yatin Karel wrote: [...] > Can someone from Release or Infra Team can do the needful of > removing stable/ocata and stable/pike branch for TripleO projects > being EOLed for pike/ocata in > https://review.opendev.org/#/c/677478/ and > https://review.opendev.org/#/c/678154/. I've attempted to extract the lists of projects from the changes you linked. I believe you're asking to have the stable/ocata branch deleted from these projects: openstack/instack-undercloud openstack/instack openstack/os-apply-config openstack/os-cloud-config openstack/os-collect-config openstack/os-net-config openstack/os-refresh-config openstack/puppet-tripleo openstack/python-tripleoclient openstack/tripleo-common openstack/tripleo-heat-templates openstack/tripleo-image-elements openstack/tripleo-puppet-elements openstack/tripleo-ui openstack/tripleo-validations And the stable/pike branch deleted from these projects: openstack/instack-undercloud openstack/instack openstack/os-apply-config openstack/os-collect-config openstack/os-net-config openstack/os-refresh-config openstack/paunch openstack/puppet-tripleo openstack/python-tripleoclient openstack/tripleo-common openstack/tripleo-heat-templates openstack/tripleo-image-elements openstack/tripleo-puppet-elements openstack/tripleo-ui openstack/tripleo-validations Can you confirm? Also, have you checked for and abandoned all open changes on the affected branches? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From kennelson11 at gmail.com Thu Sep 12 00:52:35 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 11 Sep 2019 17:52:35 -0700 Subject: [all][PTL] Call for Cycle Highlights for Train In-Reply-To: References: Message-ID: Reminder that cycle highlights are due the end of this week! -Kendall (diablo_rojo) On Thu, 5 Sep 2019, 11:48 am Kendall Nelson, wrote: > Hello Everyone! > > As you may or may not have read last week in the release update from Sean, > its time to call out 'cycle-highlights' in your deliverables! > > As PTLs, you probably get many pings towards the end of every release > cycle by various parties (marketing, management, journalists, etc) asking > for highlights of what is new and what significant changes are coming in > the new release. By putting them all in the same place it makes them easy > to reference because they get compiled into a pretty website like this from > Rocky[1] or this one for Stein[2]. > > We don't need a fully fledged marketing message, just a few highlights > (3-4 ideally), from each project team. > > *The deadline for cycle highlights is the end of the R-5 week [3] on Sept > 13th.* > > How To Reminder: > ------------------------- > > Simply add them to the deliverables/train/$PROJECT.yaml in the > openstack/releases repo similar to this: > > cycle-highlights: > - Introduced new service to use unused host to mine bitcoin. > > The formatting options for this tag are the same as what you are probably > used to with Reno release notes. 
> > Also, you can check on the formatting of the output by either running > locally: > > tox -e docs > > And then checking the resulting doc/build/html/train/highlights.html file > or the output of the build-openstack-sphinx-docs job under html/train/ > highlights.html. > > Thanks :) > -Kendall Nelson (diablo_rojo) > > [1] https://releases.openstack.org/rocky/highlights.html > [2] https://releases.openstack.org/stein/highlights.html > [3] https://releases.openstack.org/train/schedule.html > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From masayuki.igawa at gmail.com Thu Sep 12 02:01:01 2019 From: masayuki.igawa at gmail.com (Masayuki Igawa) Date: Thu, 12 Sep 2019 11:01:01 +0900 Subject: [qa] forum sessions brainstorming Message-ID: <84f255e0-7dd7-48c4-811d-635b77e7d4f7@www.fastmail.com> Hi All, I have created the below etherpad[0] to collect the forum ideas related to QA for Shanghai Summit. Please write up your ideas with your IRC name on the etherpad. [0] https://etherpad.openstack.org/p/PVG-forum-qa-brainstorming -- Masayuki From premdeep.xion at gmail.com Thu Sep 12 07:57:00 2019 From: premdeep.xion at gmail.com (Premdeep S) Date: Thu, 12 Sep 2019 13:27:00 +0530 Subject: [ceph][nova][DR] Openstack DR Setup In-Reply-To: References: Message-ID: Hi Team, Can anyone help on this please? On Mon, Sep 9, 2019, 11:48 PM Premdeep S wrote: > Hi Team, > > We are looking to build a DR infrastructure. Our existing DC setup > consists of multiple node Controller, Compute and Ceph nodes as the storage > backend. We are using ubuntu 18.04 and Rocky version. > > Can someone please share any document or guide us on how we can build a DR > infra for the existing DC? > > 1. Do we need to have the storage shared across (Ceph)? > 2. What are the dependencies? > 3. Is there a guide for the same > > Thanks > Prem > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rony.khan at brilliant.com.bd Thu Sep 12 09:35:05 2019 From: rony.khan at brilliant.com.bd (Md. Farhad Hasan Khan) Date: Thu, 12 Sep 2019 15:35:05 +0600 Subject: Rabbitmq error report Message-ID: <1e4601d5694d$5c674e10$1535ea30$@brilliant.com.bd> Hi, I'm getting this error continuously in rabbitmq log. Though all operation going normal, but slow. Sometimes taking long time to perform operation. Please help me to solve this. rabbitmq version: rabbitmq_server-3.6.16 =ERROR REPORT==== 12-Sep-2019::13:04:55 === Channel error on connection <0.8105.3> (192.168.21.56:60116 -> 192.168.21.11:5672, vhost: '/', user: 'openstack'), channel 1: operation queue.declare caused a channel exception not_found: failed to perform operation on queue 'versioned_notifications.info' in vhost '/' due to timeout =WARNING REPORT==== 12-Sep-2019::13:04:55 === closing AMQP connection <0.8105.3> (192.168.21.56:60116 -> 192.168.21.11:5672 - nova-compute:3493037:e6757c9b-1cdc-43cd-bfd3-dcb58aa4974a, vhost: '/', user: 'openstack'): client unexpectedly closed TCP connection Thanks & B'Rgds, Rony -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.settle at outlook.com Thu Sep 12 10:03:39 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Thu, 12 Sep 2019 10:03:39 +0000 Subject: [tc] TC Chair Nominations - closing soon In-Reply-To: References: Message-ID: On Wed, 2019-09-11 at 17:40 +0100, Graham Hayes wrote: > Hello all new and returning TC members! Hooray! > > Welcome (back) to the TC. We now have to do some of the standard post > election paperwork / processes. 
> > One of the first things we need to do is elect a chair for this > cycle! > > We currently have 2 nominations, and nominations will remain open > until > 23:59 UTC tomorrow 12-Sept-2019. At that point, we will start a CIVS > election for the chair, and select them. > > To nominate yourself, just add a review to the governance repo like > so : [1][2]. > > If you are interested in the chair, please do consider running - > It is open to everyone, new and less new on the TC, and the > job has been documented by previous chairs and TC members [3] > > If you have any questions - reply to this mail, ask in the > #openstack-tc > IRC room, reply to me and I will see who I can put you in contact > with, who may know, or ping mnaser, who is the current chair. Thanks for setting this up. As current vice-chair, if anyone's interested in that role - let me know and we can chat about what this entails. > > I propose the following timeline: > > Nominations Close: 2019-09-12 @ 23:59 UTC. > Election created: Morning (EU timezone) of 13 Sept > Election finish: Evening (EU timezone) of 18 Sept > or when all TC members have voted. Thanks mugsie! > > Thanks all, and please reach out with any questions! > > - Graham > > 1 - https://review.opendev.org/#/c/681285/2/reference/members.yaml > 2 - https://review.opendev.org/#/c/680414/2/reference/members.yaml > 3 - https://opendev.org/openstack/governance/src/branch/master/CHAIR. > rst > -- Alexandra Settle IRC: asettle From thierry at openstack.org Thu Sep 12 10:13:55 2019 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 12 Sep 2019 12:13:55 +0200 Subject: [release][freezer][karbor][kuryr][magnum][manila][monasca][neutron][senlin][tacker][winstackers] Missing releases for some deliverables Message-ID: Hi everyone, Quick reminder that we'll need a release very soon for a number of deliverables following a cycle-with-intermediary release model but which have not done *any* release yet in the Train cycle: - freezer and freezer-web-ui - karbor and karbor-dashboard - kuryr-kubernetes - magnum-ui - manila-ui - monasca-agent, monasca-api, monasca-ceilometer, monasca-events-api, monasca-log-api, monasca-notification, monasca-persister and monasca-transform - networking-hyperv - neutron-fwaas-dashboard and neutron-vpnaas-dashboard - senlin-dashboard - tacker-horizon Those should be released ASAP, and in all cases before September 26th, so that we have a release to include in the final Train release. Thanks in advance, -- Thierry Carrez (ttx) From lpetrut at cloudbasesolutions.com Thu Sep 12 10:43:29 2019 From: lpetrut at cloudbasesolutions.com (Lucian Petrut) Date: Thu, 12 Sep 2019 10:43:29 +0000 Subject: [release][freezer][karbor][kuryr][magnum][manila][monasca][neutron][senlin][tacker][winstackers] Missing releases for some deliverables In-Reply-To: References: Message-ID: <64050966FCE0B948BCE2B28DB6E0B7D557AAFFD5@CBSEX1.cloudbase.local> Hi, Thanks for the heads up! 
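For anyone not familiar with the process: requesting a release just means proposing a patch to the openstack/releases repository that adds a new version entry to the project's deliverable file. A rough, trimmed sketch of what such an entry looks like is below; the version number and commit hash here are placeholders rather than the real values, and the actual file carries a few more fields:

launchpad: networking-hyperv
team: winstackers
release-model: cycle-with-intermediary
releases:
  - version: 7.0.0                          # placeholder version
    projects:
      - repo: openstack/networking-hyperv
        hash: <commit sha to be released>   # placeholder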
I’ve just requested a networking-hyperv release: https://review.opendev.org/#/c/681707/ Lucian Petrut From: Thierry Carrez Sent: Thursday, September 12, 2019 1:15 PM To: openstack-discuss at lists.openstack.org Subject: [release][freezer][karbor][kuryr][magnum][manila][monasca][neutron][senlin][tacker][winstackers] Missing releases for some deliverables Hi everyone, Quick reminder that we'll need a release very soon for a number of deliverables following a cycle-with-intermediary release model but which have not done *any* release yet in the Train cycle: - freezer and freezer-web-ui - karbor and karbor-dashboard - kuryr-kubernetes - magnum-ui - manila-ui - monasca-agent, monasca-api, monasca-ceilometer, monasca-events-api, monasca-log-api, monasca-notification, monasca-persister and monasca-transform - networking-hyperv - neutron-fwaas-dashboard and neutron-vpnaas-dashboard - senlin-dashboard - tacker-horizon Those should be released ASAP, and in all cases before September 26th, so that we have a release to include in the final Train release. Thanks in advance, -- Thierry Carrez (ttx) -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Thu Sep 12 13:36:21 2019 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 12 Sep 2019 08:36:21 -0500 Subject: Rabbitmq error report In-Reply-To: <1e4601d5694d$5c674e10$1535ea30$@brilliant.com.bd> References: <1e4601d5694d$5c674e10$1535ea30$@brilliant.com.bd> Message-ID: <2d2076f9-0eb1-98e8-f9e0-1067b4472f23@nemebean.com> Have you checked that your notification queues aren't filling up? It can cause performance problems in Rabbit if nothing is clearing out those queues. On 9/12/19 4:35 AM, Md. Farhad Hasan Khan wrote: > Hi, > > I’m getting this error continuously in rabbitmq log. Though all > operation going normal, but slow. Sometimes taking long time to perform > operation. Please help me to solve this. > > rabbitmq version: rabbitmq_server-3.6.16 > > =ERROR REPORT==== 12-Sep-2019::13:04:55 === > > Channel error on connection <0.8105.3> (192.168.21.56:60116 -> > 192.168.21.11:5672, vhost: '/', user: 'openstack'), channel 1: > > operation queue.declare caused a channel exception not_found: failed to > perform operation on queue 'versioned_notifications.info' in vhost '/' > due to timeout > > =WARNING REPORT==== 12-Sep-2019::13:04:55 === > > closing AMQP connection <0.8105.3> (192.168.21.56:60116 -> > 192.168.21.11:5672 - > nova-compute:3493037:e6757c9b-1cdc-43cd-bfd3-dcb58aa4974a, vhost: '/', > user: 'openstack'): > > client unexpectedly closed TCP connection > > Thanks & B’Rgds, > > Rony > From francois.scheurer at everyware.ch Thu Sep 12 14:41:21 2019 From: francois.scheurer at everyware.ch (Francois Scheurer) Date: Thu, 12 Sep 2019 16:41:21 +0200 Subject: [mistral] cron triggers execution fails on identity:validate_token with non-admin users In-Reply-To: <241f5d5e-8b21-9081-c1d1-66e908047335@everyware.ch> References: <241f5d5e-8b21-9081-c1d1-66e908047335@everyware.ch> Message-ID: Hello Apparently other people have the same issue and cannot use cron triggers anymore: https://bugs.launchpad.net/mistral/+bug/1843175 We also tried with following patch installed but the same error persists: https://opendev.org/openstack/mistral/commit/6102c5251e29c1efe73c92935a051feff0f649c7?style=split Cheers Francois On 9/9/19 6:23 PM, Francois Scheurer wrote: > > Dear All > > > We are using Mistral 7.0.1.1 with  Openstack Rocky. 
(with federated users) > > We can create and execute a workflow via horizon, but cron triggers > always fail with this error: > >     { >         "result": >             "The action raised an exception [ > action_ex_id=ef878c48-d0ad-4564-9b7e-a06f07a70ded, >                     action_cls=' 'mistral.actions.action_factory.NovaAction'>', >                     attributes='{u'client_method_name': > u'servers.find'}', >                     params='{ >                         u'action_region': u'ch-zh1', >                         u'name': u'42724489-1912-44d1-9a59-6c7a4bebebfa' >                     }' >                 ] >                 \n NovaAction.servers.find failed: You are not > authorized to perform the requested action: identity:validate_token. > (HTTP 403) (Request-ID: req-ec1aea36-c198-4307-bf01-58aca74fad33) >             " >     } > > Adding the role *admin* or *service* to the user logged in horizon is > "fixing" the issue, I mean that the cron trigger then works as expected, > > but it would be obviously a bad idea to do this for all normal users ;-) > > So my question: is it a config problem on our side ? is it a known > bug? or is it a feature in the sense that cron triggers are for normal > users? > > > After digging in the keystone debug logs (see at the end below), I > found that RBAC check identity:validate_token an deny the authorization. > > But according to the policy.json (in keystone and in horizon), > rule:owner should be enough to grant it...: > >             "identity:validate_token": "rule:service_admin_or_owner", >                 "service_admin_or_owner": "rule:service_or_admin or > rule:owner", >                     "service_or_admin": "rule:admin_required or > rule:service_role", >                         "service_role": "role:service", >                     "owner": "user_id:%(user_id)s or > user_id:%(target.token.user_id)s", > > Thank you in advance for your help. 
> > > Best Regards > > Francois Scheurer > > > > > Keystone logs: > >         2019-09-05 09:38:00.902 29 DEBUG > keystone.policy.backends.rules > [req-1a276b9d-8276-4ec3-b516-f51f86cd1df6 fsc fscproject - testdom > testdom] >             enforce identity:validate_token: >             { >                'service_project_id':None, >                'service_user_id':None, >                'service_user_domain_id':None, >                'service_project_domain_id':None, >                'trustor_id':None, >                'user_domain_id':u'testdom', >                'domain_id':None, >                'trust_id':u'mytrustid', >                'project_domain_id':u'testdom', >                'service_roles':[], >                'group_ids':[], >                'user_id':u'fsc', >                'roles':[ >                   u'_member_', >                   u'creator', >                   u'reader', >                   u'heat_stack_owner', >                   u'member', >                   u'load-balancer_member'], >                'system_scope':None, >                'trustee_id':None, >                'domain_name':None, >                'is_admin_project':True, >                'token': audit_chain_id=[u'0LAsW_0dQMWXh2cTZTLcWA']) at 0x7f208f4a3bd0>, >                'project_id':u'fscproject' >             } enforce > /var/lib/kolla/venv/local/lib/python2.7/site-packages/keystone/policy/backends/rules.py:33 >         2019-09-05 09:38:00.920 29 WARNING keystone.common.wsgi > [req-1a276b9d-8276-4ec3-b516-f51f86cd1df6 fsc fscproject - testdom > testdom] >             You are not authorized to perform the requested action: > identity:validate_token.: *ForbiddenAction: You are not authorized to > perform the requested action: identity:validate_token.* > > > -- > > > EveryWare AG > François Scheurer > Senior Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail:francois.scheurer at everyware.ch > web:http://www.everyware.ch -- EveryWare AG François Scheurer Senior Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: francois.scheurer at everyware.ch web: http://www.everyware.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From rico.lin.guanyu at gmail.com Thu Sep 12 15:39:27 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Thu, 12 Sep 2019 23:39:27 +0800 Subject: [tc][uc][meta-sig] How to help SIGs to have a better life? (Needs feedback for SIGs guideline) Message-ID: Hi all, The question I would like to ask is, what else we can do here to help SIGs to have a better life? What can we do from here? And is there any feedback regarding on experience of participate a SIG or chairing one? For some work in progress (or completed) actions: I'm working on a guideline for SIGs ( https://etherpad.openstack.org/p/SIGs-guideline ) because I believe it might provide some value for SIGs, especially new-formed SIGs. Please kindly provide your feedback on it. Will send a patch to update current document under governance-sigs once we got good enough confident on it. 
On the other hand, the reason I start this work is because we're thinking `How to help SIGs to have a better life?` There're some actions I can think of and the most easier answers are to get SIGs status, update guidelines and explain why we need SIG in general. So actions: I'm working on SIG guideline ( https://etherpad.openstack.org/p/SIGs-guideline ) and document `Comparison of Official Group Structures` ( https://review.opendev.org/#/c/668093/ ). Also, reach out to SIGs earlier this year to collect help most needed information for SIGs and WGs ( https://etherpad.openstack.org/p/DEN-help-most-needed-for-sigs-and-wgs ) Also, I know Belmiro Moreira (UC member) has reached out to SIGs too, so there are some up to date information. I will try to put all the above information together for share. And now, back to the question, what can we do from here? Or is there any other feedback? Before I start to disturb everyone with crazy ideas in my mind, would like to hear feedback from all of you. Finally, feedback on SIG guideline is desired. Thanks! -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Thu Sep 12 16:23:40 2019 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 12 Sep 2019 11:23:40 -0500 Subject: [oslo][release][requirements] FFE request for oslo.policy, privsep, and service Message-ID: <77fe5106-10ae-d735-32e5-42e01677e8ce@nemebean.com> Hi, As discussed in the release meeting today, I'm requesting an FFE for oslo.policy, oslo.privsep, and oslo.service. The latter two are only release notes for things that landed late in the cycle, and oslo.policy is a small bugfix in sample policy generation. These should all be backportable if necessary, but for convenience we'd like to get them out now. Thanks. -Ben From mnaser at vexxhost.com Thu Sep 12 17:04:09 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 12 Sep 2019 13:04:09 -0400 Subject: [openstack-ansible] office hours update Message-ID: Hi everyone, Here’s the update of what happened in this week’s OpenStack Ansible Office Hours. We finished installing the Python3 for ansible-runtime virtual environment. We discussed how Centos 7.7 wasn’t out yet but we want to move to Python 3 so we’ll start using the CR repository. The placement extract and upgrade jobs are in progress. We clarified the whole definition around freezing a milestone and features and what it implies. The roles for Train milestones were frozen. We’ll wait for Python3, placement and bind-to-mgmt before proposing the milestone. Galera is still having issues and we’re having trouble understanding and fixing them but it has something to do with listening to localhost. We tested Ansible 2.9 and are trying to figure out if we want to use it for Train. We’re having issues with bumping up os-vif for Stein because they seem to be only for testing according to OpenStack Requirements. We talked about maybe using the in-repository local constraints or creating a tag, but we don’t think they can be bumped. It seems later that it was clarified that we can do that, and os-vif made a new release today so we can check it out. Finally, we discussed a journal logging error on Stein. There’s a case of python-systemd missing for logging. Thanks! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. 
http://vexxhost.com From mthode at mthode.org Thu Sep 12 17:17:59 2019 From: mthode at mthode.org (Matthew Thode) Date: Thu, 12 Sep 2019 12:17:59 -0500 Subject: [oslo][release][requirements] FFE request for oslo.policy, privsep, and service In-Reply-To: <77fe5106-10ae-d735-32e5-42e01677e8ce@nemebean.com> References: <77fe5106-10ae-d735-32e5-42e01677e8ce@nemebean.com> Message-ID: <20190912171759.dxijyymox5vxnrbv@mthode.org> On 19-09-12 11:23:40, Ben Nemec wrote: > Hi, > > As discussed in the release meeting today, I'm requesting an FFE for > oslo.policy, oslo.privsep, and oslo.service. The latter two are only release > notes for things that landed late in the cycle, and oslo.policy is a small > bugfix in sample policy generation. > > These should all be backportable if necessary, but for convenience we'd like > to get them out now. > > Thanks. > > -Ben > Looks good to me, thanks for the email -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From James.Benson at utsa.edu Thu Sep 12 16:11:12 2019 From: James.Benson at utsa.edu (James Benson) Date: Thu, 12 Sep 2019 16:11:12 +0000 Subject: [nova] Deprecating the XenAPI driver In-Reply-To: <> Message-ID: Matt, I am currently working on trying to deploy Xen OpenStack. Currently I have been trying to get it working on Rocky with Xen6.0 and will code fix for Stein/Train as well if possible. Trying to get a working solution with Rocky then will patch up the line. I have reached out to the last person who submitted a bug fix in Xen (with no response), but I am actively trying to get this working. Unfortunately it is a one-man job, so it is taking a lot of time. Currently facing issues with Nova and Neutron. James -------------- next part -------------- An HTML attachment was scrubbed...
URL: From tim at swiftstack.com Thu Sep 12 19:23:55 2019 From: tim at swiftstack.com (Tim Burke) Date: Thu, 12 Sep 2019 12:23:55 -0700 Subject: [Openstack-stable-maint] Stable check of openstack/swift for ref refs/heads/stable/pike failed In-Reply-To: References: Message-ID: <6a93a7addb49268be191947deb1f06a239806af7.camel@swiftstack.com> Wrote up https://bugs.launchpad.net/swift/+bug/1843816 to describe the issue; tl;dr is that python's http.client/httplib got more picky about sending only RFC-compliant HTTP requests, but Swift's proxy was happy to accept non-compliant query strings and try to forward them on to backend servers. Fix for master is up at https://review.opendev.org/#/c/681875/, and a backport for pike is up at https://review.opendev.org/#/c/681879/. Once I see passing checks there, I'll propose backports for everyone in between, plus ocata. Tim On Wed, 2019-09-11 at 06:43 +0000, A mailing list for the OpenStack Stable Branch test reports. wrote: > Build failed. > > - build-openstack-sphinx-docs > https://zuul.opendev.org/t/openstack/build/d7030406f5224d78baebbe5dbe80b4d5 > : SUCCESS in 6m 39s > - openstack-tox-py27 > https://zuul.opendev.org/t/openstack/build/a4ee29fb61684505995fda21718fcd89 > : FAILURE in 7m 15s > > _______________________________________________ > Openstack-stable-maint mailing list > Openstack-stable-maint at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-stable-maint From lucioseki at gmail.com Thu Sep 12 21:49:26 2019 From: lucioseki at gmail.com (Lucio Seki) Date: Thu, 12 Sep 2019 18:49:26 -0300 Subject: [neutron] DevStack with IPv6 Message-ID: Hi folks, I'm having troubles to ping6 a VM running over DevStack from its hypervisor. Could you please help me troubleshooting it? I deployed DevStack with NEUTRON_CREATE_INITIAL_NETWORKS=False, and manually created the networks, subnets and router. 
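For reference, the networks were created roughly along these lines (an illustrative sketch only; the resource names and exact options here are my shorthand, not a verbatim record of the commands run):

$ openstack network create --external public
$ openstack subnet create --network public --ip-version 6 --subnet-range fd12:67:1::/64 public-v6-subnet
$ openstack network create private1
$ openstack subnet create --network private1 --ip-version 6 --ipv6-ra-mode slaac --ipv6-address-mode slaac --subnet-range fd12:67:1:1::/64 private1-v6-subnet
$ openstack router create router1
$ openstack router set --external-gateway public router1
$ openstack router add subnet router1 private1-v6-subnet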
Following is my router: $ openstack router show router1 -c external_gateway_info -c interfaces_info +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | external_gateway_info | {"network_id": "b87048ed-1be9-4f31-8d7e-fe74921aeec4", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "28a00bc3-b30b-456f-b26a-44b50d37183f", "ip_address": "10.2.0.199"}, {"subnet_id": "a9729beb-b297-4fec-8ec3-7703f7f6f4bc", "ip_address": "fd12:67:1::3c"}]} | | interfaces_info | [{"subnet_id": "081e8508-4ceb-4aaf-bf91-36a1e22a768c", "ip_address": "fd12:67:1:1::1", "port_id": "75391abd-8ac8-41f8-acf8-3dfaf2a6b08f"}] | +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ I'm trying to ping6 the following VM: $ openstack server list +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ | 938854d0-80e9-45b2-bc29-8fe7651ffa93 | manila1 | ACTIVE | private1=fd12:67:1:1:f816:3eff:fe0e:17c3 | manila | manila | +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ I intend to reach it via br-ex interface of the hypervisor: $ ip a show dev br-ex 9: br-ex: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/ether 0e:82:a1:ba:77:4c brd ff:ff:ff:ff:ff:ff inet6 fd12:67:1::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::c82:a1ff:feba:774c/64 scope link valid_lft forever preferred_lft forever The hypervisor has the following routes: $ ip -6 route fd12:67:1:1::/64 via fd12:67:1::3c dev br-ex metric 1024 pref medium fe80::/64 dev ens3 proto kernel metric 256 pref medium fe80::/64 dev br-ex proto kernel metric 256 pref medium fe80::/64 dev br-int proto kernel metric 256 pref medium fe80::/64 dev tapa5cf4799-9f proto kernel metric 256 pref medium And within the VM has the following routes: root at ubuntu:~# ip -6 route root at ubuntu:~# ip -6 route fd12:67:1::/64 via fd12:67:1:1::1 dev ens3 metric 1024 pref medium fd12:67:1:1::/64 dev ens3 proto kernel metric 256 expires 86360sec pref medium fe80::/64 dev ens3 proto kernel metric 256 pref medium default via fe80::f816:3eff:feb3:bd56 dev ens3 proto ra metric 1024 expires 260sec hoplimit 64 pref medium Though the ping6 from VM to hypervisor doesn't work: root at ubuntu:~# ping6 fd12:67:1::1 -c4 PING fd12:67:1::1 (fd12:67:1::1): 56 data bytes --- fd12:67:1::1 ping statistics --- 4 packets transmitted, 0 packets received, 100% packet loss I'm able to tcpdump inside the router1 netns and see that request packet is passing there, but can't see any reply 
packets: $ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 tcpdump -l -i any icmp6 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes 21:29:29.351358 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, echo request, seq 0, length 64 21:29:30.033316 IP6 fe80::f816:3eff:feb3:bd56 > fe80::f816:3eff:fe0e:17c3: ICMP6, neighbor solicitation, who has fe80::f816:3eff:fe0e:17c3, length 32 21:29:30.035807 IP6 fe80::f816:3eff:fe0e:17c3 > fe80::f816:3eff:feb3:bd56: ICMP6, neighbor advertisement, tgt is fe80::f816:3eff:fe0e:17c3, length 24 21:29:30.353646 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, echo request, seq 1, length 64 21:29:31.355410 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, echo request, seq 2, length 64 21:29:32.357239 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, echo request, seq 3, length 64 The same happens from hypervisor to VM. I only acan see the request packets, but no reply packets. Thanks in advance, Lucio Seki -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Thu Sep 12 23:03:41 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 12 Sep 2019 18:03:41 -0500 Subject: [nova] Deprecating the XenAPI driver In-Reply-To: References: Message-ID: <2d8a9d19-4e73-ae53-7c04-760e5b904888@gmail.com> On 9/12/2019 11:11 AM, James Benson wrote: > I am currently working on trying to deploy Xen OpenStack.  Currently I > have been trying to get it working on Rocky with Xen6.0 and will code > fix for Stein/Train as well if possible. Trying to get a working > solution with Rocky then will patch up the line. I have reached out to > the last person who submitted a bug fix in Xen (with no response), but I > am actively trying to get this working.  Unfortunately it is a one-man > job, so it is taking a lot of time. Currently facing issues with Nova > and Neutron. Thanks for letting us know you're trying to get nova working with the xenapi driver James. The last time there was sustained effort on that driver was in Rocky so I would not be surprised if there are issues in Stein or Train. If you have fixes please contribute them upstream. However, I think we should still move forward with deprecation of the driver as a clear indication of the lack of maintainers on the driver. If that changes in the Ussuri release we have the option to undeprecate but I think it's important to clearly signal the state of maintenance for parts of nova so people don't start using them just to find out later they'll be in a bad state (which you might have already found out). -- Thanks, Matt From smooney at redhat.com Thu Sep 12 23:51:30 2019 From: smooney at redhat.com (Sean Mooney) Date: Fri, 13 Sep 2019 00:51:30 +0100 Subject: [nova] Deprecating the XenAPI driver In-Reply-To: <2d8a9d19-4e73-ae53-7c04-760e5b904888@gmail.com> References: <2d8a9d19-4e73-ae53-7c04-760e5b904888@gmail.com> Message-ID: On Thu, 2019-09-12 at 18:03 -0500, Matt Riedemann wrote: > On 9/12/2019 11:11 AM, James Benson wrote: > > I am currently working on trying to deploy Xen OpenStack. Currently I > > have been trying to get it working on Rocky with Xen6.0 and will code > > fix for Stein/Train as well if possible. Trying to get a working > > solution with Rocky then will patch up the line. 
I have reached out to > > the last person who submitted a bug fix in Xen (with no response), but I > > am actively trying to get this working. Unfortunately it is a one-man > > job, so it is taking a lot of time. Currently facing issues with Nova > > and Neutron. > > Thanks for letting us know you're trying to get nova working with the > xenapi driver James. The last time there was sustained effort on that > driver was in Rocky so I would not be surprised if there are issues in > Stein or Train. If you have fixes please contribute them upstream. > However, I think we should still move forward with deprecation of the > driver as a clear indication of the lack of maintainers on the driver. > If that changes in the Ussuri release we have the option to undeprecate > but I think it's important to clearly signal the state of maintenance > for parts of nova so people don't start using them just to find out > later they'll be in a bad state (which you might have already found out). I don't think this applies to libvirt + xen, but I think the direct to XenServer implementation requires a specific version of Python, like 2.6 or an early version of 2.7, to work; or put another way, it won't work with Python 3. That said, it might have changed, but I remember trying to help someone debug the xenserver driver in kolla about a year ago and I don't think that any work has been done to make it Python 3 compatible. So if we are to keep it in Ussuri, some heavy lifting would be needed to make it run on Python 3 only. > From miguel at mlavalle.com Fri Sep 13 01:10:29 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Thu, 12 Sep 2019 20:10:29 -0500 Subject: [openstack-dev] [neutron] Cancelling Neutron Drivers meeting on September 13th Message-ID: Dear Neutrinos, We don't have RFEs ready to be discussed during this week's drivers meeting. As a consequence, let's skip it. However, last week we discussed https://bugs.launchpad.net/neutron/+bug/1837847 and asked the submitter to write a spec, which he did: https://review.opendev.org/#/c/680990/. Please review it and let's be ready to go back to this RFE during the meeting on the 20th Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tushar.Patil at nttdata.com Fri Sep 13 08:54:34 2019 From: Tushar.Patil at nttdata.com (Patil, Tushar) Date: Fri, 13 Sep 2019 08:54:34 +0000 Subject: [tacker] Feature Freeze Exception Request - Add VNF packages support Message-ID: Hi Dharmendra and all Core reviewers In the Train cycle, we are committed to implementing the spec “VNF packages support for VNF onboarding” [1]. All patches [2] are uploaded on Gerrit and code review is in progress, but as we have a dependency on the tosca-parser library, the patches are not yet merged. Now, the new tosca-parser library version 1.6.0 is released, but we are waiting for patch [3] to merge, which will update the constraints of tosca-parser to 1.6.0 in the requirements project. Once that happens, we will make changes to the tacker patch [4] to update the lower constraints of tosca-parser to 1.6.0, which will run all functional and unit tests added for this feature successfully on the CI job. I would like to request a feature freeze exception for “VNF packages support for VNF onboarding” [1]. We will make sure all the review comments on the patches are fixed promptly so that we can merge them as soon as possible.
[1] : https://review.opendev.org/#/c/582930/ [2] : https://review.opendev.org/#/q/topic:bp/tosca-csar-mgmt-driver+(status:open+OR+status:merged) [3] : https://review.opendev.org/#/c/681819/ [4]: https://review.opendev.org/#/c/675600/ Thanks, tpatil Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. From thierry at openstack.org Fri Sep 13 09:04:05 2019 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 13 Sep 2019 11:04:05 +0200 Subject: [tc][uc][meta-sig] How to help SIGs to have a better life? (Needs feedback for SIGs guideline) In-Reply-To: References: Message-ID: <0e3947c8-8f4d-44de-b361-7ebb9e30fd84@openstack.org> Rico Lin wrote: > The question I would like to ask is, what else we can do here to help > SIGs to have a better life? What can we do from here? And is there any > feedback regarding on experience of participate a SIG or chairing one? I think the best thing we can do to help SIGs to have a better life is to make it as lightweight as possible to run one. > For some work in progress (or completed) actions: > I'm working on a guideline for SIGs ( > https://etherpad.openstack.org/p/SIGs-guideline ) because I believe it > might provide some value for SIGs, especially new-formed SIGs. Please > kindly provide your feedback on it. Will send a patch to update current > document under governance-sigs once we got good enough confident on it. In the spirit of keeping things lightweight, I feel like this document is already overwhelming. I understand it's meant as a resource guide in case SIGs need guidance, but as it stands it looks a bit intimidating, with its 7 bullet points for "Creating a SIG". Actually the only thing needed to create a SIG is the first bullet point (patch to governance-sigs), everything else is VERY optional. I wonder if this should not be made a SIG guide (under the model of the Project Team Guide), with: 1. When to create a SIG 1.1 What's a SIG 1.2 SIGs compared to other working groups in OpenStack 2. Process to create a SIG (file that patch, with name, lead(s) and scope) 3. Optional resources available to SIGs 3.1 Communications 3.2 Meetings (in person and online) 3.3 Documentation (wiki...) 3.4 Git Repositories 3.5 Task tracker 4. SIG lifecycle 4.1 Keeping SIG leads and URLs up to date 4.2 Marking SIGs inactive 4.3 Removing a SIG While it would make a larger document overall, it would IMHO make it clearer what's necessary and what's guidance / optional. I'm happy to help setting this up as a separate documentation repo, if that sounds like a good idea. -- Thierry Carrez (ttx) From dharmendra.kushwaha at india.nec.com Fri Sep 13 10:10:04 2019 From: dharmendra.kushwaha at india.nec.com (Dharmendra Kushwaha) Date: Fri, 13 Sep 2019 10:10:04 +0000 Subject: [tacker] Feature Freeze Exception Request - Add VNF packages support In-Reply-To: References: Message-ID: Hi Tushar, Thanks for your hard effort. I had released tosce-parser1.6.0 as in [1], and lets wait [2] to get merged. Regarding tackerclient code, we already have merged it, and will release tackerclient today. Tacker have cycle-with-rc release model, So ok, we can wait some time for this feature(server patches). 
We just needs to make sure that no broken code goes in the last movement and can be tested before rc release. [1]: https://review.opendev.org/#/c/681240 [2]: https://review.opendev.org/#/c/681819 Thanks & Regards Dharmendra Kushwaha ________________________________________ From: Patil, Tushar Sent: Friday, September 13, 2019 2:24 PM To: openstack-discuss at lists.openstack.org Subject: [tacker] Feature Freeze Exception Request - Add VNF packages support Hi Dharmendra and all Core reviewers In train cycle ,we are committed to implement spec “VNF packages support for VNF onboarding” [1]. All patches [2] are uploaded on the gerrit and code review is in progress but as we have dependency on tosca-parser library, patches are not yet merged. Now, tosca-parser library new version 1.6.0. is released but we are waiting for patch [3] to merge which will update the constraints of tosca-parser to 1.6.0 in requirements project. Once that happens, we will make changes to the tacker patch [4] to update the lower constraints of tosca-parser to 1.6.0 which will run all functional and unit tests added for this feature successfully on the CI job. I would like to request feature freeze exception for “VNF packages support for VNF onboarding” [1]. We will make sure all the review comments on the patches will be fixed promptly so that we can merge them as soon as possible. [1] : https://review.opendev.org/#/c/582930/ [2] : https://review.opendev.org/#/q/topic:bp/tosca-csar-mgmt-driver+(status:open+OR+status:merged) [3] : https://review.opendev.org/#/c/681819/ [4]: https://review.opendev.org/#/c/675600/ Thanks, tpatil Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. ________________________________ The contents of this e-mail and any attachment(s) are confidential and intended for the named recipient(s) only. It shall not attach any liability on the originator or NECTI or its affiliates. Any views or opinions presented in this email are solely those of the author and may not necessarily reflect the opinions of NECTI or its affiliates. Any form of reproduction, dissemination, copying, disclosure, modification, distribution and / or publication of this message without the prior written consent of the author of this e-mail is strictly prohibited. If you have received this email in error please delete it and notify the sender immediately. From cdent+os at anticdent.org Fri Sep 13 11:19:51 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 13 Sep 2019 12:19:51 +0100 (BST) Subject: [placement] update 19-35 Message-ID: HTML: https://anticdent.org/placement-update-19-36.html Here's placement update 19-36. There won't be one next week, I will be away. Because of my forthcoming "less time available for OpenStack" I will also be stopping these updates at some point in the next month or so so I can focus the limited time I will have on reviewing and coding. There will be at least one more. # Most Important The big news this week is that after returning from a trip (that meant he was away during the nomination period) Tetsuro has stepped up to be the PTL for placement in Ussuri. Thanks very much to him for taking this up, I'm sure he will be excellent. 
We need to work on useful documentation for the features developed this cycle. I've also made a [now worklist](https://storyboard.openstack.org/#!/worklist/754) in StoryBoard to draw attention to placement project stories that are relevant to the next few weeks, making it easier to ignore those that are not relevant now, but may be later. # Stories/Bugs (Numbers in () are the change since the last pupdate.) There are 23 (-1) stories in [the placement group](https://storyboard.openstack.org/#!/project_group/placement). 0 (0) are [untagged](https://storyboard.openstack.org/#!/worklist/580). 5 (0) are [bugs](https://storyboard.openstack.org/#!/worklist/574). 4 (0) are [cleanups](https://storyboard.openstack.org/#!/worklist/575). 10 (-1) are [rfes](https://storyboard.openstack.org/#!/worklist/594). 5 (1) are [docs](https://storyboard.openstack.org/#!/worklist/637). If you're interested in helping out with placement, those stories are good places to look. * Placement related nova [bugs not yet in progress](https://goo.gl/TgiPXb) on launchpad: 17 (0). * Placement related nova [in progress bugs](https://goo.gl/vzGGDQ) on launchpad: 6 (0). # osc-placement * Add support for multiple member_of. There's been some useful discussion about how to achieve this, and a consensus has emerged on how to get the best results. # Main Themes ## Consumer Types Adding a type to consumers will allow them to be grouped for various purposes, including quota accounting. * This has some good comments on it from melwitt. I'm going to be away next week, so if someone else would like to address them that would be great. If it is deemed fit to merge, we should, despite feature freeze passing, since we haven't had much churn lately. If it doesn't make it in Train, that's fine too. The goal is to have it ready for Nova in Ussuri as early as possible. ## Cleanup Cleanup is an overarching theme related to improving documentation, performance and the maintainability of the code. The changes we are making this cycle are fairly complex to use and are fairly complex to write, so it is good that we're going to have plenty of time to clean and clarify all these things. Performance related explorations continue: * Refactor initialization of research context. This puts the code that might cause an exit earlier in the process so we can avoid useless work. One outcome of the performance work needs to be something like a _Deployment Considerations_ document to help people choose how to tweak their placement deployment to match their needs. The simple answer is use more web servers and more database servers, but that's often very wasteful. * These are the patches for meeting the build pdf docs goal for the various placement projects. # Other Placement Miscellaneous changes can be found in [the usual place](https://review.opendev.org/#/q/project:openstack/placement+status:open). There are three [os-traits changes](https://review.opendev.org/#/q/project:openstack/os-traits+status:open) being discussed. And two [os-resource-classes changes](https://review.opendev.org/#/q/project:openstack/os-resource-classes+status:open). The latter are docs-related. # Other Service Users New reviews are added to the end of the list. Reviews that haven't had attention in a long time (boo!) or have merged or approved (yay!) are removed. 
* helm: add placement chart * Nova: WIP: Add a placement audit command * tempest: Add placement API methods for testing routed provider nets * Nova: cross cell resize * Nova: Scheduler translate properties to traits * Nova: single pass instance info fetch in host manager * Nova: using provider config file for custom resource providers * Nova: clean up some lingering placement stuff * OSA: Add nova placement to placement migration * Charms: Disable nova placement API in Train * Nova: stop using @safe_connect in report client # End 🐈 -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From donny at fortnebula.com Fri Sep 13 13:15:53 2019 From: donny at fortnebula.com (Donny Davis) Date: Fri, 13 Sep 2019 09:15:53 -0400 Subject: [neutron] DevStack with IPv6 In-Reply-To: References: Message-ID: Security group rules? Donny Davis c: 805 814 6800 On Thu, Sep 12, 2019, 5:53 PM Lucio Seki wrote: > Hi folks, I'm having troubles to ping6 a VM running over DevStack from its > hypervisor. > Could you please help me troubleshooting it? > > I deployed DevStack with NEUTRON_CREATE_INITIAL_NETWORKS=False, > and manually created the networks, subnets and router. Following is my > router: > > $ openstack router show router1 -c external_gateway_info -c interfaces_info > > +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | Field | Value > > > | > > +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | external_gateway_info | {"network_id": > "b87048ed-1be9-4f31-8d7e-fe74921aeec4", "enable_snat": true, > "external_fixed_ips": [{"subnet_id": > "28a00bc3-b30b-456f-b26a-44b50d37183f", "ip_address": "10.2.0.199"}, > {"subnet_id": "a9729beb-b297-4fec-8ec3-7703f7f6f4bc", "ip_address": > "fd12:67:1::3c"}]} | > | interfaces_info | [{"subnet_id": > "081e8508-4ceb-4aaf-bf91-36a1e22a768c", "ip_address": "fd12:67:1:1::1", > "port_id": "75391abd-8ac8-41f8-acf8-3dfaf2a6b08f"}] > > | > > +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > I'm trying to ping6 the following VM: > > $ openstack server list > > +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ > | ID | Name | Status | Networks > | Image | Flavor | > > +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ > | 938854d0-80e9-45b2-bc29-8fe7651ffa93 | manila1 | ACTIVE | > private1=fd12:67:1:1:f816:3eff:fe0e:17c3 | manila | manila | > > +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ > > I intend to reach it via br-ex interface of the hypervisor: > > $ ip a show dev br-ex > 9: br-ex: mtu 1500 qdisc noqueue state > UNKNOWN group default qlen 1000 > link/ether 0e:82:a1:ba:77:4c brd ff:ff:ff:ff:ff:ff > inet6 fd12:67:1::1/64 
scope global > valid_lft forever preferred_lft forever > inet6 fe80::c82:a1ff:feba:774c/64 scope link > valid_lft forever preferred_lft forever > > The hypervisor has the following routes: > > $ ip -6 route > fd12:67:1:1::/64 via fd12:67:1::3c dev br-ex metric 1024 pref medium > fe80::/64 dev ens3 proto kernel metric 256 pref medium > fe80::/64 dev br-ex proto kernel metric 256 pref medium > fe80::/64 dev br-int proto kernel metric 256 pref medium > fe80::/64 dev tapa5cf4799-9f proto kernel metric 256 pref medium > > And within the VM has the following routes: > > root at ubuntu:~# ip -6 route > root at ubuntu:~# ip -6 route > fd12:67:1::/64 via fd12:67:1:1::1 dev ens3 metric 1024 pref medium > fd12:67:1:1::/64 dev ens3 proto kernel metric 256 expires 86360sec pref > medium > fe80::/64 dev ens3 proto kernel metric 256 pref medium > default via fe80::f816:3eff:feb3:bd56 dev ens3 proto ra metric 1024 > expires 260sec hoplimit 64 pref medium > > Though the ping6 from VM to hypervisor doesn't work: > root at ubuntu:~# ping6 fd12:67:1::1 -c4 > PING fd12:67:1::1 (fd12:67:1::1): 56 data bytes > --- fd12:67:1::1 ping statistics --- > 4 packets transmitted, 0 packets received, 100% packet loss > > I'm able to tcpdump inside the router1 netns and see that request packet > is passing there, but can't see any reply packets: > > $ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 tcpdump > -l -i any icmp6 > tcpdump: verbose output suppressed, use -v or -vv for full protocol decode > listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 > bytes > 21:29:29.351358 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, > echo request, seq 0, length 64 > 21:29:30.033316 IP6 fe80::f816:3eff:feb3:bd56 > fe80::f816:3eff:fe0e:17c3: > ICMP6, neighbor solicitation, who has fe80::f816:3eff:fe0e:17c3, length 32 > 21:29:30.035807 IP6 fe80::f816:3eff:fe0e:17c3 > fe80::f816:3eff:feb3:bd56: > ICMP6, neighbor advertisement, tgt is fe80::f816:3eff:fe0e:17c3, length 24 > 21:29:30.353646 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, > echo request, seq 1, length 64 > 21:29:31.355410 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, > echo request, seq 2, length 64 > 21:29:32.357239 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, > echo request, seq 3, length 64 > > The same happens from hypervisor to VM. I only acan see the request > packets, but no reply packets. > > Thanks in advance, > Lucio Seki > -------------- next part -------------- An HTML attachment was scrubbed... URL: From saphi070 at gmail.com Fri Sep 13 13:23:12 2019 From: saphi070 at gmail.com (Sa Pham) Date: Fri, 13 Sep 2019 22:23:12 +0900 Subject: [mistral] cron triggers execution fails on identity:validate_token with non-admin users In-Reply-To: References: <241f5d5e-8b21-9081-c1d1-66e908047335@everyware.ch> Message-ID: Hi Francois, You can try this patch: https://review.opendev.org/#/c/680858/ Sa Pham On Thu, Sep 12, 2019 at 11:49 PM Francois Scheurer < francois.scheurer at everyware.ch> wrote: > Hello > > > > Apparently other people have the same issue and cannot use cron triggers > anymore: > > https://bugs.launchpad.net/mistral/+bug/1843175 > > > We also tried with following patch installed but the same error persists: > > > https://opendev.org/openstack/mistral/commit/6102c5251e29c1efe73c92935a051feff0f649c7?style=split > > > > Cheers > > Francois > > > > > On 9/9/19 6:23 PM, Francois Scheurer wrote: > > Dear All > > > We are using Mistral 7.0.1.1 with Openstack Rocky. 
(with federated users) > > We can create and execute a workflow via horizon, but cron triggers always > fail with this error: > > { > "result": > "The action raised an exception [ > action_ex_id=ef878c48-d0ad-4564-9b7e-a06f07a70ded, > action_cls=' 'mistral.actions.action_factory.NovaAction'>', > attributes='{u'client_method_name': u'servers.find'}', > params='{ > u'action_region': u'ch-zh1', > u'name': u'42724489-1912-44d1-9a59-6c7a4bebebfa' > }' > ] > \n NovaAction.servers.find failed: You are not authorized > to perform the requested action: identity:validate_token. (HTTP 403) > (Request-ID: req-ec1aea36-c198-4307-bf01-58aca74fad33) > " > } > > Adding the role *admin* or *service* to the user logged in horizon is > "fixing" the issue, I mean that the cron trigger then works as expected, > > but it would be obviously a bad idea to do this for all normal users ;-) > > So my question: is it a config problem on our side ? is it a known bug? or > is it a feature in the sense that cron triggers are for normal users? > > > After digging in the keystone debug logs (see at the end below), I found > that RBAC check identity:validate_token an deny the authorization. > > But according to the policy.json (in keystone and in horizon), rule:owner > should be enough to grant it...: > > "identity:validate_token": "rule:service_admin_or_owner", > "service_admin_or_owner": "rule:service_or_admin or > rule:owner", > "service_or_admin": "rule:admin_required or > rule:service_role", > "service_role": "role:service", > "owner": "user_id:%(user_id)s or > user_id:%(target.token.user_id)s", > > Thank you in advance for your help. > > > Best Regards > > Francois Scheurer > > > > > Keystone logs: > > 2019-09-05 09:38:00.902 29 DEBUG keystone.policy.backends.rules > [req-1a276b9d-8276-4ec3-b516-f51f86cd1df6 fsc fscproject - testdom testdom] > enforce identity:validate_token: > { > 'service_project_id':None, > 'service_user_id':None, > 'service_user_domain_id':None, > 'service_project_domain_id':None, > 'trustor_id':None, > 'user_domain_id':u'testdom', > 'domain_id':None, > 'trust_id':u'mytrustid', > 'project_domain_id':u'testdom', > 'service_roles':[], > 'group_ids':[], > 'user_id':u'fsc', > 'roles':[ > u'_member_', > u'creator', > u'reader', > u'heat_stack_owner', > u'member', > u'load-balancer_member'], > 'system_scope':None, > 'trustee_id':None, > 'domain_name':None, > 'is_admin_project':True, > 'token': audit_chain_id=[u'0LAsW_0dQMWXh2cTZTLcWA']) at 0x7f208f4a3bd0>, > 'project_id':u'fscproject' > } enforce > /var/lib/kolla/venv/local/lib/python2.7/site-packages/keystone/policy/backends/rules.py:33 > 2019-09-05 09:38:00.920 29 WARNING keystone.common.wsgi > [req-1a276b9d-8276-4ec3-b516-f51f86cd1df6 fsc fscproject - testdom testdom] > You are not authorized to perform the requested action: > identity:validate_token.: *ForbiddenAction: You are not authorized to > perform the requested action: identity:validate_token.* > > -- > > > EveryWare AG > François Scheurer > Senior Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: francois.scheurer at everyware.ch > web: http://www.everyware.ch > > -- > > > EveryWare AG > François Scheurer > Senior Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: francois.scheurer at everyware.ch > web: http://www.everyware.ch > > -- Sa Pham Dang Master Student - Soongsil University Kakaotalk: sapd95 Skype: great_bn -------------- next part -------------- An HTML 
attachment was scrubbed... URL: From francois.scheurer at everyware.ch Fri Sep 13 13:32:20 2019 From: francois.scheurer at everyware.ch (Francois Scheurer) Date: Fri, 13 Sep 2019 15:32:20 +0200 Subject: [mistral] cron triggers execution fails on identity:validate_token with non-admin users In-Reply-To: References: <241f5d5e-8b21-9081-c1d1-66e908047335@everyware.ch> Message-ID: Hi Sa Pham Yes this is the good one. Bo Tran pointed it to me yesterday as well and it fixed the issue. See also: https://bugs.launchpad.net/mistral/+bug/1843175 Many Thanks to both of you ! Best Regards Francois Scheurer On 9/13/19 3:23 PM, Sa Pham wrote: > Hi Francois, > > You can try this patch: https://review.opendev.org/#/c/680858/ > > Sa Pham > > On Thu, Sep 12, 2019 at 11:49 PM Francois Scheurer > > wrote: > > Hello > > > > Apparently other people have the same issue and cannot use cron > triggers anymore: > > https://bugs.launchpad.net/mistral/+bug/1843175 > > > We also tried with following patch installed but the same error > persists: > > https://opendev.org/openstack/mistral/commit/6102c5251e29c1efe73c92935a051feff0f649c7?style=split > > > > Cheers > > Francois > > > > > On 9/9/19 6:23 PM, Francois Scheurer wrote: >> >> Dear All >> >> >> We are using Mistral 7.0.1.1 with  Openstack Rocky. (with >> federated users) >> >> We can create and execute a workflow via horizon, but cron >> triggers always fail with this error: >> >>     { >>         "result": >>             "The action raised an exception [ >> action_ex_id=ef878c48-d0ad-4564-9b7e-a06f07a70ded, >>                     action_cls='> 'mistral.actions.action_factory.NovaAction'>', >>                     attributes='{u'client_method_name': >> u'servers.find'}', >>                     params='{ >>                         u'action_region': u'ch-zh1', >>                         u'name': >> u'42724489-1912-44d1-9a59-6c7a4bebebfa' >>                     }' >>                 ] >>                 \n NovaAction.servers.find failed: You are not >> authorized to perform the requested action: >> identity:validate_token. (HTTP 403) (Request-ID: >> req-ec1aea36-c198-4307-bf01-58aca74fad33) >>             " >>     } >> >> Adding the role *admin* or *service* to the user logged in >> horizon is "fixing" the issue, I mean that the cron trigger then >> works as expected, >> >> but it would be obviously a bad idea to do this for all normal >> users ;-) >> >> So my question: is it a config problem on our side ? is it a >> known bug? or is it a feature in the sense that cron triggers are >> for normal users? >> >> >> After digging in the keystone debug logs (see at the end below), >> I found that RBAC check identity:validate_token an deny the >> authorization. >> >> But according to the policy.json (in keystone and in horizon), >> rule:owner should be enough to grant it...: >> >>             "identity:validate_token": "rule:service_admin_or_owner", >>                 "service_admin_or_owner": "rule:service_or_admin >> or rule:owner", >>                     "service_or_admin": "rule:admin_required or >> rule:service_role", >>                         "service_role": "role:service", >>                     "owner": "user_id:%(user_id)s or >> user_id:%(target.token.user_id)s", >> >> Thank you in advance for your help. 
>> >> >> Best Regards >> >> Francois Scheurer >> >> >> >> >> Keystone logs: >> >>         2019-09-05 09:38:00.902 29 DEBUG >> keystone.policy.backends.rules >> [req-1a276b9d-8276-4ec3-b516-f51f86cd1df6 fsc fscproject - >> testdom testdom] >>             enforce identity:validate_token: >>             { >>                'service_project_id':None, >>                'service_user_id':None, >>                'service_user_domain_id':None, >>                'service_project_domain_id':None, >>                'trustor_id':None, >>                'user_domain_id':u'testdom', >>                'domain_id':None, >>                'trust_id':u'mytrustid', >>                'project_domain_id':u'testdom', >>                'service_roles':[], >>                'group_ids':[], >>                'user_id':u'fsc', >>                'roles':[ >>                   u'_member_', >>                   u'creator', >>                   u'reader', >>                   u'heat_stack_owner', >>                   u'member', >>                   u'load-balancer_member'], >>                'system_scope':None, >>                'trustee_id':None, >>                'domain_name':None, >>                'is_admin_project':True, >>                'token':> (audit_id=0LAsW_0dQMWXh2cTZTLcWA, >> audit_chain_id=[u'0LAsW_0dQMWXh2cTZTLcWA']) at 0x7f208f4a3bd0>, >>                'project_id':u'fscproject' >>             } enforce >> /var/lib/kolla/venv/local/lib/python2.7/site-packages/keystone/policy/backends/rules.py:33 >>         2019-09-05 09:38:00.920 29 WARNING keystone.common.wsgi >> [req-1a276b9d-8276-4ec3-b516-f51f86cd1df6 fsc fscproject - >> testdom testdom] >>             You are not authorized to perform the requested >> action: identity:validate_token.: *ForbiddenAction: You are not >> authorized to perform the requested action: identity:validate_token.* >> >> >> -- >> >> >> EveryWare AG >> François Scheurer >> Senior Systems Engineer >> Zurlindenstrasse 52a >> CH-8003 Zürich >> >> tel: +41 44 466 60 00 >> fax: +41 44 466 60 10 >> mail:francois.scheurer at everyware.ch >> web:http://www.everyware.ch > > -- > > > EveryWare AG > François Scheurer > Senior Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail:francois.scheurer at everyware.ch > web:http://www.everyware.ch > > > > -- > Sa Pham Dang > Master Student - Soongsil University > Kakaotalk: sapd95 > Skype: great_bn > > -- EveryWare AG François Scheurer Senior Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: francois.scheurer at everyware.ch web: http://www.everyware.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From corey.bryant at canonical.com Fri Sep 13 13:58:20 2019 From: corey.bryant at canonical.com (Corey Bryant) Date: Fri, 13 Sep 2019 09:58:20 -0400 Subject: [goal][python3] Train unit tests weekly update (goal-0) Message-ID: This is the goal-0 weekly update for the "Update Python 3 test runtimes for Train" goal [1]. Today is the final day for completion of Train community goals [2]. == How can you help? == If your project has failing tests please take a look and help fix. Python 3.7 unit tests will be self-testing in Zuul. 
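For teams checking what the goal change looks like in a repository, it is essentially a switch of the Zuul project template used for unit tests; a minimal sketch, assuming an in-repo .zuul.yaml (some projects keep this configuration elsewhere, or use a variant template such as the neutron one):

    - project:
        templates:
          - openstack-python3-train-jobs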
Failing patches:
https://review.openstack.org/#/q/topic:python3-train+status:open+(+label:Verified-1+OR+label:Verified-2+)

If your project has patches with successful tests please help get them merged. Open patches needing reviews:
https://review.openstack.org/#/q/topic:python3-train+is:open

Patch automation scripts needing review: https://review.opendev.org/#/c/666934

== Ongoing Work ==

We're down to 3 projects with failing tests, and 2 projects with successful tests. Barbican and PowerVM are actively working on getting patches landed. I've not been successful in making contact with the Freezer PTL.

Thank you to all who have contributed their time and fixes to enable patches to land!

== Completed Work ==

All patches have been submitted to all applicable projects for this goal.

Merged patches: https://review.openstack.org/#/q/topic:python3-train+is:merged

== What's the Goal? ==

To ensure (in the Train cycle) that all official OpenStack repositories with Python 3 unit tests are exclusively using the 'openstack-python3-train-jobs' Zuul template or one of its variants (e.g. 'openstack-python3-train-jobs-neutron') to run unit tests, and that tests are passing. This will ensure that all official projects are running py36 and py37 unit tests in Train. For complete details please see [1].

== Reference Material ==

[1] Goal description: https://governance.openstack.org/tc/goals/train/python3-updates.html
[2] Train release schedule: https://releases.openstack.org/train/schedule.html (see R-5 for "Train Community Goals Completed")
Storyboard: https://storyboard.openstack.org/#!/story/2005924
Porting to Python 3.7: https://docs.python.org/3/whatsnew/3.7.html#porting-to-python-3-7
Python Update Process: https://opendev.org/openstack/governance/src/branch/master/resolutions/20181024-python-update-process.rst
Train runtimes: https://opendev.org/openstack/governance/src/branch/master/reference/runtimes/train.rst

Thanks,
Corey
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sean.mcginnis at gmx.com Fri Sep 13 14:00:44 2019
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Fri, 13 Sep 2019 09:00:44 -0500
Subject: [release] Release countdown for week R-4, September 16-20
Message-ID: <20190913140044.GA20572 at sm-workstation>

Development Focus
-----------------

We just passed feature freeze! Until release branches are cut, you should stop accepting featureful changes to deliverables following the cycle-with-rc release model, or to libraries. Exceptions should be discussed on separate threads on the mailing-list, and approved by the team's PTL.

Focus should be on finding and fixing release-critical bugs, so that release candidates and final versions of the Train deliverables can be proposed, well ahead of the final Train release date.

General Information
-------------------

We are still finishing up processing a few release requests, but the Train release requirements are now frozen. If new library releases are needed to fix release-critical bugs in Train, you must request a Feature Freeze Exception (FFE) from the requirements team before we can do a new release to avoid having something released in Train that is not actually usable. This is done by posting to the openstack-discuss mailing list with a subject line similar to:

[$PROJECT][requirements] FFE requested for $PROJECT_LIB

Include justification/reasoning for why a FFE is needed for this lib. If/when the requirements team OKs the post-freeze update, we can then process a new release.
A soft String freeze is now in effect, in order to let the I18N team do the translation work in good conditions. In Horizon and the various dashboard plugins, you should stop accepting changes that modify user-visible strings. Exceptions should be discussed on the mailing-list. By September 26 this will become a hard string freeze, with no changes in user-visible strings allowed. Actions --------- stable/train branches should be created soon for all not-already-branched libraries. You should expect 2-3 changes to be proposed for each: a .gitreview update, a reno update (skipped for projects not using reno), and a tox.ini constraints URL update. Please review those in priority so that the branch can be functional ASAP. The Prelude section of reno release notes is rendered as the top level overview for the release. Any important overall messaging for Train changes should be added there to make sure the consumers of your release notes see them. Finally, if you haven't proposed Train cycle-highlights yet, you are already late to the party. Please see http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009137.html for details. Upcoming Deadlines & Dates -------------------------- RC1 deadline: September 26 (R-3 week) Final RC deadline: October 10 (R-1 week) Final Train release: October 16 Forum+PTG at Shanghai summit: November 4 From haleyb.dev at gmail.com Fri Sep 13 14:10:23 2019 From: haleyb.dev at gmail.com (Brian Haley) Date: Fri, 13 Sep 2019 10:10:23 -0400 Subject: [neutron] DevStack with IPv6 In-Reply-To: References: Message-ID: <24283fad-a8b6-6672-549e-bd1d27a9747b@gmail.com> On 9/12/19 5:49 PM, Lucio Seki wrote: > Hi folks, I'm having troubles to ping6 a VM running over DevStack from > its hypervisor. > Could you please help me troubleshooting it? > > I deployed DevStack with NEUTRON_CREATE_INITIAL_NETWORKS=False, I think this is your problem. When this is set to True, create_neutron_initial_network() is called, which does a little "hacking" by bringing interfaces up, moving addresses and adding routes so that you can communicate with floating IP and IPv6 addresses. You would have to look at that code and do similar things manually. -Brian > and manually created the networks, subnets and router. 
Following is my > router: > > $ openstack router show router1 -c external_gateway_info -c interfaces_info > +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | Field                 | Value > > > >        | > +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | external_gateway_info | {"network_id": > "b87048ed-1be9-4f31-8d7e-fe74921aeec4", "enable_snat": true, > "external_fixed_ips": [{"subnet_id": > "28a00bc3-b30b-456f-b26a-44b50d37183f", "ip_address": "10.2.0.199"}, > {"subnet_id": "a9729beb-b297-4fec-8ec3-7703f7f6f4bc", "ip_address": > "fd12:67:1::3c"}]} | > | interfaces_info       | [{"subnet_id": > "081e8508-4ceb-4aaf-bf91-36a1e22a768c", "ip_address": "fd12:67:1:1::1", > "port_id": "75391abd-8ac8-41f8-acf8-3dfaf2a6b08f"}] > >                                       | > +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > I'm trying to ping6 the following VM: > > $ openstack server list > +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ > | ID                                   | Name    | Status | Networks >                             | Image  | Flavor | > +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ > | 938854d0-80e9-45b2-bc29-8fe7651ffa93 | manila1 | ACTIVE | > private1=fd12:67:1:1:f816:3eff:fe0e:17c3 | manila | manila | > +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ > > I intend to reach it via br-ex interface of the hypervisor: > > $ ip a show dev br-ex > 9: br-ex: mtu 1500 qdisc noqueue state > UNKNOWN group default qlen 1000 >     link/ether 0e:82:a1:ba:77:4c brd ff:ff:ff:ff:ff:ff >     inet6 fd12:67:1::1/64 scope global >        valid_lft forever preferred_lft forever >     inet6 fe80::c82:a1ff:feba:774c/64 scope link >        valid_lft forever preferred_lft forever > > The hypervisor has the following routes: > > $ ip -6 route > fd12:67:1:1::/64 via fd12:67:1::3c dev br-ex metric 1024 pref medium > fe80::/64 dev ens3 proto kernel metric 256 pref medium > fe80::/64 dev br-ex proto kernel metric 256 pref medium > fe80::/64 dev br-int proto kernel metric 256 pref medium > fe80::/64 dev tapa5cf4799-9f proto kernel metric 256 pref medium > > And within the VM has the following routes: > > root at ubuntu:~# ip -6 route > root at ubuntu:~# ip -6 route > fd12:67:1::/64 via fd12:67:1:1::1 dev ens3 metric 1024 pref medium > fd12:67:1:1::/64 dev ens3 proto kernel metric 256 expires 86360sec pref > medium > fe80::/64 dev ens3 proto kernel metric 256 pref medium > default via fe80::f816:3eff:feb3:bd56 dev ens3 proto ra metric 1024 > expires 260sec hoplimit 64 pref medium > > Though the ping6 from VM to hypervisor doesn't work: > root at 
ubuntu:~# ping6 fd12:67:1::1 -c4 > PING fd12:67:1::1 (fd12:67:1::1): 56 data bytes > --- fd12:67:1::1 ping statistics --- > 4 packets transmitted, 0 packets received, 100% packet loss > > I'm able to tcpdump inside the router1 netns and see that request packet > is passing there, but can't see any reply packets: > > $ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 > tcpdump -l -i any icmp6 > tcpdump: verbose output suppressed, use -v or -vv for full protocol decode > listening on any, link-type LINUX_SLL (Linux cooked), capture size > 262144 bytes > 21:29:29.351358 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: > ICMP6, echo request, seq 0, length 64 > 21:29:30.033316 IP6 fe80::f816:3eff:feb3:bd56 > > fe80::f816:3eff:fe0e:17c3: ICMP6, neighbor solicitation, who has > fe80::f816:3eff:fe0e:17c3, length 32 > 21:29:30.035807 IP6 fe80::f816:3eff:fe0e:17c3 > > fe80::f816:3eff:feb3:bd56: ICMP6, neighbor advertisement, tgt is > fe80::f816:3eff:fe0e:17c3, length 24 > 21:29:30.353646 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: > ICMP6, echo request, seq 1, length 64 > 21:29:31.355410 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: > ICMP6, echo request, seq 2, length 64 > 21:29:32.357239 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: > ICMP6, echo request, seq 3, length 64 > > The same happens from hypervisor to VM. I only acan see the request > packets, but no reply packets. > > Thanks in advance, > Lucio Seki From skaplons at redhat.com Fri Sep 13 14:45:33 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 13 Sep 2019 16:45:33 +0200 Subject: [neutron] DevStack with IPv6 In-Reply-To: <24283fad-a8b6-6672-549e-bd1d27a9747b@gmail.com> References: <24283fad-a8b6-6672-549e-bd1d27a9747b@gmail.com> Message-ID: <9205585D-DE05-4D02-947B-F2248F250004@redhat.com> Hi, > On 13 Sep 2019, at 16:10, Brian Haley wrote: > > On 9/12/19 5:49 PM, Lucio Seki wrote: >> Hi folks, I'm having troubles to ping6 a VM running over DevStack from its hypervisor. >> Could you please help me troubleshooting it? >> I deployed DevStack with NEUTRON_CREATE_INITIAL_NETWORKS=False, > > I think this is your problem. When this is set to True, create_neutron_initial_network() is called, which does a little "hacking" by bringing interfaces up, moving addresses and adding routes so that you can communicate with floating IP and IPv6 addresses. You would have to look at that code and do similar things manually. I agree with Brian. Probably You need to add IP address from same subnet to br-ex interface that Your floating IPs will be reachable via br-ex. That is the way how this is done by Devstack by default IIRC. > > -Brian > > >> and manually created the networks, subnets and router. 
Following is my router: >> $ openstack router show router1 -c external_gateway_info -c interfaces_info >> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> | Field | Value | >> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> | external_gateway_info | {"network_id": "b87048ed-1be9-4f31-8d7e-fe74921aeec4", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "28a00bc3-b30b-456f-b26a-44b50d37183f", "ip_address": "10.2.0.199"}, {"subnet_id": "a9729beb-b297-4fec-8ec3-7703f7f6f4bc", "ip_address": "fd12:67:1::3c"}]} | >> | interfaces_info | [{"subnet_id": "081e8508-4ceb-4aaf-bf91-36a1e22a768c", "ip_address": "fd12:67:1:1::1", "port_id": "75391abd-8ac8-41f8-acf8-3dfaf2a6b08f"}] | >> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> I'm trying to ping6 the following VM: >> $ openstack server list >> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >> | ID | Name | Status | Networks | Image | Flavor | >> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >> | 938854d0-80e9-45b2-bc29-8fe7651ffa93 | manila1 | ACTIVE | private1=fd12:67:1:1:f816:3eff:fe0e:17c3 | manila | manila | >> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >> I intend to reach it via br-ex interface of the hypervisor: >> $ ip a show dev br-ex >> 9: br-ex: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 >> link/ether 0e:82:a1:ba:77:4c brd ff:ff:ff:ff:ff:ff >> inet6 fd12:67:1::1/64 scope global >> valid_lft forever preferred_lft forever >> inet6 fe80::c82:a1ff:feba:774c/64 scope link >> valid_lft forever preferred_lft forever >> The hypervisor has the following routes: >> $ ip -6 route >> fd12:67:1:1::/64 via fd12:67:1::3c dev br-ex metric 1024 pref medium >> fe80::/64 dev ens3 proto kernel metric 256 pref medium >> fe80::/64 dev br-ex proto kernel metric 256 pref medium >> fe80::/64 dev br-int proto kernel metric 256 pref medium >> fe80::/64 dev tapa5cf4799-9f proto kernel metric 256 pref medium >> And within the VM has the following routes: >> root at ubuntu:~# ip -6 route >> root at ubuntu:~# ip -6 route >> fd12:67:1::/64 via fd12:67:1:1::1 dev ens3 metric 1024 pref medium >> fd12:67:1:1::/64 dev ens3 proto kernel metric 256 expires 86360sec pref medium >> fe80::/64 dev ens3 proto kernel metric 256 pref medium >> default via fe80::f816:3eff:feb3:bd56 dev ens3 proto ra metric 1024 expires 260sec hoplimit 64 pref medium >> Though the ping6 from VM to hypervisor doesn't work: >> root at ubuntu:~# ping6 fd12:67:1::1 -c4 >> PING fd12:67:1::1 (fd12:67:1::1): 56 data bytes >> --- fd12:67:1::1 ping statistics --- >> 4 packets transmitted, 0 packets received, 100% packet 
loss >> I'm able to tcpdump inside the router1 netns and see that request packet is passing there, but can't see any reply packets: >> $ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 tcpdump -l -i any icmp6 >> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode >> listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes >> 21:29:29.351358 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, echo request, seq 0, length 64 >> 21:29:30.033316 IP6 fe80::f816:3eff:feb3:bd56 > fe80::f816:3eff:fe0e:17c3: ICMP6, neighbor solicitation, who has fe80::f816:3eff:fe0e:17c3, length 32 >> 21:29:30.035807 IP6 fe80::f816:3eff:fe0e:17c3 > fe80::f816:3eff:feb3:bd56: ICMP6, neighbor advertisement, tgt is fe80::f816:3eff:fe0e:17c3, length 24 >> 21:29:30.353646 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, echo request, seq 1, length 64 >> 21:29:31.355410 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, echo request, seq 2, length 64 >> 21:29:32.357239 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: ICMP6, echo request, seq 3, length 64 >> The same happens from hypervisor to VM. I only acan see the request packets, but no reply packets. >> Thanks in advance, >> Lucio Seki > — Slawek Kaplonski Senior software engineer Red Hat From dmellado at redhat.com Fri Sep 13 14:57:09 2019 From: dmellado at redhat.com (Daniel Mellado) Date: Fri, 13 Sep 2019 16:57:09 +0200 Subject: [release][freezer][karbor][kuryr][magnum][manila][monasca][neutron][senlin][tacker][winstackers] Missing releases for some deliverables In-Reply-To: References: Message-ID: <2ce98e15-4e72-af49-6ef0-03a7539932fa@redhat.com> Hi Thierry, I've put https://review.opendev.org/#/c/682073/ for now, waiting on Michal review. Best! Daniel On 9/12/19 12:13 PM, Thierry Carrez wrote: > Hi everyone, > > Quick reminder that we'll need a release very soon for a number of > deliverables following a cycle-with-intermediary release model but which > have not done *any* release yet in the Train cycle: > > - freezer and freezer-web-ui > - karbor and karbor-dashboard > - kuryr-kubernetes > - magnum-ui > - manila-ui > - monasca-agent, monasca-api, monasca-ceilometer, monasca-events-api, > monasca-log-api, monasca-notification, monasca-persister and > monasca-transform > - networking-hyperv > - neutron-fwaas-dashboard and neutron-vpnaas-dashboard > - senlin-dashboard > - tacker-horizon > > Those should be released ASAP, and in all cases before September 26th, > so that we have a release to include in the final Train release. > > Thanks in advance, > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From rosmaita.fossdev at gmail.com Fri Sep 13 15:36:20 2019 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 13 Sep 2019 11:36:20 -0400 Subject: [dev] [glance] proposal for S3 store driver re-support as galnce_store backend In-Reply-To: References: Message-ID: <4f48c659-d216-31a7-f34f-c09e9d51f31d@gmail.com> On 9/11/19 6:25 AM, Naohiro Sameshima wrote: > Hi all, > > I know that glance_store had supported S3 backend until version > OpenStack Mitaka, > and it has already been removed due to lack of maintainers [1][2]. > > I started refactoring the S3 driver to work with version OpenStack Stein > and recently completed it. > (e.g. 
Add Multi Store Support, Using the latest AWS SDK) > > So, it would be great if glance_store could support the S3 driver again. > > However, I'm not familiar with the procedure for that. > > Would it be possible to discuss this? >From what I've heard, there's a revival of interest in the S3 driver, so it's great that you've decided to work on it. You've missed the Train for this cycle, however, (sorry, I couldn't resist) as the final release for nonclient libraries was last week. The easiest way to discuss getting S3 support into Usurri would be at the weekly Glance meeting on Thursdays at 1400 UTC. You can put an item on the agenda: https://etherpad.openstack.org/p/glance-team-meeting-agenda If that's not good for your time zone, you can continue the discussion with the Glance community on this mailing list. Basically, what will have to happen is you'll propose a spec or spec-lite for glance_store (see [0]; Abhishek can tell you which one he'll prefer). The key issues will be finding a committed maintainer (you?) and a testing strategy. Once that's figured out, it's just a matter of putting up a patch with your code and getting it reviewed and approved. (Just a quick reminder that one way to facilitate getting your code reviewed is to review other people's code.) cheers, brian [0] https://docs.openstack.org/glance/latest/contributor/blueprints.html > Thanks, > > Naohiro > > [1] https://docs.openstack.org/releasenotes/glance/newton.html > [2] https://opendev.org/openstack/glance_store/src/branch/master/releasenotes/notes/remove-s3-driver-f432afa1f53ecdf8.yaml > From lucioseki at gmail.com Fri Sep 13 13:24:36 2019 From: lucioseki at gmail.com (Lucio Seki) Date: Fri, 13 Sep 2019 10:24:36 -0300 Subject: [neutron] DevStack with IPv6 In-Reply-To: References: Message-ID: Hi Donny, following are the rules: $ openstack security group list --project admin +--------------------------------------+---------+------------------------+----------------------------------+------+ | ID | Name | Description | Project | Tags | +--------------------------------------+---------+------------------------+----------------------------------+------+ | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | default | Default security group | 68e3942285a24fb5bd1aed30e166aaee | [] | +--------------------------------------+---------+------------------------+----------------------------------+------+ $ openstack security group rule list d0136b0e-ee51-461c-afa0-c5adb88dd0dd +--------------------------------------+-------------+----------+------------+--------------------------------------+ | ID | IP Protocol | IP Range | Port Range | Remote Security Group | +--------------------------------------+-------------+----------+------------+--------------------------------------+ | 38394345-3e44-4284-a519-cdd8af020f30 | tcp | ::/0 | 22:22 | None | | 40881f76-c87f-4685-b3af-c3497dd44837 | None | None | | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | | 56d4ae52-195e-48df-871e-dc70b899b7ba | None | None | | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | | 759edd06-b698-45ca-94cd-44e0cc2cc848 | ipv6-icmp | None | | None | | 762effae-b8e5-42ac-ba99-e85a7bc42455 | tcp | ::/0 | 22:22 | None | | 81f3588d-4159-4af2-ad50-ff6b76add9cf | ipv6-icmp | None | | None | +--------------------------------------+-------------+----------+------------+--------------------------------------+ $ openstack security group rule show 759edd06-b698-45ca-94cd-44e0cc2cc848 
+-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | created_at | 2019-09-03T16:51:41Z | | description | | | direction | egress | | ether_type | IPv6 | | id | 759edd06-b698-45ca-94cd-44e0cc2cc848 | | location | Munch({'project': Munch({'domain_id': 'default', 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | | name | None | | port_range_max | None | | port_range_min | None | | project_id | 68e3942285a24fb5bd1aed30e166aaee | | protocol | ipv6-icmp | | remote_group_id | None | | remote_ip_prefix | None | | revision_number | 0 | | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | | tags | [] | | updated_at | 2019-09-03T16:51:41Z | +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ $ openstack security group rule show 81f3588d-4159-4af2-ad50-ff6b76add9cf +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | created_at | 2019-09-03T16:51:30Z | | description | | | direction | ingress | | ether_type | IPv6 | | id | 81f3588d-4159-4af2-ad50-ff6b76add9cf | | location | Munch({'project': Munch({'domain_id': 'default', 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | | name | None | | port_range_max | None | | port_range_min | None | | project_id | 68e3942285a24fb5bd1aed30e166aaee | | protocol | ipv6-icmp | | remote_group_id | None | | remote_ip_prefix | None | | revision_number | 0 | | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | | tags | [] | | updated_at | 2019-09-03T16:51:30Z | +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ On Fri, Sep 13, 2019 at 10:16 AM Donny Davis wrote: > Security group rules? > > Donny Davis > c: 805 814 6800 > > On Thu, Sep 12, 2019, 5:53 PM Lucio Seki wrote: > >> Hi folks, I'm having troubles to ping6 a VM running over DevStack from >> its hypervisor. >> Could you please help me troubleshooting it? >> >> I deployed DevStack with NEUTRON_CREATE_INITIAL_NETWORKS=False, >> and manually created the networks, subnets and router. 
Following is my >> router: >> >> $ openstack router show router1 -c external_gateway_info -c >> interfaces_info >> >> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> | Field | Value >> >> >> | >> >> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> | external_gateway_info | {"network_id": >> "b87048ed-1be9-4f31-8d7e-fe74921aeec4", "enable_snat": true, >> "external_fixed_ips": [{"subnet_id": >> "28a00bc3-b30b-456f-b26a-44b50d37183f", "ip_address": "10.2.0.199"}, >> {"subnet_id": "a9729beb-b297-4fec-8ec3-7703f7f6f4bc", "ip_address": >> "fd12:67:1::3c"}]} | >> | interfaces_info | [{"subnet_id": >> "081e8508-4ceb-4aaf-bf91-36a1e22a768c", "ip_address": "fd12:67:1:1::1", >> "port_id": "75391abd-8ac8-41f8-acf8-3dfaf2a6b08f"}] >> >> | >> >> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> >> I'm trying to ping6 the following VM: >> >> $ openstack server list >> >> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >> | ID | Name | Status | Networks >> | Image | Flavor | >> >> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >> | 938854d0-80e9-45b2-bc29-8fe7651ffa93 | manila1 | ACTIVE | >> private1=fd12:67:1:1:f816:3eff:fe0e:17c3 | manila | manila | >> >> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >> >> I intend to reach it via br-ex interface of the hypervisor: >> >> $ ip a show dev br-ex >> 9: br-ex: mtu 1500 qdisc noqueue state >> UNKNOWN group default qlen 1000 >> link/ether 0e:82:a1:ba:77:4c brd ff:ff:ff:ff:ff:ff >> inet6 fd12:67:1::1/64 scope global >> valid_lft forever preferred_lft forever >> inet6 fe80::c82:a1ff:feba:774c/64 scope link >> valid_lft forever preferred_lft forever >> >> The hypervisor has the following routes: >> >> $ ip -6 route >> fd12:67:1:1::/64 via fd12:67:1::3c dev br-ex metric 1024 pref medium >> fe80::/64 dev ens3 proto kernel metric 256 pref medium >> fe80::/64 dev br-ex proto kernel metric 256 pref medium >> fe80::/64 dev br-int proto kernel metric 256 pref medium >> fe80::/64 dev tapa5cf4799-9f proto kernel metric 256 pref medium >> >> And within the VM has the following routes: >> >> root at ubuntu:~# ip -6 route >> root at ubuntu:~# ip -6 route >> fd12:67:1::/64 via fd12:67:1:1::1 dev ens3 metric 1024 pref medium >> fd12:67:1:1::/64 dev ens3 proto kernel metric 256 expires 86360sec pref >> medium >> fe80::/64 dev ens3 proto kernel metric 256 pref medium >> default via fe80::f816:3eff:feb3:bd56 dev ens3 proto ra metric 1024 >> expires 260sec hoplimit 64 pref medium >> >> Though the ping6 from VM to hypervisor doesn't work: >> root at ubuntu:~# ping6 fd12:67:1::1 -c4 >> PING fd12:67:1::1 (fd12:67:1::1): 56 data 
bytes >> --- fd12:67:1::1 ping statistics --- >> 4 packets transmitted, 0 packets received, 100% packet loss >> >> I'm able to tcpdump inside the router1 netns and see that request packet >> is passing there, but can't see any reply packets: >> >> $ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 tcpdump >> -l -i any icmp6 >> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode >> listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 >> bytes >> 21:29:29.351358 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >> ICMP6, echo request, seq 0, length 64 >> 21:29:30.033316 IP6 fe80::f816:3eff:feb3:bd56 > >> fe80::f816:3eff:fe0e:17c3: ICMP6, neighbor solicitation, who has >> fe80::f816:3eff:fe0e:17c3, length 32 >> 21:29:30.035807 IP6 fe80::f816:3eff:fe0e:17c3 > >> fe80::f816:3eff:feb3:bd56: ICMP6, neighbor advertisement, tgt is >> fe80::f816:3eff:fe0e:17c3, length 24 >> 21:29:30.353646 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >> ICMP6, echo request, seq 1, length 64 >> 21:29:31.355410 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >> ICMP6, echo request, seq 2, length 64 >> 21:29:32.357239 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >> ICMP6, echo request, seq 3, length 64 >> >> The same happens from hypervisor to VM. I only acan see the request >> packets, but no reply packets. >> >> Thanks in advance, >> Lucio Seki >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Fri Sep 13 17:22:17 2019 From: donny at fortnebula.com (Donny Davis) Date: Fri, 13 Sep 2019 13:22:17 -0400 Subject: [neutron] DevStack with IPv6 In-Reply-To: References: Message-ID: Well here is the output from my rule list that is in prod right now with ipv6 +--------------------------------------+-------------+-----------+------------+-----------------------+ | ID | IP Protocol | IP Range | Port Range | Remote Security Group | +--------------------------------------+-------------+-----------+------------+-----------------------+ | 9ab00b6f-2bc2-4554-818d-eff6e0570943 | None | 0.0.0.0/0 | | None | | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa | icmp | ::/0 | | None | | e7fd4840-5fbd-4709-b918-f80eac5cb6da | None | ::/0 | | None | | e9968d53-7efe-4a9e-ad42-1092ffaf52e7 | None | None | | None | | ec1ea961-9025-4229-92cf-618026a1851b | None | None | | None | +--------------------------------------+-------------+-----------+------------+-----------------------+ +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | created_at | 2019-07-30T00:50:25Z | | description | | | direction | ingress | | ether_type | IPv6 | | id | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa | | location | Munch({'cloud': '', 'region_name': 'regionOne', 'zone': None, 'project': Munch({'id': 'e8fd161dc34c421a979a9e6421f823e9', 'name': 'openstackzuul', 'domain_id': None, 'domain_name': 'Default'})}) | | name | None | | port_range_max | None | | port_range_min | None | | project_id | e8fd161dc34c421a979a9e6421f823e9 | | protocol | icmp | | remote_group_id | None | | remote_ip_prefix | ::/0 | | revision_number | 0 | | security_group_id | 
bcedc0e0-e2e8-41fc-aeaa-afd2e10c7ab6 | | tags | [] | | updated_at | 2019-07-30T00:50:25Z | +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ On Fri, Sep 13, 2019 at 9:24 AM Lucio Seki wrote: > Hi Donny, following are the rules: > > $ openstack security group list --project admin > > +--------------------------------------+---------+------------------------+----------------------------------+------+ > | ID | Name | Description > | Project | Tags | > > +--------------------------------------+---------+------------------------+----------------------------------+------+ > | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | default | Default security group > | 68e3942285a24fb5bd1aed30e166aaee | [] | > > +--------------------------------------+---------+------------------------+----------------------------------+------+ > > $ openstack security group rule list d0136b0e-ee51-461c-afa0-c5adb88dd0dd > > +--------------------------------------+-------------+----------+------------+--------------------------------------+ > | ID | IP Protocol | IP Range | Port > Range | Remote Security Group | > > +--------------------------------------+-------------+----------+------------+--------------------------------------+ > | 38394345-3e44-4284-a519-cdd8af020f30 | tcp | ::/0 | 22:22 > | None | > | 40881f76-c87f-4685-b3af-c3497dd44837 | None | None | > | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | > | 56d4ae52-195e-48df-871e-dc70b899b7ba | None | None | > | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | > | 759edd06-b698-45ca-94cd-44e0cc2cc848 | ipv6-icmp | None | > | None | > | 762effae-b8e5-42ac-ba99-e85a7bc42455 | tcp | ::/0 | 22:22 > | None | > | 81f3588d-4159-4af2-ad50-ff6b76add9cf | ipv6-icmp | None | > | None | > > +--------------------------------------+-------------+----------+------------+--------------------------------------+ > > $ openstack security group rule show 759edd06-b698-45ca-94cd-44e0cc2cc848 > > +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | Field | Value > > | > > +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | created_at | 2019-09-03T16:51:41Z > > | > | description | > > | > | direction | egress > > | > | ether_type | IPv6 > > | > | id | 759edd06-b698-45ca-94cd-44e0cc2cc848 > > | > | location | Munch({'project': Munch({'domain_id': 'default', > 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': > None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | > | name | None > > | > | port_range_max | None > > | > | port_range_min | None > > | > | project_id | 68e3942285a24fb5bd1aed30e166aaee > > | > | protocol | ipv6-icmp > > | > | remote_group_id | None > > | > | remote_ip_prefix | None > > | > | revision_number | 0 > > | > | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd > > | > | tags | [] > > | > | updated_at | 2019-09-03T16:51:41Z > > | > > +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > $ openstack security 
group rule show 81f3588d-4159-4af2-ad50-ff6b76add9cf > > +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | Field | Value > > | > > +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | created_at | 2019-09-03T16:51:30Z > > | > | description | > > | > | direction | ingress > > | > | ether_type | IPv6 > > | > | id | 81f3588d-4159-4af2-ad50-ff6b76add9cf > > | > | location | Munch({'project': Munch({'domain_id': 'default', > 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': > None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | > | name | None > > | > | port_range_max | None > > | > | port_range_min | None > > | > | project_id | 68e3942285a24fb5bd1aed30e166aaee > > | > | protocol | ipv6-icmp > > | > | remote_group_id | None > > | > | remote_ip_prefix | None > > | > | revision_number | 0 > > | > | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd > > | > | tags | [] > > | > | updated_at | 2019-09-03T16:51:30Z > > | > > +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > > On Fri, Sep 13, 2019 at 10:16 AM Donny Davis wrote: > >> Security group rules? >> >> Donny Davis >> c: 805 814 6800 >> >> On Thu, Sep 12, 2019, 5:53 PM Lucio Seki wrote: >> >>> Hi folks, I'm having troubles to ping6 a VM running over DevStack from >>> its hypervisor. >>> Could you please help me troubleshooting it? >>> >>> I deployed DevStack with NEUTRON_CREATE_INITIAL_NETWORKS=False, >>> and manually created the networks, subnets and router. 
Following is my >>> router: >>> >>> $ openstack router show router1 -c external_gateway_info -c >>> interfaces_info >>> >>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> | Field | Value >>> >>> >>> | >>> >>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> | external_gateway_info | {"network_id": >>> "b87048ed-1be9-4f31-8d7e-fe74921aeec4", "enable_snat": true, >>> "external_fixed_ips": [{"subnet_id": >>> "28a00bc3-b30b-456f-b26a-44b50d37183f", "ip_address": "10.2.0.199"}, >>> {"subnet_id": "a9729beb-b297-4fec-8ec3-7703f7f6f4bc", "ip_address": >>> "fd12:67:1::3c"}]} | >>> | interfaces_info | [{"subnet_id": >>> "081e8508-4ceb-4aaf-bf91-36a1e22a768c", "ip_address": "fd12:67:1:1::1", >>> "port_id": "75391abd-8ac8-41f8-acf8-3dfaf2a6b08f"}] >>> >>> | >>> >>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> >>> I'm trying to ping6 the following VM: >>> >>> $ openstack server list >>> >>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>> | ID | Name | Status | Networks >>> | Image | Flavor | >>> >>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>> | 938854d0-80e9-45b2-bc29-8fe7651ffa93 | manila1 | ACTIVE | >>> private1=fd12:67:1:1:f816:3eff:fe0e:17c3 | manila | manila | >>> >>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>> >>> I intend to reach it via br-ex interface of the hypervisor: >>> >>> $ ip a show dev br-ex >>> 9: br-ex: mtu 1500 qdisc noqueue state >>> UNKNOWN group default qlen 1000 >>> link/ether 0e:82:a1:ba:77:4c brd ff:ff:ff:ff:ff:ff >>> inet6 fd12:67:1::1/64 scope global >>> valid_lft forever preferred_lft forever >>> inet6 fe80::c82:a1ff:feba:774c/64 scope link >>> valid_lft forever preferred_lft forever >>> >>> The hypervisor has the following routes: >>> >>> $ ip -6 route >>> fd12:67:1:1::/64 via fd12:67:1::3c dev br-ex metric 1024 pref medium >>> fe80::/64 dev ens3 proto kernel metric 256 pref medium >>> fe80::/64 dev br-ex proto kernel metric 256 pref medium >>> fe80::/64 dev br-int proto kernel metric 256 pref medium >>> fe80::/64 dev tapa5cf4799-9f proto kernel metric 256 pref medium >>> >>> And within the VM has the following routes: >>> >>> root at ubuntu:~# ip -6 route >>> root at ubuntu:~# ip -6 route >>> fd12:67:1::/64 via fd12:67:1:1::1 dev ens3 metric 1024 pref medium >>> fd12:67:1:1::/64 dev ens3 proto kernel metric 256 expires 86360sec pref >>> medium >>> fe80::/64 dev ens3 proto kernel metric 256 pref medium >>> default via fe80::f816:3eff:feb3:bd56 dev ens3 proto ra metric 1024 >>> expires 260sec hoplimit 64 pref medium >>> >>> Though the ping6 from VM to hypervisor doesn't work: >>> root at 
ubuntu:~# ping6 fd12:67:1::1 -c4 >>> PING fd12:67:1::1 (fd12:67:1::1): 56 data bytes >>> --- fd12:67:1::1 ping statistics --- >>> 4 packets transmitted, 0 packets received, 100% packet loss >>> >>> I'm able to tcpdump inside the router1 netns and see that request packet >>> is passing there, but can't see any reply packets: >>> >>> $ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 >>> tcpdump -l -i any icmp6 >>> tcpdump: verbose output suppressed, use -v or -vv for full protocol >>> decode >>> listening on any, link-type LINUX_SLL (Linux cooked), capture size >>> 262144 bytes >>> 21:29:29.351358 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>> ICMP6, echo request, seq 0, length 64 >>> 21:29:30.033316 IP6 fe80::f816:3eff:feb3:bd56 > >>> fe80::f816:3eff:fe0e:17c3: ICMP6, neighbor solicitation, who has >>> fe80::f816:3eff:fe0e:17c3, length 32 >>> 21:29:30.035807 IP6 fe80::f816:3eff:fe0e:17c3 > >>> fe80::f816:3eff:feb3:bd56: ICMP6, neighbor advertisement, tgt is >>> fe80::f816:3eff:fe0e:17c3, length 24 >>> 21:29:30.353646 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>> ICMP6, echo request, seq 1, length 64 >>> 21:29:31.355410 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>> ICMP6, echo request, seq 2, length 64 >>> 21:29:32.357239 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>> ICMP6, echo request, seq 3, length 64 >>> >>> The same happens from hypervisor to VM. I only acan see the request >>> packets, but no reply packets. >>> >>> Thanks in advance, >>> Lucio Seki >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Fri Sep 13 17:24:00 2019 From: donny at fortnebula.com (Donny Davis) Date: Fri, 13 Sep 2019 13:24:00 -0400 Subject: [neutron] DevStack with IPv6 In-Reply-To: References: Message-ID: Also I have no v6 address on my br-ex On Fri, Sep 13, 2019 at 1:22 PM Donny Davis wrote: > Well here is the output from my rule list that is in prod right now with > ipv6 > > +--------------------------------------+-------------+-----------+------------+-----------------------+ > | ID | IP Protocol | IP Range | Port > Range | Remote Security Group | > > +--------------------------------------+-------------+-----------+------------+-----------------------+ > | 9ab00b6f-2bc2-4554-818d-eff6e0570943 | None | 0.0.0.0/0 | > | None | > | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa | icmp | ::/0 | > | None | > | e7fd4840-5fbd-4709-b918-f80eac5cb6da | None | ::/0 | > | None | > | e9968d53-7efe-4a9e-ad42-1092ffaf52e7 | None | None | > | None | > | ec1ea961-9025-4229-92cf-618026a1851b | None | None | > | None | > > +--------------------------------------+-------------+-----------+------------+-----------------------+ > > > +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | Field | Value > > | > > +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | created_at | 2019-07-30T00:50:25Z > > | > | description | > > | > | direction | ingress > > | > | ether_type | IPv6 > > | > | id | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa > > | > | location | Munch({'cloud': '', 'region_name': 'regionOne', > 'zone': None, 'project': Munch({'id': 'e8fd161dc34c421a979a9e6421f823e9', > 'name': 
'openstackzuul', 'domain_id': None, 'domain_name': 'Default'})}) | > | name | None > > | > | port_range_max | None > > | > | port_range_min | None > > | > | project_id | e8fd161dc34c421a979a9e6421f823e9 > > | > | protocol | icmp > > | > | remote_group_id | None > > | > | remote_ip_prefix | ::/0 > > | > | revision_number | 0 > > | > | security_group_id | bcedc0e0-e2e8-41fc-aeaa-afd2e10c7ab6 > > | > | tags | [] > > | > | updated_at | 2019-07-30T00:50:25Z > > | > > +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > > > > > On Fri, Sep 13, 2019 at 9:24 AM Lucio Seki wrote: > >> Hi Donny, following are the rules: >> >> $ openstack security group list --project admin >> >> +--------------------------------------+---------+------------------------+----------------------------------+------+ >> | ID | Name | Description >> | Project | Tags | >> >> +--------------------------------------+---------+------------------------+----------------------------------+------+ >> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | default | Default security group >> | 68e3942285a24fb5bd1aed30e166aaee | [] | >> >> +--------------------------------------+---------+------------------------+----------------------------------+------+ >> >> $ openstack security group rule list d0136b0e-ee51-461c-afa0-c5adb88dd0dd >> >> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >> | ID | IP Protocol | IP Range | Port >> Range | Remote Security Group | >> >> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >> | 38394345-3e44-4284-a519-cdd8af020f30 | tcp | ::/0 | 22:22 >> | None | >> | 40881f76-c87f-4685-b3af-c3497dd44837 | None | None | >> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | >> | 56d4ae52-195e-48df-871e-dc70b899b7ba | None | None | >> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | >> | 759edd06-b698-45ca-94cd-44e0cc2cc848 | ipv6-icmp | None | >> | None | >> | 762effae-b8e5-42ac-ba99-e85a7bc42455 | tcp | ::/0 | 22:22 >> | None | >> | 81f3588d-4159-4af2-ad50-ff6b76add9cf | ipv6-icmp | None | >> | None | >> >> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >> >> $ openstack security group rule show 759edd06-b698-45ca-94cd-44e0cc2cc848 >> >> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> | Field | Value >> >> | >> >> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> | created_at | 2019-09-03T16:51:41Z >> >> | >> | description | >> >> | >> | direction | egress >> >> | >> | ether_type | IPv6 >> >> | >> | id | 759edd06-b698-45ca-94cd-44e0cc2cc848 >> >> | >> | location | Munch({'project': Munch({'domain_id': 'default', >> 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': >> None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | >> | name | None >> >> | >> | port_range_max | None >> >> | >> | port_range_min | None >> >> | >> | project_id | 68e3942285a24fb5bd1aed30e166aaee >> >> | >> | protocol | ipv6-icmp >> >> | 
>> | remote_group_id | None >> >> | >> | remote_ip_prefix | None >> >> | >> | revision_number | 0 >> >> | >> | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd >> >> | >> | tags | [] >> >> | >> | updated_at | 2019-09-03T16:51:41Z >> >> | >> >> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> >> $ openstack security group rule show 81f3588d-4159-4af2-ad50-ff6b76add9cf >> >> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> | Field | Value >> >> | >> >> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> | created_at | 2019-09-03T16:51:30Z >> >> | >> | description | >> >> | >> | direction | ingress >> >> | >> | ether_type | IPv6 >> >> | >> | id | 81f3588d-4159-4af2-ad50-ff6b76add9cf >> >> | >> | location | Munch({'project': Munch({'domain_id': 'default', >> 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': >> None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | >> | name | None >> >> | >> | port_range_max | None >> >> | >> | port_range_min | None >> >> | >> | project_id | 68e3942285a24fb5bd1aed30e166aaee >> >> | >> | protocol | ipv6-icmp >> >> | >> | remote_group_id | None >> >> | >> | remote_ip_prefix | None >> >> | >> | revision_number | 0 >> >> | >> | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd >> >> | >> | tags | [] >> >> | >> | updated_at | 2019-09-03T16:51:30Z >> >> | >> >> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> >> >> On Fri, Sep 13, 2019 at 10:16 AM Donny Davis >> wrote: >> >>> Security group rules? >>> >>> Donny Davis >>> c: 805 814 6800 >>> >>> On Thu, Sep 12, 2019, 5:53 PM Lucio Seki wrote: >>> >>>> Hi folks, I'm having troubles to ping6 a VM running over DevStack from >>>> its hypervisor. >>>> Could you please help me troubleshooting it? >>>> >>>> I deployed DevStack with NEUTRON_CREATE_INITIAL_NETWORKS=False, >>>> and manually created the networks, subnets and router. 
Following is my >>>> router: >>>> >>>> $ openstack router show router1 -c external_gateway_info -c >>>> interfaces_info >>>> >>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> | Field | Value >>>> >>>> >>>> | >>>> >>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> | external_gateway_info | {"network_id": >>>> "b87048ed-1be9-4f31-8d7e-fe74921aeec4", "enable_snat": true, >>>> "external_fixed_ips": [{"subnet_id": >>>> "28a00bc3-b30b-456f-b26a-44b50d37183f", "ip_address": "10.2.0.199"}, >>>> {"subnet_id": "a9729beb-b297-4fec-8ec3-7703f7f6f4bc", "ip_address": >>>> "fd12:67:1::3c"}]} | >>>> | interfaces_info | [{"subnet_id": >>>> "081e8508-4ceb-4aaf-bf91-36a1e22a768c", "ip_address": "fd12:67:1:1::1", >>>> "port_id": "75391abd-8ac8-41f8-acf8-3dfaf2a6b08f"}] >>>> >>>> | >>>> >>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> >>>> I'm trying to ping6 the following VM: >>>> >>>> $ openstack server list >>>> >>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>> | ID | Name | Status | Networks >>>> | Image | Flavor | >>>> >>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>> | 938854d0-80e9-45b2-bc29-8fe7651ffa93 | manila1 | ACTIVE | >>>> private1=fd12:67:1:1:f816:3eff:fe0e:17c3 | manila | manila | >>>> >>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>> >>>> I intend to reach it via br-ex interface of the hypervisor: >>>> >>>> $ ip a show dev br-ex >>>> 9: br-ex: mtu 1500 qdisc noqueue >>>> state UNKNOWN group default qlen 1000 >>>> link/ether 0e:82:a1:ba:77:4c brd ff:ff:ff:ff:ff:ff >>>> inet6 fd12:67:1::1/64 scope global >>>> valid_lft forever preferred_lft forever >>>> inet6 fe80::c82:a1ff:feba:774c/64 scope link >>>> valid_lft forever preferred_lft forever >>>> >>>> The hypervisor has the following routes: >>>> >>>> $ ip -6 route >>>> fd12:67:1:1::/64 via fd12:67:1::3c dev br-ex metric 1024 pref medium >>>> fe80::/64 dev ens3 proto kernel metric 256 pref medium >>>> fe80::/64 dev br-ex proto kernel metric 256 pref medium >>>> fe80::/64 dev br-int proto kernel metric 256 pref medium >>>> fe80::/64 dev tapa5cf4799-9f proto kernel metric 256 pref medium >>>> >>>> And within the VM has the following routes: >>>> >>>> root at ubuntu:~# ip -6 route >>>> root at ubuntu:~# ip -6 route >>>> fd12:67:1::/64 via fd12:67:1:1::1 dev ens3 metric 1024 pref medium >>>> fd12:67:1:1::/64 dev ens3 proto kernel metric 256 expires 86360sec pref >>>> medium >>>> fe80::/64 dev ens3 proto kernel metric 256 pref medium >>>> default via fe80::f816:3eff:feb3:bd56 dev ens3 proto ra metric 1024 >>>> expires 260sec hoplimit 64 pref medium >>>> 
>>>> Though the ping6 from VM to hypervisor doesn't work: >>>> root at ubuntu:~# ping6 fd12:67:1::1 -c4 >>>> PING fd12:67:1::1 (fd12:67:1::1): 56 data bytes >>>> --- fd12:67:1::1 ping statistics --- >>>> 4 packets transmitted, 0 packets received, 100% packet loss >>>> >>>> I'm able to tcpdump inside the router1 netns and see that request >>>> packet is passing there, but can't see any reply packets: >>>> >>>> $ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 >>>> tcpdump -l -i any icmp6 >>>> tcpdump: verbose output suppressed, use -v or -vv for full protocol >>>> decode >>>> listening on any, link-type LINUX_SLL (Linux cooked), capture size >>>> 262144 bytes >>>> 21:29:29.351358 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>> ICMP6, echo request, seq 0, length 64 >>>> 21:29:30.033316 IP6 fe80::f816:3eff:feb3:bd56 > >>>> fe80::f816:3eff:fe0e:17c3: ICMP6, neighbor solicitation, who has >>>> fe80::f816:3eff:fe0e:17c3, length 32 >>>> 21:29:30.035807 IP6 fe80::f816:3eff:fe0e:17c3 > >>>> fe80::f816:3eff:feb3:bd56: ICMP6, neighbor advertisement, tgt is >>>> fe80::f816:3eff:fe0e:17c3, length 24 >>>> 21:29:30.353646 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>> ICMP6, echo request, seq 1, length 64 >>>> 21:29:31.355410 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>> ICMP6, echo request, seq 2, length 64 >>>> 21:29:32.357239 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>> ICMP6, echo request, seq 3, length 64 >>>> >>>> The same happens from hypervisor to VM. I only acan see the request >>>> packets, but no reply packets. >>>> >>>> Thanks in advance, >>>> Lucio Seki >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Fri Sep 13 19:03:04 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 13 Sep 2019 12:03:04 -0700 Subject: Long, Slow Zuul Queues and Why They Happen Message-ID: <7fb77bf6-9c1d-4bba-87a6-41235e113009@www.fastmail.com> Hello, We've been fielding a fair bit of questions and suggestions around Zuul's long change (and job) queues over the last week or so. As a result I tried to put a quick FAQ type document [0] on how we schedule jobs, why we schedule that way, and how we can improve the long queues. Hoping that gives us all a better understanding of why were are in the current situation and ideas on how we can help to improve things. [0] https://docs.openstack.org/infra/manual/testing.html#why-are-jobs-for-changes-queued-for-a-long-time Thanks, Clark From mriedemos at gmail.com Fri Sep 13 19:44:19 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 13 Sep 2019 14:44:19 -0500 Subject: Long, Slow Zuul Queues and Why They Happen In-Reply-To: <7fb77bf6-9c1d-4bba-87a6-41235e113009@www.fastmail.com> References: <7fb77bf6-9c1d-4bba-87a6-41235e113009@www.fastmail.com> Message-ID: <9aaf8782-92d1-dae7-c3b1-1a1d720bdd7f@gmail.com> On 9/13/2019 2:03 PM, Clark Boylan wrote: > We've been fielding a fair bit of questions and suggestions around Zuul's long change (and job) queues over the last week or so. As a result I tried to put a quick FAQ type document [0] on how we schedule jobs, why we schedule that way, and how we can improve the long queues. > > Hoping that gives us all a better understanding of why were are in the current situation and ideas on how we can help to improve things. > > [0]https://docs.openstack.org/infra/manual/testing.html#why-are-jobs-for-changes-queued-for-a-long-time Thanks for writing this up Clark. 
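To put a rough number on why those gate resets hurt so much (the figures here are purely illustrative, not measured): if the integrated gate queue is holding 20 changes, each running about 15 jobs that average 1.5 hours, then a single failure near the head of the queue throws away on the order of

    20 changes x 15 jobs x 1.5 hours ~= 450 node-hours

of in-flight work, since every change behind the failure has its jobs restarted against the new queue state. That is the same math behind why promoting an unverified fix straight to the gate is a gamble.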
As for the current status of the gate, several nova devs have been closely monitoring the gate since we have 3 fairly lengthy series of feature changes approved since yesterday and we're trying to shepherd those through but we're seeing failures and trying to react to them. Two issues of note this week: 1. http://status.openstack.org/elastic-recheck/index.html#1843615 I had pushed a fix for that one earlier in the week but there was a bug in my fix which Takashi has fixed: https://review.opendev.org/#/c/682025/ That was promoted to the gate earlier today but failed on... 2. http://status.openstack.org/elastic-recheck/index.html#1813147 We have a couple of patches up for that now which might get promoted once we are reasonably sure those are going to pass check (promote to gate means skipping check which is risky because if it fails in the gate we have to re-queue the gate as the doc above explains). As far as overall failure classifications we're pretty good there in elastic-recheck: http://status.openstack.org/elastic-recheck/data/integrated_gate.html Meaning for the most part we know what's failing, we just need to fix the bugs. One that continues to dog us (and by "us" I mean OpenStack, not just nova) is this one: http://status.openstack.org/elastic-recheck/gate.html#1686542 The QA team's work to split apart the big tempest full jobs into service-oriented jobs like tempest-integrated-compute should have helped here but we're still seeing there are lots of jobs timing out which likely means there are some really slow tests running in too many jobs and those require investigation. It could also be devstack setup that is taking a long time like Clark identified with OSC usage awhile back: http://lists.openstack.org/pipermail/openstack-discuss/2019-July/008071.html If you have questions about how elastic-recheck works or how to help investigate some of these failures, like with using logstash.openstack.org, please reach out to me (mriedem), clarkb and/or gmann in #openstack-qa. -- Thanks, Matt From lucioseki at gmail.com Fri Sep 13 18:48:32 2019 From: lucioseki at gmail.com (Lucio Seki) Date: Fri, 13 Sep 2019 15:48:32 -0300 Subject: [neutron] DevStack with IPv6 In-Reply-To: References: Message-ID: Hmm OK, I'll try to figure out what hacking create_neutron_initial_network does... BTW, I noticed that I can ping6 the router interface at private subnet from the DevStack host: $ ping6 fd12:67:1:1::1 PING fd12:67:1:1::1(fd12:67:1:1::1) 56 data bytes 64 bytes from fd12:67:1:1::1: icmp_seq=1 ttl=64 time=0.646 ms 64 bytes from fd12:67:1:1::1: icmp_seq=2 ttl=64 time=0.095 ms 64 bytes from fd12:67:1:1::1: icmp_seq=3 ttl=64 time=0.106 ms 64 bytes from fd12:67:1:1::1: icmp_seq=4 ttl=64 time=0.129 ms And also I can ping6 the public subnet interface from the VM: root at ubuntu:~# ping6 fd12:67:1::3c PING fd12:67:1::3c (fd12:67:1::3c): 56 data bytes ping: getnameinfo: Temporary failure in name resolution 64 bytes from unknown: icmp_seq=0 ttl=64 time=2.079 ms ping: getnameinfo: Temporary failure in name resolution 64 bytes from unknown: icmp_seq=1 ttl=64 time=1.385 ms ping: getnameinfo: Temporary failure in name resolution 64 bytes from unknown: icmp_seq=2 ttl=64 time=0.881 ms Not sure if it means that there's something missing within the router itself... 
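A few checks on the hypervisor side may help narrow it down further (a rough sketch using the device names from the outputs above; adjust to your setup):

# does the echo request from the VM ever show up on the external bridge,
# and does the host attempt a reply?
sudo tcpdump -ni br-ex icmp6

# is neighbour discovery between br-ex and the router gateway port resolving?
ip -6 neigh show dev br-ex

# is anything on the host dropping ICMPv6, and is IPv6 forwarding enabled?
sudo ip6tables -S | grep -i icmp
sysctl net.ipv6.conf.all.forwarding

If the request reaches br-ex but no reply is ever generated, the problem is likely on the host itself (firewall or neighbour discovery) rather than in the Neutron router.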
On Fri, Sep 13, 2019 at 2:24 PM Donny Davis wrote: > Also I have no v6 address on my br-ex > > On Fri, Sep 13, 2019 at 1:22 PM Donny Davis wrote: > >> Well here is the output from my rule list that is in prod right now with >> ipv6 >> >> +--------------------------------------+-------------+-----------+------------+-----------------------+ >> | ID | IP Protocol | IP Range | Port >> Range | Remote Security Group | >> >> +--------------------------------------+-------------+-----------+------------+-----------------------+ >> | 9ab00b6f-2bc2-4554-818d-eff6e0570943 | None | 0.0.0.0/0 | >> | None | >> | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa | icmp | ::/0 | >> | None | >> | e7fd4840-5fbd-4709-b918-f80eac5cb6da | None | ::/0 | >> | None | >> | e9968d53-7efe-4a9e-ad42-1092ffaf52e7 | None | None | >> | None | >> | ec1ea961-9025-4229-92cf-618026a1851b | None | None | >> | None | >> >> +--------------------------------------+-------------+-----------+------------+-----------------------+ >> >> >> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> | Field | Value >> >> | >> >> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> | created_at | 2019-07-30T00:50:25Z >> >> | >> | description | >> >> | >> | direction | ingress >> >> | >> | ether_type | IPv6 >> >> | >> | id | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa >> >> | >> | location | Munch({'cloud': '', 'region_name': 'regionOne', >> 'zone': None, 'project': Munch({'id': 'e8fd161dc34c421a979a9e6421f823e9', >> 'name': 'openstackzuul', 'domain_id': None, 'domain_name': 'Default'})}) | >> | name | None >> >> | >> | port_range_max | None >> >> | >> | port_range_min | None >> >> | >> | project_id | e8fd161dc34c421a979a9e6421f823e9 >> >> | >> | protocol | icmp >> >> | >> | remote_group_id | None >> >> | >> | remote_ip_prefix | ::/0 >> >> | >> | revision_number | 0 >> >> | >> | security_group_id | bcedc0e0-e2e8-41fc-aeaa-afd2e10c7ab6 >> >> | >> | tags | [] >> >> | >> | updated_at | 2019-07-30T00:50:25Z >> >> | >> >> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> >> >> >> >> >> On Fri, Sep 13, 2019 at 9:24 AM Lucio Seki wrote: >> >>> Hi Donny, following are the rules: >>> >>> $ openstack security group list --project admin >>> >>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>> | ID | Name | Description >>> | Project | Tags | >>> >>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | default | Default security >>> group | 68e3942285a24fb5bd1aed30e166aaee | [] | >>> >>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>> >>> $ openstack security group rule list d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>> >>> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >>> | ID | IP Protocol | IP Range | Port >>> Range | Remote Security Group | >>> >>> 
+--------------------------------------+-------------+----------+------------+--------------------------------------+ >>> | 38394345-3e44-4284-a519-cdd8af020f30 | tcp | ::/0 | 22:22 >>> | None | >>> | 40881f76-c87f-4685-b3af-c3497dd44837 | None | None | >>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | >>> | 56d4ae52-195e-48df-871e-dc70b899b7ba | None | None | >>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | >>> | 759edd06-b698-45ca-94cd-44e0cc2cc848 | ipv6-icmp | None | >>> | None | >>> | 762effae-b8e5-42ac-ba99-e85a7bc42455 | tcp | ::/0 | 22:22 >>> | None | >>> | 81f3588d-4159-4af2-ad50-ff6b76add9cf | ipv6-icmp | None | >>> | None | >>> >>> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >>> >>> $ openstack security group rule show >>> 759edd06-b698-45ca-94cd-44e0cc2cc848 >>> >>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> | Field | Value >>> >>> | >>> >>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> | created_at | 2019-09-03T16:51:41Z >>> >>> | >>> | description | >>> >>> | >>> | direction | egress >>> >>> | >>> | ether_type | IPv6 >>> >>> | >>> | id | 759edd06-b698-45ca-94cd-44e0cc2cc848 >>> >>> | >>> | location | Munch({'project': Munch({'domain_id': 'default', >>> 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': >>> None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | >>> | name | None >>> >>> | >>> | port_range_max | None >>> >>> | >>> | port_range_min | None >>> >>> | >>> | project_id | 68e3942285a24fb5bd1aed30e166aaee >>> >>> | >>> | protocol | ipv6-icmp >>> >>> | >>> | remote_group_id | None >>> >>> | >>> | remote_ip_prefix | None >>> >>> | >>> | revision_number | 0 >>> >>> | >>> | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>> >>> | >>> | tags | [] >>> >>> | >>> | updated_at | 2019-09-03T16:51:41Z >>> >>> | >>> >>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> >>> $ openstack security group rule show 81f3588d-4159-4af2-ad50-ff6b76add9cf >>> >>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> | Field | Value >>> >>> | >>> >>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> | created_at | 2019-09-03T16:51:30Z >>> >>> | >>> | description | >>> >>> | >>> | direction | ingress >>> >>> | >>> | ether_type | IPv6 >>> >>> | >>> | id | 81f3588d-4159-4af2-ad50-ff6b76add9cf >>> >>> | >>> | location | Munch({'project': Munch({'domain_id': 'default', >>> 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': >>> None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | >>> | name | None >>> >>> | >>> | port_range_max | None >>> >>> | >>> | port_range_min | None >>> >>> | >>> | project_id | 
68e3942285a24fb5bd1aed30e166aaee >>> >>> | >>> | protocol | ipv6-icmp >>> >>> | >>> | remote_group_id | None >>> >>> | >>> | remote_ip_prefix | None >>> >>> | >>> | revision_number | 0 >>> >>> | >>> | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>> >>> | >>> | tags | [] >>> >>> | >>> | updated_at | 2019-09-03T16:51:30Z >>> >>> | >>> >>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> >>> >>> On Fri, Sep 13, 2019 at 10:16 AM Donny Davis >>> wrote: >>> >>>> Security group rules? >>>> >>>> Donny Davis >>>> c: 805 814 6800 >>>> >>>> On Thu, Sep 12, 2019, 5:53 PM Lucio Seki wrote: >>>> >>>>> Hi folks, I'm having troubles to ping6 a VM running over DevStack from >>>>> its hypervisor. >>>>> Could you please help me troubleshooting it? >>>>> >>>>> I deployed DevStack with NEUTRON_CREATE_INITIAL_NETWORKS=False, >>>>> and manually created the networks, subnets and router. Following is my >>>>> router: >>>>> >>>>> $ openstack router show router1 -c external_gateway_info -c >>>>> interfaces_info >>>>> >>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> | Field | Value >>>>> >>>>> >>>>> | >>>>> >>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> | external_gateway_info | {"network_id": >>>>> "b87048ed-1be9-4f31-8d7e-fe74921aeec4", "enable_snat": true, >>>>> "external_fixed_ips": [{"subnet_id": >>>>> "28a00bc3-b30b-456f-b26a-44b50d37183f", "ip_address": "10.2.0.199"}, >>>>> {"subnet_id": "a9729beb-b297-4fec-8ec3-7703f7f6f4bc", "ip_address": >>>>> "fd12:67:1::3c"}]} | >>>>> | interfaces_info | [{"subnet_id": >>>>> "081e8508-4ceb-4aaf-bf91-36a1e22a768c", "ip_address": "fd12:67:1:1::1", >>>>> "port_id": "75391abd-8ac8-41f8-acf8-3dfaf2a6b08f"}] >>>>> >>>>> | >>>>> >>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> >>>>> I'm trying to ping6 the following VM: >>>>> >>>>> $ openstack server list >>>>> >>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>> | ID | Name | Status | Networks >>>>> | Image | Flavor | >>>>> >>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>> | 938854d0-80e9-45b2-bc29-8fe7651ffa93 | manila1 | ACTIVE | >>>>> private1=fd12:67:1:1:f816:3eff:fe0e:17c3 | manila | manila | >>>>> >>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>> >>>>> I intend to reach it via br-ex interface of the hypervisor: >>>>> >>>>> $ ip a show dev br-ex >>>>> 9: br-ex: mtu 1500 qdisc noqueue >>>>> state UNKNOWN group default qlen 1000 >>>>> 
link/ether 0e:82:a1:ba:77:4c brd ff:ff:ff:ff:ff:ff >>>>> inet6 fd12:67:1::1/64 scope global >>>>> valid_lft forever preferred_lft forever >>>>> inet6 fe80::c82:a1ff:feba:774c/64 scope link >>>>> valid_lft forever preferred_lft forever >>>>> >>>>> The hypervisor has the following routes: >>>>> >>>>> $ ip -6 route >>>>> fd12:67:1:1::/64 via fd12:67:1::3c dev br-ex metric 1024 pref medium >>>>> fe80::/64 dev ens3 proto kernel metric 256 pref medium >>>>> fe80::/64 dev br-ex proto kernel metric 256 pref medium >>>>> fe80::/64 dev br-int proto kernel metric 256 pref medium >>>>> fe80::/64 dev tapa5cf4799-9f proto kernel metric 256 pref medium >>>>> >>>>> And within the VM has the following routes: >>>>> >>>>> root at ubuntu:~# ip -6 route >>>>> root at ubuntu:~# ip -6 route >>>>> fd12:67:1::/64 via fd12:67:1:1::1 dev ens3 metric 1024 pref medium >>>>> fd12:67:1:1::/64 dev ens3 proto kernel metric 256 expires 86360sec >>>>> pref medium >>>>> fe80::/64 dev ens3 proto kernel metric 256 pref medium >>>>> default via fe80::f816:3eff:feb3:bd56 dev ens3 proto ra metric 1024 >>>>> expires 260sec hoplimit 64 pref medium >>>>> >>>>> Though the ping6 from VM to hypervisor doesn't work: >>>>> root at ubuntu:~# ping6 fd12:67:1::1 -c4 >>>>> PING fd12:67:1::1 (fd12:67:1::1): 56 data bytes >>>>> --- fd12:67:1::1 ping statistics --- >>>>> 4 packets transmitted, 0 packets received, 100% packet loss >>>>> >>>>> I'm able to tcpdump inside the router1 netns and see that request >>>>> packet is passing there, but can't see any reply packets: >>>>> >>>>> $ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 >>>>> tcpdump -l -i any icmp6 >>>>> tcpdump: verbose output suppressed, use -v or -vv for full protocol >>>>> decode >>>>> listening on any, link-type LINUX_SLL (Linux cooked), capture size >>>>> 262144 bytes >>>>> 21:29:29.351358 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>> ICMP6, echo request, seq 0, length 64 >>>>> 21:29:30.033316 IP6 fe80::f816:3eff:feb3:bd56 > >>>>> fe80::f816:3eff:fe0e:17c3: ICMP6, neighbor solicitation, who has >>>>> fe80::f816:3eff:fe0e:17c3, length 32 >>>>> 21:29:30.035807 IP6 fe80::f816:3eff:fe0e:17c3 > >>>>> fe80::f816:3eff:feb3:bd56: ICMP6, neighbor advertisement, tgt is >>>>> fe80::f816:3eff:fe0e:17c3, length 24 >>>>> 21:29:30.353646 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>> ICMP6, echo request, seq 1, length 64 >>>>> 21:29:31.355410 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>> ICMP6, echo request, seq 2, length 64 >>>>> 21:29:32.357239 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>> ICMP6, echo request, seq 3, length 64 >>>>> >>>>> The same happens from hypervisor to VM. I only acan see the request >>>>> packets, but no reply packets. >>>>> >>>>> Thanks in advance, >>>>> Lucio Seki >>>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Fri Sep 13 18:55:20 2019 From: donny at fortnebula.com (Donny Davis) Date: Fri, 13 Sep 2019 14:55:20 -0400 Subject: [neutron] DevStack with IPv6 In-Reply-To: References: Message-ID: So outbound traffic works, but inbound traffic doesn't? Here is my icmp security group rule for ipv6. 
+-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | created_at | 2019-07-30T00:50:25Z | | description | | | direction | ingress | | ether_type | IPv6 | | id | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa | | location | Munch({'cloud': '', 'region_name': 'regionOne', 'zone': None, 'project': Munch({'id': 'e8fd161dc34c421a979a9e6421f823e9', 'name': 'openstackzuul', 'domain_id': None, 'domain_name': 'Default'})}) | | name | None | | port_range_max | None | | port_range_min | None | | project_id | e8fd161dc34c421a979a9e6421f823e9 | | protocol | icmp | | remote_group_id | None | | remote_ip_prefix | ::/0 | | revision_number | 0 | | security_group_id | bcedc0e0-e2e8-41fc-aeaa-afd2e10c7ab6 | | tags | [] | | updated_at | 2019-07-30T00:50:25Z | +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ On Fri, Sep 13, 2019 at 2:48 PM Lucio Seki wrote: > Hmm OK, I'll try to figure out what hacking create_neutron_initial_network > does... > > BTW, I noticed that I can ping6 the router interface at private subnet > from the DevStack host: > > $ ping6 fd12:67:1:1::1 > PING fd12:67:1:1::1(fd12:67:1:1::1) 56 data bytes > 64 bytes from fd12:67:1:1::1: icmp_seq=1 ttl=64 time=0.646 ms > 64 bytes from fd12:67:1:1::1: icmp_seq=2 ttl=64 time=0.095 ms > 64 bytes from fd12:67:1:1::1: icmp_seq=3 ttl=64 time=0.106 ms > 64 bytes from fd12:67:1:1::1: icmp_seq=4 ttl=64 time=0.129 ms > > And also I can ping6 the public subnet interface from the VM: > > root at ubuntu:~# ping6 fd12:67:1::3c > PING fd12:67:1::3c (fd12:67:1::3c): 56 data bytes > ping: getnameinfo: Temporary failure in name resolution > 64 bytes from unknown: icmp_seq=0 ttl=64 time=2.079 ms > ping: getnameinfo: Temporary failure in name resolution > 64 bytes from unknown: icmp_seq=1 ttl=64 time=1.385 ms > ping: getnameinfo: Temporary failure in name resolution > 64 bytes from unknown: icmp_seq=2 ttl=64 time=0.881 ms > > Not sure if it means that there's something missing within the router > itself... 
> > On Fri, Sep 13, 2019 at 2:24 PM Donny Davis wrote: > >> Also I have no v6 address on my br-ex >> >> On Fri, Sep 13, 2019 at 1:22 PM Donny Davis wrote: >> >>> Well here is the output from my rule list that is in prod right now with >>> ipv6 >>> >>> +--------------------------------------+-------------+-----------+------------+-----------------------+ >>> | ID | IP Protocol | IP Range | Port >>> Range | Remote Security Group | >>> >>> +--------------------------------------+-------------+-----------+------------+-----------------------+ >>> | 9ab00b6f-2bc2-4554-818d-eff6e0570943 | None | 0.0.0.0/0 | >>> | None | >>> | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa | icmp | ::/0 | >>> | None | >>> | e7fd4840-5fbd-4709-b918-f80eac5cb6da | None | ::/0 | >>> | None | >>> | e9968d53-7efe-4a9e-ad42-1092ffaf52e7 | None | None | >>> | None | >>> | ec1ea961-9025-4229-92cf-618026a1851b | None | None | >>> | None | >>> >>> +--------------------------------------+-------------+-----------+------------+-----------------------+ >>> >>> >>> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> | Field | Value >>> >>> | >>> >>> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> | created_at | 2019-07-30T00:50:25Z >>> >>> | >>> | description | >>> >>> | >>> | direction | ingress >>> >>> | >>> | ether_type | IPv6 >>> >>> | >>> | id | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa >>> >>> | >>> | location | Munch({'cloud': '', 'region_name': 'regionOne', >>> 'zone': None, 'project': Munch({'id': 'e8fd161dc34c421a979a9e6421f823e9', >>> 'name': 'openstackzuul', 'domain_id': None, 'domain_name': 'Default'})}) | >>> | name | None >>> >>> | >>> | port_range_max | None >>> >>> | >>> | port_range_min | None >>> >>> | >>> | project_id | e8fd161dc34c421a979a9e6421f823e9 >>> >>> | >>> | protocol | icmp >>> >>> | >>> | remote_group_id | None >>> >>> | >>> | remote_ip_prefix | ::/0 >>> >>> | >>> | revision_number | 0 >>> >>> | >>> | security_group_id | bcedc0e0-e2e8-41fc-aeaa-afd2e10c7ab6 >>> >>> | >>> | tags | [] >>> >>> | >>> | updated_at | 2019-07-30T00:50:25Z >>> >>> | >>> >>> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> >>> >>> >>> >>> >>> On Fri, Sep 13, 2019 at 9:24 AM Lucio Seki wrote: >>> >>>> Hi Donny, following are the rules: >>>> >>>> $ openstack security group list --project admin >>>> >>>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>>> | ID | Name | Description >>>> | Project | Tags | >>>> >>>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | default | Default security >>>> group | 68e3942285a24fb5bd1aed30e166aaee | [] | >>>> >>>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>>> >>>> $ openstack security group rule list >>>> d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>>> >>>> 
+--------------------------------------+-------------+----------+------------+--------------------------------------+ >>>> | ID | IP Protocol | IP Range | Port >>>> Range | Remote Security Group | >>>> >>>> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >>>> | 38394345-3e44-4284-a519-cdd8af020f30 | tcp | ::/0 | 22:22 >>>> | None | >>>> | 40881f76-c87f-4685-b3af-c3497dd44837 | None | None | >>>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | >>>> | 56d4ae52-195e-48df-871e-dc70b899b7ba | None | None | >>>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | >>>> | 759edd06-b698-45ca-94cd-44e0cc2cc848 | ipv6-icmp | None | >>>> | None | >>>> | 762effae-b8e5-42ac-ba99-e85a7bc42455 | tcp | ::/0 | 22:22 >>>> | None | >>>> | 81f3588d-4159-4af2-ad50-ff6b76add9cf | ipv6-icmp | None | >>>> | None | >>>> >>>> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >>>> >>>> $ openstack security group rule show >>>> 759edd06-b698-45ca-94cd-44e0cc2cc848 >>>> >>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> | Field | Value >>>> >>>> | >>>> >>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> | created_at | 2019-09-03T16:51:41Z >>>> >>>> | >>>> | description | >>>> >>>> | >>>> | direction | egress >>>> >>>> | >>>> | ether_type | IPv6 >>>> >>>> | >>>> | id | 759edd06-b698-45ca-94cd-44e0cc2cc848 >>>> >>>> | >>>> | location | Munch({'project': Munch({'domain_id': 'default', >>>> 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': >>>> None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | >>>> | name | None >>>> >>>> | >>>> | port_range_max | None >>>> >>>> | >>>> | port_range_min | None >>>> >>>> | >>>> | project_id | 68e3942285a24fb5bd1aed30e166aaee >>>> >>>> | >>>> | protocol | ipv6-icmp >>>> >>>> | >>>> | remote_group_id | None >>>> >>>> | >>>> | remote_ip_prefix | None >>>> >>>> | >>>> | revision_number | 0 >>>> >>>> | >>>> | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>>> >>>> | >>>> | tags | [] >>>> >>>> | >>>> | updated_at | 2019-09-03T16:51:41Z >>>> >>>> | >>>> >>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> >>>> $ openstack security group rule show >>>> 81f3588d-4159-4af2-ad50-ff6b76add9cf >>>> >>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> | Field | Value >>>> >>>> | >>>> >>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> | created_at | 2019-09-03T16:51:30Z >>>> >>>> | >>>> | description | >>>> >>>> | >>>> | direction | ingress >>>> >>>> | >>>> | ether_type | IPv6 >>>> >>>> | >>>> | id | 81f3588d-4159-4af2-ad50-ff6b76add9cf >>>> >>>> | >>>> | location | Munch({'project': Munch({'domain_id': 
'default', >>>> 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': >>>> None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | >>>> | name | None >>>> >>>> | >>>> | port_range_max | None >>>> >>>> | >>>> | port_range_min | None >>>> >>>> | >>>> | project_id | 68e3942285a24fb5bd1aed30e166aaee >>>> >>>> | >>>> | protocol | ipv6-icmp >>>> >>>> | >>>> | remote_group_id | None >>>> >>>> | >>>> | remote_ip_prefix | None >>>> >>>> | >>>> | revision_number | 0 >>>> >>>> | >>>> | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>>> >>>> | >>>> | tags | [] >>>> >>>> | >>>> | updated_at | 2019-09-03T16:51:30Z >>>> >>>> | >>>> >>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> >>>> >>>> On Fri, Sep 13, 2019 at 10:16 AM Donny Davis >>>> wrote: >>>> >>>>> Security group rules? >>>>> >>>>> Donny Davis >>>>> c: 805 814 6800 >>>>> >>>>> On Thu, Sep 12, 2019, 5:53 PM Lucio Seki wrote: >>>>> >>>>>> Hi folks, I'm having troubles to ping6 a VM running over DevStack >>>>>> from its hypervisor. >>>>>> Could you please help me troubleshooting it? >>>>>> >>>>>> I deployed DevStack with NEUTRON_CREATE_INITIAL_NETWORKS=False, >>>>>> and manually created the networks, subnets and router. Following is >>>>>> my router: >>>>>> >>>>>> $ openstack router show router1 -c external_gateway_info -c >>>>>> interfaces_info >>>>>> >>>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>> | Field | Value >>>>>> >>>>>> >>>>>> | >>>>>> >>>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>> | external_gateway_info | {"network_id": >>>>>> "b87048ed-1be9-4f31-8d7e-fe74921aeec4", "enable_snat": true, >>>>>> "external_fixed_ips": [{"subnet_id": >>>>>> "28a00bc3-b30b-456f-b26a-44b50d37183f", "ip_address": "10.2.0.199"}, >>>>>> {"subnet_id": "a9729beb-b297-4fec-8ec3-7703f7f6f4bc", "ip_address": >>>>>> "fd12:67:1::3c"}]} | >>>>>> | interfaces_info | [{"subnet_id": >>>>>> "081e8508-4ceb-4aaf-bf91-36a1e22a768c", "ip_address": "fd12:67:1:1::1", >>>>>> "port_id": "75391abd-8ac8-41f8-acf8-3dfaf2a6b08f"}] >>>>>> >>>>>> | >>>>>> >>>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>> >>>>>> I'm trying to ping6 the following VM: >>>>>> >>>>>> $ openstack server list >>>>>> >>>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>>> | ID | Name | Status | Networks >>>>>> | Image | Flavor | >>>>>> >>>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>>> | 938854d0-80e9-45b2-bc29-8fe7651ffa93 | manila1 | ACTIVE | >>>>>> 
private1=fd12:67:1:1:f816:3eff:fe0e:17c3 | manila | manila | >>>>>> >>>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>>> >>>>>> I intend to reach it via br-ex interface of the hypervisor: >>>>>> >>>>>> $ ip a show dev br-ex >>>>>> 9: br-ex: mtu 1500 qdisc noqueue >>>>>> state UNKNOWN group default qlen 1000 >>>>>> link/ether 0e:82:a1:ba:77:4c brd ff:ff:ff:ff:ff:ff >>>>>> inet6 fd12:67:1::1/64 scope global >>>>>> valid_lft forever preferred_lft forever >>>>>> inet6 fe80::c82:a1ff:feba:774c/64 scope link >>>>>> valid_lft forever preferred_lft forever >>>>>> >>>>>> The hypervisor has the following routes: >>>>>> >>>>>> $ ip -6 route >>>>>> fd12:67:1:1::/64 via fd12:67:1::3c dev br-ex metric 1024 pref medium >>>>>> fe80::/64 dev ens3 proto kernel metric 256 pref medium >>>>>> fe80::/64 dev br-ex proto kernel metric 256 pref medium >>>>>> fe80::/64 dev br-int proto kernel metric 256 pref medium >>>>>> fe80::/64 dev tapa5cf4799-9f proto kernel metric 256 pref medium >>>>>> >>>>>> And within the VM has the following routes: >>>>>> >>>>>> root at ubuntu:~# ip -6 route >>>>>> root at ubuntu:~# ip -6 route >>>>>> fd12:67:1::/64 via fd12:67:1:1::1 dev ens3 metric 1024 pref medium >>>>>> fd12:67:1:1::/64 dev ens3 proto kernel metric 256 expires 86360sec >>>>>> pref medium >>>>>> fe80::/64 dev ens3 proto kernel metric 256 pref medium >>>>>> default via fe80::f816:3eff:feb3:bd56 dev ens3 proto ra metric 1024 >>>>>> expires 260sec hoplimit 64 pref medium >>>>>> >>>>>> Though the ping6 from VM to hypervisor doesn't work: >>>>>> root at ubuntu:~# ping6 fd12:67:1::1 -c4 >>>>>> PING fd12:67:1::1 (fd12:67:1::1): 56 data bytes >>>>>> --- fd12:67:1::1 ping statistics --- >>>>>> 4 packets transmitted, 0 packets received, 100% packet loss >>>>>> >>>>>> I'm able to tcpdump inside the router1 netns and see that request >>>>>> packet is passing there, but can't see any reply packets: >>>>>> >>>>>> $ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 >>>>>> tcpdump -l -i any icmp6 >>>>>> tcpdump: verbose output suppressed, use -v or -vv for full protocol >>>>>> decode >>>>>> listening on any, link-type LINUX_SLL (Linux cooked), capture size >>>>>> 262144 bytes >>>>>> 21:29:29.351358 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>>> ICMP6, echo request, seq 0, length 64 >>>>>> 21:29:30.033316 IP6 fe80::f816:3eff:feb3:bd56 > >>>>>> fe80::f816:3eff:fe0e:17c3: ICMP6, neighbor solicitation, who has >>>>>> fe80::f816:3eff:fe0e:17c3, length 32 >>>>>> 21:29:30.035807 IP6 fe80::f816:3eff:fe0e:17c3 > >>>>>> fe80::f816:3eff:feb3:bd56: ICMP6, neighbor advertisement, tgt is >>>>>> fe80::f816:3eff:fe0e:17c3, length 24 >>>>>> 21:29:30.353646 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>>> ICMP6, echo request, seq 1, length 64 >>>>>> 21:29:31.355410 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>>> ICMP6, echo request, seq 2, length 64 >>>>>> 21:29:32.357239 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>>> ICMP6, echo request, seq 3, length 64 >>>>>> >>>>>> The same happens from hypervisor to VM. I only acan see the request >>>>>> packets, but no reply packets. >>>>>> >>>>>> Thanks in advance, >>>>>> Lucio Seki >>>>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lucioseki at gmail.com Fri Sep 13 19:38:45 2019 From: lucioseki at gmail.com (Lucio Seki) Date: Fri, 13 Sep 2019 16:38:45 -0300 Subject: [neutron] DevStack with IPv6 In-Reply-To: References: Message-ID: I drawed the environment I have [1]. Also attached it as an image. Currently I have the interfaces 1 pinging 3, and 4 pinging 2. When I attempt to make 1 ping 4, I can only see the request packets at 2. When I attempt to make 4 ping 1, I can only see the request packets at 3. [1] https://docs.google.com/drawings/d/1zhgN9TCINrVIlQpZT9hlCrHxWrQerjIo62oRmTGx0-c/edit?usp=sharing On Fri, Sep 13, 2019 at 3:55 PM Donny Davis wrote: > So outbound traffic works, but inbound traffic doesn't? > > Here is my icmp security group rule for ipv6. > > +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | Field | Value > > | > > +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | created_at | 2019-07-30T00:50:25Z > > | > | description | > > | > | direction | ingress > > | > | ether_type | IPv6 > > | > | id | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa > > | > | location | Munch({'cloud': '', 'region_name': 'regionOne', > 'zone': None, 'project': Munch({'id': 'e8fd161dc34c421a979a9e6421f823e9', > 'name': 'openstackzuul', 'domain_id': None, 'domain_name': 'Default'})}) | > | name | None > > | > | port_range_max | None > > | > | port_range_min | None > > | > | project_id | e8fd161dc34c421a979a9e6421f823e9 > > | > | protocol | icmp > > | > | remote_group_id | None > > | > | remote_ip_prefix | ::/0 > > | > | revision_number | 0 > > | > | security_group_id | bcedc0e0-e2e8-41fc-aeaa-afd2e10c7ab6 > > | > | tags | [] > > | > | updated_at | 2019-07-30T00:50:25Z > > | > > +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > > > On Fri, Sep 13, 2019 at 2:48 PM Lucio Seki wrote: > >> Hmm OK, I'll try to figure out what hacking >> create_neutron_initial_network does... >> >> BTW, I noticed that I can ping6 the router interface at private subnet >> from the DevStack host: >> >> $ ping6 fd12:67:1:1::1 >> PING fd12:67:1:1::1(fd12:67:1:1::1) 56 data bytes >> 64 bytes from fd12:67:1:1::1: icmp_seq=1 ttl=64 time=0.646 ms >> 64 bytes from fd12:67:1:1::1: icmp_seq=2 ttl=64 time=0.095 ms >> 64 bytes from fd12:67:1:1::1: icmp_seq=3 ttl=64 time=0.106 ms >> 64 bytes from fd12:67:1:1::1: icmp_seq=4 ttl=64 time=0.129 ms >> >> And also I can ping6 the public subnet interface from the VM: >> >> root at ubuntu:~# ping6 fd12:67:1::3c >> PING fd12:67:1::3c (fd12:67:1::3c): 56 data bytes >> ping: getnameinfo: Temporary failure in name resolution >> 64 bytes from unknown: icmp_seq=0 ttl=64 time=2.079 ms >> ping: getnameinfo: Temporary failure in name resolution >> 64 bytes from unknown: icmp_seq=1 ttl=64 time=1.385 ms >> ping: getnameinfo: Temporary failure in name resolution >> 64 bytes from unknown: icmp_seq=2 ttl=64 time=0.881 ms >> >> Not sure if it means that there's something missing within the router >> itself... 
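To help rule out the router itself, a few read-only checks inside the qrouter namespace are worth running (the namespace ID below is the one from the tcpdump command quoted earlier in this thread; adjust it if yours differs):

$ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 ip -6 addr
$ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 ip -6 route
$ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 ip6tables -L -n -v

The first two show whether the router actually carries the expected fd12:67:1::3c and fd12:67:1:1::1 addresses and a route between the two subnets; the last one shows whether any ip6tables rule in that namespace is counting (and possibly dropping) the ICMPv6 traffic.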
>> >> On Fri, Sep 13, 2019 at 2:24 PM Donny Davis wrote: >> >>> Also I have no v6 address on my br-ex >>> >>> On Fri, Sep 13, 2019 at 1:22 PM Donny Davis >>> wrote: >>> >>>> Well here is the output from my rule list that is in prod right now >>>> with ipv6 >>>> >>>> +--------------------------------------+-------------+-----------+------------+-----------------------+ >>>> | ID | IP Protocol | IP Range | Port >>>> Range | Remote Security Group | >>>> >>>> +--------------------------------------+-------------+-----------+------------+-----------------------+ >>>> | 9ab00b6f-2bc2-4554-818d-eff6e0570943 | None | 0.0.0.0/0 | >>>> | None | >>>> | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa | icmp | ::/0 | >>>> | None | >>>> | e7fd4840-5fbd-4709-b918-f80eac5cb6da | None | ::/0 | >>>> | None | >>>> | e9968d53-7efe-4a9e-ad42-1092ffaf52e7 | None | None | >>>> | None | >>>> | ec1ea961-9025-4229-92cf-618026a1851b | None | None | >>>> | None | >>>> >>>> +--------------------------------------+-------------+-----------+------------+-----------------------+ >>>> >>>> >>>> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> | Field | Value >>>> >>>> | >>>> >>>> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> | created_at | 2019-07-30T00:50:25Z >>>> >>>> | >>>> | description | >>>> >>>> | >>>> | direction | ingress >>>> >>>> | >>>> | ether_type | IPv6 >>>> >>>> | >>>> | id | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa >>>> >>>> | >>>> | location | Munch({'cloud': '', 'region_name': 'regionOne', >>>> 'zone': None, 'project': Munch({'id': 'e8fd161dc34c421a979a9e6421f823e9', >>>> 'name': 'openstackzuul', 'domain_id': None, 'domain_name': 'Default'})}) | >>>> | name | None >>>> >>>> | >>>> | port_range_max | None >>>> >>>> | >>>> | port_range_min | None >>>> >>>> | >>>> | project_id | e8fd161dc34c421a979a9e6421f823e9 >>>> >>>> | >>>> | protocol | icmp >>>> >>>> | >>>> | remote_group_id | None >>>> >>>> | >>>> | remote_ip_prefix | ::/0 >>>> >>>> | >>>> | revision_number | 0 >>>> >>>> | >>>> | security_group_id | bcedc0e0-e2e8-41fc-aeaa-afd2e10c7ab6 >>>> >>>> | >>>> | tags | [] >>>> >>>> | >>>> | updated_at | 2019-07-30T00:50:25Z >>>> >>>> | >>>> >>>> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> >>>> >>>> >>>> >>>> >>>> On Fri, Sep 13, 2019 at 9:24 AM Lucio Seki wrote: >>>> >>>>> Hi Donny, following are the rules: >>>>> >>>>> $ openstack security group list --project admin >>>>> >>>>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>>>> | ID | Name | Description >>>>> | Project | Tags | >>>>> >>>>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>>>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | default | Default security >>>>> group | 68e3942285a24fb5bd1aed30e166aaee | [] | >>>>> >>>>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>>>> >>>>> $ openstack security group rule list >>>>> 
d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>>>> >>>>> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >>>>> | ID | IP Protocol | IP Range | Port >>>>> Range | Remote Security Group | >>>>> >>>>> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >>>>> | 38394345-3e44-4284-a519-cdd8af020f30 | tcp | ::/0 | >>>>> 22:22 | None | >>>>> | 40881f76-c87f-4685-b3af-c3497dd44837 | None | None | >>>>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | >>>>> | 56d4ae52-195e-48df-871e-dc70b899b7ba | None | None | >>>>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | >>>>> | 759edd06-b698-45ca-94cd-44e0cc2cc848 | ipv6-icmp | None | >>>>> | None | >>>>> | 762effae-b8e5-42ac-ba99-e85a7bc42455 | tcp | ::/0 | >>>>> 22:22 | None | >>>>> | 81f3588d-4159-4af2-ad50-ff6b76add9cf | ipv6-icmp | None | >>>>> | None | >>>>> >>>>> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >>>>> >>>>> $ openstack security group rule show >>>>> 759edd06-b698-45ca-94cd-44e0cc2cc848 >>>>> >>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> | Field | Value >>>>> >>>>> | >>>>> >>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> | created_at | 2019-09-03T16:51:41Z >>>>> >>>>> | >>>>> | description | >>>>> >>>>> | >>>>> | direction | egress >>>>> >>>>> | >>>>> | ether_type | IPv6 >>>>> >>>>> | >>>>> | id | 759edd06-b698-45ca-94cd-44e0cc2cc848 >>>>> >>>>> | >>>>> | location | Munch({'project': Munch({'domain_id': 'default', >>>>> 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': >>>>> None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | >>>>> | name | None >>>>> >>>>> | >>>>> | port_range_max | None >>>>> >>>>> | >>>>> | port_range_min | None >>>>> >>>>> | >>>>> | project_id | 68e3942285a24fb5bd1aed30e166aaee >>>>> >>>>> | >>>>> | protocol | ipv6-icmp >>>>> >>>>> | >>>>> | remote_group_id | None >>>>> >>>>> | >>>>> | remote_ip_prefix | None >>>>> >>>>> | >>>>> | revision_number | 0 >>>>> >>>>> | >>>>> | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>>>> >>>>> | >>>>> | tags | [] >>>>> >>>>> | >>>>> | updated_at | 2019-09-03T16:51:41Z >>>>> >>>>> | >>>>> >>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> >>>>> $ openstack security group rule show >>>>> 81f3588d-4159-4af2-ad50-ff6b76add9cf >>>>> >>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> | Field | Value >>>>> >>>>> | >>>>> >>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> | created_at | 2019-09-03T16:51:30Z >>>>> >>>>> | >>>>> | description | >>>>> >>>>> | >>>>> | direction | ingress >>>>> >>>>> | >>>>> 
| ether_type | IPv6 >>>>> >>>>> | >>>>> | id | 81f3588d-4159-4af2-ad50-ff6b76add9cf >>>>> >>>>> | >>>>> | location | Munch({'project': Munch({'domain_id': 'default', >>>>> 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': >>>>> None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | >>>>> | name | None >>>>> >>>>> | >>>>> | port_range_max | None >>>>> >>>>> | >>>>> | port_range_min | None >>>>> >>>>> | >>>>> | project_id | 68e3942285a24fb5bd1aed30e166aaee >>>>> >>>>> | >>>>> | protocol | ipv6-icmp >>>>> >>>>> | >>>>> | remote_group_id | None >>>>> >>>>> | >>>>> | remote_ip_prefix | None >>>>> >>>>> | >>>>> | revision_number | 0 >>>>> >>>>> | >>>>> | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>>>> >>>>> | >>>>> | tags | [] >>>>> >>>>> | >>>>> | updated_at | 2019-09-03T16:51:30Z >>>>> >>>>> | >>>>> >>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> >>>>> >>>>> On Fri, Sep 13, 2019 at 10:16 AM Donny Davis >>>>> wrote: >>>>> >>>>>> Security group rules? >>>>>> >>>>>> Donny Davis >>>>>> c: 805 814 6800 >>>>>> >>>>>> On Thu, Sep 12, 2019, 5:53 PM Lucio Seki wrote: >>>>>> >>>>>>> Hi folks, I'm having troubles to ping6 a VM running over DevStack >>>>>>> from its hypervisor. >>>>>>> Could you please help me troubleshooting it? >>>>>>> >>>>>>> I deployed DevStack with NEUTRON_CREATE_INITIAL_NETWORKS=False, >>>>>>> and manually created the networks, subnets and router. Following is >>>>>>> my router: >>>>>>> >>>>>>> $ openstack router show router1 -c external_gateway_info -c >>>>>>> interfaces_info >>>>>>> >>>>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>> | Field | Value >>>>>>> >>>>>>> >>>>>>> | >>>>>>> >>>>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>> | external_gateway_info | {"network_id": >>>>>>> "b87048ed-1be9-4f31-8d7e-fe74921aeec4", "enable_snat": true, >>>>>>> "external_fixed_ips": [{"subnet_id": >>>>>>> "28a00bc3-b30b-456f-b26a-44b50d37183f", "ip_address": "10.2.0.199"}, >>>>>>> {"subnet_id": "a9729beb-b297-4fec-8ec3-7703f7f6f4bc", "ip_address": >>>>>>> "fd12:67:1::3c"}]} | >>>>>>> | interfaces_info | [{"subnet_id": >>>>>>> "081e8508-4ceb-4aaf-bf91-36a1e22a768c", "ip_address": "fd12:67:1:1::1", >>>>>>> "port_id": "75391abd-8ac8-41f8-acf8-3dfaf2a6b08f"}] >>>>>>> >>>>>>> | >>>>>>> >>>>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>> >>>>>>> I'm trying to ping6 the following VM: >>>>>>> >>>>>>> $ openstack server list >>>>>>> >>>>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>>>> | ID | Name | Status | Networks >>>>>>> | 
Image | Flavor | >>>>>>> >>>>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>>>> | 938854d0-80e9-45b2-bc29-8fe7651ffa93 | manila1 | ACTIVE | >>>>>>> private1=fd12:67:1:1:f816:3eff:fe0e:17c3 | manila | manila | >>>>>>> >>>>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>>>> >>>>>>> I intend to reach it via br-ex interface of the hypervisor: >>>>>>> >>>>>>> $ ip a show dev br-ex >>>>>>> 9: br-ex: mtu 1500 qdisc noqueue >>>>>>> state UNKNOWN group default qlen 1000 >>>>>>> link/ether 0e:82:a1:ba:77:4c brd ff:ff:ff:ff:ff:ff >>>>>>> inet6 fd12:67:1::1/64 scope global >>>>>>> valid_lft forever preferred_lft forever >>>>>>> inet6 fe80::c82:a1ff:feba:774c/64 scope link >>>>>>> valid_lft forever preferred_lft forever >>>>>>> >>>>>>> The hypervisor has the following routes: >>>>>>> >>>>>>> $ ip -6 route >>>>>>> fd12:67:1:1::/64 via fd12:67:1::3c dev br-ex metric 1024 pref medium >>>>>>> fe80::/64 dev ens3 proto kernel metric 256 pref medium >>>>>>> fe80::/64 dev br-ex proto kernel metric 256 pref medium >>>>>>> fe80::/64 dev br-int proto kernel metric 256 pref medium >>>>>>> fe80::/64 dev tapa5cf4799-9f proto kernel metric 256 pref medium >>>>>>> >>>>>>> And within the VM has the following routes: >>>>>>> >>>>>>> root at ubuntu:~# ip -6 route >>>>>>> root at ubuntu:~# ip -6 route >>>>>>> fd12:67:1::/64 via fd12:67:1:1::1 dev ens3 metric 1024 pref medium >>>>>>> fd12:67:1:1::/64 dev ens3 proto kernel metric 256 expires 86360sec >>>>>>> pref medium >>>>>>> fe80::/64 dev ens3 proto kernel metric 256 pref medium >>>>>>> default via fe80::f816:3eff:feb3:bd56 dev ens3 proto ra metric 1024 >>>>>>> expires 260sec hoplimit 64 pref medium >>>>>>> >>>>>>> Though the ping6 from VM to hypervisor doesn't work: >>>>>>> root at ubuntu:~# ping6 fd12:67:1::1 -c4 >>>>>>> PING fd12:67:1::1 (fd12:67:1::1): 56 data bytes >>>>>>> --- fd12:67:1::1 ping statistics --- >>>>>>> 4 packets transmitted, 0 packets received, 100% packet loss >>>>>>> >>>>>>> I'm able to tcpdump inside the router1 netns and see that request >>>>>>> packet is passing there, but can't see any reply packets: >>>>>>> >>>>>>> $ sudo ip netns exec qrouter-5172472c-bbe7-4907-832a-e2239c8badb4 >>>>>>> tcpdump -l -i any icmp6 >>>>>>> tcpdump: verbose output suppressed, use -v or -vv for full protocol >>>>>>> decode >>>>>>> listening on any, link-type LINUX_SLL (Linux cooked), capture size >>>>>>> 262144 bytes >>>>>>> 21:29:29.351358 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>>>> ICMP6, echo request, seq 0, length 64 >>>>>>> 21:29:30.033316 IP6 fe80::f816:3eff:feb3:bd56 > >>>>>>> fe80::f816:3eff:fe0e:17c3: ICMP6, neighbor solicitation, who has >>>>>>> fe80::f816:3eff:fe0e:17c3, length 32 >>>>>>> 21:29:30.035807 IP6 fe80::f816:3eff:fe0e:17c3 > >>>>>>> fe80::f816:3eff:feb3:bd56: ICMP6, neighbor advertisement, tgt is >>>>>>> fe80::f816:3eff:fe0e:17c3, length 24 >>>>>>> 21:29:30.353646 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>>>> ICMP6, echo request, seq 1, length 64 >>>>>>> 21:29:31.355410 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>>>> ICMP6, echo request, seq 2, length 64 >>>>>>> 21:29:32.357239 IP6 fd12:67:1:1:f816:3eff:fe0e:17c3 > fd12:67:1::1: >>>>>>> ICMP6, echo request, seq 3, length 64 >>>>>>> >>>>>>> The same happens from hypervisor to VM. I only acan see the request >>>>>>> packets, but no reply packets. 
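Seeing the echo requests but never the replies is also what instance security group filtering can look like, so one low-risk thing to try is adding explicit ipv6-icmp rules with a ::/0 remote prefix to the 'default' group used by the instance (a minimal example, not a confirmed fix):

$ openstack security group rule create --ingress --ethertype IPv6 --protocol ipv6-icmp --remote-ip ::/0 default
$ openstack security group rule create --egress --ethertype IPv6 --protocol ipv6-icmp --remote-ip ::/0 default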
>>>>>>> >>>>>>> Thanks in advance, >>>>>>> Lucio Seki >>>>>>> >>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: DevStack IPv6.png Type: image/png Size: 29595 bytes Desc: not available URL: From sean.mcginnis at gmx.com Fri Sep 13 19:54:01 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 13 Sep 2019 14:54:01 -0500 Subject: [Release-job-failures] release-post job for openstack/releases for ref refs/heads/master failed In-Reply-To: References: Message-ID: <20190913195401.GA10452@sm-workstation> On Fri, Sep 13, 2019 at 04:07:54PM +0000, zuul at openstack.org wrote: > Build failed. > > - tag-releases https://zuul.opendev.org/t/openstack/build/c95672e425294127821c55ddf1176218 : RETRY_LIMIT in 1m 18s > - publish-tox-docs-static https://zuul.opendev.org/t/openstack/build/None : SKIPPED > > _______________________________________________ > Release-job-failures mailing list > Release-job-failures at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures Just to make sure there is a record of it - this was a temporary infrastructure sync issue that was quickly resolved. The job was reenqueued and everything completed fine. Looks like everything is now good and no further action is needed, but of course if anything odd is seen related to this, please let us know. Sean From emilien at redhat.com Fri Sep 13 22:00:30 2019 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 13 Sep 2019 18:00:30 -0400 Subject: [tripleo] Deprecating paunch CLI? Message-ID: With our long-term goal to simplify TripleO and focus on what people actually deploy and how they operate their clouds, it appears that the Paunch CLI hasn't been a critical piece in our project and I propose that we deprecate it and create an Ansible module to call Paunch as a library only. I've been playing with it a little today: https://review.opendev.org/#/c/682093/ https://review.opendev.org/#/c/682094/ Here is how you would call paunch: - name: Start containers for step {{ step }} paunch: config: "/var/lib/tripleo-config/hashed-container-startup-config-step_{{ step }}.json" config_id: "tripleo_step{{ step }}" action: apply container_cli: "{{ container_cli }}" managed_by: "tripleo-{{ tripleo_role_name }}" A few benefits: - Deployment tasks in THT would call the new module instead of a shell command - More Pythonic and clean for Ansible, to interact with the actual task during the run - Removing some code in Paunch, making it easier to maintain for us For now, the Ansible module only covers "paunch apply", we will probably cover "delete" and "cleanup" eventually. Please let me know if you have any questions or concerns, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From anmar.salih1 at gmail.com Fri Sep 13 22:29:38 2019 From: anmar.salih1 at gmail.com (Anmar Salih) Date: Fri, 13 Sep 2019 18:29:38 -0400 Subject: Execute a script on every object upload event (Swift+aodh) Message-ID: Hey all, I need help configuring Swift and aodh. The idea is to trigger an aodh alarm on every object upload event on Swift. Once the alarm is triggered, a small script should be executed. So the sequence of operations should be like this: 1- Object just uploaded to a Swift container 2- Alarm triggered by aodh 3- Once the alarm is triggered, execute the Python script. I am using the DevStack Stein release installed on VirtualBox. Best regards.
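A rough sketch of the aodh side of this, in case it is useful: an event alarm with an HTTP alarm action can call a small webhook service, and that service then runs the script. Both the event type and the URL below are placeholders; the event type has to match whatever Ceilometer is configured to emit for Swift uploads, and the URL has to point at the service that will actually execute the script.

$ aodh alarm create --name object-uploaded --type event \
    --event-type 'objectstore.http.request' \
    --alarm-action 'http://127.0.0.1:8000/run-script'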
Anmar Salih -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin at cloudnull.com Fri Sep 13 23:19:14 2019 From: kevin at cloudnull.com (Carter, Kevin) Date: Fri, 13 Sep 2019 18:19:14 -0500 Subject: [tripleo] Deprecating paunch CLI? In-Reply-To: References: Message-ID: +1 - I think this is a great idea and will help simplify quite a bit. -- Kevin Carter IRC: Cloudnull On Fri, Sep 13, 2019 at 5:07 PM Emilien Macchi wrote: > With our long-term goal to simplify TripleO and focus on what people > actually deploy and how they operate their clouds, it appears that the > Paunch CLI hasn't been a critical piece in our project and I propose that > we deprecate it; create an Ansible module to call Paunch as a library only. > > I've been playing with it a little today: > https://review.opendev.org/#/c/682093/ > https://review.opendev.org/#/c/682094/ > > Here is how you would call paunch: > - name: Start containers for step {{ step }} > paunch: > config: > "/var/lib/tripleo-config/hashed-container-startup-config-step_{{ step > }}.json" > config_id: "tripleo_step{{ step }}" > action: apply > container_cli: "{{ container_cli }}" > managed_by: "tripleo-{{ tripleo_role_name }}" > > A few benefits: > - Deployment tasks in THT would call the new module instead of a shell > command > - More Pythonic and clean for Ansible, to interact with the actual task > during the run > - Removing some code in Paunch, make it easier to maintain for us > > For now, the Ansible module only covers "paunch apply", we will probably > cover "delete" and "cleanup" eventually. > > Please let me know if you have any questions or concerns, > -- > Emilien Macchi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucioseki at gmail.com Fri Sep 13 20:23:24 2019 From: lucioseki at gmail.com (Lucio Seki) Date: Fri, 13 Sep 2019 17:23:24 -0300 Subject: [neutron] DevStack with IPv6 In-Reply-To: References: Message-ID: I recreated my security group rules, to set remote_ip_prefix to ::/0 instead of None as in Donny's environment, but made no difference. :-( On Fri, Sep 13, 2019 at 3:55 PM Donny Davis wrote: > So outbound traffic works, but inbound traffic doesn't? > > Here is my icmp security group rule for ipv6. 
> > +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | Field | Value > > | > > +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | created_at | 2019-07-30T00:50:25Z > > | > | description | > > | > | direction | ingress > > | > | ether_type | IPv6 > > | > | id | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa > > | > | location | Munch({'cloud': '', 'region_name': 'regionOne', > 'zone': None, 'project': Munch({'id': 'e8fd161dc34c421a979a9e6421f823e9', > 'name': 'openstackzuul', 'domain_id': None, 'domain_name': 'Default'})}) | > | name | None > > | > | port_range_max | None > > | > | port_range_min | None > > | > | project_id | e8fd161dc34c421a979a9e6421f823e9 > > | > | protocol | icmp > > | > | remote_group_id | None > > | > | remote_ip_prefix | ::/0 > > | > | revision_number | 0 > > | > | security_group_id | bcedc0e0-e2e8-41fc-aeaa-afd2e10c7ab6 > > | > | tags | [] > > | > | updated_at | 2019-07-30T00:50:25Z > > | > > +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > > > On Fri, Sep 13, 2019 at 2:48 PM Lucio Seki wrote: > >> Hmm OK, I'll try to figure out what hacking >> create_neutron_initial_network does... >> >> BTW, I noticed that I can ping6 the router interface at private subnet >> from the DevStack host: >> >> $ ping6 fd12:67:1:1::1 >> PING fd12:67:1:1::1(fd12:67:1:1::1) 56 data bytes >> 64 bytes from fd12:67:1:1::1: icmp_seq=1 ttl=64 time=0.646 ms >> 64 bytes from fd12:67:1:1::1: icmp_seq=2 ttl=64 time=0.095 ms >> 64 bytes from fd12:67:1:1::1: icmp_seq=3 ttl=64 time=0.106 ms >> 64 bytes from fd12:67:1:1::1: icmp_seq=4 ttl=64 time=0.129 ms >> >> And also I can ping6 the public subnet interface from the VM: >> >> root at ubuntu:~# ping6 fd12:67:1::3c >> PING fd12:67:1::3c (fd12:67:1::3c): 56 data bytes >> ping: getnameinfo: Temporary failure in name resolution >> 64 bytes from unknown: icmp_seq=0 ttl=64 time=2.079 ms >> ping: getnameinfo: Temporary failure in name resolution >> 64 bytes from unknown: icmp_seq=1 ttl=64 time=1.385 ms >> ping: getnameinfo: Temporary failure in name resolution >> 64 bytes from unknown: icmp_seq=2 ttl=64 time=0.881 ms >> >> Not sure if it means that there's something missing within the router >> itself... 
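One more check from the hypervisor side, using the addresses already shown in this thread: the return route to the instance subnet points at the router's gateway address on br-ex (fd12:67:1:1::/64 via fd12:67:1::3c), so neighbour discovery for fd12:67:1::3c has to succeed there. Something like:

$ ping6 -c2 fd12:67:1::3c
$ ip -6 neigh show dev br-ex

If fd12:67:1::3c never appears as REACHABLE or STALE in the neighbour table, replies from the hypervisor cannot reach the router at all, regardless of security group rules.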
>> >> On Fri, Sep 13, 2019 at 2:24 PM Donny Davis wrote: >> >>> Also I have no v6 address on my br-ex >>> >>> On Fri, Sep 13, 2019 at 1:22 PM Donny Davis >>> wrote: >>> >>>> Well here is the output from my rule list that is in prod right now >>>> with ipv6 >>>> >>>> +--------------------------------------+-------------+-----------+------------+-----------------------+ >>>> | ID | IP Protocol | IP Range | Port >>>> Range | Remote Security Group | >>>> >>>> +--------------------------------------+-------------+-----------+------------+-----------------------+ >>>> | 9ab00b6f-2bc2-4554-818d-eff6e0570943 | None | 0.0.0.0/0 | >>>> | None | >>>> | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa | icmp | ::/0 | >>>> | None | >>>> | e7fd4840-5fbd-4709-b918-f80eac5cb6da | None | ::/0 | >>>> | None | >>>> | e9968d53-7efe-4a9e-ad42-1092ffaf52e7 | None | None | >>>> | None | >>>> | ec1ea961-9025-4229-92cf-618026a1851b | None | None | >>>> | None | >>>> >>>> +--------------------------------------+-------------+-----------+------------+-----------------------+ >>>> >>>> >>>> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> | Field | Value >>>> >>>> | >>>> >>>> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> | created_at | 2019-07-30T00:50:25Z >>>> >>>> | >>>> | description | >>>> >>>> | >>>> | direction | ingress >>>> >>>> | >>>> | ether_type | IPv6 >>>> >>>> | >>>> | id | b6df5801-8c2c-4ba4-afe1-2cbaa2922dfa >>>> >>>> | >>>> | location | Munch({'cloud': '', 'region_name': 'regionOne', >>>> 'zone': None, 'project': Munch({'id': 'e8fd161dc34c421a979a9e6421f823e9', >>>> 'name': 'openstackzuul', 'domain_id': None, 'domain_name': 'Default'})}) | >>>> | name | None >>>> >>>> | >>>> | port_range_max | None >>>> >>>> | >>>> | port_range_min | None >>>> >>>> | >>>> | project_id | e8fd161dc34c421a979a9e6421f823e9 >>>> >>>> | >>>> | protocol | icmp >>>> >>>> | >>>> | remote_group_id | None >>>> >>>> | >>>> | remote_ip_prefix | ::/0 >>>> >>>> | >>>> | revision_number | 0 >>>> >>>> | >>>> | security_group_id | bcedc0e0-e2e8-41fc-aeaa-afd2e10c7ab6 >>>> >>>> | >>>> | tags | [] >>>> >>>> | >>>> | updated_at | 2019-07-30T00:50:25Z >>>> >>>> | >>>> >>>> +-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> >>>> >>>> >>>> >>>> >>>> On Fri, Sep 13, 2019 at 9:24 AM Lucio Seki wrote: >>>> >>>>> Hi Donny, following are the rules: >>>>> >>>>> $ openstack security group list --project admin >>>>> >>>>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>>>> | ID | Name | Description >>>>> | Project | Tags | >>>>> >>>>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>>>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | default | Default security >>>>> group | 68e3942285a24fb5bd1aed30e166aaee | [] | >>>>> >>>>> +--------------------------------------+---------+------------------------+----------------------------------+------+ >>>>> >>>>> $ openstack security group rule list >>>>> 
d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>>>> >>>>> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >>>>> | ID | IP Protocol | IP Range | Port >>>>> Range | Remote Security Group | >>>>> >>>>> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >>>>> | 38394345-3e44-4284-a519-cdd8af020f30 | tcp | ::/0 | >>>>> 22:22 | None | >>>>> | 40881f76-c87f-4685-b3af-c3497dd44837 | None | None | >>>>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | >>>>> | 56d4ae52-195e-48df-871e-dc70b899b7ba | None | None | >>>>> | d0136b0e-ee51-461c-afa0-c5adb88dd0dd | >>>>> | 759edd06-b698-45ca-94cd-44e0cc2cc848 | ipv6-icmp | None | >>>>> | None | >>>>> | 762effae-b8e5-42ac-ba99-e85a7bc42455 | tcp | ::/0 | >>>>> 22:22 | None | >>>>> | 81f3588d-4159-4af2-ad50-ff6b76add9cf | ipv6-icmp | None | >>>>> | None | >>>>> >>>>> +--------------------------------------+-------------+----------+------------+--------------------------------------+ >>>>> >>>>> $ openstack security group rule show >>>>> 759edd06-b698-45ca-94cd-44e0cc2cc848 >>>>> >>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> | Field | Value >>>>> >>>>> | >>>>> >>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> | created_at | 2019-09-03T16:51:41Z >>>>> >>>>> | >>>>> | description | >>>>> >>>>> | >>>>> | direction | egress >>>>> >>>>> | >>>>> | ether_type | IPv6 >>>>> >>>>> | >>>>> | id | 759edd06-b698-45ca-94cd-44e0cc2cc848 >>>>> >>>>> | >>>>> | location | Munch({'project': Munch({'domain_id': 'default', >>>>> 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': >>>>> None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | >>>>> | name | None >>>>> >>>>> | >>>>> | port_range_max | None >>>>> >>>>> | >>>>> | port_range_min | None >>>>> >>>>> | >>>>> | project_id | 68e3942285a24fb5bd1aed30e166aaee >>>>> >>>>> | >>>>> | protocol | ipv6-icmp >>>>> >>>>> | >>>>> | remote_group_id | None >>>>> >>>>> | >>>>> | remote_ip_prefix | None >>>>> >>>>> | >>>>> | revision_number | 0 >>>>> >>>>> | >>>>> | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>>>> >>>>> | >>>>> | tags | [] >>>>> >>>>> | >>>>> | updated_at | 2019-09-03T16:51:41Z >>>>> >>>>> | >>>>> >>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> >>>>> $ openstack security group rule show >>>>> 81f3588d-4159-4af2-ad50-ff6b76add9cf >>>>> >>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> | Field | Value >>>>> >>>>> | >>>>> >>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> | created_at | 2019-09-03T16:51:30Z >>>>> >>>>> | >>>>> | description | >>>>> >>>>> | >>>>> | direction | ingress >>>>> >>>>> | >>>>> 
| ether_type | IPv6 >>>>> >>>>> | >>>>> | id | 81f3588d-4159-4af2-ad50-ff6b76add9cf >>>>> >>>>> | >>>>> | location | Munch({'project': Munch({'domain_id': 'default', >>>>> 'id': u'68e3942285a24fb5bd1aed30e166aaee', 'name': 'admin', 'domain_name': >>>>> None}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None}) | >>>>> | name | None >>>>> >>>>> | >>>>> | port_range_max | None >>>>> >>>>> | >>>>> | port_range_min | None >>>>> >>>>> | >>>>> | project_id | 68e3942285a24fb5bd1aed30e166aaee >>>>> >>>>> | >>>>> | protocol | ipv6-icmp >>>>> >>>>> | >>>>> | remote_group_id | None >>>>> >>>>> | >>>>> | remote_ip_prefix | None >>>>> >>>>> | >>>>> | revision_number | 0 >>>>> >>>>> | >>>>> | security_group_id | d0136b0e-ee51-461c-afa0-c5adb88dd0dd >>>>> >>>>> | >>>>> | tags | [] >>>>> >>>>> | >>>>> | updated_at | 2019-09-03T16:51:30Z >>>>> >>>>> | >>>>> >>>>> +-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> >>>>> >>>>> On Fri, Sep 13, 2019 at 10:16 AM Donny Davis >>>>> wrote: >>>>> >>>>>> Security group rules? >>>>>> >>>>>> Donny Davis >>>>>> c: 805 814 6800 >>>>>> >>>>>> On Thu, Sep 12, 2019, 5:53 PM Lucio Seki wrote: >>>>>> >>>>>>> Hi folks, I'm having troubles to ping6 a VM running over DevStack >>>>>>> from its hypervisor. >>>>>>> Could you please help me troubleshooting it? >>>>>>> >>>>>>> I deployed DevStack with NEUTRON_CREATE_INITIAL_NETWORKS=False, >>>>>>> and manually created the networks, subnets and router. Following is >>>>>>> my router: >>>>>>> >>>>>>> $ openstack router show router1 -c external_gateway_info -c >>>>>>> interfaces_info >>>>>>> >>>>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>> | Field | Value >>>>>>> >>>>>>> >>>>>>> | >>>>>>> >>>>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>> | external_gateway_info | {"network_id": >>>>>>> "b87048ed-1be9-4f31-8d7e-fe74921aeec4", "enable_snat": true, >>>>>>> "external_fixed_ips": [{"subnet_id": >>>>>>> "28a00bc3-b30b-456f-b26a-44b50d37183f", "ip_address": "10.2.0.199"}, >>>>>>> {"subnet_id": "a9729beb-b297-4fec-8ec3-7703f7f6f4bc", "ip_address": >>>>>>> "fd12:67:1::3c"}]} | >>>>>>> | interfaces_info | [{"subnet_id": >>>>>>> "081e8508-4ceb-4aaf-bf91-36a1e22a768c", "ip_address": "fd12:67:1:1::1", >>>>>>> "port_id": "75391abd-8ac8-41f8-acf8-3dfaf2a6b08f"}] >>>>>>> >>>>>>> | >>>>>>> >>>>>>> +-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>> >>>>>>> I'm trying to ping6 the following VM: >>>>>>> >>>>>>> $ openstack server list >>>>>>> >>>>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>>>> | ID | Name | Status | Networks >>>>>>> | 
Image | Flavor | >>>>>>> >>>>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>>>> | 938854d0-80e9-45b2-bc29-8fe7651ffa93 | manila1 | ACTIVE | >>>>>>> private1=fd12:67:1:1:f816:3eff:fe0e:17c3 | manila | manila | >>>>>>> >>>>>>> +--------------------------------------+---------+--------+------------------------------------------+--------+--------+ >>>>>>> >>>>>>> I intend to reach it via br-ex interface of the hypervisor: >>>>>>> >>>>>>> $ ip a show dev br-ex >>>>>>> 9: br-ex: mtu 1500 qdisc noqueue >>>>>>> state UNKNOWN group default qlen 1000 >>>>>>> link/ether 0e:82:a1:ba:77:4c brd ff:ff:ff:ff:ff:ff >>>>>>> inet6 fd12:67:1::1/64 scope global >>>>>>> valid_lft forever preferred_lft forever >>>>>>> inet6 fe80::c82:a1ff:feba:774c/64 scope link >>>>>>> valid_lft forever preferred_lft forever >>>>>>> >>>>>>> The hypervisor has the following routes: >>>>>>> >>>>>>> $ ip -6 route >>>>>>> fd12:67:1:1::/64 via fd12:67:1::3c dev br-ex metric 1024 pref medium >>>>>>> fe80::/64 dev ens3 proto kernel metric 256 pref medium >>>>>>> fe80::/64 dev b